Record fields: added (string) | created (string) | id (string) | metadata (dict) | source (string) | text (string) | version (string)

added: 2021-08-24T01:15:44.921Z | created: 2021-08-21T00:00:00.000 | id: 237267215 | source: pes2o/s2orc
metadata: {
  "extfieldsofstudy": ["Mathematics"],
  "oa_license": "CCBY",
  "oa_status": "HYBRID",
  "oa_url": "https://link.springer.com/content/pdf/10.1007/s00220-023-04816-4.pdf",
  "pdf_hash": "5e7bf829a552774ac1f461125157145d66cded4f",
  "pdf_src": "ArXiv",
  "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44259",
  "s2fieldsofstudy": ["Mathematics"],
  "sha1": "5e7bf829a552774ac1f461125157145d66cded4f",
  "year": 2023
}
text:
Nonuniqueness of Solutions to the Euler Equations with Vorticity in a Lorentz Space
For the two dimensional Euler equations, a classical result by Yudovich states that solutions are unique in the class of bounded vorticity; it is a celebrated open problem whether this uniqueness result can be extended to other integrability spaces. We prove in this note that such a uniqueness theorem fails in the class of vector fields u with uniformly bounded kinetic energy and vorticity in the Lorentz space $L^{1, \infty}$.
Introduction
Let us consider the 2-dimensional Euler equations, where u : [0, 1] × T^2 → R^2 is the velocity of a fluid and p : [0, 1] × T^2 → R is the pressure. This system can be equivalently rewritten as the two dimensional Euler system in vorticity formulation, which is a transport equation for the vorticity ω = curl(u), i.e.
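For the reader's convenience, the standard displays presumably referenced as (1) and (2) are reproduced below; this is a hedged reconstruction (the labels and exact normalization are assumed), with the velocity form first and the vorticity form, coupled with the Biot–Savart law, second:
$$
\partial_t u + \operatorname{div}(u \otimes u) + \nabla p = 0, \qquad \operatorname{div} u = 0 \qquad \text{on } [0,1]\times\mathbb{T}^2,
$$
$$
\partial_t \omega + u \cdot \nabla \omega = 0, \qquad u = \nabla^{\perp} \Delta^{-1} \omega, \qquad \omega = \operatorname{curl} u .
$$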
In the latter formulation it is clear that L p norms of the vorticity are formally conserved for any p ∈ [1, ∞].
For p > 1, this was used in [11] to prove existence of distributional solutions starting from an initial datum with vorticity in L p . A similar existence result is much more involved for p = 1, and it was obtained by Delort [10] (see also [11,12]), improving the existence theory up to measure initial vorticities in H −1 (this latter condition guarantees finiteness of the energy) whose positive (or negative) part is absolutely continuous. As regards uniqueness, the classical result of Yudovich [15,16] (see also the proof in [17]) states that, given an initial datum ω 0 ∈ L ∞ , there exists a unique bounded solution to (2) starting from ω 0 . However, the classical problem raised by Yudovich about the sharpness of his result is still open. Let u 0 be an initial datum in L 2 with curl u 0 in some function space X. Is the solution of the Euler equation in vorticity formulation unique in the class L ∞ (X)?
The main result of this paper provides a negative answer when X is the Lorentz space L 1,∞ .
Recently, there have been formidable attempts to disprove this conjecture for X = L p , none of which has by now fully solved it. Vishik [22,23] proposed a complex line of approach to this problem, which however has the price of showing nonuniqueness only with an additional degree of freedom, namely a forcing term in the right-hand side of the equation (2) in the integrability space L 1 (L p ). The nonuniqueness suggested by this work is of symmetry-breaking type and, in contrast with the ideas of this paper, it stems from the linear part of the equation, by carefully choosing an initial datum that sees the instability directions of a linearized operator.
MSC classification: 35F50 (35A02, 35Q35). Keywords: Euler equations, vorticity formulation, convex integration, uniqueness.
A second attempt has been pursued by Bressan and Shen [2], based on numerical experiments which share the symmetry-breaking type of nonuniqueness of Vishik. Their work is a first step in the direction of a computer assisted proof.
Our approach is instead of a different nature and stems from the convex integration technique. The latter was introduced by De Lellis and Székelyhidi [9] in the context of nonlinear PDEs, inspired by the work of Nash on isometric embeddings [20], and has found striking applications in recent years to different PDEs (see for instance [5-7, 14, 18, 19] and the references quoted therein). As such, our proof is probably less constructive than the strategies of [22,23] and [2], where an initial datum for which nonuniqueness is expected is described fairly explicitly, as well as the mechanism for the creation of two different singularities. Conversely, the latter approaches suffer from the drawbacks described above and are by no means "generic" in the initial data, whereas it is known (see for instance [8,21]) that convex integration methods yield not only the lack of uniqueness/smoothness for certain specific initial data, but also that such solutions are typical (in the Baire category sense).
1.1. Strategy of proof. The guiding thread of this construction is an iterative procedure, where one starts from a solution (u 0 , p 0 , R 0 ) of the Euler equations with an error term in the right-hand side, namely the Euler–Reynolds system (3) below, and iteratively corrects this error by adding a rapidly oscillating perturbation to the approximate solution.
The nonlinear interaction of this perturbation with itself generates a resonance which allows for the cancellation of the previous error; the other terms are mainly seen as new error terms, of smaller size with respect to the previous error. More precisely, we define the new solution (u 1 , p 1 , R 1 ) by adding to the previous one a perturbation, where λ ≫ 1 is a higher frequency with respect to the typical frequencies in u 0 , w is called the building block of the construction and enjoys suitable integrability properties, and a is a slowly varying coefficient. The cancellation of the error happens because the low frequency term in a 2 w λ ⊗ w λ can be arranged to cancel R 0 (up to a pressure term). This forces us to require that the building block w has L 2 norm of order one. On the other hand, we wish to control the quantity ‖Du 1 ‖ X , and to this end we need ‖Dw λ ‖ X arbitrarily small. This imposes a restriction on the space X, since the Sobolev inequality in Lorentz spaces (see [1]), applied with p = 1 and q = 2, gives ‖∇w λ ‖ L 1,2 = λ‖∇w‖ L 1,2 ≳ λ‖w‖ L 2 ∼ λ ≫ 1. In particular, with the current method of proof (and in particular with the current way to cancel the error in the iteration), X = L 1 or X = L 1,2 are not allowed; only X = L 1,q for q > 2 could be obtained. To avoid technicalities, we present the proof with X = L 1,∞ .
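A hedged sketch of the ansatz and of the resonance mechanism described above (the exact displays are missing from this extraction; the notation w, a, R 0 is taken from the surrounding text):
$$
u_1 = u_0 + a\, w_\lambda + (\text{correctors}), \qquad w_\lambda(x) := w(\lambda x),
$$
$$
a^2(x)\, \frac{1}{|\mathbb{T}^2|}\int_{\mathbb{T}^2} w \otimes w \, dy \;\approx\; -R_0(x) + \pi(x)\, \mathrm{Id},
$$
for a suitable pressure-like scalar π. The Sobolev inequality in Lorentz spaces invoked above can be stated, for zero-mean f on the torus and 1 ≤ p < 2, q ∈ [1, ∞], as
$$
\| f \|_{L^{\frac{2p}{2-p},\, q}(\mathbb{T}^2)} \;\lesssim\; \| \nabla f \|_{L^{p, q}(\mathbb{T}^2)} .
$$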
The main novelty in the proof of Theorem 1.1 regards the construction of a new family of building blocks. They are designed as a bundle of almost solutions to Euler, suitably rescaled and periodized in order to saturate the L 1,∞ norm. To this aim we take advantage of intermittent jets, introduced in [4], and we bundle them in a similar spirit to the atomic decomposition of Lorentz functions. A challenge is to keep different building blocks disjoint in space-time, since we work in two dimensions and since each component of the bundle has its own characteristic speed. We refer the reader to Section 4 for the precise construction and more explanations on our choice of building blocks.
Remark 1.2. The proof of Theorem 1.1 is flexible enough, due to the exponential convergence of the iterative sequence, to give ω ∈ L 1,q for some q ≫ 1. A technical refinement of the current proof, based on Remark 4.4, would give q > 4.
Acknowledgments. EB was supported by the Giorgio and Elena Petronio Fellowship at the Institute for Advanced Study. MC was supported by the SNSF Grant 182565. The authors wish to thank Camillo De Lellis for interesting discussions on the theme of the paper.
Iteration and Euler-Reynolds system
We consider the system of equations (3) in [0, 1] × T^2, where R is a traceless symmetric tensor.
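The Euler–Reynolds system referred to as (3) presumably takes the standard form below; this is a hedged reconstruction, and the sign convention on the right-hand side may differ in the original:
$$
\partial_t u + \operatorname{div}(u \otimes u) + \nabla p = -\operatorname{div} R, \qquad \operatorname{div} u = 0,
$$
with R a smooth, symmetric, traceless 2 × 2 matrix field (the Reynolds stress), so that R ≡ 0 recovers the Euler equations (1).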
As already remarked, our solution to (1) is obtained by passing to the limit of solutions of (3) with suitable constraints on u and R. The latter are built by means of an iterative procedure based on the following.
Proposition 2.1. There exists M > 0 such that the following holds. For any smooth solution (u 0 , p 0 , R 0 ) of (3), there exists another smooth solution (u 1 , p 1 , R 1 ) of (3) satisfying suitable estimates.
Proof of Theorem 1.1 given Proposition 2.1. Fix λ > 0. We start the iteration scheme with a suitable initial triple. Applying iteratively Proposition 2.1 with t 0 = 1/2, we build a sequence {(u n , p n , R n ) : n ∈ N} of smooth solutions to (3) such that, for any n ≥ 0, the stated bounds hold, where u satisfies the assumptions of Theorem 1.1. To prove that Du ∈ C 0 (L 1,∞ ), a bit of extra care is needed since only the weak (quasi-)triangle inequality holds in L 1,∞ ; however, the latter is enough for our purposes. The remaining part of this note is devoted to the proof of Proposition 2.1. In Section 4 we introduce the building blocks of our construction, in Section 5 we use them to define the perturbation u 1 − u 0 , and finally in Section 6 we introduce the new error term R 1 and show that it can be made arbitrarily small.
Preliminary lemmas
3.1. Lorentz spaces. For every measurable function f one defines the quasinorm ‖f‖ L r,q (see e.g. [13]), and we define the Lorentz space L r,q , with r ∈ [1, ∞), q ∈ [1, ∞], as the space of those functions f such that ‖f‖ L r,q < ∞. Note that, in spite of the notation, ‖ · ‖ L r,q is in general not a norm, but for (r, q) ≠ (1, ∞) the topological vector space L r,q is locally convex and there exists a norm ||| · ||| r,q which is equivalent to ‖ · ‖ L r,q in the sense that the inequality C −1 |||f ||| r,q ≤ ‖f‖ L r,q ≤ C|||f ||| r,q holds.
3.2. Improved Hölder inequality. We recall the following improved Hölder inequality, stated as in [18, Lemma 2.6] (see also [3, Lemma 3.7]). If λ ∈ N and f, g : T 2 → R are smooth functions, then the L p norm of the product f g λ is controlled by ‖f‖ L p ‖g‖ L p up to an error term which is small in λ; when ∫ T 2 g = 0, the mean of f g λ is itself small in λ. Here Sym 2 denotes the space of symmetric matrices in R 2×2 , and DR 0 is a Calderón–Zygmund operator; in particular the corresponding L p bounds hold. Notice that (7) and (8) allow showing the scaling bounds for v λ (x) := v(λx), λ ∈ N. The latter is immediate for p ∈ (1, ∞); in the cases p = 1 and p = ∞ we need to take advantage of the Sobolev embedding theorem. To prove (10) we use (9) and (6).
Remark 3.2. The operator R can also be defined on scalar functions f, and arguing as in Lemma 3.1 we can easily show the analogous statement for div.
Proof. Set T (A) := R(∇a • div A). By duality, it suffices to show the corresponding estimate, where T * and R 0 * denote the adjoints of T and R 0 , respectively. To this aim we employ the Sobolev embedding and the fact that DT * R 0 * (B) maps L p into L p for any p ∈ (1, ∞).
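One common convention for the Lorentz quasinorm alluded to above is recalled here as a hedged reference (the paper's exact normalization may differ). For a measurable f : T^2 → R, let d_f(s) := |{x : |f(x)| > s}| denote its distribution function; then
$$
\| f \|_{L^{r,q}} := \Big( r \int_0^{\infty} s^{q}\, d_f(s)^{q/r}\, \frac{ds}{s} \Big)^{1/q} \quad (q < \infty),
\qquad
\| f \|_{L^{r,\infty}} := \sup_{s>0}\, s\, d_f(s)^{1/r} .
$$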
Building blocks
In this section we introduce the building blocks of our construction. They will be employed in Section 5 to define the principal term of u 1 − u 0 in Proposition 2.1.
(iv) the following estimates hold. W p i is the principal term: it has zero mean, high frequency λ ≥ ε −1 , is controlled in the relevant norms (cf. (iv)), and satisfies the fundamental property (iii): the quadratic interaction W p i ⊗ W p i produces the lower order term which is used to cancel the error R 0 out. To achieve the crucial bound on ‖DW p i ‖ L 1,∞ we design the principal term as a bundle, where K, n 0 ≫ 1 are big parameters and ξ i is one of the four directions appearing in the statement of Proposition 4.1. In a first stage, we build W p i (x, t) for a fixed parameter i, ignoring the issue that, for different parameters, such functions will not have disjoint support as requested in Proposition 4.1 (v); only in Section 4.6 do we make sure to suitably time-translate them, making substantial use of their special structure, to guarantee that Proposition 4.1 (v) holds. The vector fields W k (x, t), k = n 0 + 1, . . ., n 0 + K, are the 2-dimensional counterpart of the intermittent jets introduced in [4]. They have L 2 norm equal to 1, and are supported on disjoint balls of radius 2 −k r, for some r ≪ 1, which move in direction e i with speed µ2 k , where µ ≫ 1. The fast time translation is used to make W k "almost divergence free" and "almost a solution to the Euler equations". In more rigorous terms, it means that there exist vector fields W p k , Q k , smaller than W k , satisfying div (W k + W p k ) = 0 and a corresponding identity involving Q k . The vector fields W p i and Q i are defined bundling together W p k and Q k as we did in (12). Another important property we need is that W i ⊗ W j = 0 when i ≠ j. It is ensured by (iv) in Proposition 4.1, which builds upon a delicate combinatorial lemma presented in Section 4.6.
We finally explain the role of the matrix A i in our construction. Let us begin by noticing that the principal term W p i has a big time derivative, since it translates fast in time. Hence, the term ∂ t W p i cannot be treated as an error. To overcome this difficulty we impose an extra structure on W p i and W c i . We construct them in order to have the identity ∂ t (W p i + W c i ) = div (A i ), for some symmetric matrix A i which has small L 1 -norm. The latter can be added to the new error term R 1 .
General notation. Given a velocity field
Let us fix r ⊥ ≪ r ≪ 1 and k ∈ N. We adopt the following convention: given any ρ : R → R supported in (−1, 1), we write the rescaled profiles accordingly. With a slight abuse of notation we keep denoting by ρ k r ⊥ , ρ k r : T → R their periodized versions.
Time correction. Let us now set
and observe that the following holds. The time corrector is defined accordingly. The proof of Lemma 4.2 is a simple computation, so we omit it. It implies the following, summing on k and recalling that the terms in the sum in (14) have disjoint support (in particular, this says that the principal part is much smaller than the corrector), and
We claim the following: suppose that for a certain t > 0 and k ∈ {n 0 , . . ., n 0 +K} we have the stated support condition on supp W k for every h ∈ {n 0 , . . ., n 0 + K}. The previous claim excludes the simultaneous presence at any t > 0 of the support of W k (ξ 1 ) (•, t) and the support of W h (ξ 2 ) (•, t + t 0 ) in B R (0), thereby concluding the proof of the lemma. We now prove the claim. Let us fix a time t at which the condition holds; since supp W k (ξ 1 ) (•, t) is moving at constant speed µ2 k along the tube on the torus, there exists t̄ such that |t − t̄| ≤ Rµ −1 2 −k and supp W k (ξ 1 ) (•, t̄) = supp W k (ξ 1 ) (•, 0). At time t̄ we have information about the position of supp W h (ξ 2 ) (•, t̄ + t 0 ); more precisely, we have the stated localization, because the ratio between the (constant) velocity of supp W k (ξ 1 ) (•, t) and the velocity of supp W k (ξ 2 ) (•, t) is of the form 2 j for some j ∈ {−K, ..., K}.
In the union in the right-hand side of (23), thanks to the upper bound on t 0 , the choice n = 0 identifies the ball of the (finite) union at minimal distance from the origin for every k. By the lower bound on t 0 and the fact that the minimal velocity is µ2 n 0 , we get that this distance is greater than 2 n 0 −7K . At time t the distance between supp W h (ξ 2 ) (•, t + t 0 ) and B R (0) is therefore bounded from below as claimed. This concludes the proof of the claim.
The fields defined above (indexed by ξ, K, n 0 ) and A i+1 := A ξ,K,n 0 satisfy (v) in Lemma 4.1. We refer the reader to Lemma 4.5 for the construction of A ξ,K,n 0 . Properties (i) and (ii) in Lemma 4.1 are now immediate from (15), (16) and Lemma 4.5. We are left with the proof of (iii) and (iv) in Lemma 4.1. To do so we have to choose appropriately the parameters λ, µ, K, r ⊥ and r. Let δ < 1/2 be a parameter to be chosen later in terms of ε > 0; we fix the parameters accordingly, leaving r ⊥ ≪ r ≪ 1 free. From Lemma 4.3, Lemma 4.5, (17), (18) and (19) we deduce the needed bounds. The conclusions (iii) and (iv) in Lemma 4.1 follow by choosing first δ small enough so that Cδ ≤ ε, and then r ⊥ ≪ r ≪ 1 so that r ⊥ r ≤ ε and
For ε > 0 to be chosen later, we consider the functions W p i , W c i , Q i , A i from Proposition 4.1. We define the new velocity field as the sum of the previous one, a principal perturbation, a divergence corrector and a temporal corrector, where the individual terms are given below. We refer the reader to Remark 3.2 for the definition of R.
From now on, in order to simplify our notation, for any function space X and any map f which depends on t and x, we will write ‖f‖ X meaning ‖f‖ L ∞ (X) .
5.1. Estimate on ‖u 1 − u 0 ‖ L 2 and on ‖u 1 − u 0 ‖ L 1 . By the triangle inequality, we estimate the right-hand side separately, where in the second line we used the improved Hölder inequality (5) and (iii) in Proposition 4.1. From Remark 3.2 we deduce the corresponding bound. Finally we employ (iv) in Proposition 4.1 to get the desired estimate; we estimate the right-hand side separately, where we employed (iv) in Proposition 4.1. Using that DR is a Calderón–Zygmund operator we deduce the conclusion.
New error
We define R 1 in such a way that the new triple solves (3), which, by subtracting the equation for u 0 , is equivalent to an equation for the perturbation. We are going to define R 1 as a sum of terms, where the various addends are defined in the following paragraphs, and show that each of them is small. The proof of Proposition 2.1 will follow by choosing ε small enough.
thanks to (25) the required identity holds. Using that, for some pressure term P, it is immediate to verify the identity. Since R and R 0 send L 1 to L 1 (cf. Lemma 3.1 and Remark 3.2), we have the corresponding bound. From (iv) in Proposition 4.1 we get the conclusion.
By employing (11) we bound this term.
6.3. Quadratic error terms. Let us set the quadratic remainder and show that (26) holds. In view of (27), (24) and (28) it amounts to checking the identity for div (R (q) 1 ). The latter easily follows by noticing that, as a consequence of (ii) in Proposition 4.1, one has the required identity. Let us finally prove that ‖R (q) 1 ‖ L 1 ≤ εC(t 0 , ‖R 0 ‖ C 2 ). We begin by observing that, from the bounds on ‖u 1 − u 0 ‖ L 2 and Lemma 3.1, we deduce the conclusion.
version: v3-fos-license

added: 2021-10-30T15:14:53.117Z | created: 2021-10-28T00:00:00.000 | id: 240205207 | source: pes2o/s2orc
metadata: {
  "extfieldsofstudy": ["Medicine"],
  "oa_license": "CCBY",
  "oa_status": "GOLD",
  "oa_url": "https://doi.org/10.1155/2021/7278853",
  "pdf_hash": "ef16b036500816410eef388aa94fa87481b7a608",
  "pdf_src": "PubMedCentral",
  "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44260",
  "s2fieldsofstudy": ["Medicine", "Psychology"],
  "sha1": "4c674d0b09d0453cd7df645ab1a6a8952fc2a38d",
  "year": 2021
}
text:
Changes and Influencing Factors of Cognitive Impairment in Patients with Breast Cancer
Objective. To investigate the changes in cognitive function and its influencing factors in patients with breast cancer after chemotherapy, to provide a scientific basis for further cognitive correction therapy. Methods. In this study, general information on age, marital status, and chemotherapy regimen was collected from 172 breast cancer chemotherapy patients. The 172 patients with breast cancer undergoing chemotherapy were recruited by a convenience sampling method, and the subjects were tested one-on-one using the Chinese version of the MATRICS Consensus Cognitive Battery (MCCB) computer system. Results. The mean standardized t-scores of cognitive function and the proportions of patients abnormal in each dimension in breast cancer patients undergoing chemotherapy were MCCB total cognition (66.3%, 36.99 ± 13.06, abnormal), working memory (73.3%, 36.84 ± 10.25), attention and alertness (70.3%, 37.20 ± 12.50), social cognition (65.1%, 39.54 ± 10.17), and visual memory (61.6%, 42.19 ± 9.38). A comparison of cognitive function among breast cancer chemotherapy patients with different demographic characteristics showed that differences in place of residence, educational level, monthly income, timing of chemotherapy, chemotherapy regimen, and chemotherapy times may be associated with abnormal cognitive function. Further multiple linear regression analysis was performed, and the results showed that there was a linear regression relationship between literacy, number of chemotherapy sessions, monthly personal income, and cognitive function. Conclusion. Cognitive impairment is common in patients with breast cancer after chemotherapy. Nurses should pay attention to the cognitive function changes and intervention of patients with breast cancer after chemotherapy, to prevent the changes of cognitive function and promote the rehabilitation of patients.
Introduction
Breast cancer is already the most common malignant tumor in women, with the highest incidence [1]. In 2018, it was estimated that there were about 2.1 million new cases of breast cancer worldwide, accounting for 25% of all new cases of malignant tumors [2]. In recent years, the incidence and mortality of female breast cancer in China have increased year by year, and the disease burden of breast cancer patients has also increased. At present, surgery combined with postoperative chemotherapy is still the first choice for the treatment of breast cancer, but while chemotherapy drugs treat the disease, they also cause many adverse reactions, which bring great pain to patients. The occurrence of chemotherapy-related cognitive impairment (CRCI) not only affects the frequency of social interaction and the efficiency of work for breast cancer patients but also has a serious impact on the patient's ability to perform daily activities, which can be physically and mentally devastating and have a negative impact on family harmony. Currently, most of the studies on cognitive impairment in China focus on elderly patients, stroke patients, and other populations, and there are fewer studies on cognitive impairment and its influencing factors in breast cancer chemotherapy patients [7].
This study, therefore, investigates breast cancer chemotherapy patients, using a neuropsychological test as a research tool and a computerized measurement platform to enable the implementation of computer-assisted data for brain function tests and the implementation of validated cognitive function sets for patients in hospitals.
This is to provide a clearer understanding of the current status of CRCI in breast cancer chemotherapy patients and to analyze its influencing factors, to provide a scientific basis for further cognitive remediation treatment.
Object.
Convenience sampling was used to investigate the subjects who met the inclusion criteria in the First and Third Affiliated Hospitals of Jinzhou Medical University from October 2018 to March 2020. Inclusion criteria: (1) patients with histopathologically diagnosed breast cancer and undergoing chemotherapy; (2) no hearing, vision, language, or other dysfunctions and having certain expression and reading ability; and (3) voluntarily participating in the research of this subject. Exclusion criteria: (1) patients with advanced cachexia; (2) patients with cognitive impairment prior to receiving chemotherapy treatment; (3) patients with obvious anxiety, depression, or other mental illnesses; (4) patients taking drugs related to cognitive function; (5) patients with intracranial abnormalities and intracranial metastases on MRI or CT examination of the head; (6) combined with severe heart, liver, kidney, brain, or hematopoietic system diseases. 206 breast cancer chemotherapy patients participated in this study, of whom 34 did not complete it, for a final sample size of 172. The general information is shown in Table 1. Later, Beijing Huilongguan Hospital organized experts to conduct tests, standardization, and computerization. In all MCCB subtests, except for the visual memory test, which is subjectively scored by the rater, the other subtests are automatically scored by computer programs. As long as the main tester operates according to the regulations, there is no need for the main tester to score by time. The MCCB is an individual cognitive function test, which requires one-to-one testing between the examiner and the subject. The examiner needs to have certain qualifications and undergo rigorous training before the evaluation. According to the results of the cognitive function test, the number of standard deviations compared with the norm is used to determine the degree of cognitive deficits.
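As a worked illustration of the standardization the MCCB uses (a hedged sketch; the specific normative values are not given here), a raw subtest score x is converted to a T-score against the norm mean μ and standard deviation σ of the reference population:
$$
T = 50 + 10 \cdot \frac{x - \mu}{\sigma},
$$
so a mean total T of about 37 lies more than one standard deviation below the normative mean of 50; scores below T = 40 are commonly flagged as impaired, although the exact cutoff applied in this study may differ.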
Survey Method.
This study was reviewed and approved by the Ethics Committee of Jinzhou Medical University. Convenience sampling was used to investigate the subjects who met the inclusion criteria in the First and Third Affiliated Hospitals of Jinzhou Medical University from October 2018 to March 2020 and to solicit the test subjects. After the participants and their family members agreed, the researchers explained the test procedures and requirements in detail to the subjects and conducted one-to-one computer system tests on the subjects.
Statistical Methods.
The data were checked by two persons, entered into SPSS 21.0, and statistically analyzed. Count data were expressed as rates and composition ratios (n, %) and compared using the chi-square test, while measurement data were expressed as mean ± standard deviation (mean ± SD), compared using the t-test between two groups and one-way analysis of variance (F test) among multiple groups. Pearson's model was used for correlation analysis and the multiple linear regression model for multifactor analysis; P < 0.05 was considered statistically significant. See Table 3.
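A minimal sketch of the same analysis pipeline in Python rather than SPSS (this is not the authors' procedure; the column names and simulated data below are hypothetical):

```python
# Illustrative only: two-group t-test, one-way ANOVA (F test), Pearson
# correlation, and multiple linear regression on a simulated data set.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 172
df = pd.DataFrame({
    "mccb_t": rng.normal(37, 13, n),                 # total MCCB T-score
    "residence": rng.choice(["urban", "rural"], n),  # two-level factor -> t-test
    "education": rng.integers(1, 5, n),              # >2 levels -> one-way ANOVA
    "income": rng.integers(1, 5, n),
    "chemo_cycles": rng.integers(1, 9, n),
})

# Two-group comparison (e.g., place of residence): independent-samples t-test
urban = df.loc[df.residence == "urban", "mccb_t"]
rural = df.loc[df.residence == "rural", "mccb_t"]
t, p = stats.ttest_ind(urban, rural)

# More than two groups (e.g., education level): one-way ANOVA (F test)
groups = [g["mccb_t"].values for _, g in df.groupby("education")]
f, p_anova = stats.f_oneway(*groups)

# Pearson correlation between chemotherapy cycles and cognition
r, p_corr = stats.pearsonr(df.chemo_cycles, df.mccb_t)

# Multiple linear regression with the univariately significant predictors
model = smf.ols("mccb_t ~ education + chemo_cycles + income", data=df).fit()
print(model.summary())
```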
Comparison of Cognitive Function of Breast Cancer Patients Undergoing Chemotherapy with Different Demographic Characteristics.
The results of the comparison of cognitive function in breast cancer chemotherapy patients with different demographic characteristics showed that differences in place of residence, literacy, monthly income, timing of chemotherapy, chemotherapy regimen, and chemotherapy times may be associated with abnormal cognitive function (p < 0.05). See Table 4 for details.
Multiple Linear Regression Analysis of Cognitive Function in Patients with Breast Cancer Chemotherapy.
The MCCB score of patients with breast cancer chemotherapy was used as the dependent variable, and the statistically significant variables in the univariate analysis were used as independent variables to perform multiple linear stepwise regression analysis (αin = 0.05, αout = 0.10). The analysis revealed that education level, chemotherapy times, and personal monthly income (see Table 5 for assignments) were risk factors for cognitive function in breast cancer chemotherapy patients (p < 0.05), as shown in Table 6.
Analysis of the Status Quo of Cognitive Function in Breast Cancer Patients Undergoing Chemotherapy.
Tables 2 and 3 show that the total standardized T score of MCCB for cognitive function in breast cancer patients ranges from 10 to 63 points, and the average T score is 36.99 ± 13.06. This result is abnormal, and the cognitive function of breast cancer patients undergoing chemotherapy is impaired. It shows that the cognitive function of breast cancer patients undergoing chemotherapy is generally impaired, which is basically consistent with previous studies. Huang's [8] study showed that 19% to 78% of breast cancer chemotherapy patients experienced varying degrees of decline in cognitive function. Most studies [9] have shown that the cognitive function of breast cancer patients undergoing chemotherapy is impaired, which has attracted the attention of clinical medical staff. In clinical practice, medical staff should pay attention to the cognitive status of patients, analyze its influencing factors, and conduct cognitive interventions for patients to improve their quality of life [10]. The results of this study show that there are significant differences in cognitive function among patients with different education levels. Patients with higher education levels have better cognitive function. A possible reason is that patients with higher educational levels can better communicate with medical staff, strive for more social support, and reduce their negative emotions. Patients with a low level of education have more conservative thinking, less communication with others, and greater psychological pressure. This requires medical staff in clinical practice to give corresponding cognitive and psychological interventions according to the education level of the patients and to provide the patients with the best quality care.
Analysis of the Relationship between Different Monthly Income and Cognitive Function.
Personal monthly income (F = 9.085, p < 0.001): the average standardized T-scores of cognitive function of patients from the low to high personal monthly income groups were (32.47 ± 8.44), (30.61 ± 12.27), (41.32 ± 13.87), and (41.50 ± 11.34), indicating that the higher the monthly income, the better the cognitive function. At the same time, the level of education and personal monthly income are synergistically related: generally, a higher level of education means a higher monthly income. Patients living in cities have better cognitive function than those living in rural areas. A possible reason is that patients living in cities have more convenient access to disease information, can participate in more social activities, relieve their negative emotions, and further improve their cognitive function.
Analysis of the Relationship between Chemotherapy (Timing of Chemotherapy, Chemotherapy Regimen, Chemotherapy Times) and Cognitive Function of Breast Cancer Patients.
The results of this study showed that the average cognitive standardized T scores of patients undergoing preoperative and postoperative chemotherapy were 33.89 ± 13.14 and 38.76 ± 13.68, respectively, indicating that the cognitive function of patients undergoing preoperative chemotherapy was worse than that of patients undergoing postoperative chemotherapy (t = −2.205, p = 0.029). Patients on the TP (paclitaxel, cisplatin) chemotherapy regimen had the lowest average cognitive standardized T score. Reference [11] studied the effect of three different chemotherapy regimens on the cognitive function of breast cancer patients, and the results showed that the EC-T regimen (epirubicin + cyclophosphamide sequential docetaxel) is more [12]. However, some studies [13] have shown that during chemotherapy cycles 0-3, the patient's cognitive ability gradually declines, but some patients improve in cognitive ability during cycles 4-7 and above. A possible reason for the difference in results is related to the different measurement tools and sample size in this study, and future studies should be conducted with larger samples to further test the research hypothesis.
Analysis of the Relationship between Cancer Factors (Pathological Type, Cancer Stage) and Cognitive Function.
The results of this study showed that the average standardized T-score of cognitive function of patients with carcinoma in situ (39.55 ± 16.76) was higher than that of patients with invasive cancer (36.93 ± 13.52) and metastatic cancer (36.73 ± 11.99), indicating that the cognitive function of patients with milder pathological types is better. According to the clinical stage, the average cognitive function T scores from high to low were stage I (40.12 ± 17.61), stage II (39.16 ± 13.75), stage III (36.48 ± 11.29), and stage IV (32.94 ± 12.26), indicating that the milder the disease, the better the cognitive function. At present, most research is carried out after patients undergo chemotherapy or surgery; few scholars pay attention to the cognitive function of patients before treatment. Studies abroad [14] found that patients show symptoms of cognitive decline even before surgery, with an incidence as high as 40%. Chemotherapy-related cognitive dysfunction has different manifestations in different patients, pathological types, and cancer stages and can appear at different stages of cancer treatment. Studies by scholars have also shown that the percentages of cognitive impairment in breast cancer patients before, during, and after chemotherapy are 40%, 75%, and 60%, respectively. Therefore, breast cancer itself can also affect the cognitive status of breast cancer patients. Clinical medical staff can take different cognitive correction nursing measures according to the patient's pathological type, clinical stage, and whether they have metastasis, and should undertake more targeted prevention and treatment of cognitive dysfunction in breast cancer patients undergoing chemotherapy.
Analysis of the Relationship between the Treatment of Breast Cancer Patients (Whether Surgery Was Performed, Surgical Method) and Cognitive Function.
The results of this study showed that the average T-scores of breast cancer patients who underwent surgery and those who did not were (36.86 ± 13.27) and (39.07 ± 15.76), respectively, indicating that the cognitive function of patients who underwent surgery was worse than that of patients who did not. In terms of the standardized T-score of cognitive function, the cognitive function of patients with breast-conserving surgery (39.67 ± 12.00) or without surgery (39.07 ± 15.76) is better than that of patients with radical mastectomy (33.84 ± 12.99) or modified radical mastectomy (38.21 ± 13.87). Reference [15] showed that the visuospatial function, visual memory, and verbal learning of patients with breast cancer chemotherapy were significantly lower than those of patients with surgery alone. At the same time, studies have shown that radiotherapy and other treatments have a superimposing effect, further aggravating the decline of cognitive function. A study by Huehnchen et al. [16] found that radiotherapy can also cause cognitive impairment in patients, and simultaneous radiotherapy and chemotherapy can cause more severe cognitive impairment. One study [17] assessed cognitive function in 60 patients with early breast cancer before surgery, after surgery, before chemotherapy, and after chemotherapy and found that patients at each stage had cognitive dysfunction, indicating that surgery and chemotherapy may both cause cognitive dysfunction. The above shows that surgery can add to the psychological and physical trauma of breast cancer patients and that different surgical procedures and postoperative radiotherapy can have an impact on cognitive function. In clinical practice, medical staff must pay special attention to patients undergoing radical mastectomy and radiotherapy and, through effective communication and information support, instruct patients to face their illness correctly and avoid the psychological pressure of negative thoughts, so as to improve patients' cognitive function and quality of life.
Summary
So far, no drugs have proven clearly effective for the recovery of cognitive function after chemotherapy. And because of the side effects of neurostimulant drugs, current research focuses more on cognitive behavioral psychotherapy and other new therapies such as traditional Chinese medicine. Cognitive behavior therapy uses cognitive and behavioral methods to change patients' inappropriate cognition and correct patients' unhealthy behaviors [18]. Chen Xiaomin [19] showed that computer virtual rehabilitation training can improve CRCI by simulating realistic game scenes with high interaction with patients, providing personalized treatment plans for different patients to help them improve their cognitive functions. This is consistent with the view of Chai Lijun [20]. Studies have shown that traditional Chinese medicine interventions, reconciling qi and blood, nourishing the heart, and acupuncture can effectively alleviate and improve the cognitive dysfunction of breast cancer patients after chemotherapy [21,22]. The research of Tong Taishan [23] also supported this view and further pointed out that the benefit of acupuncture therapy is mainly manifested in the recovery of patients' subjective cognition, memory, and visuospatial ability. In addition, high- and low-frequency conversion music therapy and physical exercise can effectively improve the cognitive function of patients with breast cancer chemotherapy and improve their quality of life [24,25]. In breast cancer chemotherapy patients, with the increase in the number of chemotherapy sessions and the prolongation of the chemotherapy cycle, the cognitive function of the patients is impaired to varying degrees. Clinical medical staff should communicate with patients more, so that patients have a comprehensive understanding of disease-related knowledge, reduce patients' psychological pressure, increase confidence in overcoming the disease, and ultimately promote patients' physical and mental health and improve their quality of life. In the treatment of CRCI in breast cancer patients, more attention should be paid to nondrug treatment and cognitive behavioral psychotherapy, and to the psychological care and rehabilitation training of patients. Attention should also be paid to the integration of Chinese and Western medicine, giving full play to the unique advantages of China's traditional Chinese medicine industry in the health sector, and strengthening international cooperation in multicenter national and global collaborative research.
Data Availability
The data can be obtained from the author upon reasonable request.
Ethical Approval
This study has been approved by the ethics committee of Jinzhou Medical University.
version: v3-fos-license

added: 2018-12-07T11:35:00.393Z | created: 2001-01-01T00:00:00.000 | id: 55526170 | source: pes2o/s2orc
metadata: {
  "extfieldsofstudy": ["Biology"],
  "oa_license": "CCBY",
  "oa_status": "HYBRID",
  "oa_url": "https://newprairiepress.org/cgi/viewcontent.cgi?article=6694&context=kaesrr",
  "pdf_hash": "9657e946ae099a9b0952618fcb6c52b13e118022",
  "pdf_src": "Anansi",
  "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44261",
  "s2fieldsofstudy": ["Agricultural And Food Sciences", "Medicine"],
  "sha1": "9657e946ae099a9b0952618fcb6c52b13e118022",
  "year": 2001
}
text:
Influence of dietary niacin on starter pig performance
Two experiments were conducted using 415 weanling pigs (175 in Exp. 1, 240 in Exp. 2) to determine the influence of dietary niacin inclusion on starter pig performance. Pigs were fed a control diet with no added niacin or the control diet with 25, 50, 75 or 100 g/ton of added niacin. From d 0 to 8, increasing dietary niacin increased ADG and ADFI up to 50 g/ton of added niacin. Overall, pigs fed increasing levels of niacin tended to have improved ADG. These results suggest feeding 50 g/ton of added dietary niacin to complex nursery pig diets to improve growth performance.; Swine Day, Manhattan, KS, November 15, 2001
Introduction
Niacin has long been accepted as an essential vitamin for swine diets; however, the optimal level of inclusion receives considerable debate. According to a 1997 survey of vitamin inclusion rates, the average inclusion rate for niacin was 39 g/ton. The average of the 25% of the companies with the highest inclusion rate was 61 g/ton. The average of the lowest 25% of the companies was 23 g/ton. Vitamin requirements of pigs are influenced by many factors including the health status, previous nutrition, vitamin levels in other ingredients in the diet, and level of metabolic precursors in the diet. The most recent data published in the U.S. on niacin inclusion in starter diets found a linear increase in ADG through 90 g/ton. Due to the paucity of data concerning niacin requirements of nursery pigs and the wide range in supplementation rates in the commercial industry, we conducted this experiment to determine the influence of niacin level in nursery diets on starter pig performance.
Procedures
In Exp. 1, 175 weanling pigs (10.7 lb and 12 ± 2 days of age) were used in a 35-d growth study. Pigs were housed in an environmentally regulated nursery in 4 × 4 ft pens at the Kansas State University Segregated Early Weaning facility. Pigs were provided ad libitum access to feed and water. Pigs were blocked by initial weight in a randomized complete block design. There were 7 replicate pens per treatment and each pen contained 5 pigs.
The trial was divided into four phases based on diet complexity (Table 1). The four phases were fed from d 0 to 4, d 4 to 8, d 8 to 22, and d 22 to 35. The first two diets were pelleted at the Kansas State University Grain Science Feed Mill using a 5/32 die and conditioned to 140°F. The last two diets were fed in meal form. Pigs were weighed and feed disappearance was determined to calculate ADG, ADFI, and F/G. In Exp. 2, 240 pigs (initially 10.8 lb, 12 ± 2 d) were housed in a research facility on a commercial grower farm in NE Kansas.
There were 8 pigs per pen (5 × 5 ft) with 6 pens per treatment, and pigs were allowed ad libitum consumption of feed and water. Pens of pigs were randomly assigned to dietary treatments, similar to Exp. 1. Pigs were also fed similar diets, and similar data were collected as in Exp. 1. Data from both trials were pooled and analyzed as a randomized complete block design with pen as the experimental unit using the GLM procedure of SAS. The model included linear and quadratic contrasts for increasing dietary niacin levels.
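A minimal sketch of an equivalent analysis in Python rather than SAS GLM (not the authors' code; the simulated pen-level data are hypothetical), showing a randomized complete block model with pen means as the experimental unit and linear/quadratic orthogonal polynomial contrasts for the five equally spaced niacin levels:

```python
# Illustrative only: RCBD with treatment and block effects, plus polynomial
# contrasts across equally spaced niacin levels (0, 25, 50, 75, 100 g/ton).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

levels = [0, 25, 50, 75, 100]          # g/ton added niacin
rng = np.random.default_rng(1)
pens = pd.DataFrame([
    {"block": b, "niacin": lvl,
     "adg": 0.60 + 0.001 * lvl - 0.00001 * lvl**2 + rng.normal(0, 0.03)}
    for b in range(13) for lvl in levels   # ~13 replicate pens per treatment (7 + 6)
])

# RCBD model: niacin level and weight block as categorical effects
model = smf.ols("adg ~ C(niacin) + C(block)", data=pens).fit()

# Orthogonal polynomial contrast coefficients for 5 equally spaced levels
linear = np.array([-2, -1, 0, 1, 2])
quadratic = np.array([2, -1, -2, -1, 2])

def contrast_test(fit, coefs):
    # Contrast over treatment cell means; with treatment coding, the mean of
    # level i is intercept + C(niacin)[T.i], and the intercept (and common
    # block contribution) cancels because the coefficients sum to zero.
    L = pd.Series(0.0, index=fit.params.index)
    for lvl, c in zip(levels[1:], coefs[1:]):
        L[f"C(niacin)[T.{lvl}]"] = c
    return fit.t_test(L.values)

print("linear contrast:\n", contrast_test(model, linear))
print("quadratic contrast:\n", contrast_test(model, quadratic))
```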
Results and Discussion
From d 0 to 8, pigs fed increasing levels of niacin had improved ADG (quadratic, P<0.05) and ADFI (quadratic, P<0.02), with pigs fed 50 g/ton additional niacin having the greatest improvement in ADG (Table 3). From d 8 to 22, increasing niacin tended to improve F/G (quadratic, P<0.11). Overall (d 0 to 38), pigs fed increasing levels of niacin tended to have improved ADG (quadratic, P<0.12). There were no differences (P>0.21) in overall ADFI or feed efficiency.
The niacin in corn is unavailable to the pig, while niacin from soybean meal is highly available. Excess dietary tryptophan can also be converted to niacin by pigs. The diets fed in this experiment were formulated to exceed the pigs' requirement for tryptophan (Table 2). When calculating the available niacin from the basal diet, including the potential niacin from tryptophan, the diets fed from d 0 to 22 were similar to the niacin requirement estimate (NRC, 1998). However, because feed intake was very low from d 0 to 8, increasing niacin improved growth performance. Therefore, these results suggest that nursery pigs require 25 to 50 g/ton of added niacin to improve growth performance.
Table footnotes: b Fed in meal form. c Provided 55 ppm carbadox from d 0 to 24 and 28 ppm from d 24 to 38. d Cornstarch was replaced by niacin (wt/wt) to provide supplemental niacin levels of 0, 25, 50, 75, 100 g/ton. e Vitamin premix provided 11,000 USP units vitamin A, 1,650 USP units vitamin D3, 40 IU vitamin E, 4.4 mg vitamin K, 44 µg B12, 10 mg riboflavin, and 33 mg pantothenic acid per kg of diet. f Trace mineral premix provided 165 mg zinc, 165 mg iron, 40 mg manganese, 17 mg copper, 0.30 mg iodine, and 0.30 mg selenium per kg of diet.
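A back-of-envelope sketch of the "available niacin" accounting described in the paragraph above; every input value and the tryptophan-to-niacin conversion ratio below are illustrative assumptions, not numbers taken from the paper:

```python
# Hypothetical illustration: preformed available niacin (corn niacin treated as
# unavailable) plus niacin credited from excess tryptophan, plus supplemental
# niacin converted from g/ton of diet to mg/kg of diet.
G_PER_TON_TO_MG_PER_KG = 1000.0 / 907.185   # 1 g/ton of diet ≈ 1.10 mg/kg

def available_niacin_mg_per_kg(soy_niacin_mg_per_kg: float,
                               added_niacin_g_per_ton: float,
                               trp_excess_mg_per_kg: float,
                               trp_to_niacin_ratio: float = 50.0) -> float:
    """Available niacin in the complete diet, mg/kg.

    Corn niacin is ignored (assumed unavailable to the pig); excess tryptophan
    is credited at `trp_to_niacin_ratio` mg Trp per mg niacin (an assumed,
    commonly cited order of magnitude, not a value from this report).
    """
    added = added_niacin_g_per_ton * G_PER_TON_TO_MG_PER_KG
    from_trp = trp_excess_mg_per_kg / trp_to_niacin_ratio
    return soy_niacin_mg_per_kg + added + from_trp

# Example with made-up inputs: 12 mg/kg available niacin from soybean meal,
# 50 g/ton supplemental niacin, and 500 mg/kg tryptophan above the requirement.
print(round(available_niacin_mg_per_kg(12, 50, 500), 1), "mg/kg")
```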
Table 3. Effects of Increasing Niacin on Nursery Pig Growth Performance
a Values represent means of two trials (7 replicates with 5 pigs/pen in Exp. 1, and 6 replicates with 8 pigs/pen in Exp. 2) that have been pooled.
b No treatment × trial interactions (P>0.20).
version: v3-fos-license

added: 2020-09-03T09:11:32.024Z | created: 2020-08-27T00:00:00.000 | id: 225234775 | source: pes2o/s2orc
metadata: {
  "extfieldsofstudy": ["Materials Science"],
  "oa_license": "CCBY",
  "oa_status": "HYBRID",
  "oa_url": "https://link.springer.com/content/pdf/10.1007/s00107-020-01587-w.pdf",
  "pdf_hash": "952c9b8cb81279f23822b42a5389b01716b44afd",
  "pdf_src": "Adhoc",
  "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44262",
  "s2fieldsofstudy": ["Materials Science", "Engineering"],
  "sha1": "6b7b642c17a7ff7683cc1cb00e77c6954cb55361",
  "year": 2020
}
text:
Experimental assessment of failure criteria for the interaction of normal stress perpendicular to the grain with rolling shear stress in Norway spruce clear wood
The anisotropic material behavior of wood, considered as a cylindrically orthotropic material with annual rings, leads to several different failure mechanisms already under uniaxial stresses. Stress interaction becomes important in the engineering design of structural elements and is often predicted by failure criteria based on uniaxial properties. The prediction quality of failure criteria has been assessed with longitudinal shear stress interaction, though less is known on rolling shear stress in interaction with stress perpendicular to the grain. The study aims at investigating the corresponding mechanical behavior of Norway spruce (Picea abies) clear wood by validating failure envelopes for stress combinations in the cross-sectional plane, based on experimental investigations. For this purpose, a test setup that controls the stress interaction and loading of clear wood along pre-defined displacement paths needed to be developed. Experimentally defined failure states could then be compared to failure surfaces predicted by the phenomenological failure criteria. Material behavior was quantified in terms of stiffness, strength, and elastic and post-elastic responses on dog-bone shaped specimens loaded along 12 different displacement paths. A comparison with failure criteria for two nominal compressive strain levels showed that a combination of failure criteria would be required to represent the material behavior and consider the positive effect of compressive stresses on the rolling shear strength. The findings of this work will contribute to studying local stress distribution of structural elements and construction details, where stress interactions with rolling shear develop.
Introduction
Multiaxial and complex stress states can arise in timber structures depending on the position and direction of the applied force with regard to the wood grain direction. This requires special attention due to the material's anisotropy as a consequence of its heterogeneous and porous microstructure (Kollmann et al. 2012). In particular on the material scale, where the local material orientations are commonly different from the global specimen geometry and loading, multiaxial stress states occur. The annual ring structure in a radial (R)-tangential (T) cross-section of wood that is loaded by uniform compression perpendicular to the grain leads to a non-uniform stress state and a combination of normal stresses in the R and T direction with rolling shear. Compression perpendicular to the grain, as a macroscopic phenomenon in timber engineering, and the influence of boundary conditions have been studied extensively by Hall (1980), Hoffmeyer et al. (2000), Blass and Görlacher (2004), Bleron et al. (2011), Leijten et al. (2012) and Gehri (1997). Phenomenological models for the engineering design have been developed by Madsen et al. (1982), Van der Put (2008), EN 1995-1-1 (2004) and Lathuilliere et al. (2015). However, on the local material scale, the interaction of stresses in the principal material directions should be considered, though reliable test data and validated design formulas are missing. The interaction of stresses perpendicular to the grain with rolling shear also has a significant role in the failure of cross-laminated, engineered wood-based products (Ehrhart and Brandner 2018). The very low value of the rolling shear modulus, which is as low as one-tenth to one-twentieth of the perpendicular-to-the-grain modulus of elasticity, is a consequence of the anatomical and inhomogeneous, fibrous structure of wood.
The design of structural elements made from engineered wood-based products, connections, beam elements with holes and notches, etc. demands special attention in terms of material behavior under stress interaction. A failure envelope that can predict the strength of wood for such combinations of stresses is essential for reliable design. It should even consider the compression-tension asymmetry in combination with shear stresses (Steiger and Gehri 2011). Three-dimensional anisotropic phenomenological failure criteria have been proposed for this purpose. However, these merely describe the phenomenon of failure, but neither the material behavior nor the failure mechanism (Cabrero et al. 2012; Kasal and Leichti 2005). Most of these phenomenological failure criteria were developed for composite materials based on isotropic failure criteria. The validation of these anisotropic failure criteria for stress interaction in natural orthotropic materials such as wood has attracted less attention, especially for combinations with rolling shear stresses.
The aim of this paper is to investigate the mechanical behavior of clear wood under stresses perpendicular to the grain in interaction with rolling shear stresses. More specifically, the objective of this study is to validate failure envelopes for this stress combination in Norway spruce (Picea abies), based on experimental investigations. For this purpose, it was necessary to develop a test setup that allows controlling the stress interaction and loading of clear wood along pre-defined displacement paths. The experimentally defined failure states could then be compared to failure surfaces predicted by the phenomenological failure criteria. Previous experimental, analytical, and theoretical research related to stress interaction with shear stresses is reviewed in Sect. 2, before the experimental setup and the materials used in this study are described in Sect. 3. Section 4 presents and discusses the results and Sect. 5 concludes the paper.
Experimental testing and analytical equations for stress interactions in wood
Most data from the material properties determined through experiments and reported in the literature are related to the uniaxial material behavior of wood, though very limited research has been carried out to study the mechanical behavior of wood under biaxial or more complex stress states. The failure criteria for materials are mainly defined based on uniaxial strength properties, though to validate the stress interaction prediction of the failure criteria, biaxial testing of wood is required. Biaxial testing requires more advanced testing equipment to control and quantify the stress and strain state of the specimen. The literature reviewed here relates to both biaxial test setups and uniaxial shear testing devices, which are discussed and have been used to develop the test series in the R-T plane performed in this work.
Several uniaxial test setups for rolling shear testing of clear wood are available and have been developed to establish a uniform shear stress state in the most critical part of a test specimen. These test setups include the Iosipescu shear test (Iosipescu 1967; Dumail et al. 2000), the Arcan shear test (Arcan et al. 1978), the two-rail shear fixture (Melin 2008), the single cube apparatus (Hassel et al. 2009), and the off-axis shear fixture according to EN 408 (2010), which is a 14° inclined compressive test where the force is applied at an inclination of 14° to the shear plane or longitudinal axis of the specimen. Magistris and Salmén (2004, 2005) investigated the scope of using the Wyoming version of the Iosipescu and Arcan shear fixtures by adding some extra features to obtain an interaction of stresses through a uniaxially applied force. Two rotating plates were added to the lower and upper parts of the Wyoming version of the Iosipescu shear fixture to change the ratio between shear stress and normal stress. They conducted a feasibility study of their in-house modified Iosipescu device, for pure shear and combined shear with compression interaction, on orthotropic medium density fiberboard and Norway spruce solid wood. The experiments were carried out on 90° notched specimens with a depth of 3.50 mm, for small displacements, within the elastic limit.
A U-shaped fixture was added to the Arcan shear device to prevent movement as well as rotation of the specimen in the third direction (Stenberg 2002). Magistris and Salmén (2005) investigated the scope of using this in-house modified Arcan device for a comparatively thick wood specimen in the case of combined loading. After finding the device suitable for combined loading in wood, they used the setup to study the deformation pattern of wet wood fibers (Norway spruce) in the longitudinal (L)-tangential (T) plane at an elevated temperature of 50 °C and 90 °C, under compression, shear, and combined compression with shear loading. The device was even used for repeated loading to study the energy consumption required to collapse the wood cells under different loading combinations and repeated loading. The specimen size was 2 × 40 × 15 mm 3 (R, T, and L directions).
Even though the setups developed by Magistris and Salmén (2004) were found suitable for biaxial testing in proof-of-concept tests, they did not allow for the direct control and quantification of the displacements and forces in two orthogonal directions, which makes the derivation of a failure envelope difficult. Spengler (1982) performed a study on Norway spruce glued-laminated timber (glulam) specimens that were subjected to the combination of stress perpendicular to the grain with shear in the L-R plane. He used L-shaped steel plates to apply loading on a rectangular shaped specimen in two perpendicular directions. The specimens had a length of 220 mm, a width of 80 mm to 140 mm and a thickness of 22 to 33 mm, and were adhesive-bonded to the steel plates.
Fig. 1 Specimen's shape and dimensions: a Q-type, b V-type, c T-type, d D-type, e illustration of displacement paths for uniaxial and biaxial testing; f specimen's origin; g principal material axes; test orientations with corresponding stresses and strains: h R-orientation, i T-orientation.
The setup was comparably simple, but required gluing of the specimens. Without a continuous force transmission through the adhesive bond, however, it would lead to stress concentrations in the corner points, which could be problematic particularly when studying the R-T plane with lower stiffness properties and different Young's modulus to shear modulus ratios.
Phenomenological failure criteria generally describe a surface in the six-dimensional stress space represented by mathematical expressions. Most of the anisotropic strength criteria are based on isotropic yield criteria (Cabrero et al. 2012), of which only a few are developed for wood. This work focuses on the in-plane interaction of stresses perpendicular to the grain with rolling shear stress, and the corresponding failure criteria can be illustrated in a 2D representation.
In the case of uniaxial stress, a material fails when the maximum normal stress or shear stress reaches the corresponding strength value. When considering stresses parallel to the anatomical directions of wood, the maximum stress criterion in the R-T plane can be expressed in terms of the stress components, where σ_RR and σ_TT are the normal stress components of the stress tensor σ_ij in the corresponding material directions, f_R is the strength in the radial direction, f_T is the strength in the tangential direction, τ_RT is the rolling shear stress, and f_v,RT is the rolling shear strength of wood.
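A hedged reconstruction of the maximum stress criterion referenced above, using the notation just defined (the original display may differ in presentation):
$$
\frac{|\sigma_{RR}|}{f_R} \le 1, \qquad \frac{|\sigma_{TT}|}{f_T} \le 1, \qquad \frac{|\tau_{RT}|}{f_{v,RT}} \le 1,
$$
with failure predicted as soon as any one of the three ratios reaches unity.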
For in-plane stress states in orthotropic materials, this uniaxial strength criterion can simply be extended to one which considers orthotropic material properties and linear interaction (Aicher and Klöck 2001), formulated for stresses in the R-T plane of wood. The widely used Hankinson (1921) formula to determine the strength, f_α, of wood loaded under an angle α to the grain was derived from a linear strength criterion. Hankinson's formula can be expressed in terms of f_L, the strength parallel to the grain, and f_RT, the strength perpendicular to the grain. f_RT is used in engineering applications as an effective strength perpendicular to the grain, which could be replaced by the strengths in the radial or tangential direction. In general, n = 2 is used for compressive strength and n = 1.5 is used for tensile strength (Mascia and Simoni 2013). The simplest quadratic criterion, resulting in an ellipsoidal failure surface for in-plane stress states in the R-T plane, would allow for exploitation of a larger stress space compared to the linear interaction criterion given in Eq. (4).
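Hedged reconstructions of the three displays this paragraph refers to (linear interaction, Hankinson's formula, and the quadratic criterion); the exact forms and equation numbers in the original may differ:
$$
\frac{\sigma_{RR}}{f_R} + \frac{\sigma_{TT}}{f_T} + \frac{\tau_{RT}}{f_{v,RT}} = 1,
$$
$$
f_\alpha = \frac{f_L\, f_{RT}}{f_L \sin^{n}\alpha + f_{RT}\cos^{n}\alpha},
$$
$$
\Big(\frac{\sigma_{RR}}{f_R}\Big)^{2} + \Big(\frac{\sigma_{TT}}{f_T}\Big)^{2} + \Big(\frac{\tau_{RT}}{f_{v,RT}}\Big)^{2} = 1 .
$$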
The von Mises strength criterion is based on the maximum distortional energy theory. It assumes that the hydrostatic part of the stress tensor does not contribute to the yielding of the material. This criterion includes an interaction term between the normal stresses that is not included in the quadratic failure criterion in Eq. (6). This interaction term affects the shape of the failure envelope. The von Mises criterion is applicable to materials that exhibit metal-like plasticity, but not to wood. Hill (1950) extended the von Mises strength theory by considering that a material behaves anisotropically when plasticity occurs, and formulated a criterion for plane stress states. Later on, Azzi and Tsai (1965) adapted this criterion to the case of transversely isotropic composite materials. This criterion is known as the Tsai-Hill criterion and can be expressed for plane stress states. It has been applied to a number of timber engineering problems (Cabrero et al. 2012; Mascia and Simoni 2013).
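Hedged reconstructions of the von Mises and Tsai-Hill expressions discussed here, written for plane stress in the R-T plane (the intermediate Hill form with its additional anisotropy coefficients is omitted):
$$
\sigma_{RR}^{2} - \sigma_{RR}\sigma_{TT} + \sigma_{TT}^{2} + 3\,\tau_{RT}^{2} = f^{2}
\qquad \text{(von Mises, isotropic yield stress } f\text{)},
$$
$$
\Big(\frac{\sigma_{RR}}{f_R}\Big)^{2} - \frac{\sigma_{RR}\sigma_{TT}}{f_R^{\,2}} + \Big(\frac{\sigma_{TT}}{f_T}\Big)^{2} + \Big(\frac{\tau_{RT}}{f_{v,RT}}\Big)^{2} = 1
\qquad \text{(Tsai-Hill)}.
$$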
One of the first strength criteria for wood was formulated by Norris (1962), who postulated that failure of the material occurs if any one of three equations is satisfied. The only difference between the von Mises criterion in Eq. (7) and the Norris criterion in Eq. (10) is the factor multiplying the shear term. Hence, it gives the same surface as von Mises in the normal stress plane, but a different surface when considering the interaction between normal and shear stress. All of the above-mentioned anisotropic failure criteria consider the tensile and compressive strength of a material to be the same, which is not the case for wood. Note that for plane stress states, for the herein considered combination of normal and shear stress, the Quadratic, Tsai-Hill, Hill, and Norris failure criteria reduce to the same limit curve. Hoffman (1967) proposed a failure criterion based on Hill's criterion, accounting for the difference between the tensile and compressive strength of the material, f_c,R for σ_RR < 0, f_t,R for σ_RR > 0 and f_c,T for σ_TT < 0, f_t,T for σ_TT > 0. It was originally formulated as a quadratic function (Schellekens and De Borst 1990) with nine independent variables. By substituting the material coefficients determined from experiments, Hoffman's failure criterion can be written for plane stress, where the index t indicates the tensile strength properties and c indicates the compressive strength properties. This criterion has been widely used for ductile failure in metals, as well as for brittle failure in fibrous materials like wood (Mascia and Simoni 2013).
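Hedged reconstructions of the Norris and Hoffman plane-stress criteria referenced above, with all strengths taken as positive magnitudes (sign conventions and normalization in the original may differ). Norris postulates failure when any of
$$
\Big(\frac{\sigma_{RR}}{f_R}\Big)^{2} - \frac{\sigma_{RR}\sigma_{TT}}{f_R f_T} + \Big(\frac{\sigma_{TT}}{f_T}\Big)^{2} + \Big(\frac{\tau_{RT}}{f_{v,RT}}\Big)^{2} = 1,
\qquad
\Big(\frac{\sigma_{RR}}{f_R}\Big)^{2} = 1,
\qquad
\Big(\frac{\sigma_{TT}}{f_T}\Big)^{2} = 1
$$
is satisfied, while the Hoffman criterion distinguishing tension and compression reads
$$
\frac{\sigma_{RR}^{2} - \sigma_{RR}\sigma_{TT}}{f_{t,R} f_{c,R}} + \frac{\sigma_{TT}^{2}}{f_{t,T} f_{c,T}}
+ \Big(\frac{1}{f_{t,R}} - \frac{1}{f_{c,R}}\Big)\sigma_{RR}
+ \Big(\frac{1}{f_{t,T}} - \frac{1}{f_{c,T}}\Big)\sigma_{TT}
+ \frac{\tau_{RT}^{2}}{f_{v,RT}^{2}} = 1 .
$$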
Tsai and Wu (1971) proposed a considerably more versatile tool to handle multi-axial stress states, and thus the combination of normal stresses with shear stresses, in terms of strength tensors in polynomial form. The criterion is invariant under coordinate transformation and can be represented in index notation as F_ij σ_ij + F_ijkl σ_ij σ_kl = 1 at failure, where F_ij is the second-order strength tensor and F_ijkl is the fourth-order strength tensor. The equation can be written in expanded form for plane stress conditions in the R-T plane, Eq. (15), with coefficients defined from the uniaxial strength values. Compared to the other failure criteria discussed above, the Tsai-Wu criterion includes an interaction coefficient, F_RRTT, that is independent of uniaxial strength values; a prior biaxial experiment is required to determine the value of this interaction term. In other failure criteria like Hill, Tsai-Hill, Norris, and Hoffman, the stress interaction term is defined from uniaxial strength values only. For stability and a closed failure surface, the condition F_RRRR F_TTTT - F_RRTT^2 >= 0 has to be fulfilled, which consequently limits the normalized interaction term to -1 <= F*_RRTT <= 1. Kasal and Leichti (2005) mentioned that this term can be defined in several ways depending on the testing procedure used to generate the biaxial stress state. Hence, different researchers have used different methods to account for this interaction. According to Kasal and Leichti (2005), another interaction term involving shear stress should also be considered in Eq. (15). They state the difficulty of determining this interaction coefficient as a reason for omitting additional shear interaction terms. Eberhardsteiner (2013) carried out numerous biaxial experiments on clear wood of Norway spruce in the L-R plane, in which specimens were subjected to two orthogonal normal stresses for different load-to-grain angles. Cruciform Norway spruce test specimens were displacement-loaded along different displacement paths, prescribing different ratios of the orthogonal displacements. Tests were performed with different grain angles in the test specimen, which led to numerous investigated stress states with shear in the L-R plane. The point corresponding to the initial maximum stress value (Mackenzie-Helnwein et al. 2003) of any of the global normal stresses was considered as material failure. The obtained failure envelope revealed an elliptical surface that agrees well with the elliptical failure criteria, though this kind of phenomenological failure criterion was unable to distinguish between different failure modes. The experiments also included pure shear stress in the L-R plane, which was obtained for a grain angle of 45° and a displacement ratio of 1:1. Based on these experiments, Mackenzie-Helnwein et al. (2003) defined a multi-surface orthotropic failure criterion in plane stress that considered four surfaces for four failure modes: tensile or brittle failure in the fiber direction, brittle tensile failure perpendicular to the grain, compression failure parallel to the grain, and ductile failure in compression perpendicular to the grain. The biaxial experimental data from Eberhardsteiner (2013) was further used by Cabrero et al. (2012) to validate some of the established phenomenological anisotropic failure criteria for wood in the L-R plane.
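As the expanded Eq. (15) and its coefficients are not reproduced in the extracted text, the following sketch assembles the Tsai-Wu criterion for the R-T plane from the standard coefficient definitions (an assumption) and checks the closed-surface condition mentioned above; the normalized interaction coefficient F* must be supplied from a biaxial test.

```python
import math

def tsai_wu_plane_stress(s_RR, s_TT, t_RT,
                         f_tR, f_cR, f_tT, f_cT, f_v,
                         F_star=0.0):
    """Tsai-Wu criterion in the R-T plane, built from uniaxial strengths.

    Uses the standard coefficient definitions (an assumption, since the
    paper's expanded Eq. (15) is not reproduced above):
        F_RR   = 1/f_tR - 1/f_cR        F_RRRR = 1/(f_tR*f_cR)
        F_TT   = 1/f_tT - 1/f_cT        F_TTTT = 1/(f_tT*f_cT)
        F_SS   = 1/f_v**2
        F_RRTT = F_star * sqrt(F_RRRR*F_TTTT),  with -1 <= F_star <= 1
    F_star must come from a biaxial test; failure is predicted at a value of 1.
    """
    F_RR, F_TT = 1.0/f_tR - 1.0/f_cR, 1.0/f_tT - 1.0/f_cT
    F_RRRR, F_TTTT = 1.0/(f_tR*f_cR), 1.0/(f_tT*f_cT)
    F_SS = 1.0/f_v**2
    F_RRTT = F_star * math.sqrt(F_RRRR * F_TTTT)
    # Closed (elliptical) failure surface requires F_RRRR*F_TTTT - F_RRTT**2 >= 0.
    assert F_RRRR * F_TTTT - F_RRTT**2 >= -1e-12
    return (F_RR*s_RR + F_TT*s_TT
            + F_RRRR*s_RR**2 + F_TTTT*s_TT**2 + F_SS*t_RT**2
            + 2.0*F_RRTT*s_RR*s_TT)

# Illustrative call with placeholder strengths and no normal-stress interaction.
print(tsai_wu_plane_stress(-2.0, 0.0, 1.0, 3.0, 4.0, 3.0, 5.0, 1.5, F_star=0.0))
```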
The study showed that none of the failure criteria could predict a full failure envelope; rather, their suitability depends on the biaxial stress state, i.e., on the combination of tensile stress, compressive stress or both. The best suited criterion in one stress state could be the worst in another. For the first quadrant, with the combination of tension parallel and perpendicular to the grain, the best-suited criteria were Cowin, Norris, and Tsai-Wu, whereas in the fourth quadrant, comprising compression parallel to the grain and tension perpendicular to the grain, Norris and Cowin were the worst-fitting criteria. However, it remains unclear whether similar conclusions would be found in the R-T plane, which is the aim of this work. Mascia and Simoni (2013) conducted a study of failure criteria, such as Hill, Tsai-Hill, Tsai-Wu, Hoffman, and Norris, by comparing them to uniaxial and biaxial experimental tests carried out by Todeshini, cited in Mascia and Simoni (2013), on two Brazilian wood species, Pinus elliotti and Goupia glabra. The experiments combined compression parallel to the grain with stress perpendicular to the grain, shear, and off-axis tensile tests. The investigation showed that the Tsai-Hill and Hoffman criteria fitted the data more adequately than the other criteria for both species. They mentioned that the failure curves generated by Tsai-Wu, Tsai-Hill, and Hoffman differ significantly in the third (combined compression) and fourth (compression with tension) quadrants, but are similar in the first (combined tension) and second (tension with compression) quadrants. However, regarding the failure surface for the combination of stress perpendicular to the grain with rolling shear, the investigation is limited to two quadrants, since for the combination of rolling shear with compression or with tension perpendicular to the grain, no distinction between positive and negative shear stresses is made. Steiger and Gehri (2011) used the experimental results from Spengler (1982) and other sources, as well as their own shear tests on glulam beams, to validate the SIA 265 design equation (SIA 265 2012), which defines the strength for the combination of stress perpendicular to the grain with L-R shear stress. They state that the tension perpendicular to the grain and shear strength, as well as their interaction, are influenced by the size of the stressed volume, as seen from the determination of the shear stiffness and strength of glulam beams. Good correlation was observed between the biaxial experiments by Spengler (1982) and the SIA 265 design equation. The latter is based on the assumptions that (i) the applicable shear stress is equal to the shear strength when the stress perpendicular to the grain is zero; (ii) the shear stress reduces with increasing tensile stresses perpendicular to the grain and becomes zero when the tensile strength perpendicular to the grain is reached; and (iii) the shear stress can be increased above the pure shear strength, up to a maximum applicable shear stress at the compressive strength perpendicular to the grain. A further increase in loading will induce crushing failure due to compression perpendicular to the grain.
The design equation is based on an elliptical failure criterion and is formulated in the R-T plane, Eq. (23), for the range -f_c,RT <= σ_RT <= f_t,RT, with f_c,RT as the strength perpendicular to the grain in compression and f_t,RT as the strength perpendicular to the grain in tension. Here σ_RT is the stress perpendicular to the grain (σ_RT = σ_t,RT in case of tensile stresses perpendicular to the grain and σ_RT = -σ_c,RT in case of compressive stresses perpendicular to the grain), which in engineering applications, as a simplification, is not distinguished into radial and tangential directions, but in design standards is rather specified as a tensile stress perpendicular to the grain, σ_t,90, or a compressive stress perpendicular to the grain, σ_c,90. Note that this design equation is intended for application in timber engineering and is commonly used with design values, with regard to material uncertainties.
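The design equation itself, Eq. (23), is not reproduced in the extracted text. Purely as an illustration of assumptions (i)-(iii) listed above, and explicitly not as the SIA 265 expression, one elliptical shear-normal stress interaction satisfying those three anchor points can be constructed as follows; all names and numbers are illustrative.

```python
import math

def tau_limit(sigma, f_t, f_c, f_v):
    """Shear stress limit from an illustrative ellipse satisfying (i)-(iii):
    tau = f_v at sigma = 0, tau = 0 at sigma = +f_t, maximum tau at sigma = -f_c.
    This is a generic construction, NOT the SIA 265 design equation itself.
    Valid for -f_c <= sigma <= f_t; compressive sigma is negative."""
    tau_max = f_v / math.sqrt(1.0 - (f_c / (f_t + f_c))**2)
    x = (sigma + f_c) / (f_t + f_c)   # normalized position along the ellipse axis
    return tau_max * math.sqrt(max(0.0, 1.0 - x**2))

# The three anchor points of the construction (illustrative strengths):
print(tau_limit( 0.0, f_t=1.2, f_c=3.0, f_v=1.5))   # approx. f_v = 1.5
print(tau_limit( 1.2, f_t=1.2, f_c=3.0, f_v=1.5))   # 0.0
print(tau_limit(-3.0, f_t=1.2, f_c=3.0, f_v=1.5))   # tau_max > f_v
```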
Norway spruce clear wood test specimens
The mechanical behavior of Norway spruce (Picea abies) clear wood under stress perpendicular to the grain with rolling shear interaction was studied by means of an experimental setup. The material originates from the Experimental Forest and Research Station in Asa, Sweden. The log of a Norway spruce tree was cut at 1.30 m and 2.70 m from the ground and sawn into pieces with cross-sectional dimensions of 75 × 75 mm², see Fig. 1. The wood pieces were positioned so that one of the edges follows the same annual ring at the outer side of the log (Fig. 1), and four pieces with similar annual ring patterns could be cut out.
The objective was to obtain homogeneous material properties and similar annual ring structure in all specimens without any knots or defects. The wood pieces were then dried in a research oven at Luleå University of Technology with a targeted equilibrium moisture content of 12%. In the development of the test setup, geometry and setup effects on material behavior under compression, rolling shear and stress interaction in the R-T plane were studied. For this purpose, different shapes of specimens and different force imposing systems were investigated, as described in the following. Four different specimen geometries and shapes were cut out from the boards after drying, see Fig. 1:
Q quadratic specimens with a dimension of 50 × 50 mm² (width × height),
V V-notched specimens with a dimension of 50 × 60 mm²,
T trapezoidal-notched specimens with a dimension of 50 × 60 mm²,
D dog-bone shaped specimens with a dimension of 50 × 60 mm².
All specimens had a length of 20 mm (in the longitudinal direction of the wood), except for the D-shaped specimens, which were 10 mm thick in the area of interest. Specimens were prepared with two orthogonal material orientations for testing with normal stresses in the radial (QR, VR, TR, DR) and the tangential (QT, VT, TT, DT) directions. Detailed notations and notch dimensions of the specimens are shown in Fig. 1. The prepared specimens were stored in a climate chamber at 20 °C room temperature and 65% relative humidity. Note that for the geometry and setup effect study, Q-type specimens were prepared from material at the east part of the stem, while V- and T-type specimens were from the west side of the stem. The average density was 492 kg/m³ for Q-type and 481 kg/m³ for V- and T-type specimens. D-type specimens for loading in the radial and tangential directions were prepared from material at the south and north sides of the stem. The average density for the specimens of series DR was 490 kg/m³ and of series DT was 508 kg/m³. The average moisture content was 12.80%, which was measured by means of the oven drying method. A total number of 104 specimens were tested, of which 72 tests were performed on D-shaped specimens to compare experimental findings with analytical failure criteria. A total of 38 uniaxial tests were performed for rolling shear, uniaxial compression and tension perpendicular to the grain, to study the geometry and force imposing setup effects.
Uniaxial and biaxial mechanical testing
The mechanical test setup was developed within the test frame 322 of manufacturer MTS, which was equipped with two servo-hydraulic actuators (MTS Model 661.20F) with a capacity of 100 kN for the vertical orientation and 50 kN for the horizontal orientation.
Two different test setups, with basically two different force imposing systems, were used, see Fig. 2. Spengler's biaxial test setup (Spengler 1982) was considered as a basis to develop the setup, due to its very simple configuration that allows straight-forward testing, particularly when using it without adhesive bonding. The study on effects of the shape of specimens as described above was performed with this setup, consisting of two L-shaped steel blocks connected to the moveable sledges of the test frame, see Fig. 2a, d. The sledge below the test specimen is moveable in the horizontal direction, whereas the cross-head above the specimen is moveable in the vertical direction. Both are almost rigid, particularly when considering the weak stiffness of the test specimen. Q-, V-, and T-type specimens were then simply put into the test setup, which had 10 mm high steel plates on both sides of the bottom and the top of the L-shaped blocks to laterally support the specimens. There was a smooth surface on the side supports. The force imposing system with L-shaped steel blocks is very simple, but could obviously lead to stress concentration at the corners. The wooden specimens could have been adhesively bonded to the steel supports in order to get a more distributed force transmission. However, this would have been very cumbersome and difficult to realize in this biaxial setup. Thus, mechanical grips were developed to more homogeneously introduce loading to the test specimens. For this purpose, the side support plates were replaced with mechanical grips using spiked steel plates with a height of 15 mm, see Fig. 2c, e. The side support was fixed to the loading device while it was attached to the bolts and a fixed steel support on the other side of the test specimen. These mechanical grips allowed for the specimen to be grasped, while the pyramid texture of the steel plates gave a rough surface with high friction and good connection between the loading device and the wooden specimen. Dog-bone shaped specimens were tested in this device. The effects of the force imposing device on the stress and displacement state of the specimen could be studied by comparing the mechanical grips with the L-shaped steel block. Moreover, for this setup, an external multi-axial load cell of type GTM-00037 (Gassmann Thelss Messtechnik) with a capacity of 5 kN in the vertical and the horizontal directions, the two directions evaluated in this study, was used to assess the accuracy of the larger in-built load cells in the loading range of the tests. A maximum difference of up to 12% between the in-built load cells and the external load cell was found, while the average difference for all tests was 6.5%. The maximum difference was observed in the quasi-elastic part of the test at comparatively low forces, whereas the difference was less for higher forces. Only the results of the external load cell data are provided in the results section.
Experiments were carried out in displacement control mode along several displacement paths, as illustrated in Fig. 1e. Rolling shear combined with tension and compression stresses is denoted by ST and SC, respectively, followed by a number that is related to the angle with respect to the shear plane. A displacement rate of 2 or 1 mm/min was applied in the case of compression and biaxial tests and 0.50 or 1 mm/min in rolling shear tests. No influence of these displacement rates on the overall force-displacement behavior was observed. In the case of combined loading, with a certain ratio of vertical to horizontal displacement, displacements were applied simultaneously with an overall displacement rate equal to 1 or 2 mm/min. In addition, two unloading sequences were applied, one in the quasi-elastic and one in the elasto-plastic domain. The first unloading cycle was performed at a force of 1 kN, while the second unloading cycle was applied at a displacement of 5 mm for the compression and biaxial tests on D-type specimens. At the beginning and the end of each unloading cycle, the force (for the first unloading cycle) or displacement (for the second unloading cycle) was kept constant for 5 s to reduce the influence of time-dependent effects on the unloading stiffness. The investigated displacement paths for the corresponding test series are illustrated in Fig. 1. Two or three specimens were tested in each loading path. Corresponding forces and displacements were measured by the internal actuator of the MTS test frame and the external load cell.
Note that the lateral boundary condition plays an important role, which is most obvious in uniaxial testing. Displacement controlled testing as outlined above means constraining the lateral displacement, which as a consequence of the Poisson effect leads to a biaxial stress state. In addition to the displacement path testing, shear tests and compression tests with unconstrained lateral boundary conditions were performed. For unconstrained testing, the vertical or horizontal force was limited to a maximum absolute value of 50 N, while the lateral displacements were not constrained. The latter test data was then compared to results from displacement-constrained loading to assess the effects on the determination of uniaxial mechanical properties.
A digital image correlation (DIC) system (Aramis, Gesellschaft für Optische Messtechnik mbH, Braunschweig, Germany) was used externally to measure the strain fields on the surface of the test specimens, as well as, through point markers, the displacements of the loading devices during the experiments. This data allowed a comparison of the internal displacement measurement of the test frame with the optically determined displacement states. For DIC measurements, the specimens were sprayed with a very thin black speckle pattern on a very thin white base layer. The desired point size of the speckle pattern was 2-3 pixels (P). Two 12 MP cameras were used to continuously capture images at a rate of 1 Hz (one picture per second) in the elastic and the beginning of the elasto-plastic part, followed by a rate of 0.50 Hz (one picture every 2 s) during the remaining test period. The field of view for the DIC was chosen as approximately 190 × 140 mm². A facet size of 19 P together with a grid spacing of 15 P (parameters were set based on recommendations of the supplier and a preliminary study) resulted in a distance of approximately 1.20 mm between the measurement points. A noise study was carried out before each experiment to check the suitability of the combination of speckle pattern, illumination, and camera settings. Displacements measured by the control system of the test frame were compared to displacements of point markers measured by DIC. The maximum difference was less than 10%.
Data evaluation and comparison with failure criteria
The direct outputs of the tests were the displacement of the loading device as well as the load cell data, which were then used for the calculation of nominal engineering strains and stresses. The tensile or compressive strains, ε_RR and ε_TT, were calculated as the vertical displacement of the force imposing device divided by the unsupported height of the specimen, which was 50 mm for Q-type specimens and 60 mm for V-, T-, and D-type specimens with the L-profile, while for the gripped plates it was 20 mm for Q- and 30 mm for V-, T-, and D-type specimens. The rolling shear strain, γ_RT = 2ε_RT, was determined as the horizontal displacement divided by the unsupported height of the specimens given above. For comparison reasons, average strains in the center of the specimens were also determined as the average strains on the surface of the specimens measured with DIC. The normal stresses, σ_RR and σ_TT, were determined as the vertical force divided by the initial minimum cross-section, and the rolling shear stress, τ_RT, was calculated as the horizontal force divided by the initial minimum cross-section in the center of the specimens. The corresponding cross-sectional areas amounted to 1000 mm² for Q-shaped specimens, 800 mm² for V- and T-shaped specimens, and 300 mm² for D-shaped specimens.
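A compact restatement of this evaluation as a sketch; the unsupported heights and cross-sections are the values quoted above, while the dictionary keys, function name and example numbers are illustrative.

```python
# Nominal engineering strains and stresses as described above (sketch).
# Unsupported heights (mm): L-profile setup ("L") / gripped setup ("G"), per specimen type.
HEIGHT_MM = {"Q": {"L": 50.0, "G": 20.0},
             "V": {"L": 60.0, "G": 30.0},
             "T": {"L": 60.0, "G": 30.0},
             "D": {"L": 60.0, "G": 30.0}}
# Initial minimum cross-sections (mm^2) used for the nominal stresses.
AREA_MM2 = {"Q": 1000.0, "V": 800.0, "T": 800.0, "D": 300.0}

def nominal_state(specimen, setup, u_vert_mm, u_horiz_mm, F_vert_N, F_horiz_N):
    """Return (normal strain, shear strain gamma_RT, normal stress, shear stress)."""
    h = HEIGHT_MM[specimen][setup]
    A = AREA_MM2[specimen]
    eps = u_vert_mm / h        # eps_RR or eps_TT, depending on specimen orientation
    gamma = u_horiz_mm / h     # gamma_RT = 2*eps_RT
    sigma = F_vert_N / A       # sigma_RR or sigma_TT in N/mm^2
    tau = F_horiz_N / A        # tau_RT in N/mm^2
    return eps, gamma, sigma, tau

# Example: a D-type specimen in the gripped setup at 0.3 mm / 0.3 mm and 900 N / 450 N.
print(nominal_state("D", "G", 0.3, 0.3, 900.0, 450.0))  # (0.01, 0.01, 3.0, 1.5)
```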
The gradient from half of the first unloading path of the stress-strain curve, see Fig. 3, was considered when calculating the Young's moduli of elasticity in the radial and tangential directions, E R and E T , and the rolling shear modulus, G RT . The unloading path was chosen to determine the elastic material parameters, since elasticity is defined as the mechanically recoverable energy stored in the loaded sample (Bader et al. 2016).
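A minimal sketch of this modulus evaluation, assuming the unloading segment has already been extracted from the record and that "half of the first unloading path" refers to its first half (this halving convention is an assumption); the example data are synthetic.

```python
import numpy as np

def unloading_modulus(strain, stress):
    """Modulus from the first unloading path: linear fit over half of the
    unloading segment (sketch; the segment is assumed to be pre-extracted
    and ordered from the start of unloading to its end)."""
    strain = np.asarray(strain, dtype=float)
    stress = np.asarray(stress, dtype=float)
    half = len(strain) // 2
    # Fit stress = E * strain + c over the first half of the unloading branch.
    E, _ = np.polyfit(strain[:half], stress[:half], 1)
    return E

# Synthetic unloading branch: from 3.0 N/mm^2 at 1% strain with E = 400 N/mm^2.
eps = np.linspace(0.010, 0.000, 21)
sig = 400.0 * (eps - 0.010) + 3.0
print(unloading_modulus(eps, sig))   # ~400.0
```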
To compare the test data with failure criteria, stress states at specific strain states were plotted in the σ_RR-τ_RT and σ_TT-τ_RT stress planes. Stress points were considered at 1% and 2% compressive strain. These stress points were chosen to study the onset and the development of plastic failure of the material. They were then compared to the previously suggested failure criteria discussed in Sect. 2. The predictions of the failure criteria were calculated by using the uniaxial strength properties as determined in uniaxial tests. The uniaxial compressive strength was determined as the stress at 1% compressive strain, while the maximum stress was considered as the strength in tensile and rolling shear tests. In the case of failure criteria that do not distinguish between tensile and compressive strength, the compressive strength was considered for the calculations (Fig. 3 shows the stress-strain relationship, the loading sequences, and the calculation of the modulus of elasticity and the shear modulus from the first unloading path). To assess the suitability of the phenomenological failure criteria, the prediction capability R of these failure criteria was calculated instead of fitting the criteria. An R-value equal to 1 means that the failure prediction is perfect, R > 1 means the criterion underestimates and R < 1 means it overestimates the failure. R is calculated from the experimental stress and material strength values.
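The exact expression for R is not reproduced in the extracted text. One plausible reading, sketched below for the quadratic criterion, takes R as the value of the criterion function evaluated at the experimental stresses, so that R = 1 lies on the predicted envelope; whether an additional root is taken in the paper cannot be recovered from the extraction, and all numbers in the example call are illustrative.

```python
def prediction_capability_quadratic(s_RR, s_TT, t_RT, f_R, f_T, f_v):
    """Prediction capability R for the quadratic (ellipsoidal) criterion,
    read as the criterion value at the experimental failure stresses
    (assumption -- the paper's exact expression is not reproduced above).
    R = 1: perfect prediction; R > 1: criterion underestimates the strength;
    R < 1: criterion overestimates it."""
    return (s_RR / f_R)**2 + (s_TT / f_T)**2 + (t_RT / f_v)**2

# For s_TT = 0 this reduces to (s_RR/f_R)**2 + (t_RT/f_v)**2, cf. the text.
print(prediction_capability_quadratic(-2.0, 0.0, 1.2, f_R=4.0, f_T=4.5, f_v=1.5))
```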
The value of R, for example in the case of the quadratic criterion, is obtained by evaluating the criterion with the experimental stresses and the uniaxial strengths, which for σ_TT = 0 in the considered stress interaction reduces to a combination of the normal stress and rolling shear stress terms only.

Results and discussion
Test setup effects
During the development of a suitable and efficient setup for biaxial testing in the R-T plane, test setup effects such as geometry and boundary or force imposing effects were studied. Loading was applied by means of two different devices, one with direct compression contact in L-shaped steel blocks and one with mechanical grips. Note that the L-blocks restricted the horizontal (global x-direction) deformation of the test specimens in the corners, which led to shear stress even under pure compressive force. Moreover, compression tests were conducted with constrained horizontal displacement, which yielded a global shear force. The L-block setup developed a higher shear force than the mechanical grips, which was particularly pronounced in biaxial loading. Mechanical grips, on the other hand, allowed for deformation of the specimens in the horizontal direction, but led to stronger constraints in the load introduction area, where deformation in the transverse direction (global z-direction) was also constrained. Another important difference between the loading setups was that the mechanical grips reduced the unsupported height of the specimen.
Test specimens were tested in two different orientations, where either the radial or the tangential direction was parallel to the global y-direction, see Fig. 1. In radial compression, cell layers are compressed and the void volume is continuously reduced until the cells are fully compressed and the cell layer material is fully densified (Mackenzie-Helnwein et al. 2003). In tangential compression, however, buckling of the latewood cell layers is the predominant failure mechanism (Bodig 1965; Tabarsa 1999), and thus the height of the specimen is of importance for the overall force carrying capacity. Due to these differences in the material response in the two orthogonal directions, setup effects were more pronounced for testing in the tangential direction than in the radial direction. This is clearly visible in Fig. 4a, which shows stress-strain relationships as determined by compression tests on Q-type specimens, using the L-shaped setup (solid lines) and the gripped loading setup (dashed lines). Differences in the initial behavior, due to further deformation in the contact case compared to the mechanical gripping, become obvious, while the overall shape of the curves and the stress levels are rather similar. For compression testing in the tangential direction, however, considerable differences in the stress levels were observed, with higher stresses in the case of L-profiles, which is a consequence of boundary effects. This was obvious when comparing Q- and D-type specimens under tangential compression, as discussed in the next subsection. All tests except 'L-QR' tests were stopped at a strain level of about 0.30.
Rolling shear tests were performed by constraining either the displacement or the force in the global vertical direction. The vertical force was restricted to 100 N for Q-, V-, and T-shaped specimens and 50 N for D-shaped specimens, and thus limited to nominal compressive stresses of 0.10 N/mm² for Q-shaped specimens and 0.17 N/mm² for D-shaped specimens (Fig. 4 shows stress-strain relationships from uniaxial compression testing of quadratic (Q) and dog-bone shaped (D) test specimens, with different force imposing systems (L/G), in radial (R) and tangential (T) directions, for the assessment of a test setup effects and b geometry effects). Therefore, not only could the effects of the force imposing system be assessed, but also the loading protocol. Figures 5 and 6 show the stress paths of the shear tests in the σ_RR-τ_RT and σ_TT-τ_TR stress planes. This indicates the two possible material orientations of specimens in shear testing, which both lead to shear stresses in the R-T plane. Force- and displacement-constrained testing is indicated by force and disp. The figures illustrate the effects of the loading protocol and highlight the development of compressive stresses of more than 1 N/mm² in cases of displacement-constrained testing, which partly led to higher shear strength. Q-, V-, and T-shaped specimens were tested with direct contact and mechanical grips, while D-shaped specimens were tested with mechanical grips only. The effects of the force imposing system on the shear strength seem to be insignificant or overlaid by the variation of the shear strength.
In the case of biaxial loading, direct contact through the L-blocks led to complex surface strain distributions, such as tensile stresses on the two opposite sides of the L-plates with stress concentrations near the notch, and thus cracks developed and led to undesired failure modes. Moreover, differences in the stress paths were observed for the same displacement loading path. Mechanical grips yielded comparably lower shear stress, whereas force imposed by contact led to higher nominal shear stress but lower nominal normal stress. This effect was pronounced for compression in the radial direction. However, the post-elastic behavior of the material was similar for the two test setups. Hardening in biaxial testing with radial compression led to strongly increased rolling shear stresses, while biaxial testing with tangential compression led to considerably increased compression stresses and rolling shear stress softening.
The assessment showed that mechanical grips led to a more homogeneous force transmission and avoided stress concentrations that occurred in the corners of the L-shaped contact device. The influences of the specimen's shape combined with the force imposing system on the stress state in the specimens and on mechanical properties derived from testing are assessed next.
Geometry effects
A balance in the shape and size of the specimens for biaxial testing is obviously required because shear testing would require flat specimens to reduce the eccentricity in the shear loading, whereas compression testing would require higher specimens to avoid any boundary effects. The influence of specimen shape on the shear behavior was mentioned in the previous section, where the unsupported height of the specimen was found to affect the mechanical response when testing in the tangential direction. The height of the specimens greatly influenced the development of higher nominal stress in compressive testing as well. This is illustrated in Fig. 4b. A reason for the difference is that the nominal compressive stress was calculated using the minimum cross-sectional area. For tangential testing, however, the material above and below the reduced cross-section contributes to the force distribution, as seen in the strain fields, which will be discussed later. Another possible reason is that the average density of the T-specimens is higher than the density of the R-specimens, which can yield higher compressive stress. No compression tests were carried out on V and T-shaped specimens. Therefore, no comparison is shown for these two types of specimens. The effect of the specimen's shape was also included in Figs. 5 and 6, where there were no considerable differences for rolling shear testing. The stress state under shear testing was more closely assessed by means of rolling shear strain distributions across the specimens, as measured by digital image correlation. As with stress plane figures, strain sections are evaluated for specimens tested in the two material orientations, RT and TR. The strain sections in Figs. 7 and 8 clearly show the influence of the annual ring structure, since a more homogeneous strain level was found for R-oriented specimens, while alternating shear strain levels were visible in T-oriented specimens. Figures 7 and 8 also show the differences between specimen shapes as well as some variation when testing similar specimens but with different loading protocols. Shear strains were taken at a nominal shear stress level of 1 N/mm 2 . Note that all shear tests with force constrained loading were accomplished by using mechanical grips. D-shaped specimens showed the least strain concentrations in the area of interest and least edge concentrations, compared to other shapes.
Rolling shear strain-stress relationships for D-shaped specimens are shown in Fig. 9 for the RT- (R) and TR- (T) orientations. When the shear plane is parallel to the radially stacked annual rings, RT-orientation testing led to an almost perfectly brittle failure, whereas when the shear plane crosses several annual rings, TR-orientation testing led to a more progressive brittle failure through the development of several cracks. The latter failure mechanism occurred at comparably higher stress levels. However, under the assumption of a homogeneous orthotropic material, no distinction was made between the rolling shear strengths in RT- and TR-orientation in the comparison with the failure criteria.
The strain field under biaxial testing is assessed next for SC-45 testing. Figure 10 shows the normal strain and shear strain fields for all investigated specimen shapes and testing in the two orthogonal directions R and T. Strain fields represent the state at a nominal compressive stress of 3 N/mm 2 and corresponding shear stresses between 0.11 and 1.48 N/mm 2 . Because D-shaped specimens were tested with mechanical grips and all other shapes with L-shape steel plates, considerably lower shear stresses developed in the D-shaped specimens. The DIC assessment revealed comparatively uniform surface strain distributions for all geometries, except for some stress concentration at the edges. The reduced cross-section in the notched area in V-, T-, and D-shaped specimens helped to yield a concentrated strain distribution and initiate a crack in this part. However, this was not always achieved in Q-type specimens of biaxial testing with compression in the radial direction. Biaxial testing with compression in the tangential direction highlighted the sample shape effect on the tangential strain-stress response. The nominal tangential compressive stress increased with the increased notch size. Activation of the material above and below the notched area is well visible in the corresponding shear strain fields.
Finally, the material properties derived from nominal strain-stress relationships measured on different specimen shapes are compared. The Young's moduli in the radial and the tangential directions and the rolling shear moduli were calculated from the unloading paths as described in Sect. 3.3, which yielded the values shown in Table 1. For testing under radial compression, minor differences were observed for the different types of specimens and force imposing devices, as well as when comparing tensile and compressive stiffness. Corresponding strengths between 4 and 5 N/mm² were measured. A slightly higher variation was found for testing under tangential compression. The rolling shear stiffness and strength, however, showed higher variations for different shapes of specimens, and thus different stressed volumes. As a consequence, Q-type specimens yielded higher rolling shear stiffness and strength than V-, T-, and D-type specimens. The high influence of stressed volume on shear strength was mentioned by Steiger and Gehri (2011). The rolling shear modulus was between 50 and 60 N/mm² for almost all setups and specimen shapes with notches. This corresponds well with values given in previous scientific works (e.g. Dumail et al. 2000; Hassel et al. 2009) and the material standard EN 338 (2009). Dog-bone shaped specimens exhibited the lowest rolling shear strength value, which is expected as a consequence of the reduced cross-section and the lesser effect of the curvature of the annual ring structure. The highest rolling shear stiffness and strength were found for Q-type specimens, which showed some curvature of the annual ring structure, even though the test specimens were cut from the outermost part of the tree. The influence of the sawing pattern on the rolling shear modulus is well known in wood science, see for example Aicher and Dill-Langer (2000). Because dog-bone shaped specimens were further used for biaxial testing, the following average material properties were considered for the assessment of the biaxial failure criteria.
Uniaxial compression and rolling shear testing of dog-bone shaped specimens resulted in typical material behavior. The radial compression tests yielded linear elastic behavior, followed by a stress plateau with slightly increasing stress (Bodig 1963; Tabarsa 1999). However, tangential compression resulted in a non-linear behavior and a stress peak, a consequence of latewood layer buckling (Tabarsa 1999). The modulus of elasticity in the radial direction was higher than in the tangential direction, which agrees well with previous findings (e.g. Madsen et al. 1982; Hall 1980; Gehri 1997; Farruggia and Perré 2000; Hoffmeyer et al. 2000; Kristian 2009; Zhong et al. 2015). E_R was 1.50 times higher than E_T, which is in agreement with Zhong et al. (2015), but comparatively lower than the findings of Gehri (1997), Farruggia and Perré (2000), Hoffmeyer et al. (2000) and Kristian (2009), who reported a factor of around 2.
The compressive strength in the radial direction was determined to be higher than in the tangential direction, which is in agreement with Gehri (1997), Hall (1980) and Hoffmeyer et al. (2000). However, when considering the stress peak in the tangential direction, f_c,T was found to be higher than f_c,R, see Table 1. Since the value in the tangential direction was dependent on specimen shape and height, it may not reflect the real material strength. It is emphasized that Table 1 only summarizes the uniaxial tests for compression and tension perpendicular to the grain as well as rolling shear, where a total of 38 tests were performed.
The tensile strength in the tangential direction was higher than in the radial direction, with values of f_t,T = 3.28 N/mm² and f_t,R = 2.75 N/mm², which contradicts Kristian (2009), who reported f_t,R = 4.90 N/mm² and f_t,T = 2.80 N/mm². Considering the brittle behavior under tensile loading, tests on a larger number of specimens could give better insight into the tensile strength.
The determined rolling shear modulus of G_RT = 55 N/mm² is in good agreement with the findings of Dumail et al. (2000) and Hassel et al. (2009), while the rolling shear strength, f_v,RT = 1.54 N/mm², is slightly lower than the value of 1.60 N/mm² reported by Kristian (2009), Dumail et al. (2000) and Hassel et al. (2009).
Biaxial testing and assessment of failure criteria
Biaxial testing of dog-bone shaped specimens was performed along 12 displacement paths, including combinations of rolling shear with compression and tension perpendicular to the grain. The corresponding stress paths are shown for normal stresses in the radial (Fig. 11) and tangential (Fig. 12) material directions. Note that the datasets shown in Figs. 11 and 12 consist of 36 tests each for the DR and DT specimens, i.e., a total of 72 tests. Brittle failure was observed for the combination of tensile stresses with rolling shear, while ductile behavior was observed for the combination of compressive stresses with rolling shear. A transition zone from brittle failure under pure rolling shear to ductile behavior in combination with compressive stresses was found for displacement paths SC-10 and SC-20. When comparing the stress interaction paths in the radial and tangential directions, it is interesting that comparably higher shear stresses were observed for tangential compression than for radial compression, which means a stiffer shear response for this combination. This is a consequence of the displacement-controlled testing and the ratio between the Young's modulus and the rolling shear stiffness, which is lower in the tangential direction; cf. the stiffness properties given in Sect. 4.2.
Another interesting observation was the different post-elastic behavior of clear wood in the two orthogonal orientations. When combined with radial compressive stresses, rolling shear stresses increased more than the compressive stresses after the elastic limit was reached, see Fig. 11. However, when combined with tangential compressive stresses, rolling shear stresses decreased after the elastic limit, see Fig. 12. Material strength properties were derived to assess the suitability of the failure criteria. The strength was simply set equal to the maximum stress in the case of brittle material failure under tension, shear and tension-shear stress combinations. For ductile failure in compression and shear-compression interaction, stress points at 1% and 2% compressive strain were evaluated instead. This is a typical procedure that is also specified in the material testing standard (EN 408 2010), which prescribes certain levels of permanent strain, though absolute strain was used here.
Material strength properties for strain levels of 1% and 2% compressive strain were then compared to the failure envelopes predicted by Hill, Hoffman and the SIA 265 design equation, see Figs. 13 and 14. Rolling shear strength and tensile strength were kept constant for the two strain states, whereas the compressive strength levels were changed to follow the evolution of the failure surfaces. In Figs. 13 and 14, the Hill-1, Hoffman-1 and SIA 265-1 curves denote the failure surfaces for the 1% compressive strain state. A stronger difference in the compression-shear interaction area was observed, due to increased stresses at higher strain levels. The suitability of the failure criteria is assessed by the prediction capability value R, given in Table 2. From the mean R-values in Table 2, none of the failure criteria yielded good prediction in both tension and compression interaction states in both material orientations. Overall, Hoffman's criterion shows better prediction in DR-specimens than in DT-specimens. When considering both stress states, the average R-value for Hoffman's criterion was found to be 0.95, in comparison to Hill's criterion, which yielded an R-value of 0.85 for 1% compressive strain. Yet, in the DT-series, Hill's criterion yielded better prediction than Hoffman when considering both stress states (compressive and tensile), while Hoffman was found better in compressive stress states. However, both criteria showed a small difference in compressive stress states, since the shapes of the failure envelopes are very similar. The difference in tensile stress states is high, which is a consequence of Hill's criterion not accounting for a separate material tensile strength; for this reason, the tensile strength was set equal to the compressive strength. This choice was made here to assess the prediction quality of Hill's criterion in compressive stress states. It could then possibly be combined with a brittle failure criterion for tensile stress states.
The experimental data given in Figs. 11 and 14 show that the combination of rolling shear with compressive stresses perpendicular to the grain positively influences the strength at failure up to a certain compressive stress level (Steiger and Gehri 2011; SIA 265 2012). The positive effect of compressive stress on the rolling shear strength was confirmed by Mestek (2011) as well. Regarding the assessment of the failure criteria, both Hill's and Hoffman's criteria underestimate this positive effect. Comparatively, Hoffman's criterion yielded a closer limit curve than Hill's criterion. This is a consequence of the possibility to account for different tensile and compressive strengths and the corresponding shift of the elliptical curve. The SIA 265 design equation, however, gives a good prediction for this transition zone. It would require a combination with a failure criterion for the compressive failure of wood in the radial or tangential direction, and thus a multi-surface failure criterion.
Failure mechanism in biaxial testing of dog-bone shaped specimens
The failure patterns observed in the experiments are shown in Fig. 15. Radial compressive stress yielded ductile behavior with strength hardening, where wooden cells were progressively compressed and the material densified, leading to an increase in stress. Rolling shear, tension, and combined tensile force with rolling shear in specimens tested in the R-orientation led to brittle failure modes. A cascading type of brittle failure occurred under pure shear in this case, since cracking neither produced a smooth failure surface nor followed the same annual ring. However, radial tensile force and combined tensile force with rolling shear led to purely brittle failure with straight and smooth surfaces. Under combined compressive force with rolling shear, mixed failure modes, i.e., in between ductile and brittle failure modes, occurred, depending on the stress path (ratio of normal to shear displacement). Tangential compressive stress led to buckling of the cells, causing damage to the earlywood cells. A similar behavior was observed under biaxial stresses of combined compressive stress with rolling shear. Tensile, rolling shear, and combined tensile with rolling shear stresses in this testing orientation yielded brittle failure, and an almost straight crack was observed.
Conclusion
A test setup to study the mechanical behavior of clear wood under a combination of normal stresses perpendicular to the grain with rolling shear stress was developed. This was challenging due to the requirement of generating pure rolling shear, compressive, and tensile stress states as well as combined stress states in one experimental setup. The strain fields determined from experiments with a DIC system confirmed rather uniform and homogeneous strain development, and suitable failure modes were observed with the biaxial test setup. An investigation of the force imposing setup and specimen shape effects demonstrated the need for continuous force transfer and revealed force distribution effects in the specimen, which were most pronounced for testing in the tangential direction. Dog-bone shaped specimens were chosen to assess the biaxial failure criteria since the volume of interest and the failure region are well defined. Differences in material behavior in the radial and tangential directions were observed in the experimental study. The modulus of elasticity was found to be higher in the radial direction than in the tangential direction. Minor differences were also observed between the two orientations in rolling shear testing. Uniaxial material properties and strengths in tension, compression, and shear were in good agreement with previous studies.
Testing along 12 displacement paths with different ratios of tensile/compressive and shear displacements covered the stress space in the transverse plane of wood well. A small transition zone from brittle failure in tension and shear to ductile failure in shear-compression combinations was observed. Moreover, the combination of rolling shear stress with compressive stress led to an increase in the rolling shear strength, before the shear strength was reduced at higher compressive stress levels. This phenomenon is also observed in combination with longitudinal shear stresses in wood. Finally, the experimental data was compared with Hill's and Hoffman's failure criteria and the SIA 265 design equation for longitudinal shear interaction with stresses perpendicular to the grain. Overall, Hoffman's failure criterion yielded the highest prediction quality for the experimentally evaluated failure stresses in specimens with radial compression. For tangential compression, however, Hill's failure criterion gave a smaller error than Hoffman's criterion, due to the higher variation in the experimental results. A positive effect of compression perpendicular to the grain on the rolling shear strength was observed in the experiments, though it was not well predicted by Hill's or Hoffman's failure criteria. The SIA 265 design equation is better suited for this phenomenon in the transition from shear to compression. Thus, a more complex mathematical function or a combination of criteria in a so-called multi-surface failure criterion would be better suited to include this positive influence in the failure description of wood for such stress interactions.
The findings of the experimental campaign demonstrate the challenge in determining material properties, which obviously are often rather system properties than material characteristics. Therefore, the combination of the experimental data with numerical modeling for the development of a material model that suitably represents the elasto-plastic macroscopic material behavior would give further insight into the suitability of the test setup. Corresponding numerical models of specimens in the test setup have been developed and the results will be presented in another article. The potential of the test setup can be utilized to build a sound database for engineering design and future model validation by investigating the material behavior at other moisture contents relevant for engineering applications and testing of further wood species.
|
v3-fos-license
|
2021-02-28T06:16:48.700Z
|
2020-09-23T00:00:00.000
|
232064789
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41467-021-21599-1.pdf",
"pdf_hash": "8e6d69dade035cb98e0c225eb92592f9c3e9254b",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44265",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"sha1": "d2898b64c56eae1d6cfd62007d776ad744c6520f",
"year": 2021
}
|
pes2o/s2orc
|
Electrical switching of high-performance bioinspired nanocellulose nanocomposites
Nature fascinates with living organisms showing mechanically adaptive behavior. In contrast to gels or elastomers, it is profoundly challenging to switch mechanical properties in stiff bioinspired nanocomposites as they contain high fractions of immobile reinforcements. Here, we introduce facile electrical switching to the field of bioinspired nanocomposites, and show how the mechanical properties adapt to low direct current (DC). This is realized for renewable cellulose nanofibrils/polymer nanopapers with tailor-made interactions by deposition of thin single-walled carbon nanotube electrode layers for Joule heating. Application of DC at specific voltages translates into significant electrothermal softening via dynamization and breakage of the thermo-reversible supramolecular bonds. The altered mechanical properties are reversibly switchable in power on/power off cycles. Furthermore, we showcase electricity-adaptive patterns and reconfiguration of deformation patterns using electrode patterning techniques. The simple and generic approach opens avenues for bioinspired nanocomposites for facile application in adaptive damping and structural materials, and soft robotics.
This manuscript contributed by Jiao et al. suggests a straightforward strategy to spatially control the mechanical properties of a composite material composed of cellulose nanofibrils (CNFs) as the fillers and copolymers with hydrogen-bonding UPy motifs as the matrix, which is coated with carbon nanotubes to generate heat under direct current. The elevated temperature results in softening of the hybrid material with distinct mechanical performance, such as breaking stress and stress relaxation, when compared to that at room temperature. Sophisticated control of the mechanics is achieved by using selective gating of the electrode patterns prepared by spray coating. Although embedding nano/micron wires has been used for electro-heating in various polymeric materials such as shape memory polymers and liquid crystalline elastomers/networks, the combination of the electrothermal effect and supramolecular polymer materials, especially the spatial heating and programmed mechanics, is informative for the design of adaptive systems. The fabrication of the hybrid material and the characterization/demonstration are complete. I think this work could be considered for publication in this journal. Other comments/suggestions are listed below.
--The tailored mechanics of the hybrid material is mainly attributed to the breaking and reformation of self-complementary UPy dimers. But on page 6 the authors mention the interaction at the CNF/polymer interface and CNF de-linking. These interactions and their change with temperature should be characterized, which might be significant for the mechanical reinforcement of the fillers to the polymer matrix.
--A thermally induced elastomeric-to-melt transition is mentioned in several places in the main text. If this is true, the material should experience plastic deformation during the stress relaxation test (Fig. 2b). After turning off the electro-heating, the stress should not rise again. In Fig. 2d the material at high temperatures up to 120 °C can still maintain the loading stress. These results suggest the material might be in a rubbery state. This point should be considered.
--How does the content of CNF influence the mechanical properties of the hybrid materials?
--DSC is used to characterize the Tg of the copolymers, which are lower than -40 °C. But the materials are mechanically robust with extremely high Young's modulus. The state of the polymer matrix should be carefully reconsidered. If it is in a soft rubbery state, the hybrid material should not be so strong. I suggest extending the temperature range in the DSC measurement to above 100 °C to check if there is a thermal transition at ~60 °C. At room temperature the material with dense hydrogen bonds might be in a glassy state due to the presence of a large amount of UPy dimers.

A key question is whether this concept provides a stronger effect than heating of a polymer composite based on amorphous inorganic particles, or short (<1 mm) fibers, where an amorphous thermoplastic polymer matrix goes from the glassy to the rubbery state? Perhaps the nanofiber network structure and the application to films is the key, but a microcomposite example may be helpful to clarify the advantage of the concept; in which way would such a material be inferior? For the patterning, and for films, it is apparent that a nanocomposite provides advantages, but is it feasible for thicker structures? What are some estimated thickness limitations, and how could they be overcome? There appears to be not so much materials science in this study but rather an emphasis on the "invention". Perhaps the materials science key should be the matrix. Has this concept been used before? Can the nanostructure, and the mechanisms for the softening in the composite, be given a stronger focus?

Referee #1: This manuscript contributed by Jiao et al. suggests a straightforward strategy to spatially control the mechanical properties of a composite material composed of cellulose nanofibrils (CNFs) as the fillers and copolymers with hydrogen-bonding UPy motifs as the matrix that is coated with carbon nanotubes to generate heat under direct current. The elevated temperature results in softening of the hybrid material with distinct mechanical performances such as breaking stress and stress relaxation, when compared to that at room temperature. Sophisticated control of the mechanics is achieved by using selective gating of the electrode patterns prepared by spray coating. Although embedding nano/micron wires has been used for electro-heating in various polymeric materials such as shape memory polymers and liquid crystalline elastomers/networks, the combination of the electrothermal effect and supramolecular polymer materials, especially the spatial heating and programmed mechanics, is informative for the design of adaptive systems. The fabrication of the hybrid material and the characterization/demonstration are complete. I think this work could be considered for publication in this journal. Other comments/suggestions are listed below.
1. The tailored mechanics of the hybrid material is mainly attributed to the breaking and reformation of self-complementary UPy dimers. But on page 6 the authors mention the interaction at the CNF/polymer interface and CNF de-linking. These interactions and their change with temperature should be characterized, which might be significant for the mechanical reinforcement of the fillers to the polymer matrix.
On page 6, we state that the inclusion of UPy motifs leads to promoted interactions in the polymer phase as well as at the CNF/polymer interface, allowing stiffening and strengthening of the mechanical properties of the nanocomposites. In principle, the UPy motifs share the ability to form hydrogen bonds with the different CNF surface groups.
While we believe that the thermo-reversible de-linking is most efficient in the matrix, we cannot exclude that interactions also occur and change at the interface; this is why we point to them. However, we suggest that the bulk phase is the major player, because the thermal transition in the composite coincides rather well with the transition found in the pure bulk polymer. Unfortunately, changes at the interface cannot be characterized, e.g. spectroscopically, as the abundance of different interactions leads to non-specific or non-selective bonding. We had attempted this before (e.g. using FTIR). It is also not possible to pursue single-fiber pull-out measurements, as the CNFs are nanoscale. In summary, although the reviewer raises an interesting point, the requested data can, to the best of our understanding, not be obtained.
We do however not think that this is a negative point for the overall understanding of the material system concept. We added a comment that the interactions may also change at the interface, but that this can unfortunately not be analyzed (page 6).
2. A thermally induced elastomeric-to-melt transition is mentioned in several places in the main text. If this is true, the material should experience plastic deformation during the stress relaxation test (Fig. 2b). After turning off the electro-heating, the stress should not rise again. In Fig. 2d the material at high temperatures up to 120 °C can still maintain the loading stress. These results suggest the material might be in a rubbery state. This point should be considered.
The elastomeric-to-melt transition exclusively refers to the polymer (please see also the new Sup. Fig. 4 for photographs); the bioinspired nanocomposite can of course not enter a melt phase due to the high fraction of reinforcements (50 wt%). We checked the relevant positions in the manuscript again and believe the statements to be accurate.
Here we are dealing with highly reinforced nanocomposites with 50 wt% of CNFs. The polymers are nanoconfined in the CNF network. When the polymers are molten at high temperature, the CNF network still holds the overall structure, thus maintaining the loading stresses. The polymer melt provides nanoscale lubrication for the CNF network, leading to strong relaxation. Once the electro-heating is turned off, the stress increases again, which is linked to the re-association of the hydrogen bonds and potentially to thermal contraction during cooling. We added this on page 9.
How does the content of CNF influence the mechanical properties of the hybrid materials?
The inclusion of CNFs within the nanocomposites leads to substantial stiffening and strengthening, and a less ductile behavior. The details have been reported in previous articles and summarized in previous reviews (Adv. Funct. Mater. 2019, 1905309; Acc. Chem. Res. 2020, 2742-2748; J. Mater. Chem. A, 2017, 5, 16003-16024). We added a sentence to the MS to guide the reader to this literature (page 6).
4. DSC is used to characterize the Tg of the copolymers, which are lower than -40 °C. But the materials are mechanically robust with extremely high Young's modulus. The state of the polymer matrix should be carefully reconsidered. If it is in a soft rubbery state, the hybrid material should not be so strong. I suggest extending the temperature range in the DSC measurement to above 100 °C to check if there is a thermal transition at ~60 °C. At room temperature the material with dense hydrogen bonds might be in a glassy state due to the presence of a large amount of UPy dimers.
We believe there is a misunderstanding in polymer characterization and bioinspired nanocomposite characterization.
DSC shows a Tg at -46 °C (polymer bulk); polymer bulk rheology shows a dissociation of the UPy dimers at around 62 °C (crossover of G' and G''), which is associated with the transition from the rubbery state to the melt. Note that the rheology in Sup. Fig. goes to 120 °C, and would also reveal any further transitions. This is the polymer-level characterization. Due to the limited bonding strength of non-flanked UPy units (not flanked by urea, see work by Sijbesma and Meijer), they cannot crystallize into higher-ordered structures, and simple supramolecular dissociation is too weak to be observed in DSC. Mechanical characterization is more sensitive, as it measures a different observable. That is why we use these complementary techniques. As requested by the reviewer, we performed DSC up to 100 °C, but the line remains of similar slope above the Tg, as expected (Supplementary Fig. 2e).
Due to the inherent dynamics of UPy/UPy dimers, and the use of a low-Tg backbone material, the material at room temperature is an elastomer and not a glass (Tg ≪ RT). We further added photographs to the new Sup. Fig. 4. The polymer is an elastomer at room temperature; when heated, it turns into a melt (as the rheology clearly shows).
The high Young's modulus in the composite arises from the inclusion of 50 wt% CNF in the bioinspired nanocomposites. Once in the nanocomposite state, the elastomer-to-melt transition of the polymer only translates into better lubrication of the CNF network and allows for easier movement, softening and toughening. The bioinspired nanocomposite cannot melt.
The hybrid materials in Fig 4 should experience plastic deformations. It would be meaningful if the material could restore to the original state, so that the mechanical properties can be repeatedly tailored. Is it possible to form a lightly crosslinked network by some covalent bonds? In addition, scale bars should be added to Fig 4.
No, this is not possible, because the bioinspired nanocomposite is not a rubber and cannot be transformed into a composite rubber at such high fractions of entangling nanoscale reinforcements (50 wt% CNF). The use of high fractions of reinforcements is an essential criterion for bioinspired nanocomposites (Acc. Chem. Res. 2020, 2742-2748). The plastic deformation in bioinspired nanocomposites or CNF/polymer nanopapers occurs by realignment and frictional sliding of the CNF nanofibrils, not by exclusive polymer deformation. Hence, in our opinion, such materials based on entangling nanofibrils cannot recover the original state, no matter what kind of molecular engineering is done to the soft matrix. The CNF network cannot relax back to its original position after inelastic deformation and disentanglement on a slow colloidal length scale.
We added the scale bars in Fig. 4. We added the method in the SI (Supplementary Fig. 6).
In supplementary Fig 3, the temperature sweep is carried out at a strain of 30%. Is this strain in the linear region of the material?
Yes, the strain is in the linear region of the polymer. We added the amplitude sweep of EG-UPy29 at temperatures before and after the thermal transition in Supplementary Fig. 3.
|
v3-fos-license
|
2020-08-13T10:10:23.561Z
|
2020-08-10T00:00:00.000
|
232061162
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcendocrdisord.biomedcentral.com/track/pdf/10.1186/s12902-021-00687-9",
"pdf_hash": "b610c124c668efcfbe2e3cb9a838090ce3ab33a6",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44270",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "e5055d32c8599ddd205b8dfd4a36ac78dc31907f",
"year": 2021
}
|
pes2o/s2orc
|
Relationship between insulin sensitivity and gene expression in human skeletal muscle
Background Insulin resistance (IR) in skeletal muscle is a key feature of the pre-diabetic state, hypertension, dyslipidemia, cardiovascular diseases and also predicts type 2 diabetes. However, the underlying molecular mechanisms are still poorly understood. Methods To explore these mechanisms, we related global skeletal muscle gene expression profiling of 38 non-diabetic men to a surrogate measure of insulin sensitivity, i.e. homeostatic model assessment of insulin resistance (HOMA-IR). Results We identified 70 genes positively and 110 genes inversely correlated with insulin sensitivity in human skeletal muscle, identifying autophagy-related genes as positively correlated with insulin sensitivity. Replication in an independent study of 9 non-diabetic men resulted in 10 overlapping genes that strongly correlated with insulin sensitivity, including SIRT2, involved in lipid metabolism, and FBXW5 that regulates mammalian target-of-rapamycin (mTOR) and autophagy. The expressions of SIRT2 and FBXW5 were also positively correlated with the expression of key genes promoting the phenotype of an insulin sensitive myocyte e.g. PPARGC1A. Conclusions The muscle expression of 180 genes were correlated with insulin sensitivity. These data suggest that activation of genes involved in lipid metabolism, e.g. SIRT2, and genes regulating autophagy and mTOR signaling, e.g. FBXW5, are associated with increased insulin sensitivity in human skeletal muscle, reflecting a highly flexible nutrient sensing. Supplementary Information The online version contains supplementary material available at 10.1186/s12902-021-00687-9.
Background
Insulin resistance (or low insulin sensitivity) in skeletal muscle is a key feature of the pre-diabetic state and a predictor of type 2 diabetes (T2D) [1,2]. It is also observed in individuals with hypertension, dyslipidemia, and cardiovascular diseases [3]. Insulin resistance (IR) in skeletal muscle has been attributed to different pathological conditions such as mitochondrial dysfunction [4], impaired glycogen synthesis [5], and accumulation of diacylglycerol with subsequent impairment of insulin signaling [6]. One hypothesis that has been put forward is a re-distribution of lipid stores from adipose tissue to non-adipose tissues (e.g. skeletal muscle, liver and the insulin-producing β-cells), the so-called overflow or ectopic fat distribution hypothesis. In support, studies have reported a strong correlation between intramuscular triacylglycerol (IMTG) content and IR [7,8]. However, and in contrast, endurance-trained athletes have been shown to be highly insulin sensitive despite having large IMTG depots [9,10]. One possible explanation for this discrepancy is that it is not the IMTG content per se that is important for the development of IR, but rather the relationship between IMTG content and muscle oxidative capacity. A reduced oxidative capacity in skeletal muscle from T2D individuals [11,12], and in lean, insulin resistant offspring of T2D patients [13] has been found, supporting the hypothesis that IR in skeletal muscle is associated with dysregulation of intramyocellular fatty acid metabolism. Interestingly in a cohort of elderly twins, IMTG content seems to have a greater influence on hepatic as opposed to peripheral IR [14]. Furthermore, an association between mitochondrial dysfunction and decreased expression of autophagy-related genes in skeletal muscle from severely insulin resistant patients with T2D has previously been shown [15]. Conversely, enhancing autophagy in mice leads to an anti-ageing phenotype, including leanness and increased insulin sensitivity [16].
The aim of this study was therefore to investigate molecular mechanisms, e.g. IMTG content, associated with insulin sensitivity in skeletal muscle by relating global skeletal muscle gene expression with a surrogate measure of insulin sensitivity, i.e. homeostatic model assessment of insulin resistance (HOMA-IR).
Human participants and clinical measurements
Results from two separate clinical studies (studies A and B) are reported here.
Study A
To identify genes correlated to insulin sensitivity in skeletal muscle, we studied 39 non-diabetic men from Malmö, Sweden [17,18]. Briefly, the Malmö Exercise Intervention cohort consists of 50 sedentary but otherwise healthy male subjects from southern Sweden. They all have European ancestry and 24 of them have a first-degree family member with T2D. Muscle biopsies were collected from 39 of the subjects. The mean age and body mass index (BMI) were 37.71 ± 4.38 years and 28.47 ± 2.96 kg/m2, respectively, and the mean 1/HOMA-IR (homeostatic model assessment of insulin resistance) was 0.69 ± 0.25 (Supplementary Table S1).
Study B
To replicate the findings from study A, we studied an additional 10 healthy young non-diabetic men without any family history of diabetes, from a previously described study [19]. The mean age and BMI were 25.33 ± 0.99 years and 24.57 ± 1.86 kg/m 2 , respectively, and the mean 1/HOMA-IR was 1.17 ± 0.36 (Supplementary Table S2). Here, we included baseline gene expression profile data (i.e. only before bed rest) from part of a larger study on the influence of physical inactivity in healthy and prediabetic individuals [19].
None of the study participants were directed to avoid extreme physical exercise and alcohol intake for at least 2 days before the studies [20]. The participants were asked to fast for 10-12 h before examination days. Fasting blood samples and anthropometric data were obtained from all participants. All participants underwent an oral glucose tolerance test (OGTT; 75 g) and glucose tolerance was classified in accordance with World Health Organization criteria [21]. The homeostasis model assessment of insulin resistance was calculated for all participants in both studies, and its reciprocal, 1/HOMA-IR = 22.5 / (fasting plasma insulin (μU/ml) × fasting plasma glucose (mmol/l)), was used as a surrogate measure of insulin sensitivity [22,23]. The muscle biopsies were obtained from the vastus lateralis muscle under local anesthesia in individuals participating in all studies, using a modified Bergström needle [24,25].
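As a minimal illustration of this surrogate measure (not part of the study's own code; the function name and example values are our own), the sketch below computes 1/HOMA-IR from fasting measurements:

def inv_homa_ir(fasting_insulin_uU_ml: float, fasting_glucose_mmol_l: float) -> float:
    """Return 1/HOMA-IR = 22.5 / (fasting insulin [uU/ml] * fasting glucose [mmol/l])."""
    return 22.5 / (fasting_insulin_uU_ml * fasting_glucose_mmol_l)

# Illustrative values only: fasting insulin of 8 uU/ml and glucose of 5.0 mmol/l
print(round(inv_homa_ir(8.0, 5.0), 2))  # higher values indicate higher insulin sensitivity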
We excluded data from two participants (one from each of studies A and B) with extreme values of insulin sensitivity (more than 1.5 * interquartile range) from further analysis. Both studies were approved by local ethics committees and all participants gave their informed consent for participation.
RNA extraction and hybridization
Muscle biopsies were taken from the right vastus lateralis muscle under local anesthesia (Lidocaine 1%), using a 6 mm Bergström needle (Stille AB, Sweden). In both studies, biopsies were immediately stored in RNAlater (Ambion, Austin, TX) and, after overnight incubation at 4 °C, snap frozen at −80 °C until further processing. The double staining method was used for capillary staining. Myofibrillar ATPase histochemistry was performed by preincubation at pH 4.4, 4.6, and 10.3 to identify muscle fiber types [18]. Computer image analysis was performed using BioPix IQ 2.0.16 software (BioPix AB, Sweden). RNA was extracted using Tri reagent (Sigma-Aldrich, St. Louis, MO) followed by the RNeasy Midi kit (Qiagen, Düsseldorf, Germany). The RNA was further concentrated by RNeasy MinElute (Qiagen, Düsseldorf, Germany) and SpeedVac (DNA 120 SpeedVac, Thermo Savant, Waltham, MA).
For study A, synthesis of biotin-labeled cRNA and hybridization to the Affymetrix Custom Array NuGO-Hs1a520180 GeneChip (http://www.nugo.org) were performed according to the manufacturer's recommendation. This GeneChip contains 23,941 probesets for interrogation, including known genes and expressed sequenced tags. Images were analyzed using the GeneChip Operating System (GCOS; Affymetrix) software. For each array, the percentage present call was greater than 40.
For study B, targets were hybridized to the one-color (Cy3, green) Agilent Whole Human Genome Oligo Microarray (G4112F (Feature Number version)) which contains 44,000 60-mer oligonucleotide probes representing 41,000 unique genes and transcripts. Probe labeling and hybridization were performed according to manufacturer's recommendation. Images were analyzed using the Agilent Feature Extraction Software (version 9.5).
Quantitative real-time PCR (QPCR)
A technical replication of the key findings from the microarray data, as well as expression analysis of key genes to be correlated with insulin-stimulated glucose uptake, was conducted using QPCR. Reverse transcription was performed on 250 ng RNA (from 36 subjects in study A) or 200 ng RNA (from 7 subjects in the Muscle SATellite cell (MSAT) cohort) using the QuantiTect Reverse Transcription kit (Qiagen). QPCR was performed on a ViiA 7 real-time PCR system (Thermo Fisher Scientific) with 2 ng cDNA in 10 μl reactions and TaqMan Expression PCR Master Mix with duplex assays according to the manufacturer's instructions. Samples were analyzed in triplicate on the same 384-well plate with 3 endogenous controls (POLR2A (Hs00172187_m1), HPRT1 (4326321E, VIC-MGB) and PPIA (4326316E, VIC-MGB)) for both studies A and B. The expression levels were calculated and normalized by geometric averaging of the endogenous controls as previously described [26]. Assays: SIRT2 (Hs00247263_m1), FBXW5 (Hs00382591_g1) and CPT1B (Hs00189258_m1). Endogenous control assays: POLR2A (Hs00172187_m1), HPRT1 (4326321E, VIC-MGB) and PPIA (4326316E, VIC-MGB) for the 7 subjects in the MSAT cohort.
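The normalization by geometric averaging of the endogenous controls is specified only by reference [26]; the sketch below shows one common way such a scheme is implemented. The 2**(-Ct) quantification step (i.e. assuming 100% amplification efficiency) and all names are illustrative assumptions, not details taken from this paper.

import numpy as np

def normalized_expression(target_ct, control_cts):
    """Relative expression of a target gene, normalized to the geometric mean
    of endogenous-control quantities (geNorm-style normalization factor).

    Assumes relative quantity = 2 ** (-Ct), i.e. 100% PCR efficiency.
    """
    target_q = 2.0 ** (-np.asarray(target_ct, dtype=float))
    control_q = 2.0 ** (-np.asarray(control_cts, dtype=float))  # shape: (n_controls, n_samples)
    norm_factor = np.exp(np.mean(np.log(control_q), axis=0))    # geometric mean per sample
    return target_q / norm_factor

# Illustrative Ct values for one target and three control genes across two samples
print(normalized_expression([24.1, 25.0], [[18.2, 18.5], [20.1, 20.4], [22.0, 22.3]]))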
Isolation and cultivation of human muscle satellite cells
Muscle satellite cells were isolated from 7 subjects from an ongoing unpublished MSAT study. Subjects were male with a mean age of 35.6 ± 10.6 years, a mean BMI of 25.1 ± 3.6 kg/m 2 and a mean fasting plasma glucose value of 5.2 ± 0.2 mmol/L. Muscle biopsies were obtained from the vastus lateralis muscle under local anesthesia in individuals participating in all studies using a modified Bergström needle. Biopsies were minced into small pieces with scissors and digested in a digestion solution (Ham's F-10 Nutrient mix (Gibco®, #31550015), Trypsin-EDTA (0.25%) (HyClone, SV30031.01), Collagenase IV (1 mg/ml) (Sigma, C5138), BSA (5 mg/ml) (Sigma, A2153)) at 37°C for a total of 15-20 min. After this, cells were passed through a 70 μm cell strainer and centrifuged at 800 g for 7 min. The pellet was washed and resuspended in growth medium (Ham's F-10 Nutrient Mix, GlutaMAX™ Supplement (Gibco®, #41550021), FBS (20%) (Sigma, F7524), Antibiotic/Antimycotic Solution (Gibco®, #15240062)) and cells were pre-plated on a culture dish and incubated for 3 h at 37°C and 5% CO 2 to allow fibroblast to attach to the plate. After this, the suspended cells were transferred to a flask pre-coated with matrigel (Corning #356234) and were incubated for 4 days at 37°C and 5% CO 2 in growth medium. Medium was then changed every other day. After about a week, cells were detached using TrypLE (TrypLE™ Express, no phenol red (Gibco®, #15090046)) and re-plated on the same flask to allow even distribution of cells over the surface.
Quantification of mtDNA content
DNA was isolated from the muscle biopsies by phenol/chloroform/isoamyl alcohol extraction according to the manufacturer's recommendation (Diagenode, Belgium). Concentration and purity were measured using a NanoDrop ND-1000 spectrophotometer (A260/A280 > 1.6 and A260/A230 > 1.0) (NanoDrop Technologies, Wilmington, DE, USA). QPCR was carried out using an Applied Biosystems 7900HT sequence detection system with 5 ng genomic DNA in 10 μl reactions and TaqMan Expression PCR Master Mix according to the manufacturer's recommendations. All samples were analyzed in triplicate on the same 384-well plate (maximum accepted standard deviation in Ct value of 0.1 cycles). Two assays (16S and ND6) were used to analyze mitochondrial DNA (mtDNA) content, targeting the heavy and light strand, respectively. To analyze nuclear DNA (nDNA) content, RNaseP was used as a target. The mtDNA content is calculated as the mean value of ND6 and 16S divided by 2 × RNaseP. Assays used: ND6 (Hs02596879_g1), 16S (Hs02596860_s1) and RNaseP (4316838).
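A minimal sketch of the stated ratio follows, assuming the assay read-outs have already been converted to relative copy numbers (how that conversion was done is not specified here); the function, variable names and numbers are our own illustration.

def mtdna_content(nd6: float, s16: float, rnasep: float) -> float:
    """mtDNA content = mean(ND6, 16S) / (2 * RNaseP), with inputs as relative copy numbers."""
    return ((nd6 + s16) / 2.0) / (2.0 * rnasep)

print(mtdna_content(nd6=1500.0, s16=1450.0, rnasep=1.0))  # illustrative values only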
Statistical analysis
Study A
We used ENTREZ custom chip definition files (http://brainarray.mbni.med.umich.edu) to regroup the individual probes into consistent probesets and remap them to the correct sets of genes for the Affymetrix Custom Array NuGO-Hs1a520180 array, which resulted in a total of 16,313 genes from study A. We used three different procedures for normalization and summarization as described previously [29]: (1) the GC-content robust multi-array average (GC-RMA) method, (2) the probe logarithmic intensity error (PLIER) method (Affymetrix), and (3) the robust multi-array average (RMA) method [30][31][32][33][34]. We conducted filtering based on the Affymetrix microarray suite version 5.0 (MAS5.0) present/absent calls, which classified each gene as expressed above background (present call) or not (absent or marginal call). We included genes with a present detection call in at least 25% of arrays [35], which left 7947 genes out of 16,313 for further analysis in study A.
To identify a reliable list of genes regulating insulin sensitivity, Spearman partial correlation analysis was performed to determine the individual effect of each gene's expression on a surrogate measure of insulin sensitivity (1/HOMA-IR) after adjusting for BMI, age and family history of T2D, separately for each of the three normalization methods (GC-RMA, PLIER and RMA). We considered only those genes that were significantly correlated with insulin sensitivity with P < 0.05 in all three normalization methods.
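As an illustrative sketch of this kind of analysis (not the authors' code; the residual-based implementation and all variable names are our own assumptions), a Spearman partial correlation can be computed by rank-transforming every variable, regressing the covariates out of the ranks, and correlating the residuals:

import numpy as np
from scipy import stats

def spearman_partial_corr(x, y, covariates):
    """Spearman partial correlation of x and y given covariates.

    x, y: 1-D arrays; covariates: 2-D array of shape (n_samples, n_covariates).
    All variables are rank-transformed, the covariate contribution is removed
    by least squares, and the residuals are correlated (Pearson on ranks).
    """
    rx = stats.rankdata(x)
    ry = stats.rankdata(y)
    rc = np.column_stack([stats.rankdata(c) for c in np.asarray(covariates).T])
    design = np.column_stack([np.ones(len(rx)), rc])

    def residuals(v):
        beta, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ beta

    return stats.pearsonr(residuals(rx), residuals(ry))

# Illustrative use: one gene's expression vs 1/HOMA-IR, adjusted for BMI, age, family history
rng = np.random.default_rng(0)
expr, inv_homa = rng.normal(size=38), rng.normal(size=38)
covs = np.column_stack([rng.normal(size=38), rng.normal(size=38), rng.integers(0, 2, 38)])
print(spearman_partial_corr(expr, inv_homa, covs))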
To technically validate the microarray findings, real time quantitative PCR (QPCR) was used to measure the mRNA expression of FBXW5 and SIRT2 in human skeletal muscle from study A. Correlation between the microarray and QPCR experiments was determined using Spearman's rank correlation coefficient test.
In the study A cohort, the correlation of the QPCR expression values of SIRT2, FBXW5, CPT1B, FABP3, MLYCD, PPARGC1A and ESRRA with % fiber type and mitochondrial DNA was determined using Spearman's rank correlation coefficient test. All data except those for SIRT2 and FBXW5 were collected and reanalyzed from a previously described study [17,18].
Enrichment analyses were performed on the genes whose expression levels in skeletal muscle were significantly correlated with insulin sensitivity in study A using the WEB-based GEne SeT AnaLysis Toolkit (WebGestalt) which implements the hypergeometric test [36].
Study B
The median intensities of each spot on the array were calculated using the GenePix Pro software (version 6). We performed quantile-based normalization between arrays without background subtraction using the linear models for microarray data (limma) package in R [37,38]. We removed poor-quality probes that were either saturated (i.e. > 50% of the pixels in a feature above the saturation threshold) or flagged as a non-uniformity outlier (i.e. the pixel noise of a feature exceeds a threshold for a uniform feature) in at least one array, which left 29,297 probes for further analysis [39].
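The quantile-based between-array normalization reported here was performed with limma in R; as a language-agnostic illustration of the underlying idea only (not the authors' pipeline, and simpler than limma's handling of ties), the sketch below quantile-normalizes a probes-by-arrays intensity matrix:

import numpy as np

def quantile_normalize(intensities):
    """Quantile normalization of a (probes x arrays) matrix.

    Each array's sorted values are replaced by the mean of the sorted values
    across arrays, so every array ends up with the same empirical distribution.
    """
    x = np.asarray(intensities, dtype=float)
    order = np.argsort(x, axis=0)                  # per-array ranking of probes
    mean_quantiles = np.sort(x, axis=0).mean(axis=1)
    normalized = np.empty_like(x)
    for j in range(x.shape[1]):
        normalized[order[:, j], j] = mean_quantiles
    return normalized

# Tiny illustrative example: 4 probes measured on 3 arrays
print(quantile_normalize([[5, 4, 3], [2, 1, 4], [3, 4, 6], [4, 2, 8]]))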
Spearman partial correlation analysis was performed to determine the individual effect of each gene's expression on a surrogate measure of insulin sensitivity (1/HOMA-IR) after adjusting for BMI and age. Due to the exploratory nature of the study, no correction for multiple testing was performed. Instead, we considered only those genes that were significantly, positively or inversely, correlated with insulin sensitivity in both studies A and B, with a significance level set to 0.05. A paired Wilcoxon signed-rank test was conducted to assess the change between basal and insulin-stimulated glucose uptake. Spearman correlation analysis was performed between basal- and insulin-stimulated glucose uptake and the mRNA expression of FBXW5, SIRT2 and CPT1B. All statistical analyses were performed using IBM® SPSS® Statistics, MATLAB® and R statistical software. The microarray data from both studies have been deposited in the National Center for Biotechnology Information's Gene Expression Omnibus (GEO) database (http://www.ncbi.nlm.nih.gov/geo); the series accession number is GSE161721.
Results
To identify genes with skeletal muscle expression related to insulin sensitivity, we obtained muscle biopsies from 38 non-diabetic participants in study A (the data from one participant was excluded, Methods). Clinical characteristics of these participants are shown in Supplementary Table S1. We then profiled muscle gene expression using Affymetrix oligonucleotide microarrays. To replicate the findings from study A, we included 9 non-diabetic participants from study B (the data from one participant was excluded, Methods). Clinical characteristics of these participants are shown in Supplementary Table S2. We performed skeletal muscle gene expression profiling from these participants using the Agilent oligonucleotide microarrays. Insulin sensitivity was estimated using the 1/HOMA-IR method calculated from OGTT values (Methods).
Correlation with insulin sensitivity
Study A
We identified 70 genes positively (Supplementary Table S3) and 110 genes inversely (Supplementary Table S4) correlated with insulin sensitivity in human skeletal muscle. Using WebGestalt [36], we performed enrichment analyses of genes significantly correlated to insulin sensitivity. Of the Gene Ontology (GO) categories overrepresented in the 70 genes positively correlated to insulin sensitivity (Supplementary Table S3), several were related to autophagy (Supplementary Table S5). Among enriched Wikipathways of the positively correlated genes were mTOR signaling and thermogenesis (Supplementary Table S6). Enriched GO categories of the genes inversely correlated to insulin sensitivity (Supplementary Table S4) included platelet-derived growth factor binding, fibrillar collagen trimer, banded collagen fibril and complex of collagen trimers (Supplementary Table S7).
Among genes positively correlated with insulin sensitivity, several, including F-box and WD repeat domain containing 5 (FBXW5), TSC2, ULK1, ATG13, AKT1S1, SQSTM1 and TFEB were found to be regulated by or regulating mammalian target-of-rapamycin (mTOR) signaling and autophagy. Among genes involved in lipid metabolism were carnitine palmitoyltransferase 1B (CPT1B) (Fig. 1), the rate limiting enzyme for fatty acid oxidation, SLC27A1 (also known as long chain-fatty acid transport protein 1), a major transporter of fatty acids across the plasma membrane and PNPLA2 (also known as adipocyte triglyceride lipase (ATGL)) a triglyceride lipase known to be expressed in human skeletal muscle [40]. Also, the sirtuin 2 (SIRT2) gene positively correlated with insulin sensitivity, which is a family member of SIRT1 with well-known effects on peripheral insulin signaling [41]. Other interesting genes with relevance for skeletal muscle insulin sensitivity were uncoupling protein 2 (UCP2), an inner mitochondrial membrane protein, and genes with direct functional roles in skeletal muscle, e.g. obscurin, histidine rich calcium binding protein (HCR) and myocyte enhancer factor 2D (MEF2D) (Supplementary Table S3).
Among the genes inversely correlated with insulin sensitivity, several were associated with the extracellular matrix, such as collagen type I alpha 1 chain (COL1A1), collagen type I alpha 2 chain (COL1A2), collagen type III alpha 1 chain (COL3A1) and laminin subunit alpha 4 (LAMA4) (Supplementary Table S4).
Study B
In order to replicate the findings in study A, we analyzed muscle expression in an additional 9 healthy young nondiabetic men without any family history of diabetes. Of the genes found to be correlated with insulin sensitivity, 10 were replicated in study B. Seven of these genes were positively correlated (SIRT2, FBXW5, RAB11FIP5, CPT1B, C16orf86, UCKL1 and ARFGAP2) and three were inversely correlated (ZNF613, UTP6 and LEO1) with insulin sensitivity (Table 1).
Technical validation of the microarray data using real-time quantitative PCR (QPCR)
To technically validate the microarray findings, QPCR was used to measure the mRNA expression of FBXW5 and SIRT2 in human skeletal muscle from study A. Significant correlation between the microarray and QPCR experiments was observed for FBXW5 (r = 0.70, P < 0.001) and SIRT2 (r = 0.60, P < 0.001) (Fig. 2), as well as for CPT1B (r = 0.74, P < 0.001) (previously shown [18]). The QPCR-measured expressions of FBXW5 and SIRT2 were also positively correlated with each other (r = 0.81, P < 0.001) and with the QPCR expression value of CPT1B (Table 2).
Correlation of the QPCR expression of the replicated genes FBXW5, SIRT2 and CPT1B with the expression of key metabolic genes, fiber type and mitochondrial DNA content in skeletal muscle from study A participants, and with in vitro glucose uptake in human myotube cells
The expression of FBXW5, SIRT2 and CPT1B was positively correlated with malonyl-CoA decarboxylase (MLYCD) and fatty acid binding protein 3 (FABP3), key genes involved in the transport and mitochondrial uptake and oxidation of fatty acids in muscle, and with estrogen related receptor alpha (ESRRA) and PPARGC1A (also known as PGC1α) (Table 2), i.e. with genes playing central roles in regulating mitochondrial biogenesis and oxidative phosphorylation in muscle [42]. Also, the expression of FBXW5, SIRT2 and CPT1B was positively correlated with the percentage of type I fibers and inversely correlated with the percentage of type IIB fibers in skeletal muscle, and the expression of SIRT2 and CPT1B was also positively correlated with the amount of mitochondrial DNA (Table 3).
Discussion
The objective of this study was to identify genes whose expression levels are correlated with insulin sensitivity in human skeletal muscle. Genes involved in fatty acid metabolism (CPT1B and SIRT2) and in autophagy and mTOR signaling (FBXW5, TSC complex subunit 2 (TSC2) and unc-51 like autophagy activating kinase 1 (ULK1)) were found to be associated with insulin sensitivity and related traits (muscle fiber type distribution and mitochondrial number).
We replicated the findings for 10 genes from study A in study B using Agilent oligonucleotide microarrays, which consist of 60-mer probes compared to the short 25-mer probes utilized by Affymetrix. The expressions of SIRT2, FBXW5, RAB11FIP5, CPT1B, C16orf86, UCKL1 and ARFGAP2 were positively, whereas the expressions of ZNF613, UTP6 and LEO1 were inversely, correlated with insulin sensitivity as assessed by 1/HOMA-IR in both studies.
Among the replicated genes positively correlated with insulin sensitivity was CPT1B. CPT1B regulates the transport of long-chain fatty acyl-CoAs from the cytoplasm into the mitochondria, a key regulatory step in lipid β-oxidation. There is strong evidence that β-oxidation plays a crucial role in the development of IR, where inhibition of Cpt1b induces [43] and overexpression of Cpt1b ameliorates [44] IR in rats. Also, a common haplotype of CPT1B has been associated with the metabolic syndrome in male participants [45]. The Krüppel-like transcription factor 5 (KLF5), together with C/EBP-β and PPARδ, regulates the expression of CPT1B and UCP2 (also positively correlated with insulin sensitivity (Supplementary Table S3)) in skeletal muscle [46]. Moreover, the expression of Cpt1b and Ucp2 in skeletal muscle is up-regulated in the heterozygous Klf5-knockout mouse, which is resistant to high fat-induced obesity and glucose intolerance. The skeletal muscle expression of CPT1B in humans is increased after treatment with a PPARδ agonist [47], and this agonist is also shown to increase muscle mitochondrial biogenesis and improve glucose homeostasis, the latter suggested to be mediated by enhanced fatty acid catabolism in muscle [48]. It is likely that the beneficial effect of the PPARδ agonist is partly due to induction of CPT1B in skeletal muscle. Other genes coupled to lipid metabolism whose expression positively correlated with insulin sensitivity include PNPLA2 (ATGL) and SLC27A1 (long-chain fatty acid transport protein 1; FATP-1). Although no correlation between insulin sensitivity and muscle ATGL expression has previously been reported, ATGL mRNA is shown to be strongly coupled to mRNA levels of CPT1B in human muscle [49]. Atgl, Cpt1b and Slc27a1 are highly expressed in insulin-responsive oxidative type I fibers, and insulin-stimulated fatty acid uptake is largely dependent on Slc27a1 in rodent muscle [50].
Taken together, data presented here are in-line with and support previous findings that skeletal muscle lipid metabolism, and lipid β-oxidation in particular, plays an important role in the development of IR.
Another replicated gene in this study positively correlated with insulin sensitivity was SIRT2, a predominantly cytoplasmic deacetylase expressed in a wide range of metabolically relevant tissues. Increasing evidence suggests that the expression of SIRT2 is modulated in response to energy availability, being induced during low-energy status [51]. Conversely, dietary obesity and associated pathologies, e.g. IR, are linked to the capacity to suppress β-oxidation in visceral adipocytes, in part through transcriptional repression of SIRT2 with negative effects on the SIRT2-PGC1α regulatory axis [52]. SIRT2 is also described as a novel AKT interactor, critical for AKT activation by insulin, and the potential usefulness of SIRT2 activators in the treatment of insulin-resistant metabolic disorders has been discussed [53]. Unlike the well-documented effects of SIRT1 in skeletal muscle insulin signaling [41], the role of SIRT2 in skeletal muscle is much less defined. A study using mouse C2C12 skeletal muscle cells showed that downregulation of Sirt2 in insulin resistant cells improved insulin sensitivity [54], raising the possibility that Sirt2 has tissue-specific roles regarding insulin sensitivity. The opposite findings presented here, showing a positive association between insulin sensitivity and SIRT2 gene expression in human skeletal muscle, could highlight a differential role in various metabolic conditions, or species differences. Of the enriched Gene Ontology (GO) categories of genes whose expression positively correlated with insulin sensitivity, several were related to autophagy, process utilizing autophagic mechanism and regulation of macroautophagy. Interestingly, we found the expression of FBXW5 to be positively correlated to insulin sensitivity in both study A and B. FBXW5 is part of an E3 ubiquitin ligase that regulates TSC2 protein stability and complex turnover [55], with indirect effects on mTOR. Moreover, a variant (rs546064512) in FBXW5 is shown to be associated with total cholesterol (odds ratio = 0.56 and P = 8.93 × 10−4) in 12,940 individuals of multiple ancestries ([56] and The T2D Knowledge Portal: http://www.type2diabetesgenetics.org/). In the fed state insulin signaling activates mTOR, whereas in the fasted state AMPK has the opposite effect, leading to inactivation of mTOR and activation of autophagy. ULK1 negatively regulates and is negatively regulated by mTOR, making mTOR a major convergence point for the regulation of autophagy [57].
Fig. 2 Technical replication of the microarray data using QPCR. The relative expression levels of (a) FBXW5 and (b) SIRT2 were measured in study A using both microarray (x-axis) and QPCR (y-axis). Data were analyzed with Spearman's rank correlation coefficient test (n = 29). For illustration, gene expression data are shown only for the GC-content robust multi-array average (GC-RMA) method.
Table 2 Correlation between the gene expression of SIRT2, FBXW5 and CPT1B analyzed with quantitative PCR (QPCR) and the QPCR expression values of SIRT2, FBXW5, CPT1B, key genes involved in transport and mitochondrial uptake and oxidation of fatty acids in skeletal muscle (FABP3 and MLYCD), and genes with central roles in regulating mitochondrial biogenesis and oxidative phosphorylation in muscle (PPARGC1A and ESRRA). Significant correlation was determined using Spearman's rank correlation coefficient test. Abbreviations: SIRT2, sirtuin 2; FBXW5, F-box and WD repeat domain containing 5; CPT1B, carnitine palmitoyltransferase 1B.
ULK1 is also a key regulator of mitophagy, and its phosphorylation by AMPK is required for mitochondrial homeostasis and cell survival during starvation [58]. The large number of autophagy-related genes positively correlating with insulin sensitivity might result from the fasted state of the subjects (10-12 h) and could be a reflection of metabolic flexibility, i.e., the ability to switch from high rates of fatty acid uptake and lipid oxidation to suppression of lipid metabolism with a paralleled increase in glucose uptake, storage and oxidation in response to, e.g., feeding or exercise. Impaired autophagy has been implicated in ageing and IR, and induction of autophagy is required for muscle glucose homeostasis mediated by exercise in mice [59]. A crucial link between autophagy and insulin sensitivity in humans has been suggested in a study where skeletal muscle from severely insulin resistant subjects with T2D showed highly altered gene expression related to mitochondrial dysfunction and abnormal morphology, and this was associated with decreased expression of autophagy-related genes [15].
Future studies are required to determine the potential role of the remaining replicated genes in the regulation of insulin sensitivity in human skeletal muscle, although it should be mentioned that RAB11FIP5, an AS160-and Rab-binding protein, is suggested to coordinate the protein kinase signaling and trafficking machinery required for insulin-stimulated glucose uptake in adipocytes [60]. Also, RAB11FIP5 is an effector protein of RAB11, a GTPase that regulates endosomal trafficking shown to be required for autophagosome formation [61], suggesting yet another link between the regulation of insulin sensitivity and autophagy in skeletal muscle.
The positive correlation of CPT1B, SIRT2 and FBXW5 expression with insulin sensitivity in this study is supported by the observed positive correlation of these genes with the expression of key genes promoting the phenotype of an insulin sensitive myocyte, e.g. transport and mitochondrial uptake and oxidation of fatty acids and positive regulation of mitochondrial biogenesis and oxidative phosphorylation (Tables 2 and 3). For SIRT2 and FBXW5, this was also supported by the correlation of these genes with glucose uptake measurements in human myotube cells (Fig. 3).
There are several issues to consider in the interpretation of the results. In both studies, we used 1/HOMA-IR as a surrogate measure of insulin sensitivity. The HOMA-IR index is based upon fasting measurements of insulin and glucose and thus more reflects variation in hepatic than in peripheral insulin sensitivity [62].
Although several studies have shown significant correlations between HOMA-IR and insulin-stimulated glucose uptake as measured by a euglycemic hyperinsulinemic clamp, this correlation cannot be expected to be very strong given the different physiological conditions they reflect [22,63]. On the other hand, biopsies in both studies were obtained in the fasting state and should thus correspond more closely to the conditions measured by 1/HOMA-IR.
Conclusions
In conclusion, we present a catalog of the muscle expression of 180 genes correlated with insulin sensitivity. These data provide compelling evidence that activation of genes involved in lipid metabolism, including SIRT2, and of genes involved in the regulation of autophagy and mTOR signaling, e.g. FBXW5, is associated with increased insulin sensitivity in human skeletal muscle. Determining whether these genes are causally related to insulin sensitivity in humans should be the aim of future studies.
Additional file 1: Supplementary Table S1. Clinical and biochemical characteristics of male subjects from study A. Supplementary Table S2. Clinical and biochemical characteristics of male subjects from study B. Supplementary Table S3. Genes of which expression levels in skeletal muscle were positively correlated with insulin sensitivity (1/HOMA-IR) in study A. Supplementary Table S4. Genes of which expression levels in skeletal muscle were inversely correlated with insulin sensitivity (1/ HOMA-IR) in study A. Supplementary Table S5. Significantly enriched Gene Ontology (GO) categories in the 70 genes whose expression level in skeletal muscle positively correlated with insulin sensitivity in Study A, analyzed with the WEB-based GEne SeT AnaLysis Toolkit (WebGestalt). Supplementary Table S6. Significantly enriched Wikipathways, in the 70 genes whose expression level in skeletal muscle positively correlated with insulin sensitivity in Study A, analyzed with the WEB-based GEne SeT AnaLysis Toolkit (WebGestalt). Supplementary Table S7. Significantly enriched Gene Ontology (GO) categories in the 110 genes whose expression level in skeletal muscle was inversely correlated with insulin sensitivity in Study A, analyzed with the WEB-based GEne SeT AnaLysis Toolkit (WebGestalt).
Availability of data and materials
The microarray data from both studies have been deposited in the National Center for Biotechnology Information's Gene Expression Omnibus (GEO) database (http://www.ncbi.nlm.nih.gov/geo); the series accession number is GSE161721.
Ethics approval and consent to participate
Study A was approved by the local ethics committee at Lund University, and written, informed consent was obtained from all participants. Study B was approved by the Copenhagen and Frederiksberg Regional Ethics Committee (ref. no. 01-262546) and informed written consent was obtained from all of the subjects before participation. Both studies were conducted according to the principles of the Helsinki Declaration. The Muscle SATellite cell (MSAT) study was approved by the local ethics committee at Lund University, and written, informed consent was obtained from all participants (ethical approval: Dnr 2015/593).
Consent for publication
Not applicable.
|
v3-fos-license
|
2021-09-24T15:39:58.500Z
|
2021-08-30T00:00:00.000
|
239704350
|
{
"extfieldsofstudy": [
"Medicine",
"Geography"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://malariajournal.biomedcentral.com/track/pdf/10.1186/s12936-022-04094-w",
"pdf_hash": "ad20295460b1813bd9c8829df0058eebdd62e2d8",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44271",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"sha1": "223a50998abd0566036ee88a4423b4569f78e415",
"year": 2022
}
|
pes2o/s2orc
|
Implementation of community case management of malaria in malaria endemic counties of western Kenya: are community health volunteers up to the task in diagnosing malaria?
Background Community case management of malaria (CCMm) is an equity-focused strategy that complements and extends the reach of health services by providing timely and effective management of malaria to populations with limited access to facility-based healthcare. In Kenya, CCMm involves the use of malaria rapid diagnostic tests (RDT) and treatment of confirmed uncomplicated malaria cases with artemether lumefantrine (AL) by community health volunteers (CHVs). The test positivity rate (TPR) from CCMm reports collected by the Ministry of Health in 2018 was two-fold compared to facility-based reports for the same period. This necessitated the need to evaluate the performance of CHVs in conducting malaria RDTs. Methods The study was conducted in four counties within the malaria-endemic lake zone in Kenya with a malaria prevalence in 2018 of 27%; the national prevalence of malaria was 8%. Multi-stage cluster sampling and random selection were used. Results from 200 malaria RDTs performed by CHVs were compared with test results obtained by experienced medical laboratory technicians (MLT) performing the same test under the same conditions. Blood slides prepared by the MLTs were examined microscopically as a back-up check of the results. A Kappa score was calculated to assess level of agreement. Sensitivity, specificity, and positive and negative predictive values were calculated to determine diagnostic accuracy. Results The median age of CHVs was 46 (IQR: 38, 52) with a range (26–73) years. Females were 72% of the CHVs. Test positivity rates were 42% and 41% for MLTs and CHVs respectively. The kappa score was 0.89, indicating an almost perfect agreement in RDT results between CHVs and MLTs. The overall sensitivity and specificity between the CHVs and MLTs were 95.0% (95% CI 87.7, 98.6) and 94.0% (95% CI 88.0, 97.5), respectively. Conclusion Engaging CHVs to diagnose malaria cases under the CCMm strategy yielded results which compared well with the results of qualified experienced laboratory personnel. CHVs can reliably continue to offer malaria diagnosis using RDTs in the community setting.
Background
Early detection and treatment of malaria contributes to reduced complications and deaths [1]. A mixed approach to providing health services at both health facility and community levels is appropriate where only about 70% of people in sub-Saharan Africa use public facilities as the first point of care when a family member has fever [1].
Globally, given the limited human resources in the health sector, the community-based approach has been promoted as a cost-effective and pro-poor intervention to improve the accessibility of healthcare [1][2][3]. This underscores the importance of community health volunteers (CHVs) as a key element in the communitybased approach to most populations in low-and middle-income countries [4]. CHVs are generally defined as non-professional, lay health workers who work in the communities where they reside, and who are equipped with training and incentives to provide promotional, preventive and basic curative healthcare services to community members [5,6].
In Kenya, CHVs are recruited at community meetings (barazas) called by area leaders or community health committees (CHC), using set criteria [7]. CHVs are organized into community units (CUs) supervised by community health extension workers (CHEWs). Each CHV provides services to an average of 100 households, linking them to the formal health sector; about 5,000 community members are served by CHVs within each community unit [8].
Community case management of malaria (CCMm) is an equity-focused strategy that complements and extends the reach of health services by providing timely and effective diagnosis and treatment to populations with limited access to facility-based healthcare [9]. In Kenya, the CCMm strategy utilizes CHVs who have received training on performing and interpreting malaria rapid diagnostic tests (RDT), and prescribe artemether lumefantrine (AL) to confirmed, uncomplicated malaria cases. CHVs refer pregnant women with suspected malaria, suspected severe malaria cases, patients with negative malaria test results, and patients with persistent symptoms to health facilities for further management.
In western Kenya, a malaria-endemic zone, CCMm has been adapted increasingly since 2012 as an approach to increase timely access to malaria care and treatment. About 7,420 CHVs have been trained and equipped with health commodities and tools to promptly diagnose and treat uncomplicated malaria cases at community level and help prevent progression to severe life-threatening disease. CHVs receive 3 days' training with theory and practical sessions, and thereafter receive on-the-job training during supervision and monthly meetings by CHEWs.
CHVs are also trained to identify severe malaria cases for early referral and thus help to reduce malaria deaths. They hold monthly meetings to discuss their reports and progress, and learn from each other under the leadership of the CHEW. The CHEW also performs on-site support supervision to ensure CHVs are providing quality services to the community, including the performance of RDTs. While on site, CHEWs observe how CHVs perform and interpret RDTs and take corrective action as required. Any such corrective actions and good performances are discussed during monthly meetings to ensure the CHVs are informed, so as to promote improved performance.
CHVs are part of the first level of national malaria monitoring, and conduct epidemiological surveillance of malaria cases at community level. CHVs submit monthly reports to the Kenya Health Information System (KHIS) and thus contribute to the national malaria control strategy with up-to-date information [10].
Symptom-based malaria diagnosis is inaccurate and contributes to poor management of febrile illness, overtreatment of malaria, and may promote drug resistance to current anti-malarial drugs [11]. The World Health Organization (WHO) recommends testing of all suspected malaria cases before treatment as best practice in malaria case management [12]. The 2019-2023 Kenya Malaria Strategy emphasizes this recommendation with testing in healthcare facilities using microscopy and malaria RDT [13]. While microscopy is the diagnostic test of choice in health facilities with laboratories, RDTs are used in facilities where microscopy is unavailable due to several factors, such as lack of microscopes, trained laboratory personnel or electricity. Testing of malaria in a community setting is entirely by RDT with the intention of reducing the practice of presumptive malaria treatment and irrational use of anti-malarial treatment drugs.
While microscopy detects the presence of malaria parasites in blood by direct observation, RDT detects the presence of circulating malaria parasite antigens. The most commonly used RDT detects Plasmodium falciparum-specific histidine-rich protein 2 (PfHRP2), while others detect lactate dehydrogenase (LDH) and aldolase. RDT results may remain positive for a variable amount of time (5-61 days) following effective treatment with anti-malarial drugs, depending on the type of RDT used, age and treatment, thereby affecting their specificity [14]. Sensitivity is associated with the inherent performance of the test, as well as quality issues related to handling of test kits and performance of the testing procedure. Although CHVs undergo training on the use of RDT, storage and transport conditions and human error may affect the validity of the test results. Procedural factors include the quality of the blood drop as well as the time taken by the operator to read the test results [15].
The National Malaria Control Programme uses routine surveillance data reported in the KHIS to produce a quarterly Malaria Surveillance Bulletin. In the July-September 2018 issue, the all-age malaria test positivity rate (TPR) was 24% with the TPR in the malaria-endemic lake zone being comparatively high at 35% [16]. However, these data are not disaggregated to facilities or community level. From the CCMm routine data reported for the same period, the average TPR for malaria RDTs performed by CHVs in the malaria-endemic lake zone was almost two-fold at 67%. There was therefore a need to evaluate the performance of CHVs in conducting RDTs and determine the accuracy of their reports in comparison with tests performed by qualified laboratory personnel.
Methods
The study was conducted in Kakamega, Vihiga, Siaya and Migori Counties, in the malaria-endemic lake zone in Kenya, between September and October 2020. The study population was selected from CHVs conducting CCMm in these counties. The climate in this area is mainly tropical, with variations due to altitude, and rainfall all year round with warm temperatures that influence mosquito populations and malaria transmission. The main sources of livelihood are agriculture, small-scale businesses and fishing. There are about 9,000 CHVs covering about 30% of the population and 385 public health facilities in the four counties [17]. This was a cross-sectional survey to evaluate the performance of CHVs in testing for malaria using RDTs. A quantitative sub-set of the study data was used where 200 CHVs were observed conducting RDT on 200 patients. These results were then compared with results from RDTs performed by experienced medical laboratory technicians (MLTs) using a second sample of capillary blood from the same patient. Blood films were also prepared and examined in the laboratory setting by Level 1 microscopists (as defined by WHO competency levels) [18] as back-up verification as required. In cases with discordant results, the RDT results of the MLTs were used to manage patients.
Multi-stage cluster sampling was used to select the study sample, with the sampling frame as the eight malaria-endemic counties, sub-counties and CUs where CCMm is practised. The first stage involved random sampling of four counties based on their predominantly cultural backgrounds. The sample was then proportionately apportioned to the four counties based on the number of CUs implementing CCMm. The second stage was a random selection of sub-counties from each randomly selected county, followed by a random selection of CUs from the selected sub-counties. Consecutive sampling was then used to identify five CHVs who had encountered a suspected malaria case from a sampled CU, that is, from a sampled CU, only five CHVs were observed.
Research assistants (RAs) were trained on study procedures before starting data collection. CHVs tested patients presenting with symptoms and signs of malaria; consenting patients with suspected uncomplicated malaria were included in the study. Patients with suspected severe or complicated malaria, pregnant women and children under one year old were excluded. The RDT brand used was CareStart Malaria ™ (AccessBio, USA), obtained from government central stores using the usual procedures; storage and handling strictly met the manufacturer's guidelines.
MLTs independently performed an RDT (same brand and batch number as used by the CHV) on the same participant by performing a second prick to collect capillary blood. MLTs also prepared thick and thin blood films (on the same slide) for back-up microscopy. The CHVs and MLTs were blinded to the results of each other. Thin films were fixed in methanol and air-dried, then both thick and thin films were stained with 10% Giemsa solution for 15 min. Staining was done within 12 h to avoid auto-fixation of films. Malaria parasites were recorded as the number of asexual parasites counted per 500 white blood cells in the thick film. If no parasite was found in at least 100 fields at 1000 magnification in the thick film, the result was recorded as no malaria parasites seen. All thick and thin blood films were read by two WHO-certified, Level 1 microscopists and any discrepancy resolved by a third Level 1 microscopist. All microscopists were blinded to the results of the malaria RDTs.
For ease of tracking and analysis, the study participants were given unique identifier numbers containing the CU code, CHV code and study subject number. The RDT strip and blood slides were labelled with the same unique identifier numbers. A log with a record of the RDT results (from the CHV and MLTs) and blood slides were maintained using the identifier number; only the study coordinator had access to this log to ensure the blinding of test results.
Data were captured using electronic Open Data Kit (ODK). The log data kept by the RAs were entered at the end of each day into ODK using a tablet. The data were transmitted to a server hosted by Amref Health Africa in real-time. At the end of each study day, data transmitted to the server were reviewed by the study coordinator and any quality issues flagged for immediate correction. The data were stored on a password-protected computer with back-up on a password-protected external hard drive only accessible to authorized study staff. Quantitative data were downloaded from the server into Excel (Microsoft, USA) and transferred to Stata version 15 (StataCorp, College Station, TX, USA) for statistical analysis.
Cohen's kappa statistic for inter-rater reliability testing was calculated to establish the level of agreement between the CHVs and qualified laboratory staff. The kappa statistic (or kappa coefficient) was used to assess the strength of the agreement. Interpretation of kappa was as follows: < 0.20 slight agreement, 0.21-0.40 fair agreement, 0.41-0.60 moderate agreement, 0.61-0.80 substantial agreement, and 0.81-0.99 almost perfect agreement. Inter-reader agreement for facilities versus reference values was expressed as kappa (κ) with a 95% confidence interval (CI) using the 'kapci' function in Stata [19].
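For illustration only, the sketch below computes Cohen's kappa from a 2 × 2 agreement table of two raters (e.g., CHV vs MLT); it is not the Stata 'kapci' routine used in the study, and the counts shown are made up rather than the study's data.

def cohens_kappa(a_pos_b_pos, a_pos_b_neg, a_neg_b_pos, a_neg_b_neg):
    """Cohen's kappa from a 2x2 agreement table of two raters (A and B)."""
    n = a_pos_b_pos + a_pos_b_neg + a_neg_b_pos + a_neg_b_neg
    observed = (a_pos_b_pos + a_neg_b_neg) / n
    p_pos = ((a_pos_b_pos + a_pos_b_neg) / n) * ((a_pos_b_pos + a_neg_b_pos) / n)
    p_neg = ((a_neg_b_pos + a_neg_b_neg) / n) * ((a_pos_b_neg + a_neg_b_neg) / n)
    expected = p_pos + p_neg
    return (observed - expected) / (1 - expected)

# Hypothetical counts: both positive, A+/B-, A-/B+, both negative
print(round(cohens_kappa(40, 3, 4, 53), 2))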
Sensitivity and specificity were calculated as the proportion of RDT-positive and -negative test results obtained by the CHVs against the results of the MLTs. Positive and negative predictive values were calculated as the proportion of true positive results among all positive samples and the proportion of true negative results among all negative samples, respectively.
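Similarly, a short sketch of the diagnostic-accuracy metrics, taking the MLT result as the reference standard; again the counts are illustrative, not the study's table.

def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from true/false positives/negatives,
    where 'true' is defined against the reference (MLT) result."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

print(diagnostic_accuracy(tp=40, fp=4, fn=3, tn=53))  # hypothetical counts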
Results
A total of 200 CHV participants were enrolled into the study, distributed proportionate to the size of each county, with 42% from Kakamega, 27% from Migori, 18.5% from Siaya, and 12.5% from Vihiga Counties. The socio-demographic characteristics of the study participants are shown in Table 1. The CHVs' median age was 45 (IQR: 43-47) years, age range 26-75 years.
The kappa score was 0.89 (95% CI 0.82, 0.95), indicating an almost perfect agreement (92.5%) in RDT results between CHVs and MLTs. The standard error was 0.07 and Prob > Z was 0.000.
Discussion
Many malaria-endemic areas of the world lack sufficient capacity and resources for accurate diagnosis, and reliance on clinical symptoms and signs alone is an inadequate and imprecise indicator of malaria disease. Use of CHVs has been shown to improve the acceptance of community interventions to address malaria diagnosis and treatment, as CHVs are well respected within their communities of residence [20]. The age and gender characteristics of the participating CHVs were similar to those in a previous study evaluating RDT use by community health workers [21].
This study demonstrated consistently similar results between RDTs conducted by CHVs and MLTs, and both results compared well with microscopy. A study comparing RDTs and microscopy for malaria diagnosis found 64% and 59% positivity for RDTs and microscopy, respectively [22]. The difference could be because RDTs detect malaria antigens still in circulation after recovery from the disease, giving false positive results, compared to microscopy, which detects parasite forms.
The kappa scores, sensitivity and specificity in this study were consistent with those of a previous study [21], which demonstrated that CHVs generally adhered to testing procedures, could safely and accurately perform RDTs, and interpreted test results correctly. While the findings contributed to the body of evidence that CHVs perform RDTs at an acceptable level [23], their skills were observed to improve with increasing years of experience. These findings could be a result of routine support supervision and a good understanding of CCMm, leading to its effective implementation.
Inter-rater reliability is important as it represents the extent to which the collected data correctly represent the variables measured. The kappa statistic was used to test inter-rater reliability between the CHVs and MLTs. As guided by [19], the study kappa score was above 0.80, the minimum generally accepted level of inter-rater agreement, indicating an almost perfect agreement between CHVs and MLTs.
CHVs achieved very good sensitivity, specificity, positive predictive values and negative predictive values. These results are similar to those reported in most studies although they used microscopy as the gold standard for diagnosis [23].
The difference in malaria positivity rates as reported in the KHIS needs to be studied further to understand the root cause of the difference. The national malaria surveillance data may need to be disaggregated by CHVs and health facilities at different levels, with follow-up of the different sources of data for verification.
Conclusion
The ability of CHVs to diagnose malaria cases under the CCMm project compared well with the findings of qualified, experienced laboratory staff as evidenced by comparable sensitivity, specificity and kappa scores. CCMm should continue to scale up its valuable and important role of first-line diagnosis and treatment of uncomplicated malaria. Alternative possible causes of differing malaria positivity rates between those from CHVs and general national data need to be explored.
Concatenation theorems for anti-Gowers-uniform functions and Host-Kra characteristic factors
We establish a number of "concatenation theorems" that assert, roughly speaking, that if a function exhibits "polynomial" (or "Gowers anti-uniform", "uniformly almost periodic", or "nilsequence") behaviour in two different directions separately, then it also exhibits the same behavior (but at higher degree) in both directions jointly. Among other things, this allows one to control averaged local Gowers uniformity norms by global Gowers uniformity norms. In a sequel to this paper, we will apply such control to obtain asymptotics for "polynomial progressions" $n+P_1(r),\dots,n+P_k(r)$ in various sets of integers, such as the prime numbers.
Concatenation of polynomiality
Suppose P : Z 2 → R is a function with the property that n → P(n, m) is an (affine-)linear function of n for each m, and m → P(n, m) is an (affine-)linear function of m for each n. Then it is easy to see that P is of the form P(n, m) = αnm + β n + γm + δ for some coefficients α, β , γ, δ ∈ R. In particular, (n, m) → P(n, m) is a polynomial of degree at most 2.
The above phenomenon generalises to higher degree polynomials. Let us make the following definition:

Definition 1.1 (Polynomials). Let P : G → K be a function from one additive group G = (G, +) to another K = (K, +). For any subgroup H of G and for any integer d, we say that P is a polynomial of degree < d along H according to the following recursive definition:

(i) If d ≤ 0, we say that P is of degree < d along H if and only if it is identically zero.
(ii) If d ≥ 1, we say that P is of degree < d along H if and only if, for each h ∈ H, there exists a polynomial P_h : G → K of degree < d − 1 along H such that P(x + h) = P(x) + P_h(x) for all x ∈ G.
We then have Proposition 1.2 (Concatenation theorem for polynomials). Let P : G → K be a function from one additive group G to another K, let H 1 , H 2 be subgroups of G, and let d 1 , d 2 be integers. Suppose that (i) P is a polynomial of degree <d 1 along H 1 .
(ii) P is a polynomial of degree <d 2 along H 2 .
Then P is a polynomial of degree <d 1 + d 2 − 1 along H 1 + H 2 .
The degree bound here is sharp, as can for instance be seen using the example P : Z × Z → R of the monomial P(n, m) := n d 1 −1 m d 2 −1 , with H 1 := Z × {0} and H 2 := {0} × Z.
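Both the proposition and the sharpness example can be checked numerically with iterated difference operators. The short Python sketch below implements the recursive test of Definition 1.1, restricted to finitely many difference directions and finitely many sample points (so it is only a heuristic check, not a proof), and runs it on the monomial P(n, m) = n^{d_1-1} m^{d_2-1} with d_1 = 3, d_2 = 2.

```python
from itertools import product

def diff(P, h):
    """Difference operator: (Delta_h P)(x) = P(x + h) - P(x) on Z^2."""
    return lambda n, m: P(n + h[0], m + h[1]) - P(n, m)

def is_poly_deg_lt(P, d, directions, pts):
    """Recursive test of Definition 1.1, restricted to finitely many
    difference directions and sample points (a heuristic check only)."""
    if d <= 0:
        return all(P(n, m) == 0 for (n, m) in pts)
    return all(is_poly_deg_lt(diff(P, h), d - 1, directions, pts) for h in directions)

d1, d2 = 3, 2
P = lambda n, m: n ** (d1 - 1) * m ** (d2 - 1)      # the monomial from the text
pts = list(product(range(5), repeat=2))
H1 = [(1, 0), (2, 0)]                               # sample directions in H_1 = Z x {0}
H2 = [(0, 1), (0, 3)]                               # sample directions in H_2 = {0} x Z
H12 = H1 + H2 + [(1, 1), (2, -1)]                   # sample directions in H_1 + H_2

print(is_poly_deg_lt(P, d1, H1, pts))               # True: degree < d1 along H_1
print(is_poly_deg_lt(P, d2, H2, pts))               # True: degree < d2 along H_2
print(is_poly_deg_lt(P, d1 + d2 - 1, H12, pts))     # True: degree < d1 + d2 - 1 jointly
print(is_poly_deg_lt(P, d1 + d2 - 2, H12, pts))     # False: the bound is sharp
```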
Proof. The claim is trivial if d 1 ≤ 0 or d 2 ≤ 0, so we suppose inductively that d 1 , d 2 ≥ 1 and that the claim has already been proven for smaller values of d 1 + d 2 .
Let h_1 ∈ H_1 and h_2 ∈ H_2. By (i), there is a polynomial P_{h_1} : G → K of degree < d_1 − 1 along H_1 such that P(x + h_1) = P(x) + P_{h_1}(x) for all x ∈ G. Similarly, by (ii), there is a polynomial P_{h_2} : G → K of degree < d_2 − 1 along H_2 such that P(x + h_2) = P(x) + P_{h_2}(x) for all x ∈ G. Replacing x by x + h_1 and combining with the first identity, we see that P(x + h_1 + h_2) = P(x) + P_{h_1,h_2}(x) for all x ∈ G, where P_{h_1,h_2}(x) := P_{h_1}(x) + P_{h_2}(x + h_1).
Proof. The claim is again trivial when d 1 = 0 or d 2 = 0, so we may assume inductively that d 1 , d 2 ≥ 1 and that the claim has already been proven for smaller values of d 1 + d 2 .
Concatenation of anti-uniformity
Now we turn to a more non-trivial variant of Proposition 1.2, in which the property of being polynomial in various directions is replaced by that of being anti-uniform in the sense of being almost orthogonal to Gowers uniform functions. To make this concept precise, and to work at a suitable level of generality, we need some notation. Recall that a finite multiset is the same concept as a finite set, but in which elements are allowed to appear with multiplicity. Given a non-empty finite multiset A and a function f : A → C, we define the average E_{a∈A} f(a) := (1/|A|) ∑_{a∈A} f(a), where the sum ∑_{a∈A} is also counting multiplicity. For instance, E_{a∈{1,2,2}} a = 5/3.

Definition 1.7 (G-system). Let G = (G, +) be an at most countable additive group. A G-system (X, T) is a probability space X = (X, B, µ), together with a collection T = (T^g)_{g∈G} of invertible measure-preserving maps T^g : X → X, such that T^0 is the identity and T^{g+h} = T^g T^h for all g, h ∈ G. For technical reasons we will require that the probability space X is countably generated modulo null sets (or equivalently, that the Hilbert space L^2(X) is separable). Given a measurable function f : X → C and g ∈ G, we define T^g f := f ∘ T^{−g}. We shall often abuse notation and abbreviate (X, T) as X.

Remark 1.8. As it turns out, a large part of our analysis would be valid even when G was an uncountable additive group (in particular, no amenability hypothesis on G is required); however the countable case is the one which is the most important for applications, and so we shall restrict to this case to avoid some minor technical issues involving measurability. Once the group G is restricted to be countable, the requirement that X is countably generated modulo null sets is usually harmless in applications, as one can often easily reduce to this case. In combinatorial applications, one usually works with the case when G is a finite group, and X is G with the uniform probability measure and the translation action T^g x := x + g, but for applications in ergodic theory, and also because we will eventually apply an ultraproduct construction to the combinatorial setting, it will be convenient to work with the more general setup in Definition 1.7.

Definition 1.9 (Gowers uniformity norm). Let G be an at most countable additive group, and let (X, T) be a G-system. If f ∈ L^∞(X), Q is a non-empty finite multiset and d is a positive integer, we define the Gowers uniformity (semi-)norm ‖f‖_{U^d_Q(X)} by the formula

‖f‖_{U^d_Q(X)} := ( E_{h_1,h'_1,…,h_d,h'_d ∈ Q} ∫_X Δ_{h_1,h'_1} ⋯ Δ_{h_d,h'_d} f dµ )^{1/2^d},

where Δ_{h,h'} is the nonlinear operator

Δ_{h,h'} f := (T^h f) · \overline{T^{h'} f}.

More generally, given d non-empty finite multisets Q_1, . . . , Q_d, we define the Gowers box (semi-)norm ‖f‖_{□^d_{Q_1,…,Q_d}(X)} by the formula

‖f‖_{□^d_{Q_1,…,Q_d}(X)} := ( E_{h_1,h'_1 ∈ Q_1} ⋯ E_{h_d,h'_d ∈ Q_d} ∫_X Δ_{h_1,h'_1} ⋯ Δ_{h_d,h'_d} f dµ )^{1/2^d},

so in particular ‖f‖_{U^d_Q(X)} = ‖f‖_{□^d_{Q,…,Q}(X)}. Note that the Δ_{h,h'} commute with each other, and so the ordering of the Q_i is irrelevant.
It is well known that ‖·‖_{U^d_Q(X)} and ‖·‖_{□^d_{Q_1,…,Q_d}(X)} are indeed semi-norms. We define the dual (semi-)norm

‖f‖_{U^d_Q(X)^*} := sup{ |⟨f, g⟩_{L^2(X)}| : g ∈ L^∞(X), ‖g‖_{U^d_Q(X)} ≤ 1 }

for all f ∈ L^∞(X), where ⟨f, g⟩_{L^2(X)} := ∫_X f \overline{g} dµ.
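For concreteness, here is a minimal Python sketch of the local Gowers box norm in the model case X = Z/NZ with the translation action, following the averaged-difference formula in Definition 1.9 above: average the iterated Δ_{h,h'} operators over pairs drawn from each Q_i, integrate over X, and take the 2^d-th root. This is only a toy illustration of the definition (the sign convention for the shift is immaterial here): a Fourier character saturates the norm, while a random sign pattern gives a noticeably smaller value.

```python
import numpy as np

def box_norm(f, Qs):
    """Local Gowers box norm ||f||_{Box^d_{Q_1,...,Q_d}} on Z/NZ: average the iterated
    differences Delta_{h,h'} f = (shift of f by h) * conj(shift of f by h') over pairs
    h, h' in each Q_i, take the mean over Z/NZ, then the 2^d-th root."""
    def avg(g, remaining):
        if not remaining:
            return g.mean()
        Q = remaining[0]
        return np.mean([avg(np.roll(g, -h) * np.conj(np.roll(g, -hp)), remaining[1:])
                        for h in Q for hp in Q])
    val = avg(np.asarray(f, dtype=complex), list(Qs))
    return max(val.real, 0.0) ** (1.0 / 2 ** len(Qs))

N = 256
x = np.arange(N)
Q = [(17 * j) % N for j in range(16)]        # a progression of length 16 in Z/NZ

character = np.exp(2j * np.pi * 3 * x / N)   # e(3x/N): maximally structured
random_signs = np.random.default_rng(0).choice([-1.0, 1.0], size=N)

print(box_norm(character, [Q, Q]))           # 1 (up to rounding): characters saturate U^2_Q
print(box_norm(random_signs, [Q, Q]))        # noticeably smaller: random signs are locally uniform
```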
Our first main theorem is analogous to Proposition 1.2, and is stated as follows.
Theorem 1.11 (Concatenation theorem for anti-uniformity norms). Let Q 1 , Q 2 be coset progressions of ranks r 1 , r 2 respectively in an at most countable additive group G, let (X, T ) be a G-system, let d 1 , d 2 be positive integers, and let f lie in the closed unit ball of L ∞ (X). Let c 1 , c 2 : (0, +∞) → (0, +∞) be functions such that c i (ε) → 0 as ε → 0 for i = 1, 2. We make the following hypotheses: 2 Strictly speaking, one should refer to the tuple (Q, H, r, ) as the coset progression, rather than just the multi-set Q, as one cannot define key concepts such as rank or the dilates εQ without this additional data. However, we shall abuse notation and use the multi-set Q as a metonym for the entire coset progression.
Heuristically, hypothesis (i) (resp. (ii)) is asserting that f "behaves like" a function of the form x → e 2πiP(x) for some function P : X → R/Z that "behaves like" a polynomial of degree <d 1 along Q 1 (resp. <d 2 along Q 2 ), thus justifying the analogy between Theorem 1.11 and Proposition 1.2. The various known inverse theorems for the Gowers norms [21,5,27,28,33] make this heuristic more precise in some cases; however, our proof of the above theorem does not require these (difficult) theorems (which are currently unavailable for the general Gowers box norms).
We prove Theorem 1.11 in Sections 6, 7, after translating these theorems to a nonstandard analysis setting in Section 5.1. The basic idea is to use the hypothesis (i) to approximate f by a linear combination of "dual functions" along the Q 1 direction, and then to use (ii) to approximate the arguments of those dual functions in turn by dual functions along the Q 2 direction. This gives a structure analogous to the identities (3)-(6) obeyed by the function P considered in Proposition 1.2, and one then uses an induction hypothesis to conclude. To obtain the desired approximations, one could either use structural decomposition theorems (as in [14]) or nonstandard analysis (as is used for instance in [28]). We have elected to use the latter approach, as the former approach becomes somewhat messy due to the need to keep quantitative track of a number of functions such as ε → c(ε), whereas these functions are concealed to the point of invisibility by the nonstandard approach. We give a more expanded sketch of Theorem 1.11 in Section 4 below. Remark 1.12. It may be possible to establish a version of Theorem 1.11 in which one does not shrink the coset progressions Q by a small parameter ε, so that the appearance of εQ in (9) is replaced by Q. This would give the theorem a more "combinatorial" flavor, as opposed to an "ergodic" one (if one views the limit ε → 0 as being somewhat analogous to the ergodic limit n → ∞ of averaging along a Følner sequence Φ n ). Unfortunately, our methods rely heavily on techniques such as the van der Corput inequality, which reflects the fact that Q is almost invariant with respect to translations in εQ when ε is small. As such, we do not know how to adapt our methods to remove this shrinkage of the coset progressions Q. Similarly for Theorem 1.13 below.
We also have an analogue of Proposition 1.5: Theorem 1.13 (Concatenation theorem for anti-box norms). Let d 1 , d 2 be positive integers. For any i = 1, 2 and 1 ≤ j ≤ d i , let Q i, j be a coset progression of rank r i, j in an at most countable additive group G. Let (X, T ) be a G-system, let d 1 , d 2 be positive integers, and let f lie in the unit ball of L ∞ (X). Let c 1 , c 2 : (0, +∞) → (0, +∞) be functions such that c i (ε) → 0 as ε → 0 for i = 1, 2. We make the following hypotheses: Then there exists a function c : (0, ∞) → (0, +∞) with c(ε) → 0 as ε → 0, which depends only on d 1 , d 2 , c 1 , c 2 and the r 1, j , r 2, j , such that The proof of Theorem 1.13 is similar to that of Theorem 1.11, and is given at the end of Section 7.
Concatenation of characteristic factors
Analogues of the above results can be obtained for characteristic factors of the Gowers-Host-Kra seminorms [30] in ergodic theory. To construct these factors for arbitrary abelian group actions (including uncountable ones), it is convenient to introduce the following notation (which should be viewed as a substitute for the machinery of Følner sequences that does not require amenability). Given an additive group H, we consider the set F[H] of non-empty finite multisets Q in H. We can make F[H] a directed set by declaring Q 1 ≤ Q 2 if one has Q 2 = Q 1 + R for some non-empty finite multiset R; note that any two Q 1 , Q 2 have a common upper bound Q 1 + Q 2 . One can then define convergence along nets in the usual fashion: given a sequence of elements x Q of a Hausdorff topological space indexed by the non-empty finite multisets Q in H, we write lim Q→H x Q = x if for every neighbourhood U of x, there is a finite non-empty multiset Q 0 in H such that x Q ∈ U for all Q ≥ Q 0 . Similarly one can define joint limits lim (Q 1 ,...,Q k )→(H 1 ,...,H k ) x Q 1 ,...,Q k , where each Q i ranges over finite non-empty multisets in H i , using the product directed set F[H 1 ] × · · · × F[H d ]. Thus for instance lim (Q 1 ,Q 2 )→(H 1 ,H 2 ) x Q 1 ,Q 2 = x if, for every neighbourhood U of x, there exist Q 1,0 , Q 2,0 in H 1 , H 2 respectively such that x Q 1 ,Q 2 ∈ U whenever Q 1 ≥ Q 1,0 and Q 2 ≥ Q 2,0 . If the x Q or x Q 1 ,...,Q k take values in R, we can also define limit superior and limit inferior in the usual fashion. Remark 1.14 (Amenable case). If G is an amenable (discrete) countable additive group with a Følner sequence Φ n , and a g is a bounded sequence of complex numbers indexed by g ∈ G, then we have the relationship lim between traditional averages and the averages defined above, whenever the right-hand side limit exists. Indeed, for any ε > 0, we see from the Følner property that for any given Q ∈ F[G], that E g∈Φ n a g and E g∈Φ n +Q a g differ by at most ε if n is large enough; while from the convergence of the right-hand side limit we see that E g∈Φ n +Q a g and E g∈Q a g differ by at most ε for all n if Q is large enough, and the claim follows. A similar result holds for joint limits, namely that lim n 1 ,...,n d →∞ whenever Φ n,i is a Følner sequence for H i and the a h 1 ,...,h d are a bounded sequence of complex numbers.
Given a G-system (X, T ), a natural number d, a subgroup H of G, and a function f ∈ L ∞ (X), we define the Gowers-Host-Kra seminorm we will show in Theorem 2.1 below that this limit exists, and agrees with the more usual definition of the Gowers-Host-Kra seminorm from [30]; in fact the definition given here even extends to the case when G and H are uncountable abelian groups. More generally, given subgroups H 1 , . . . , H d of G, we define the Gowers-Host-Kra box seminorm again, the existence of this limit will be shown in Theorem 2.1 below. Define a factor of a G-system (X, T ) with X = (X, B, µ) to be a G-system (Y, T ) with Y = (Y, Y, ν) together with a measurable factor map π : X → Y intertwining the group actions (thus T g • π = π • T g for all g ∈ G) such that ν is equal to the pushforward π * µ of µ, thus µ(π −1 (E)) = ν(E) for all E ∈ Y. By abuse of notation we use T to denote the action on the factor Y as well as on the original space X. Note that L ∞ (Y) can be viewed as a (G-invariant) subalgebra of L ∞ (X), and similarly L 2 (Y) is a (G-invariant) closed subspace of the Hilbert space L 2 (X); if f ∈ L 2 (X), we write E( f |Y) for the orthogonal projection onto L 2 (Y). We also call X an extension of Y. Note that any subalgebra Y of B can be viewed as a factor of X by taking Y = X and ν = µ Y . For instance, given a subgroup H of G, the invariant σ -algebra B H consisting of sets E ∈ B such that T h E = E up to null sets for any h ∈ H generates a factor X H of X, and so we can meaningfully define the conditional expectation E( f |X H ) for any f ∈ L 2 (X).
Two factors Y, Y of X are said to be equivalent if the algebras L ∞ (Y) and L ∞ (Y ) agree (using the usual convention of identifying functions in L ∞ that agree almost everywhere), in which case we write Y ≡ Y . We partially order the factors of X up to equivalence by declaring This gives the factors of X up to equivalence the structure of a lattice: the meet Y ∧ Y of two factors is given (up to equivalence) by setting L ∞ (Y ∧ Y ) = L ∞ (Y) ∩ L ∞ (Y ), and the join Y ∨ Y of two factors is given by setting L ∞ (Y ∨ Y ) to be the von Neumann algebra generated by L ∞ (Y) ∪ L ∞ (Y ) (i.e., the smallest von Neumann subalgebra of L ∞ (X) containing L ∞ (Y) ∪ L ∞ (Y )).
We say that a G-system X is H-ergodic for some subgroup H of G if the invariant factor X H is trivial (equivalent to a point). Note that if a system is G-ergodic, it need not be H-ergodic for subgroups H of G. Because of this, it will be important to not assume H-ergodicity for many of the results and arguments below, which will force us to supply new proofs of some existing results in the literature that were specialised to the ergodic case.
For the seminorm U d H (X), it is known 3 (see [30] or Theorem 2.4 below) that there exists a characteristic factor (Z <d H (X), T ) = (Z <d H , Z <d H , ν, T ) of (X, T ), unique up to equivalence, with the property that for all f ∈ L ∞ (X); for instance, Z <1 H (X) can be shown to be equivalent to the invariant factor X H , and the factors Z <d H (X) are non-decreasing in d. In the case when H is isomorphic to the integers Z, and assuming for simplicity that the system X is H-ergodic, the characteristic factor was studied by Host and Kra [30], who obtained the important result that the characteristic factor Z <d H (X) was isomorphic to a limit of d − 1-step nilsystems (see also [37] for a related result regarding characteristic factors of multiple averages); the analogous result for actions of infinite-dimensional vector spaces F ω := n F n was obtained in [5]. More generally, given subgroups H 1 , . . . , H d , there is a unique (up to equivalence) characteristic factor Z <d for all f ∈ L ∞ (X); this was essentially first observed in [29], and we establish it in Theorem 2.4. However, a satisfactory structural description of the factors Z <d H 1 ,...,H d (X) (in the spirit of [30]) is not yet available; see [1] for some recent work in this direction.
We can now state the ergodic theory analogues of Theorems 1.11 and 1.13. In these results G is always understood to be an at most countable additive group. Because our arguments will require a Følner sequence of coset progressions of bounded rank, we will also have to temporarily make a further technical restriction on G, namely that G be the sum of a finitely generated group and a profinite group (or equivalently, a group which becomes finitely generated after quotienting by a profinite subgroup). This class of groups includes the important examples of lattices Z d and vector spaces F ω = n F n over finite fields, but excludes the infinitely generated torsion-free group Z ω = n Z n . Observe that this class of groups is also closed under quotients and taking subgroups. Theorem 1.15 (Concatenation theorem for characteristic factors). Suppose that G is the sum of a finitely generated group and a profinite group. Let X be a G-system, let H 1 , H 2 be subgroups of G, and let d 1 , d 2 be positive integers. Then Equivalently, using the lattice structure on factors discussed previously, or equivalently, In Section 2, we deduce these results from the corresponding combinatorial results in Theorems 1.11, 1.13. and where α ∈ R/Z is a fixed irrational number. These shifts commute and generate a Z 2 -system T (n,m) (x, y, z) := (x + nα, y + mα, z + ny + mx + nmα) (compare with (1)). The shift T (1,0) does not act ergodically on X, but one can perform an ergodic decomposition into ergodic components R/Z × {y} × R/Z for almost every y ∈ R/Z, with T (1,0) acting as a circle shift (x, z) → (x + α, z + y) on each such component. From this one can easily verify that On the other hand, Z <2 Z 2 (X) = X, as there exist functions in L ∞ (Z) whose U 2 Z 2 (X) norm vanish (for instance the function (x, y, z) → e 2πiz ). Nevertheless, Corollary 1.15 concludes that Z <3 Z 2 (X) = X (roughly speaking, this means that X exhibits "quadratic" or "2-step" behaviour as a Z 2 -system, despite only exhibiting "linear" or "1-step" behaviour as a Z × {0}-system or {0} × Z-system).
Remark 1.18. In the case that H is an infinite cyclic group acting ergodically, Host and Kra [30] show that the characteristic factor Z <d H (X) = Z <d H,...,H (X) is an inverse limit of d − 1-step nilsystems. If H does not act ergodically, then (assuming some mild regularity on the underlying measure space X) one has a similar characterization of Z <d H (X) on each ergodic component. The arguments in [30] were extended to finitely generated groups H acting ergodically in [19]; see also [5] for an analogous result in the case of actions of infinite-dimensional vector spaces F ω over a finite field. Theorem 1.15 can then be interpreted as an assertion that if X acts as an inverse limit of nilsystems of step d 1 − 1 along the components of one group action H 1 , and as an inverse limit of nilsystems of step d 2 − 1 along the components of another (commuting) group action H 2 , then X is an inverse limit of nilsystems of step at most d 1 + d 2 − 2 along the components of the joint H 1 + H 2 action. It seems of interest to obtain a more direct proof of this assertion. A related question would be to establish a nilsequence version of Proposition 1.2. For instance one could conjecture that whenever a sequence f : Z × Z → C was such that n 1 → f (n 1 , n 2 ) was a Lipschitz nilsequence of step d 1 − 1 uniformly in n 2 (as defined for instance in [24]), and n 2 → f (n 1 , n 2 ) was a Lipschitz nilsequence of step d 2 − 1 uniformly in n 1 , then f itself would be a Lipschitz nilsequence jointly on Z 2 of step d 1 + d 2 − 2. It seems that Proposition 1.11 is at least able to show that f can be locally approximated (in, say, an L 2 sense) by such nilsequences on arbitrarily large scales, but some additional argument is needed to obtain the conjecture as stated.
We are able to remove the requirement that G be the sum of a finitely generated group and a profinite group from Theorem 1.15: Theorem 1.19 (Concatenation theorem for characteristic factors). Let G be an at most countable additive group. Let (X, T ) be a G-system, let H 1 , H 2 be subgroups of G, and let d 1 , d 2 be positive integers. Then We prove this result in Section 3 using an ergodic theory argument that relies on the machinery of cubic measures and cocycle type that was introduced by Host and Kra [30], rather than on the combinatorial arguments used to establish Theorems 1.11, 1.13. It is at this point that we use our requirement that G-systems be countably generated modulo null sets, in order to apply the ergodic decomposition (after first passing to a compact metric space model), as well as the Mackey theory of isometric extensions. It is likely that a similar strengthening of Theorem 1.16 can be obtained, but this would require extending much of the Host-Kra machinery to tuples of commuting actions, which we will not do here.
Globalizing uniformity
We have seen that anti-uniformity can be "concatenated", in that functions which are approximately orthogonal to functions that are locally Gowers uniform in two different directions are necessarily also approximately orthogonal to more globally Gowers uniform functions. By duality, one then expects to be able to decompose a globally Gowers uniform function into functions that are locally Gowers uniform in different directions. For instance, in the ergodic setting, one has the following consequence of Theorem 1.15: Corollary 1.20. Let (X, T ) be a G-system, let H 1 , H 2 be subgroups of G, and let d 1 , d 2 be positive integers. If f ∈ L ∞ (X) is orthogonal to L ∞ (Z <d 1 +d 2 −1 H 1 +H 2 (X)) (with respect to the L 2 inner product), then one can write f = f 1 + f 2 , where f 1 ∈ L ∞ (X) is orthogonal to L ∞ (Z <d 1 H 1 (X)) and f 2 ∈ L ∞ (X) is orthogonal to L ∞ (Z <d 2 H 2 (X)); furthermore, f 1 and f 2 are orthogonal to each other.
Proof. By Theorem 1.19 and (13) applied to the system Z <d 1 H 1 (X), any function in L ∞ (Z <d 1 H 1 (X)) orthogonal to L ∞ (Z <d 1 +d 2 −1 This seminorm is the same as the U d 2 H 2 (X) seminorm, and so by (13) again this function must be necessarily orthogonal to L ∞ (Z <d 2 H 2 (X)). We conclude that the restrictions of the spaces L ∞ (Z <d 1 H 1 (X)) and L ∞ (Z <d 2 H 2 (X)) to L ∞ (Z <d 1 +d 2 −1 H 1 +H 2 (X)) are orthogonal, and the claim follows.
One can similarly use Theorem 1.16 to obtain Corollary 1.21. Suppose that G is the sum of a finitely generated group and a profinite group. Let (X, T ) be a G-system, let d 1 , d 2 be positive integers, and let H )) and f 2 ∈ L ∞ (X) is orthogonal to L ∞ (Z <d 2 H 2,1 ,...,H 2,d 2 (X)); furthermore, f 1 and f 2 are orthogonal to each other.
We can use the orthogonality in Corollary 1.20 to obtain a Bessel-type inequality: Corollary 1.22 (Bessel inequality). Let G be an at most countable additive group. Let (X, T ) be a G-system, let (H i ) i∈I be a finite family of subgroups of G, and let (d i ) i∈I be a family of positive integers.
Then for any f ∈ L ∞ (X), we have Proof. Write f i := E( f |Z <d i H i (X)). We can write the left-hand side of (14) as f , ∑ i∈I f i which by the Cauchy-Schwarz inequality is bounded by and the claim follows.
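The Cauchy–Schwarz step in this proof is the elementary Hilbert-space inequality ⟨f, ∑_i f_i⟩ ≤ ‖f‖ ‖∑_i f_i‖ = ‖f‖ (∑_{i,j} ⟨f_i, f_j⟩)^{1/2}. The following Python sketch checks it in a finite-dimensional toy model, with the conditional expectations E(f | Z^{<d_i}_{H_i}(X)) replaced by orthogonal projections onto random subspaces; it only illustrates the shape of the bound, not the actual factors.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 64, 5
f = rng.standard_normal(N)

# Toy model: each conditional expectation E(f | Z_i) is replaced by the orthogonal
# projection of f onto a random 8-dimensional subspace of R^N.
f_i = []
for _ in range(k):
    Q, _ = np.linalg.qr(rng.standard_normal((N, 8)))   # orthonormal basis of the subspace
    f_i.append(Q @ (Q.T @ f))                          # projection of f onto that subspace

lhs = sum(np.dot(f, fi) for fi in f_i)                 # <f, sum_i f_i> = sum_i ||f_i||^2
rhs = np.linalg.norm(f) * np.sqrt(sum(np.dot(fi, fj) for fi in f_i for fj in f_i))
print(lhs, "<=", rhs)                                  # the Cauchy-Schwarz bound always holds
```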
One can use Corollary 1.21 to obtain an analogous Bessel-type inequality involving finitely generated subgroups H i,k , k = 1, . . . , d i , which we leave to the interested reader.
Returning now to the finitary Gowers norms, one has a qualitative analogue of the Bessel inequality involving the Gowers uniformity norms: Theorem 1.23 (Qualitative Bessel inequality for uniformity norms). Let (Q i ) i∈I be a finite non-empty family of coset progressions Q i , all of rank at most r, in an additive group G. Let (X, T ) be a G-system, and let d be a positive integer. Let f lie in the unit ball of L ∞ (X), and suppose that for some ε > 0. Then where c : (0, +∞) → (0, +∞) is a function such that c(ε) → 0 as ε → 0. Furthermore, c depends only on r and d. (In particular, c is independent of the size of I.) We prove this theorem in Section 8. We remark that the theorem is only powerful when the cardinality of the set I is large compared to ε, otherwise the claim would easily follow from considering the diagonal contribution i = j to (15). Theorem 1.23 has an analogue for the Gowers box norms: Theorem 1.24 (Qualitative Bessel inequality for box norms). Let d be a positive integer. For each 1 ≤ j ≤ d, let (Q i, j ) i∈I be a finite family of coset progressions Q i, j , all of rank at most r, in an additive group G. Let (X, T ) be a G-system. Let f lie in the unit ball of L ∞ (X), and suppose that Then where c : (0, +∞) → (0, +∞) is a function such that c(ε) → 0 as ε → 0. Furthermore, c depends only on r and d.
implies some non-trivial lower bound for f U d 1 +d 2 −1 εQ 1 +εQ 2 (X) ). Unfortunately this is not the case; a simple counterexample is provided by a function f of the form , Q 2 are large subgroups of G, X = G with the translation action, and f 1 is constant in the Q 1 direction but random in the Q 2 direction), and vice versa for f 2 .
A sample application
In a sequel to this paper [36], we will use the concatenation theorems proven in this paper to study polynomial patterns in sets such as the primes. Here, we will illustrate how this is done with a simple example, namely controlling the average In that case we will be able to control this expression by the global Gowers U 3 norm: Z/NZ (Z/NZ) ≤ ε for some ε > 0 and i = 1, . . . , 4, where we give Z/NZ the uniform probability measure. Then for some quantity c(ε) depending only on ε and C that goes to zero as ε → 0.
For instance, using this proposition (embedding [N] in, say, Z/5NZ) and the known uniformity properties of the Möbius function µ (see [22]) we can now obtain the asymptotic as N → ∞; we leave the details to the interested reader. As far as we are able to determine, this asymptotic appears to be new. In the sequel [36] to this paper we will consider similar asymptotics involving various polynomial averages such as , and arithmetic functions such as the von Mangoldt function Λ.
We prove Proposition 1.26 in Section 9. Roughly speaking, the strategy is to observe that the average in Proposition 1.26 can be controlled by averages of "local" Gowers norms of the form U^2_Q, where Q is an arithmetic progression in Z/NZ of length comparable to M. Each individual such norm is not controlled directly by the U^3_{Z/NZ} norm, due to the sparseness of Q; however, after invoking Theorem 1.23, we can control the averages of the U^2_Q norms by averages of U^3_{Q+Q'} norms, where Q, Q' are two arithmetic progressions of length comparable to M. For typical choices of Q, Q', the rank two progression Q + Q' will be quite dense in Z/NZ, allowing one to obtain the proposition.
One would expect that the U 3 norm in Proposition 1.26 could be replaced with a U 2 norm. Indeed this can be done invoking the inverse theorem for the U 3 norm [21] as well as equidistribution results for nilsequences [26]: . This result can be proven by adapting the arguments based on the arithmetic regularity lemma in [25, §7]; we sketch the key new input required in Section 10. In the language of Gowers and Wolf [15], this proposition asserts that the true complexity of the average A N,M is 1, rather than 2 (the latter being roughly analogous to the "Cauchy-Schwarz complexity" discussed in [15]). This drop in complexity is consistent with similar results established in the ergodic setting in [4], and in the setting of linear patterns in [15,16,17,18,25], and is proven in a broadly similar fashion to these results. In principle, Proposition 1.27 is purely an assertion in "linear" Fourier analysis, since it only involves the U 2 Gowers norm, but we do not know of a way to establish it without exploiting both the concatenation theorem and the inverse U 3 theorem.
We thank the anonymous referees for a careful reading of the paper and for many useful suggestions and corrections.
The ergodic theory limit
In this section we show how to obtain the main ergodic theory results of this paper (namely, Theorem 1.15 and Theorem 1.16) as a limiting case of their respective finitary results, namely Theorem 1.11 and Theorem 1.13. The technical hypothesis that the group G be the sum of a finitely generated group and a profinite group will only be needed near the end of the section. Readers who are only interested in the combinatorial assertions of this paper can skip ahead to Section 4.
We first develop some fairly standard material on the convergence of the Gowers-Host-Kra norms, and on the existence of characteristic factors. Given a G-system (X, T ), finite non-empty multisets Q 1 , . . . , Q d of G, and elements f ω of L 2 d (X) for each ω ∈ {0, 1} d , define the Gowers inner product where ω = (ω 1 , . . . , ω d ), |ω| := ω 1 + · · · + ω d , and C : f → f is the complex conjugation operator; the absolute convergence of the integral is guaranteed by Hölder's inequality. Comparing this with Definition 1.9, we see that We also recall the Cauchy-Schwarz-Gowers inequality (see [20,Lemma B.2]). By setting f ω to equal f when ω d +1 = · · · = ω d = 0, and equal to 1 otherwise, we obtain as a corollary the monotonicity property We have the following convergence result: Theorem 2.1 (Existence of Gowers-Host-Kra seminorm). Let (X, T ) be a G-system, let d be a natural number, and let H 1 , . . . , H d be subgroups of G. For each ω ∈ {0, 1} d , let f ω be an element of L 2 d (X).
Then the limit lim_{(Q_1,…,Q_d)→(H_1,…,H_d)} ⟨(f_ω)_{ω∈{0,1}^d}⟩_{□^d_{Q_1,…,Q_d}(X)} exists. In particular, the limit defining ‖f‖_{□^d_{H_1,…,H_d}(X)} exists for any f ∈ L^{2^d}(X).
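In the model case where G = X = Z/NZ with the translation action and each Q_i is all of Z/NZ, the Gowers inner product reduces to the classical one, and the Cauchy–Schwarz–Gowers inequality recalled above can be sanity-checked numerically. The Python sketch below does this for d = 2 using the standard formula ⟨f_{00}, f_{10}, f_{01}, f_{11}⟩ = E_{x,h_1,h_2} f_{00}(x) conj(f_{10}(x+h_1)) conj(f_{01}(x+h_2)) f_{11}(x+h_1+h_2); it illustrates the inequality only, not the convergence argument.

```python
import numpy as np

N = 64
x = np.arange(N)
rng = np.random.default_rng(1)

def gowers_inner(f00, f10, f01, f11):
    """Classical d = 2 Gowers inner product over Z/NZ."""
    total = 0.0 + 0.0j
    for h1 in range(N):
        for h2 in range(N):
            total += np.mean(f00 * np.conj(np.roll(f10, -h1)) *
                             np.conj(np.roll(f01, -h2)) * np.roll(f11, -h1 - h2))
    return total / N ** 2

def u2_norm(f):
    # ||f||_{U^2}^4 = <f, f, f, f>, which is real and nonnegative
    return max(gowers_inner(f, f, f, f).real, 0.0) ** 0.25

fs = [np.exp(2j * np.pi * rng.integers(N) * x / N) + 0.3 * rng.standard_normal(N)
      for _ in range(4)]
lhs = abs(gowers_inner(*fs))
rhs = np.prod([u2_norm(f) for f in fs])
print(lhs, "<=", rhs)   # Cauchy-Schwarz-Gowers inequality
```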
It is likely that one can deduce this theorem from the usual ergodic theorem, by adapting the arguments in [30], but we will give a combinatorial proof here, which applies even in cases in which G or H 1 , . . . , H d are uncountable.
Proof. By multilinearity we may assume that the f ω are all real-valued, so that we may dispense with the complex conjugation operations. We also normalise the f ω to lie in the closed unit ball of L 2 d (X).
and ω = (ω 1 , . . . , ω d ) agree in the first d components (that is, ω i = ω i for i = 1, . . . , d ). We will prove Theorem 2.1 by downward induction on d , with the d = 0 case establishing the full theorem.
Thus, assume that 0 ≤ d ≤ d and that the claim has already been proven for larger values of d (this hypothesis is vacuous for d = d). We will show that for any given (and sufficiently small) ε > 0, and for sufficiently large only increases by at most ε if one increases any of the Q i , that is to say for any i = 1, . . . , d and any finite non-empty multiset R. Applying this once for each i, we see that the limit superior of the f d Q 1 ,...,Q d (X) does not exceed the limit inferior by more than dε, and sending ε → 0 (and using the boundedness of the f d Q 1 ,...,Q d (X) , from Hölder's inequality) we obtain the claim. It remains to establish (20). There are two cases, depending on whether i ≤ d or i > d . First suppose that i ≤ d ; by relabeling we may take i = 1. Using (17) and the d -symmetry (and hence 1-symmetry) of f , we may rewrite f d From the unitarity of shift operators and the triangle inequality, we have for any finite non-empty multiset R in H 1 . This gives (20) (without the epsilon loss!) in the case i ≤ d . Now suppose i > d . By relabeling we may take i = d. In this case, we rewrite f d where Note from Hölder's inequality that the f ω d all lie in the closed unit ball of L 2 (X). A similar rewriting shows that the quantity . This tuple of functions is d + 1-symmetric after rearrangement, and so by induction hypothesis this expression converges to a limit as (Q 1 , . . . , Q d ) → (H 1 , . . . , H d ). In particular, for (Q 1 , . . . , Q d ) sufficiently large, we have for any n ∈ H d . By the parallelogram law, this implies that for all n ∈ H d , which by the triangle inequality implies that for any finite non-empty multiset R in H d . By Cauchy-Schwarz and (21), this implies (for ε small enough) that In the degenerate case d = 0 we adopt the convention D 0 () = 1. We also abbreviate D d H,...,H as D d H . We can similarly define the local dual operators ..,H d in the weak operator topology. We can upgrade this convergence to the strong operator topology: Proof. The claim is trivially true for d = 0, so assume d ≥ 1. By multilinearity we may assume the f ω are real. By a limiting argument using Hölder's inequality, we may assume without loss of generality that f ω all lie in L ∞ (X) and not just in . Theorem 2.1 already gives weak convergence, so it suffices to show that the limit exists in the strong L 2 (X) topology. By the cosine rule (and the completeness of L 2 (X)), it suffices to show that the joint limit exists. But the expression inside the limit can be written as an inner product andf ω := 1 for all other ω , and the claim then follows from Theorem 2.1 (with d replaced by 2d).
In the d = 1 case, we have H f is orthogonal to all H-invariant functions; thus we obtain the mean ergodic theorem In particular, we have which on taking limits using Theorem 2.1 and dominated convergence implies that From this, we see that the seminorm U d H (X) defined here agrees with the Gowers-Host-Kra seminorm from [30]; see [5, Appendix A] for details.
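The mean ergodic theorem invoked here says in particular that the averages E_{h∈Q} T^h f converge in L^2(X) to the conditional expectation E(f | X^H) as Q exhausts H. The following Python sketch illustrates this, and hence the d = 1 case of the seminorm, for an irrational circle rotation, where the invariant factor is trivial and the limit is just |∫ f|; the discretisation of the circle is of course only an approximation.

```python
import numpy as np

alpha = np.sqrt(2) - 1                               # irrational rotation number
xs = np.linspace(0.0, 1.0, 2000, endpoint=False)     # quadrature points for the integral over X

def f(x):
    return np.cos(2 * np.pi * x) + 0.5               # mean 0.5; the oscillating part is not invariant

def u1_norm(Q_len):
    """||f||_{U^1_Q}^2 = int_X |E_{h in Q} f(x + h*alpha)|^2 dx with Q = {0, ..., Q_len - 1}."""
    avg = np.zeros_like(xs)
    for h in range(Q_len):
        avg += f((xs + h * alpha) % 1.0)
    avg /= Q_len
    return np.sqrt(np.mean(avg ** 2))

for Q_len in (10, 100, 1000, 10000):
    print(Q_len, u1_norm(Q_len))                     # tends to |int f| = 0.5 = ||E(f | X^H)||_{L^2}
```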
A key property concerning dual functions is that they are closed under multiplication after taking convex closures: Proposition 2.3. Let X be a G-system, and let H 1 , . . . , H d be subgroups of G. Let B be the closed convex hull (in L 2 (X)) of all functions of the form D d , where the f ω all lie in the closed unit ball of L ∞ (X). Then B is closed under multiplication: if F, F ∈ B, then FF ∈ B.
Proof. We may assume d ≥ 1, as the d = 0 case is trivial. By convexity and a density argument, we may assume that F, F are themselves dual functions, thus and for some f ω , f ω in the closed unit ball of L ∞ (X). By Proposition 2.2, we can write FF as where the limits are in the L 2 (X) topology. For any given h ∈ G, averaging a bounded sequence over Q i and averaging over Q i + h are approximately the same if Q i is sufficiently large (e.g. if Q i is larger than the progression {0, h, . . . , Nh} for some large h). Because of this, we can shift the k i variable by h i in the above expression without affecting the limit. In other words, FF is equal to By Proposition 2.2 (with d replaced by 2d), the above expression remains convergent if we work with the joint limit lim In particular, we may interchange limits and write the above expression as Computing the inner limit, this simplifies to This is the strong limit of convex averages of elements of B, and thus lies in B as required.
We can now construct the characteristic factor (cf. [ to be the set of measurable sets E such that 1 E is expressible as the limit (in L 2 (X)) of a uniformly bounded sequence in the set RB : (25) we conclude that f is orthogonal to B, and hence to L ∞ (Z <d We have a basic corollary of Theorem 2.4 (cf. [30,Proposition 4.6]): Corollary 2.5. Let X be a G-system, let Y be a factor of X, and let H 1 , . . . , H d be subgroups of G. Then Proof and (by Theorem 2.4) f can be expressed as the limit of dual functions of functions in L ∞ (Y), and hence in L ∞ (X), and so the inclusion then follows from another application of We now can deduce Theorem 1.15 and Theorem 1.16 from Theorem 1.11 and Theorem 1.13 respectively. At this point we will begin to need the hypothesis that G is the sum of a finitely generated group and a profinite group. We just give the argument for Theorem 1.15; the argument for Theorem 1.16 is completely analogous and is left to the reader. Let (X, T ), G, H 1 , H 2 , d 1 , d 2 be as in Theorem 1.15. By Corollary 2.5, we may assume without loss of generality that Indeed, if we set we see from Corollary 2.5 that X obeys the condition (27), and that Z <d 1 +d 2 −1 . By Theorem 2.4, we see that for every δ > 0, there exists a real number F(δ ) such that f lies within δ in L 2 (X) norm of both F(δ ) · B 1 and F(δ ) · B 2 . On the other hand, from the Cauchy-Schwarz-Gowers inequality (18) one has whenever i = 1, 2 and f i ∈ B i , and f is in the closed unit ball of L ∞ (X). We conclude that (22) and (9) we conclude that for any coset progression Q i in H i . The right-hand side goes to zero as ε → 0.
Since G is the sum of a finitely generated group and a profinite group, the subgroups H 1 , H 2 are also. In particular, for each i = 1, 2, we may obtain a Følner sequence Q i,n for H i of coset progressions of bounded rank (thus for any g ∈ H i , Q i,n and Q i,n + h differ (as multisets) by o(|Q i,n |) elements as n → ∞). (Indeed, if H i is finitely generated, one can use ordinary progressions as the Følner sequence, whereas if H i is at most countable and bounded torsion, one can use subgroups for the Følner sequence, and the general case follows by addition.) Applying Theorem 1.11, we conclude that for some c(ε) independent of n that goes to zero as ε → 0. Since the Q i,n are Følner sequences for H i , Q 1,n + Q 2,n is a Følner sequence for H 1 + H 2 . In particular, by (11) one has for every ε > 0. Sending ε → 0, we obtain the claim.
An ergodic theory argument
We now give an ergodic theory argument that establishes Theorem 1.19. The arguments here rely heavily on those in [30], but are not needed elsewhere in this paper. For this section it will be convenient to restrict attention to G-systems (X, T ) in which X is a compact metric space with the Borel σ -algebra, in order to access tools such as disintegration of measure. The requirement of being a compact metric space is stronger than our current hypothesis that X is countably generated modulo null sets; however, it is known (see [8,Proposition 5.3] that every G-system that is countably generated modulo null sets is equivalent (modulo null sets) to another G-system (X , T ) in which X is a compact metric space with the Borel σ -algebra. The corresponding characteristic factors such as Z <d 1 H 1 (X) are also equivalent up to null sets (basically because the Gowers-Host-Kra seminorms are equivalent). Because of this, we see that to prove Theorem 1.19 it suffices to do so when X is a compact metric space.
We now recall the construction of cubic measures from [30], which in [29] was generalised 4 to our current setting of arbitrary actions of multiple subgroups of an at most countable additive group. Definition 3.1 (Cubic measures). Let (X, T ) be a G-system with X = (X, B, µ), and let H 1 , . . . , H d be subgroups of G. We define the G-system (X H 1 ,...,H d is the unique probability measure such that for all f ω ∈ L ∞ (X), where the tensor product ω∈{0,1} d C |ω| f ω is defined as Finally, the shift T on X H 1 ,...,H d is defined via the diagonal action: (28). We leave the details of these arguments to the interested reader. Once this measure is constructed, it is easy to see that the diagonal action of T preserves the measure µ H 2 ,H 1 , with the isomorphism given by the map (x 00 , x 10 , x 01 , x 11 ) → (x 00 , x 01 , x 10 , x 11 ).
One can informally view the probability space X H 1 ,...,H d as describing the distribution of certain d-dimensional "parallelopipeds" in X, where the d "directions" of the parallelopiped are "parallel" to H 1 , . . . , H d . We will also need the following variant of these spaces, which informally describes the distribution of "L-shaped" objects in X.
Definition 3.2 (L-shaped measures). Let (X, T ) be a G-system with X = (X, B, µ), and let H, K be subgroups of G. We define the system (X L H,K , T ) by setting X L H,K := (X L , B L , µ L H,K ), where X L := X {00,01,10} is the set of tuples (x 00 , x 10 , x 01 ) with x 00 , x 10 , x 01 ∈ X, B L is the product measure, and µ L H,K is the unique probability measure such that for all f 00 , f 01 , f 10 ∈ L ∞ (X). The shift T on X L H,K is defined by the diagonal action.
The system X L H,K is clearly a factor of X [2] H,K , and also has factor maps to X [1] H and X [1] K given by (x 00 , x 10 , x 01 ) → (x 00 , x 10 ) and (x 00 , x 10 , x 01 ) → (x 00 , x 01 ) respectively. A crucial fact for the purposes of establishing concatenation is that X L H,K additionally has a third factor map to the space X H+K , T ).
Informally, this lemma reflects the obvious fact that if x 00 and x 10 are connected to each other by an element of the H action, and x 00 and x 01 are connected to each other by an element of the K action, then x 10 and x 01 are connected to each other by an element of the H + K action.
Proof. If f 00 , f 10 , f 01 ∈ L ∞ (X) are real-valued, then by Theorem 2.1, Definition 1.9, and Definition 3.2 we have where we use Proposition 2.2 and (26) in the last two lines. Specialising to the case f 00 = 1, we conclude in particular that H+K (X) = X [1] f 10 ⊗ f 01 dµ We now begin the proof of Theorem 1.19. Let (X, T ), G, H 1 , H 2 , d 1 , d 2 be as in Theorem 1.15. We begin with a few reductions. By induction we may assume that the claim is already proven for smaller values of d 1 + d 2 . By shrinking G if necessary, we may assume that G = H 1 + H 2 (note that replacing G with H 1 + H 2 does not affect factors such as Z <d 1 H 1 (X)). Next, we observe that we may reduce without loss of generality to the case where the action of G on (X, T ) is ergodic. To see this, we argue as follows. As X was assumed to be a compact metric space, we have an ergodic decomposition µ = Y µ y dν(y) for some probability space (Y, ν) (the invariant factor X G of X), and some probability measures µ y on X depending measurably on y, and ergodic in G for almost every y; see [8,Theorem 5.8] or [6,Theorem 6.2]. Let X y = (X, B, µ y ) denote the components of this decomposition. From (12), (7) we have the identity for any f ∈ L ∞ (X). We conclude that a bounded measurable function f ∈ L ∞ (X) vanishes in U d 1 H 1 (X) if and only if it vanishes in U d 1 H 1 (X y ) for almost every y. By (13), this implies that f is measurable (modulo null sets) with respect to Z <d 1 H 1 (X) if and only if it is measurable (modulo null sets) with respect to Z <d 1 H 1 (X y ) for almost every y. Similarly for Z <d 2 H 2 (X) and Z <d H 1 +H 2 (X). From this it is easy to see that Theorem 1.15 for X will follow from Theorem 1.15 for almost every X y . Thus we may assume without loss of generality that the system (X, T ) is G-ergodic.
Next, if we set X := Z <d 1 H 1 (X) ∧ Z <d 2 H 2 (X), we see from Corollary 2.5 (as in the proof of Theorem 1.15) that Z <d 1 +d 2 −1 Thus we may replace X by X and assume without loss of generality that Following [30] (somewhat loosely 5 ), let us say that a G-system X is of H-order <d if X ≡ Z <d (X); thus we are assuming that X has H 1 -order <d 1 and H 2 -order <d 2 . Our task is now to show that X has G-order <d 1 + d 2 − 1. For future reference we note from Corollary 2.5 that any factor of a G-system with H-order <d also has H-order <d.
For future reference, we observe that the property of having G-order <d is also preserved under taking Host-Kra powers: Lemma 3.4. Let H, H be subgroups of G. Let Y be a G-system with H-order <d for some d ≥ 1. Then Y [1] H is also of H-order <d.
Proof. The space Y [1] H contains two copies of Y as factors, which we will call Y 1 and Y 2 . By Corollary 2.5, we have , and the claim follows.
If d 1 = 1, then every function in L ∞ (X) is H 1 -invariant, and it is then easy to see that the U d 2 H 2 (X) and U d 2 H 1 +H 2 (X) seminorms agree. By (13), we conclude that Z <d 2 H 2 (X) is equivalent to Z <d 2 H 1 +H 2 (X), and the claim follows. Similarly if d 2 = 1. Thus we may assume that d 1 , d 2 > 1.
We now set Y := Z <d 1 +d 2 −2 From the induction hypothesis we have We now analyse X as an extension over Y, following the standard path in [10], [30]. Given a subgroup H of G, we say (as in [9]) that X is a compact extension of Y with respect to the H action if any function in L ∞ (X) can be approximated to arbitrary accuracy (in L 2 (X)) by an H-invariant finite rank module over L ∞ (Y). We have We now invoke [9, Proposition 2.3], which asserts that if one system X is a compact extension of another Y for two commuting group actions H, K, then it is also a compact extension for the combined action of H + K. Since we are assuming G = H 1 + H 2 , we conclude that X is a compact extension of Y as a G-system.
Since X is also assumed to be G-ergodic, we may now use the Mackey theory of isometric extensions from [10, §5], and conclude that X is an isometric extension of Y in the sense that X is equivalent to a system of the form Y × ρ K/L where K = (K, ·) is a compact group, L is a closed subgroup of K, ρ : G × Y → K is a measurable function obeying the cocycle equation and the Y × ρ K/L is the product of the probability spaces Y and K/L (the latter being given the Haar measure) with action given by the formula T g (y,t) := (T g y, ρ(g, y)t) for all y ∈ Y and t ∈ K/L. We now give the standard abelianisation argument, originating from [10] and used also in [30], that allows us to reduce to the case of abelian extensions. Proposition 3.6 (Abelian extension). With ρ, K, L as above, L contains the commutator group [K, K]. In particular, after quotienting out by L we may assume without loss of generality that K is abelian and L is trivial.
Proof. This is essentially [30, Proposition 6.3], but for the convenience of the reader we provide an arrangement (essentially due to Szegedy [32]) of the argument here, which uses the action of H 1 but does not presume H 1 -ergodicity.
We identify X with Y × ρ K/L. For any k ∈ K, we define the rotation actions τ k on L ∞ (X) by τ k f (y,t) := f (y, k −1 t).
At present we do not know that these actions commute with the shift T g ; however they are certainly measure-preserving, so in particular = 0 for all f ∈ L ∞ (X) and k ∈ K. From the Cauchy-Schwarz-Gowers inequality (23) we conclude that the inner product ( f ω ) ω∈{0,1} d 1 −1 U d 1 −1 H 1 (X) vanishes whenever one of the functions f ω ∈ L ∞ (X) is of the form f − τ k f for some f ∈ L ∞ (X) and k ∈ K. By linearity, this implies that the inner product is unchanged if τ k is applied to one of the functions f ω ∈ L ∞ (X) for some k ∈ K. Using the recursive relations between the Gowers inner products, connected by an edge, for any k ∈ K. Taking the commutator of this fact using two intersecting edges on {0, 1} d 1 (recalling that d 1 > 1), we conclude that ( f ω ) ω∈{0,1} d 1 U d 1 H 1 (X) is unchanged if a single f ω is shifted by τ k for some k in the commutator group [K, K]. Equivalently, for f ∈ L ∞ (X) and k ∈ [K, K], f − τ k f is orthogonal to all dual functions for U d 1 H 1 (X); since X ≡ Z <d 1 H 1 (X), this implies that f − τ k f is trivial. Thus the action of [K, K] on Y × ρ K/L is trivial, and so L lies in [K, K] as required.
With this proposition, we may thus write X = Y × ρ K for some compact abelian group K = (K, ·); our task is now to show that Y × ρ K has G-order <d 1 + d 2 − 1.
Define an S 1 -cocycle (or cocycle, for short) of a G-system (Y, T ) to be a map η : G × Y → S 1 taking values in the unit circle S 1 := {z ∈ C : |z| = 1} obeying the cocycle equation (32); this is clearly an abelian group. Observe that for any character χ : K → S 1 in the Pontryagin dual of K, χ • ρ is a cocycle. We say that a cocycle is an H-coboundary if there is a measurable F : X → S 1 such that η(h, x) = F(T h x)/F(x) for all h ∈ H and almost every x ∈ X; this is a subgroup of the space of all cocycles for each H. Given a cocycle η : G × Y → S 1 on Y and a subgroup H of G, we define the cocycle d [1] H η : G × Y [1] H → S 1 on the Host-Kra space Y [1] H by the formula d [1] H η(g, x, x ) := η(g, x)η(g, x ) for all g ∈ G and x, x ∈ X; it is easy to see that d [1] H is a homomorphism from cocycles on Y to cocycles on Y [1] H , which maps H -coboundaries to H -coboundaries for any subgroup H of G. We may iterate this construction to define a homomorphism d Note that d [1] H and d [1] K commute for any H, K (after identifying Y [2] H,K with Y [2] K,H in the obvious fashion), and so the operator d H . We say that a cocycle σ : H σ is a H-coboundary. Because the operator d [1] H maps H-coboundaries to H-coboundaries for any H ≤ G, we see that d [1] H maps cocycles of H-type d to cocycles of H-type d, and that any cocycle of H-type d is also of H-type d for any d > d.
We have a fundamental connection between type and order from [30]: Proposition 3.7 (Type equation). Let d ≥ 1 be an integer, let H be a subgroup of an at most countable additive group G, and let Y be an G-system of H-order <d that is G-ergodic. Let K be a compact abelian group, and let ρ : H ×Y → K be a cocycle.
(i) If the system Y × ρ K has H-order <d, then χ • ρ has H-type d − 1 for all characters χ : K → S 1 .
One can use a result of Moore and Schmidt [31] to conclude that ρ itself is of H-type d − 1 in conclusion (i), but we will not need to do so here. A key technical point to note is that no H-ergodicity hypothesis is imposed.
Proof. For part (i), see 6 [30,Proposition 6.4] or [5,Proposition 4.4]. For part (ii), we argue as follows 7 . Suppose for contradiction that Y × ρ K did not have H-order <d, then by (13) there is a non-zero function 6 Again, the argument in [30] is stated only for G = Z, but extends to other at most countable additive groups G without any modification of the argument. In any event, the version of the argument in [5] is explicitly stated for all such groups G. 7 One can also establish (ii) by using [30,Proposition 7.6] to handle the case when Y × ρ K is ergodic, and Mackey theory and the ergodic decomposition to extend to the non-ergodic case; we leave the details to the interested reader. respect to the K-action, since this action commutes with the H-action. Using Fourier inversion in the K direction (i.e. decomposition of L 2 (Y × ρ K) into K-isotypic components of the K-action) and the triangle inequality, we may assume that f takes the form f (y, k) = F(y)χ(k) for some F ∈ L ∞ (Y) and some character χ : K → S 1 , with F not identically zero. By (13) and the hypothesis that Y has H-order <d, the property of having vanishing U d H norm is unaffected by multiplication by functions in L ∞ (Y), as well as shifts by G, thus (y, k) → (T g |F(y)|)χ(k) also has vanishing U d H norm for g ∈ G. Averaging and using the ergodic theorem, we conclude that the function u : (y, k) → χ(k) has vanishing U d H (Y × ρ K) seminorm, and hence so doesFu for anyF ∈ L ∞ (Y). By the Gowers-Cauchy-Schwarz inequality, this implies that H ), where the integrals are understood to be with respect to the cubic measure µ , where by abuse of notation we write B ⊗ B for the function ((y , k ), (y, k)) → B(y )B(y). By construction of the cubic measure on (Y × ρ K) [d] H and the ergodic theorem, this implies that In our current context, Proposition 3.7(i) shows that χ • ρ is of H i -type d i − 1 for all i = 1, 2 and all characters χ : K → S 1 .
We will shortly establish the following proposition: where e(x) := e 2πix , then σ is a cocycle. One can view Y [1] H 1 as the space of pairs ((x, y), (x , y)) with x, y, x ∈ R/Z, with the product shift map. We have d [1] H 1 σ ((n, 0), ((x, y), (x , y))) = 1 and so σ is certainly of H 1 -type 1; it is similarly of H 2 -type 1. The system Y × σ S 1 is the system X in Example 1.17, and is thus of G-order < 3. One can also check that d [2] G σ is identically 1, basically because the phase ny + mx + nmα is linear in x, y.
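To make the example above concrete, here is a small numerical check of the cocycle equation (32), i.e. ρ(g + g', y) = ρ(g, T^{g'} y) ρ(g', y) in the standard formulation. The explicit formula σ((n, m), (x, y)) := e(ny + mx + nmα) used below is inferred from the phase quoted at the end of the example and from Example 1.17, so it should be read as an assumption for the sketch rather than a quotation.

```python
import numpy as np

alpha = np.sqrt(2) - 1
e = lambda t: np.exp(2j * np.pi * t)

# Assumed form of the cocycle: sigma((n,m),(x,y)) = e(ny + mx + nm*alpha),
# over the base Z^2-rotation T^{(n,m)}(x,y) = (x + n*alpha, y + m*alpha) on the 2-torus.
sigma = lambda n, m, x, y: e(n * y + m * x + n * m * alpha)
T = lambda n, m, x, y: ((x + n * alpha) % 1.0, (y + m * alpha) % 1.0)

rng = np.random.default_rng(0)
for _ in range(1000):
    n, m, n2, m2 = (int(v) for v in rng.integers(-50, 50, size=4))
    x, y = rng.random(2)
    lhs = sigma(n + n2, m + m2, x, y)                           # sigma(g + g', p)
    rhs = sigma(n, m, *T(n2, m2, x, y)) * sigma(n2, m2, x, y)   # sigma(g, T^{g'} p) * sigma(g', p)
    assert abs(lhs - rhs) < 1e-7
print("cocycle equation verified on 1000 random samples")
```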
Assuming Proposition 3.8 for the moment, we combine it with Proposition 3.7(i) to conclude that χ • ρ is of G-type d 1 + d 2 − 2 for all characters χ : K → S 1 . Applying Proposition 3.7(ii), we conclude that Y × ρ K is of G-order <d 1 + d 2 − 1, as required.
It remains to establish Proposition 3.8. We first need a technical extension of a result of Host and Kra: Proposition 3.11. Let Y be an G-system that is G-ergodic, and let ρ : G ×Y → S 1 be a cocycle which is of H 1 -type d 1 − 2. Then ρ differs by a G-coboundary from a cocycle which is measurable with respect to Proof. If Y was H 1 -ergodic then this would be immediate from [30,Corollary 7.9] (the argument there is stated for H 1 = Z, but extends to more general at most countable additive groups). To extend this result to the G-ergodic case, we will give an alternate arrangement 8 of the arguments in [30], which does not rely on H 1 -ergodicity. Let X denote the G-system X := Y × ρ S 1 , then S 1 acts on X by translation, with each element ζ of S 1 transforming a function f : (y, z) → f (y, z) in L ∞ (X) to the translated function τ ζ f : (y, z) → f (y, ζ −1 z). As X is an abelian extension of Y, the S 1 -action commutes with the G-action and in particular with the H 1 -action. This implies that the factor Z <d 1 −1 H 1 (X) of X inherits an S 1 -action (which by abuse of notation we will also call τ ζ ) which commutes with the G-action.
Let us say that a function f ∈ L ∞ (X) has S 1 -frequency one if one has τ ζ f = ζ −1 f for all ζ ∈ S 1 , or equivalently if f has the form f (y, ζ ) =f (y)ζ for somef ∈ L ∞ (Y). We claim 9 that there is a function f of S 1 -frequency one with non-vanishing U d 1 −1 H 1 (X) seminorm. Suppose for the moment that this were the case, then by (13) there is a function f ∈ L ∞ (Z <d 1 −1 H 1 (X)) which has a non-zero inner product with a function of S 1 -frequency one. Decomposing f into Fourier components with respect to the S 1 action, and recalling that this action preserves L ∞ (Z <d 1 −1 H 1 (X)), we conclude that L ∞ (Z <d 1 −1 H 1 (X)) contains a function F of S 1 -frequency one. The absolute value |F| of this function is S 1 -invariant and lies in L ∞ (Z <d 1 −1 . The support of |F| may not be all of Y, but from G-ergodicity we can cover Y (up to null sets) by the support of countably many translates |T g F| of |F|. By gluing these translates T g F together and then normalizing, we may thus find a function u in L ∞ (Z <d 1 −1 H 1 (X)) of S 1 -frequency one which has magnitude one, that is to say it takes values in S 1 almost everywhere. One can then check that the functionρ : as usual) is a cocycle that differs from ρ by a G-coboundary, giving the claim.
It remains to prove the claim. We use an argument similar to the one used to prove Proposition 3.7(ii). Suppose for contradiction that all functions of S 1 -frequency one had vanishing U d 1 −1 H 1 (X) norm. By the Cauchy-Schwarz-Gowers inequality (18), we then have ) by tensor products, we conclude that ). Now recall that ρ is of H 1 -type d 1 − 2, so that one has an identity of the form for g ∈ H 1 and almost every y in Y ) taking values in S 1 . Since T g (y, z) = (T g y, ρ(g, y)z), this implies that for almost every ((y , y), z) in X We use this result to obtain a variant of Proposition 3.8: Concatenation of type, variant). Let Y be a G-system of G-order <d 1 + d 2 − 2, of H 1 -order <d 1 , and H 2 -order <d 2 . Let σ : G × Y → S 1 be a cocycle which is of H 1 -type d 1 − 2 and Similarly if one assumes instead that σ has H 1 -type d 1 − 1 and H 2 -type d 2 − 2.
Proof. We just prove the first claim, as the second is similar. Applying the preceding proposition to the restriction of σ to H ×Y , we see that σ differs by a G-coboundary from a cocycle σ : G ×Y → S 1 which is measurable with respect to Z <d 1 −1 (Y) when restricted to H 1 ×Y . By Proposition 3.7(ii), we now conclude that the system Z <d 1 −1 On the other hand, since σ is of H 2 -type d 2 − 1, σ is also. Since Y is of H 2 -order <d 2 , we may apply Proposition 3.7(ii) to conclude that X is of H 2 -order <d 2 . In particular f also lies in Z <d 2 −1 Applying the induction hypothesis for Theorem 1. 19, we conclude that f lies in L ∞ (Z <d 1 +d 2 −2 G (X)). Since Y was already of G-order <d 1 + d 2 − 2, L ∞ (Y) also lies in L ∞ (Z <d 1 +d 2 −2 G (X)). By Fourier analysis, any element of L ∞ (X) can be approximated in L 2 to arbitrary accuracy by polynomial combinations of f and elements of L ∞ (Y), and hence L ∞ (X) is contained in L ∞ (Z <d 1 +d 2 −2 G (X)); that is to say, X has G-order <d 1 + d 2 − 2. By Proposition 3.7(i), this implies that σ is of G-type d 1 + d 2 − 3 on Y. Since σ differs from σ by a G-coboundary, we conclude that σ is of G-type d 1 + d 2 − 3 also, as required.
Sketch of combinatorial concatenation argument
In this section we give an informal sketch of how Theorem 1.11 is proven, glossing over several technical issues that the nonstandard analysis formalism is used to handle.
We assume inductively that d 1 , d 2 > 1, and that the theorem has already been proven for smaller values of d 1 + d 2 . Let us informally call a function f structured of order < d 1 along H 1 if it obeys bounds similar to (i), and similarly define the notion of structured of order < d 2 along H 2 ; these notions can be made rigorous once one sets up the nonstandard analysis formalism. Roughly speaking, Theorem 1.11 then asserts that functions that are structured of order < d 1 along H 1 and structured of order < d 2 along H 2 are also structured of order < d 1 + d 2 − 1 along H 1 + H 2 . A key point (established using the machinery of dual functions) is that the class of functions that have a certain structure (e.g. being structured of order < d 1 along H 1 ) form a shift-invariant algebra, in that they are closed under addition, scalar multiplication, pointwise multiplication, and translation.
By further use of the machinery of dual functions, one can show that if f is structured of order < d 1 along H 1 , then the shifts T n f of f with n ∈ H 1 admit a representation roughly of the form T n f ≈ E h c n,h g h (35), where E h represents some averaging operation 11 with respect to some parameter h, the g h are bounded functions, and the c n,h are functions that are structured of order < d 1 − 1 along H 1 ; this type of "higher order uniformly almost periodic" representation of shifts of structured functions generalizes (2), and first appeared in [34]. In particular, if n′ ∈ H 2 , we have T n+n′ f ≈ E h T n′ c n,h T n′ g h .
A crucial point now (arising from the shift-invariant algebra property mentioned earlier) is that the structure one has on the original function f is inherited by the functions c n,h and g h . Specifically, since f is structured of order < d 1 along H 1 and of order < d 2 along H 2 , the functions g h appearing in (35) should also be structured in this fashion. This implies that where c n′,h,h′ is structured of order < d 1 along H 1 and of order < d 2 − 1 along H 2 , and g h,h′ is some bounded function. This leads to a representation of the form where c n,n′,h,h′ = (T n′ c n,h )c n′,h,h′ . But by the induction hypothesis as before one can show that c n′,h,h′ is structured of order < d 1 + d 2 − 2 along H 1 + H 2 , and then from the shift-invariant algebra property mentioned earlier, we see that c n,n′,h,h′ is also structured of order < d 1 + d 2 − 2 along H 1 + H 2 . The representation (37) can then be used (basically by a Cauchy-Schwarz argument, similar to one used in [34]) to establish that f is structured of order < d 1 + d 2 − 1 along H 1 + H 2 .
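Restating the chain of approximations just described in display form (a schematic reconstruction of (35)–(37) from the surrounding prose, with all error terms and the precise meaning of ≈ suppressed):
$$T^{n} f \;\approx\; \mathbb{E}_{h}\, c_{n,h}\, g_{h} \quad (n \in H_1), \qquad T^{n'} g_{h} \;\approx\; \mathbb{E}_{h'}\, c_{n',h,h'}\, g_{h,h'} \quad (n' \in H_2),$$
$$T^{n+n'} f \;\approx\; \mathbb{E}_{h,h'}\, c_{n,n',h,h'}\, g_{h,h'}, \qquad c_{n,n',h,h'} = (T^{n'} c_{n,h})\, c_{n',h,h'},$$
where the $c$'s carry the structure described in the text and the $g$'s are bounded.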
Remark 4.1. We were not able to directly adapt this argument to give a purely ergodic theory proof of Theorem 1.15 or Theorem 1.16, mainly due to technical problems defining the notion of "uniform almost periodicity" in the ergodic context, and in ensuring that this almost periodicity was uniformly controlled with respect to parameters such as n, n , h, h . Instead, the natural ergodic analogue of this argument appears to be the variant inclusion under the hypotheses of Theorem 1.19, where the Furstenberg factors F <d H (X) [7] are defined recursively by setting F <1 H (X) = X H to be the invariant factor and F <d+1 H (X) to be the maximal compact extension of F <d H (X). This inclusion can be deduced from [9, Proposition 2.3] (which was already used in Section 3) and an induction on d 1 + d 2 ; we leave the details to the interested reader. We remark that the proof of [9, Proposition 2.3] can be viewed as a variant of the arguments sketched in this section. The Furstenberg factors F <d H (X) are, in general, larger than the Host-Kra factors Z <d H (X), because the cocycles in the latter must obey the type condition in Proposition 3.7, whereas the former factors have no such constraint.
Taking ultraproducts
To prove our main combinatorial theorems rigorously, it is convenient to use the device of ultraproducts to pass to a nonstandard analysis formulation, in order to hide most of the "epsilon management", as well as to exploit infinitary tools such as countable saturation, Loeb measure and conditional expectation. The use of nonstandard analysis to analyze Gowers uniformity norms was first introduced by Szegedy [32], [33], and also used by Green and the authors in [28].
We quickly set up the necessary formalism. (See for instance [11] for an introduction to the foundations of nonstandard analysis that is used here.) We will need to fix a non-principal ultrafilter α ∈ β N\N on the natural numbers, thus α is a collection of subsets of natural numbers such that the function A → 1 A∈α forms a finitely additive {0, 1}-valued probability measure on N, which assigns zero measure to every finite set. The existence of such a non-principal ultrafilter is guaranteed by the axiom of choice. We refer to elements of α as α-large sets.
We assume the existence of a standard universe U -a set that contains all the mathematical objects of interest to us, in particular containing all the objects mentioned in the theorems in the introduction. Objects in this universe will be referred to as standard objects. A standard set is a set consisting entirely of standard objects, and a standard function is a function whose domain and range are standard sets. The standard universe will not need to obey all of the usual ZFC set theory axioms (though one can certainly assume this if desired, given a suitable large cardinal axiom); however we will at least need this universe to be closed under the ordered pair construction x, y → (x, y), so that the Cartesian product of finitely many standard sets is again a standard set.
A nonstandard object is an equivalence class of tuples (x n ) n∈A of standard objects indexed by an α-large set A, with two tuples (x n ) n∈A , (y n ) n∈B equivalent if they agree on an α-large set. We write lim n→α x n for the equivalence class associated with a tuple (x n ) n∈A , and refer to this nonstandard object as the ultralimit of the x n . Thus for instance a nonstandard natural number is an ultralimit of standard natural numbers, a nonstandard real number is an ultralimit of standard real numbers, and so forth. If (X n ) n∈A is a sequence of standard sets indexed by an α-large set A, we define the ultraproduct ∏ n→α X n to be the collection of all ultralimits lim n→α x n , where x n ∈ X n for an α-large set of n. An internal set is a set which is an ultraproduct of standard sets. We use the term external set to denote a set of nonstandard objects that is not necessarily internal. Note that every standard set X embeds into the nonstandard set * X := ∏ n→α X, which we call the ultrapower of X, by identifying every standard object x with its nonstandard counterpart lim n→α x. In particular, the standard universe U embeds into the nonstandard universe * U of all nonstandard objects.
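In display form, the constructions just introduced read (this merely restates the definitions above):
$$\lim_{n\to\alpha} x_n := [(x_n)_{n\in A}], \qquad \prod_{n\to\alpha} X_n := \Bigl\{\, \lim_{n\to\alpha} x_n \;:\; x_n \in X_n \text{ for an } \alpha\text{-large set of } n \,\Bigr\}, \qquad {}^{*}X := \prod_{n\to\alpha} X .$$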
If X = ∏ n→α X n and Y = ∏ n→α Y n are internal sets, we have a canonical isomorphism ∏ n→α (X n × Y n ) ≡ X × Y, which (by abuse of notation) allows us to identify the Cartesian product of two internal sets as another internal set. Similarly for Cartesian products of any (standard) finite number of internal sets. We will implicitly use such identifications in the sequel without further comment. An internally finite set is an ultraproduct of finite sets (such sets are also known as hyperfinite sets in the literature). Similarly with "set" replaced by "multiset". (The multiplicity of an element of an internally finite multiset will of course be a nonstandard natural number in general, rather than a standard natural number.) Given a sequence ( f n ) n∈A of standard functions f n : X n → Y n indexed by an α-large set A, we define the ultralimit lim n→α f n : ∏ n→α X n → ∏ n→α Y n to be the function mapping lim n→α x n to lim n→α f n (x n ). This is easily seen to be a well-defined function. Functions that are ultralimits of standard functions will be called internal functions. We use the term external function to denote a function between external sets that is not necessarily an internal function. We will use boldface symbols such as f to refer to internal functions, distinguishing them in particular from functions f that take values in the standard complex numbers C rather than the nonstandard complex numbers * C.
Using the ultralimit construction, any ultraproduct X = ∏ n→α X n of structures X n for some first-order language L, will remain a structure of that language L; furthermore, thanks to the well-known theorem of Łos, any first-order sentence will hold in X if and only if it holds in X n for an α-large set of n. For instance, if G n is an additive group for an α-large set of n, then ∏ n→α G n will also be an additive group.
A crucial property of internal sets for us will be the following compactness-like property.
Theorem 5.1 (Countable saturation).
(i) Let (X (i) ) i∈N be a countable sequence of internal sets. If the finite intersections ⋂ N i=1 X (i) are non-empty for every (standard) natural number N, then ⋂ ∞ i=1 X (i) is also non-empty.
(ii) Let X be an internal set. Then any countable cover of X by internal sets has a finite subcover.
Proof. It suffices to prove (i), as the claim (ii) follows from taking complements and contrapositives. Write X (i) = ∏ n→α X (i) n for each i; by hypothesis, for every standard N there is an α-large set A N such that ⋂ N i=1 X (i) n is non-empty for all n ∈ A N . By shrinking the A N as necessary, we may assume that the A N are nonincreasing in N. If we then choose x n , for any n in A N \A N+1 and each N, to lie in ⋂ N i=1 X (i) n , the ultralimit lim n→α x n lies in ⋂ ∞ i=1 X (i) , giving the claim.
A nonstandard complex number z ∈ * C is said to be bounded if one has |z| ≤ C for some standard C, and infinitesimal if |z| ≤ ε for all standard ε > 0. We write z = O(1) when z is bounded and z = o(1) when z is infinitesimal. By modifying the proof of the Bolzano-Weierstrass theorem, we see that every bounded z can be uniquely written as z = stz + o(1) for some standard complex number stz, known as the standard part of z.
Countable saturation has the following important consequence: Corollary 5.2 (Overspill/underspill). Let A be an internal subset of * C.
(i) If A contains all standard natural numbers, then A also contains an unbounded natural number.
(ii) If all elements of A are bounded, then A is contained in {z ∈ * C : |z| ≤ C} for some standard C > 0.
(iii) If all elements of A are infinitesimal, then A is contained in {z ∈ * C : |z| ≤ ε} for some infinitesimal ε > 0.
(iv) If A contains all positive standard reals, then A also contains a positive infinitesimal real.
Proof. If (i) failed, then we would have N = A ∩ * N, and hence N would be internal, which contradicts Theorem 5.1(ii). The claim (ii) follows from the contrapositive of (i) applied to the internal set {n ∈ * N : |z| ≥ n for some z ∈ A}. The claim (iii) similarly follows from (i) applied to the internal set {n ∈ * N : |z| ≤ 1/n for all z ∈ A}. Finally, the claim (iv) follows from (i) applied to the internal set {n ∈ * N : 1/n ∈ A}.
An internal function f : X → * C is said to be bounded if it is bounded at every point, or equivalently (thanks to overspill or countable saturation) if there is a standard C such that | f (x)| ≤ C for all x ∈ X, and we denote this assertion by f = O(1). Similarly, an internal function f : X → * C is said to be infinitesimal if it is infinitesimal at every point, or equivalently (thanks to underspill or countable saturation) if there is an infinitesimal ε > 0 such that | f (x)| ≤ ε for all x ∈ X, and we denote this assertion by f = o(1).
Let X n = (X n , B n , µ n ) be a sequence of standard probability spaces, indexed by an α-large set A. In the ultraproduct X := ∏ n→α X n , we have a Boolean algebra B of internally measurable sets -that is to say, internal sets of the form E = ∏ n→α E n , where E n ∈ B n for an α-large set of n. Similarly, we have a complex *-algebra A[X] of bounded internally measurable functions -functions f : X → * C that are ultralimits of measurable functions f n : X n → C, and which are bounded. We have a finitely additive nonstandard probability measure * µ := lim n→α µ n : B → * [0, 1], and a finitely additive nonstandard integral ∫ X f d * µ := lim n→α ∫ X n f n dµ n defined for bounded internally measurable functions f = lim n→α f n ∈ A[X]. The standard part µ := st * µ of the nonstandard measure * µ is then an (external) finitely additive probability measure on B. From Theorem 5.1(ii), this finitely additive measure is automatically a premeasure, and so by the Carathéodory extension theorem it may be extended to a countably additive measure µ : L → [0, 1] on the σ -algebra L generated by B. The space X := (X, L, µ) is then known as the Loeb probability space associated to the standard probability spaces X n . By construction, every Loeb-measurable set E ∈ L can be approximated up to arbitrarily small error in µ by an internally measurable set. As a corollary, for any (standard) 1 ≤ p < ∞, any function in L p (X) can be approximated to arbitrary accuracy in L p (X) norm by the standard part of a bounded internally measurable function, that is to say stA[X] is dense in L p (X). Indeed, we can say a little more: Lemma 5.3. Every function f ∈ L ∞ (X) is equal almost everywhere to stf for some bounded internally measurable function f ∈ A[X]. Proof. We may normalise f to lie in the closed unit ball of L ∞ (X). By density, we may find a sequence f n ∈ A[X] for n ∈ N bounded in magnitude by 1, such that ‖ f − stf n ‖ L 1 (X) < 1/n for all n. In particular, ‖ f N − f n ‖ * L 1 (X) ≤ 2/n for all n ≤ N. By countable saturation, there thus exists an internally measurable function f bounded in magnitude by 1 such that ‖ f − f n ‖ * L 1 (X) ≤ 2/n for all n. Taking limits we see that ‖ f − stf ‖ L 1 (X) = 0, and the claim follows.
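In compressed form, the measure-theoretic construction just described passes through the following chain (a schematic summary of the text above, not an additional result):
$${}^{*}\mu := \lim_{n\to\alpha}\mu_n : \mathcal{B} \to {}^{*}[0,1], \qquad \mu := \operatorname{st} {}^{*}\mu : \mathcal{B} \to [0,1] \ \text{(finitely additive, hence a premeasure)},$$
$$\mu : \mathcal{L} \to [0,1] \ \text{countably additive on } \mathcal{L} = \sigma(\mathcal{B}) \ \text{(Carath\'eodory extension)}, \qquad \mathbf{X} := (X, \mathcal{L}, \mu)\ \text{the Loeb space}.$$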
By working first with simple functions and then taking limits, we easily establish the identity ∫ X stf dµ = st ∫ X f d * µ for all f ∈ A[X]. If H = ∏ n→α H n is an internally finite non-empty multiset, and h → f h is an internal map from H to A[X], then the internal function h → ‖ f h ‖ * L ∞ (X) is bounded, and hence (by Corollary 5.2) its image is bounded above by some standard C, so that {stf h : h ∈ H} is bounded in L ∞ (X). We can also define the internal average E h∈H f h as lim n→α E h n ∈H n f h n ,n , where h → f h is the ultralimit of the maps h n → f h n ,n . We have a basic fact about the location of this average: Lemma 5.4. The function stE h∈H f h lies in the closed convex hull in L 2 (X) of the set {stf h : h ∈ H}. Proof. If this were not the case, then by the Hahn-Banach theorem there would exist ε > 0 and g ∈ L 2 (X) such that Re⟨ stE h∈H f h , g ⟩ L 2 (X) > Re⟨ stf h , g ⟩ L 2 (X) + ε for all h ∈ H. By truncation (and the boundedness of {stf h : h ∈ H}) we may assume (after shrinking ε slightly) that g ∈ L ∞ (X), and then by Lemma 5.3 we may write g = stg for some g ∈ A[X]. But then we have which on taking internal averages implies which is absurd.
Translation to nonstandard setting
We now translate Theorems 1.11, 1.13 to a nonstandard setting. Let G = ∏ n→α G n be a nonstandard additive group, that is to say an ultraproduct of standard additive groups G n . Define a internal coset progression Q in G to be an ultraproduct Q = ∏ n→α Q n of standard coset progressions Q n in G n . We will be interested in internal coset progressions of bounded rank, which is equivalent to Q n having bounded rank on an α-large set of n. Given a internal coset progression Q, we define the (external) set o(Q) as o(Q) := ε∈R + (εQ), that is to say the set of all x ∈ G such that x ∈ εQ for all standard ε > 0. Here we are interpreting εQ and o(Q) as sets rather than multisets.
Given a sequence (X n , T n ) of standard G n -sytems, we can form a G-system (X, T ) by setting X to be the Loeb probability space associated to the X n , and T g to be the ultralimit of the T g n n for any g = lim n→α g n . It is easy to verify that this is a G-system; we will refer to such systems as Loeb G-systems.
Let d be a standard positive integer, and let Q = ∏ n→α Q n be an internal coset progression. If f = lim n→α f n ∈ A[X] is a bounded internally measurable function, we can define the internal Gowers norm
From the Hölder and triangle inequalities one has
for each n, and hence on taking ultralimits and then on taking standard parts .
This can be rewritten in turn as The claim (39) now follows from the Cauchy-Schwarz-Gowers inequality (18). (X) on L 2 d (X), and hence on L ∞ (X).
We can now state the nonstandard theorem that Theorem 1.11 will be derived from.
Theorem 5.6 (Concatenation theorem for anti-uniformity norms, nonstandard version). Let Q 1 , Q 2 be internal coset progressions of bounded rank in a nonstandard additive group G, let (X, T ) be a Loeb G-system, let d 1 , d 2 be standard positive integers, and let f lie in the closed unit ball of L ∞ (X). We make the following hypotheses: Let us assume this theorem for the moment, and show how it implies Theorem 1.11. It suffices to show that for any d 1 , d 2 , r 1 , r 2 , c 1 , c 2 as in Theorem 1.11, and any δ > 0, there exists an ε > 0, such that Suppose this were not the case. Then there exists d 1 , d 2 , r 1 , r 2 , c 1 , c 2 , δ as above, together with a sequence G n of standard additive groups, sequences Q 1,n , Q 2,n of standard coset progressions in G n of ranks r 1 , r 2 respectively, a sequence (X n , T n ) of G n -systems, and a sequence f n of functions in the closed unit ball of L ∞ (X) such that f n U d 1 and f n U d 2 Q 2,n (X n ) * ,ε ≤ c 2 (ε) for all (standard) ε > 0, but such that for some ε n > 0 that goes to zero as n → ∞. Now we take ultraproducts, obtaining a nonstandard additive group G, internal coset progressions Q 1 , Q 2 of bounded rank in G, a Loeb G-system (X, T ), and a bounded internally measurable function f := lim n→α f n . Since the f n lie in the closed unit ball of L ∞ (X n ), we see that stf lies in the closed unit ball of L ∞ (X). Now suppose that g ∈ L ∞ (X) is such that g U d 1 o(Q 1 ) (X) = 0; we claim that stf is orthogonal to g. We may normalise g to lie in the closed unit ball of L ∞ (X). By Lemma 5.5 one has g U d 1 εQ 1 (X) = 0 for all standard ε > 0. We can write g as the limit in L 2 d 1 (X) of stg (i) as i → ∞, for some g (i) ∈ A[X] that are bounded in magnitude by 1. Since the L 2 d 1 (X) norm controls the U d 1 εQ 1 (X) semi-norm, we have for any standard δ > 0 that stg (i) for sufficiently large i and all ε > 0. In particular, writing g (i) = lim n→α g (i) n and setting ε = 2δ , we have for sufficiently large i that for an α-large set of n. By (40) we conclude that and thus on taking ultralimits and then standard parts and hence on sending i to infinity | stf, g L 2 (X) | ≤ c 1 (2δ ).
Sending δ to zero, we obtain the claim. For similar reasons, stf is orthogonal to any g ∈ L ∞ (X) with g U d 2 o(Q 2 ) (X) = 0. Applying Theorem 5.6, we conclude that stf is orthogonal to any g ∈ L ∞ (X) with g U d 1 +d 2 −1 o(Q 1 +Q 2 ) (X) = 0. On the other hand, from (41) and (9), we can find a g n in the closed unit ball of L ∞ (X n ) for each n such that g n U d 1 +d 2 −1 εnQ 1,n +εnQ 2,n (X n ) ≤ ε n and | f n , g n L 2 (X n ) | ≥ δ .
Setting g := st lim n→α g n , we conclude on taking ultralimits that g is in the closed unit ball of L ∞ (X) with for some infinitesimal ε > 0 and | stf, g L 2 (X) | ≥ δ .
But from Lemma 5.5 we have g U d 1 +d 2 −1 o(Q 1 +Q 2 ) (X) = 0 and we contradict the previously established orthogonality properties of stf. This concludes the derivation of Theorem 1.11 from Theorem 5.6.
An identical argument shows that Theorem 1.13 is a consequence of the following nonstandard version.
Nonstandard dual functions
We now develop some nonstandard analogues of the machinery in Section 2, and use this machinery to prove Theorem 5.6 and Theorem 5.7 (and hence Theorem 1.11 and Theorem 1.13 respectively).
Let (X, T ) be a Loeb G-system for some nonstandard abelian group G, and let Q 1 , . . . , Q d be internal coset progressions of bounded rank in G for some standard d ≥ 0. Given bounded inter- where ω = (ω 1 , . . . , ω d ), |ω| := ω 1 + · · · + ω d , and C : f → f is the complex conjugation operator. (When d = 0, we adopt the convention that * D 0 () = 1.) This is a multilinear map from A[X] 2 d −1 to A[X], and from the definition of the internal box norm * d Q 1 ,...,Q d (X) we have the identity for any f ∈ A[X]. From Hölder's inequality and the triangle inequality we see that By a limiting argument and multilinearity, we may thus uniquely define a bounded multilinear dual operator D d Q 1 ,...,Q d : for all f ω ∈ L 2 d (X), and such that An important fact about dual functions, analogous to Theorem 2.4, is that the dual operator maps factors to characteristic subfactors for the associated Gowers norm: Theorem 6.1 (Dual functions and characteristic factors). Let (Y, T ) be a factor of (X, T ), and let Q 1 , . . . , Q d be internal coset progressions of bounded rank. Let V be the linear span of the space We now prove this theorem. From (i) and (ii) it is clear that Z <d o(Q 1 ),...,o(Q d ) (Y) is unique; the difficulty is to establish existence. We first observe Lemma 6.2. We have V ⊂ L ∞ (Y).
Proof. It suffices by linearity to show that
But such functions lie in a bounded subset of L ∞ (Y), and the claim follows.
Next, we observe that V is almost closed under multiplication (cf. Proposition 2.3): Lemma 6.3. If f , f ′ ∈ V , then f f ′ is the limit (in L 2 (X), or equivalently in L 2 d /(2 d −1) (X)) of a sequence in V that is uniformly bounded in L ∞ norm.
Proof. By linearity and density, as well as Lemma 5.3 and (43), we may assume that , which we may take to be real-valued. We can thus write Let ε * > 0 be infinitesimal. We can shift εQ i − εQ i or ε Q i − ε Q i by an element of ε * Q i − ε * Q i while only affecting the above average by o (1). We conclude that Performing the k 1 , . . . , k d average first, we conclude that This bound holds for all infinitesimal ε * > 0. By overspill, we conclude that for any standard δ > 0 we have for all sufficiently small standard ε * > 0. By Lemma 5.4, the second term inside the norm is the limit in L 2 of a bounded sequence in V , and the claim follows.
Let W denote the space of all functions in L ∞ (X) which are the limit (in L 2 d /(2 d −1) ) of a sequence in V that is uniformly bounded in L ∞ norm. From the previous two lemmas we see that W is a subspace of L ∞ (Y) that is closed under multiplication, and which is also closed with respect to limits in L 2 d /(2 d −1) of sequences uniformly bounded in L ∞ . If we let Z denote the set of all measurable sets E in Y such that 1 E lies in W , we then see that Z is a σ -algebra. From the translation invariance identity we see that V , and hence W and Z, are invariant with respect to shifts T n , n ∈ G. If we set Z In particular, f is not orthogonal to V , and hence to L ∞ (Z <d o(Q 1 ),...,o(Q d ) (Y)). Conversely, suppose that By the (internal) Cauchy-Schwarz-Gowers inequality (18), we thus have We conclude that f is orthogonal to V , and hence to L ∞ (Z <d o(Q 1 ),...,o(Q d ) (Y)). This concludes the proof of Theorem 6.1.
We record a basic consequence of Theorem 6.1 (cf. Corollary 2.5 or [30, Proposition 4.6]): Corollary 6.4 (Localisation). Let X be a Loeb G-system for some nonstandard additive group G, let Y be a factor of X, and let Q 1 , . . . , Q d be internal coset progressions of bounded rank in G for some standard positive integer d. Then Proof Remark 6.5. Following the informal sketch in Section 4, functions that are measurable with respect to Z <d o(Q 1 ),...,o(Q d ) (X) should be viewed as being "structured" along the directions Q 1 , . . . , Q d . The factor Y represents some additional structure, perhaps along some directions unrelated to Q 1 , . . . , Q d . Corollary 6.4 then guarantees that this additional structure is preserved when one performs such operations as orthogonal projection to L 2 (Z <d o(Q 1 ),...,o(Q d ) (X)).
For the purpose of concatenation theorems, the following special case of Corollary 6.4 is crucial (cf. Corollary 1.20): Corollary 6.6. Let (X, T ) be a Loeb G-system for some nonstandard additive group G, and let Q 1,1 , . . . , Q 1,d 1 , Q 2,1 , . . . , Q 2,d 2 be internal coset progressions of bounded rank. Then Proof. The first claim is immediate from Corollary 6.4. For the second claim, note from the first claim and Theorem 6.1 that f 1 has vanishing d 2 o(Q 2,1 ),...,o(Q 2,d 2 ) (X) norm, and the claim follows from a second application of Theorem 6.1.
We now briefly discuss the proof of Theorem 5.6, which is proven very similarly to Theorem 5.7. With d 1 , d 2 , G, Q 1 , Q 2 , X, T as in that theorem, our task is to show that f , g are orthogonal whenever The base case d 1 = d 2 = 1 is treated as before, so assume inductively that d 1 + d 2 > 2 and the claim has already been proven for smaller values of d 1 + d 2 . As before, we reduce to the case where f is expressible in terms of f ω , f ω,ω , f, f ω , f ω,ω as in the preceding argument (with Q 1,i = Q 1 and Q 2, j = Q 2 ), and again arrive at the representation (49). Repeating the previous analysis of stc n,h , but now using the inductive hypothesis for Theorem 5.6 rather than Theorem 5.7 (and not using the monotonicity of Gowers box norms), we see that stc n,h is measurable with respect to Z <d 1 +d 2 −2 (o(Q 1 +Q 2 ),...,o(Q 1 +Q 2 ) for any n, h ∈ G; similarly for stc n ,h and stc n 1 ,n 1 ,h,h c n 2 ,n 2 ,h,h . Repeating the previous arguments (substituting Gowers box norms by Gowers uniformity norms as appropriate), we conclude Theorem 5.6.
Proof of qualitative Bessel inequality
We now prove Theorem 1.23. The proof of Theorem 1.24 is very similar and is left to the interested reader. Our arguments will be parallel to those used to prove Corollary 1.22, but using a nonstandard limit rather than an ergodic limit.
It will be convenient to reduce to a variant of this theorem in which the index set I has bounded size. More precisely, we will derive Theorem 1.23 from Theorem 8.1. Let M ≥ 1, and let Q 1 , . . . , Q M be a finite sequence of coset progressions Q i , all of rank at most r, in an additive group G. Let X be a G-system, and let d be a positive integer. Let f lie in the unit ball of L ∞ (X), and suppose that for some coefficients c ξ with ∑ ξ ∈Z/NZ |c ξ | 1. By the pigeonhole principle, we may thus find ξ 1 , ξ 2 , ξ 3 ∈ Z/NZ such that E x,a,b,c∈Z/NZ ∏ ω 1 ,ω 2 ,ω 3 ∈{0,1} f i (x + ω 1 a + ω 2 b + ω 3 c) e 2πi(ξ 1 a+ξ 2 b+ξ 3 c)/N 1.
Writing e 2πiξ 1 a/N = e 2πiξ 1 (x+a)/N e −2πiξ 1 x/N , and similarly for the other two phases in the above expression, we thus have
Reducing the complexity
In this section we sketch how the U 3 norm in Proposition 1.26 can be lowered to U 2 . The situation here is closely analogous to that of [25, Theorem 7.1], and we will assume familiarity here with the notation from that paper. That theorem was proven by combining an "arithmetic regularity lemma" that decomposes an arbitrary function into an "irrational" nilsequence plus a uniform error, together with a "counting lemma" that controls the contribution of the irrational nilsequence. The part of the proof of [25, Theorem 7.1] that involves the regularity lemma goes through essentially unchanged when establishing Proposition 1.26; the only difficulty is to establish a suitable counting lemma for the irrational nilsequence, which in this context can be taken to be a nilsequence of degree ≤ 2. More precisely, we need to show using the asymptotic notation conventions from [25].
|
v3-fos-license
|
2016-11-01T19:18:48.349Z
|
2016-11-15T00:00:00.000
|
1195015
|
{
"extfieldsofstudy": [
"Computer Science",
"Medicine",
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcbioinformatics.biomedcentral.com/track/pdf/10.1186/s12859-016-1402-1",
"pdf_hash": "8206fcbba9b17f9527312a632fff1ad6ec5380a8",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44275",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "06dd3d1de9e03bc31fa5915254bfecef57ac0a9d",
"year": 2016
}
|
pes2o/s2orc
|
Efficient randomization of biological networks while preserving functional characterization of individual nodes
Background Networks are popular and powerful tools to describe and model biological processes. Many computational methods have been developed to infer biological networks from literature, high-throughput experiments, and combinations of both. Additionally, a wide range of tools has been developed to map experimental data onto reference biological networks, in order to extract meaningful modules. Many of these methods assess results’ significance against null distributions of randomized networks. However, these standard unconstrained randomizations do not preserve the functional characterization of the nodes in the reference networks (i.e. their degrees and connection signs), hence including potential biases in the assessment. Results Building on our previous work about rewiring bipartite networks, we propose a method for rewiring any type of unweighted networks. In particular we formally demonstrate that the problem of rewiring a signed and directed network preserving its functional connectivity (F-rewiring) reduces to the problem of rewiring two induced bipartite networks. Additionally, we reformulate the lower bound to the iterations’ number of the switching-algorithm to make it suitable for the F-rewiring of networks of any size. Finally, we present BiRewire3, an open-source Bioconductor package enabling the F-rewiring of any type of unweighted network. We illustrate its application to a case study about the identification of modules from gene expression data mapped on protein interaction networks, and a second one focused on building logic models from more complex signed-directed reference signaling networks and phosphoproteomic data. Conclusions BiRewire3 it is freely available at https://www.bioconductor.org/packages/BiRewire/, and it should have a broad application as it allows an efficient and analytically derived statistical assessment of results from any network biology tool.
Background
Representing and modeling biological processes as networks, in particular signaling and gene regulatory relations, is a widely used practice in bioinformatics and computational biology. This bridges these research fields to the vast repertoire of tools and formalisms provided by graph- and complex-network theory. Furthermore, this facilitates an integrative analysis of experimental observations, either by derivation of networks from the data, or by mapping the latter on the former. Hence, network-based approaches have become a popular paradigm in computational biology [1,2].
In the last few years this has allowed the design of a broad assortment of algorithms and tools whose aim ranges from providing an interpretative framework for the modeled biological relations, to the identification of network modules able to explain phenotypic traits and experimental data from large reference signaling graphs [3,4]. Many methods in this last class aim at identifying a sub-network that is, for example, composed of the most differentially expressed or significantly mutated genes [5–9], or that is targeted by a given external perturbation [10–14]. Toward this aim, different optimization procedures have been used to analyze experimental data, identifying a pathway that is deregulated in a given disease, or whose activity is perturbed upon a given drug treatment.
In many approaches, directed signed networks (DSNs, formally defined in the following sections) are used to model pathways and to interlink pathways from a given collection. In these networks, nodes represent biological entities (typically proteins) while edges represent the biological relationships between them (e.g., the activity of protein A affects that of protein B). These edges have a direction to discriminate effectors and affected nodes in a modeled relation, and a sign to specify whether the modeled relation is an activation (positive sign) or an inhibition (negative sign). Unsigned/undirected edges modeling generic interactions can be also present. When available, sign and direction allow a more detailed detection of the nature of the interaction between the nodes. In this study, the number, sign and direction of a node's connections are cumulatively denoted by the functional characterization level (FCL) of the corresponding modeled biological entity (from now entity).
In a reference network modeling a set of interlinked pathways or protein-protein-interactions, the FCL might be high for a node that models a functional hub. For example a kinase phosphorylating a large number of substrate proteins will have a high number of out-going edges with positive sign. Similarly, a gene activated by a large number of transcription factors will have a high number of positive in-coming edges. On the other hand the FCL might be strongly biased by the relevance of a biological entity in a given research field, and the resource the network has been assembled from. For example, in a cancer focused reference network it is reasonable to find nodes that have a high FCL just because they have oncogenetic or tumor-suppressive properties, thus have been studied more than others. As a consequence, solutions to the network optimization problems tackled in bioinformatics (and mentioned above) can be strongly influenced by the topology of the initial network, and by the FCL of its nodes.
In an attempt to overcome this issue, some tools assess this bias by comparing their provided sub-network solutions with those that would be obtained (using the same experimental data and the same algorithm) across a large number of trials, each starting from a reference network that is a randomized version of the original one. Many other tools neglect this aspect and the significance of the solution is computed by randomizing the experimental data only. For both options, the expectation of some topological properties (for example the inclusion of a given edge or node) of the sub-network solutions is estimated by analyzing the random solutions obtained across the trials. In this way, the significance of these properties is quantified as the divergence from their expectation, testing against the null hypothesis that there is no association between the analyzed experimental data and the outputted sub-network solutions.
To our knowledge, all the existing methods assessing their solution significance through reference network randomizations make use of a simple edge shuffling. This means that in a randomization trial each edge of the network is simply set to link two randomly selected nodes. This implicitly means that null models resulting from this randomization strategy are totally unconstrained with regards to the degree of the nodes, and the way they are linked to each other in the original network. Therefore, the impact of the FCL of the nodes in the original reference network on the outputted sub-network solution is not considered. In order to take this into account a constrained randomization strategy preserving the FCL of all the nodes in the original network must be used.
The problem of randomizing an undirected and unweighted network while preserving the degree of its nodes, i.e. the total number of incident edges for each node, is known in graph theory as network rewiring, and it unfortunately poses analytical and numerical challenges [15]. With the additional constraint that the network to rewire is bipartite (i.e. nodes can be partitioned into two sub-sets such that there are no edges linking nodes in the same set), this problem reduces to randomizing a binary matrix preserving its marginal totals, i.e. its row-wise and column-wise sums. Several algorithms exist to solve this problem [16,17], but the computationally efficient randomization of moderately large matrices (therefore the rewiring of large bipartite networks) is still challenging. Additionally, to our knowledge, none of the published methods has been formally shown to be able to actually simulate samplings from the uniform distribution of all the possible binary matrices with prescribed marginal totals. Such a proof exists for methods rewiring directed binary networks based on swap-and-fill strategies applied to their adjacency matrices [18], but these do not deal with DSNs. Finally, some recent methods have been proposed to solve the related (yet different from FCL-preserving rewiring) problem of randomizing metabolic networks in a mass-balanced way [19].
In [20] we showed how an algorithm based on a Monte Carlo procedure known as the switching-algorithm (SA) [21] can be used to efficiently randomize large cancer genomics datasets preserving the mutation burdens observed across patients and the number of mutations harbored by individual genes (hence to efficiently rewire large bipartite networks). To this aim, we derived a novel lower bound for the number of steps required by the SA in order for its underlying Markov chain to reach a stationary distribution. Additionally, we implemented the SA in the R package BiRewire (publicly available on Bioconductor [20]) and we showed a massive reduction in computational time requirements of our package and bound with respect to other existing R implementations [22] and bounds [21].
Here (i) we introduce the problem of rewiring a DSN modeling a biological network in a way that the FCL of all the modeled entities is preserved: F-Rewiring; (ii) we formally show how this problem reduces to rewiring 2 bipartite networks; (iii) we provide a generalized bound to the SA for bipartite networks of any size; and (iv) we show the validity of the Markov chain convergence criteria (used in our previous work) for F-rewiring DSNs.
Finally, we provide an overview of the functions included in a new version of BiRewire for F-rewiring, and we show results from two case studies where solutions obtained with two network optimization methods (BioNet [9] and CellNOpt [23]) are assessed for statistical significance and initial reference network biases against constrained null models generated with BiRewire.
Preliminary notations
The problem we are tackling is the computationally efficient randomization of a directed and signed network (DSN) (formally defined below) in a way that some local features of its individual nodes are preserved.
In such a network G = (V , E), the edges in E can be encoded as triplets (a, b, * ) where a is called source node, b is called target node and * is a label denoting the sign of the relation occurring among them, which could be positive, * = +, or negative, * = −.
According to this definition, in a DSN the edge (a, b, +) is different from the edge (a, b, −), thus making this formalism more flexible than that provided by a directed weighted network (with weights ∈ {+1, −1}). In fact, differently from such a model, in a DSN two edges with same terminal nodes and direction but different sign can coexist. In addition, a DSN is different and less general than a multidigraph (a directed multigraph), because only two possible edges with the same direction can coexist between the same couple of nodes.
Given an edge e ∈ E, we define the function λ(e) : E → {+, −}, mapping each edge to its sign label.
Given a node v ∈ V, we define its in-bound-star I(v) as the set of edges in E having v as destination, and its out-bound-star O(v) as the set of edges in E having v as source; I + (v), I − (v), O + (v) and O − (v) denote the corresponding subsets of edges with positive and negative labels, respectively. By naturally extending the definition of node degree (i.e. the number of edges connected to a node) to these formalisms, we call positive-in-degree of a node v the quantity |I + (v)|, equal to the number of edges with positive label having v as destination. Similarly we define the negative-in-degree, positive-out-degree and negative-out-degree of v as the quantities |I − (v)|, |O + (v)| and |O − (v)|, respectively.
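Collecting the four signed degrees of a node into a single tuple (the vector notation below is introduced here only for readability and is not used elsewhere in the paper):
$$\mathrm{FCL}(v) := \bigl(\,|I^{+}(v)|,\ |I^{-}(v)|,\ |O^{+}(v)|,\ |O^{-}(v)|\,\bigr), \qquad I^{\pm}(v) = \{(a,v,\pm)\in E\}, \quad O^{\pm}(v) = \{(v,b,\pm)\in E\}.$$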
In the light of the introduced notation, the object of this study can be redefined as the randomization of the edges of a DSN G while preserving not only its general node degrees (network rewiring), but also all the signed degrees defined above, for all the nodes: network F-rewiring.
A biological pathway can be naturally represented through a DSN G = (V , E). In this case the nodes in V would represent biological entities, and the edges in E would represent functional relationships occurring among them, whose type would be defined by the sign label (+ for activatory and − for inhibitory interactions), with directions indicating effector/affected roles (source/destination of the edges). Accordingly, the signed degrees introduced above would define the functional characterization level (FCL) of the individual biological entities, considering all the possible roles that they can assume within a given pathway.
Particularly the positive-out-degree of a node v would correspond to the level of characterization of the corresponding biological entity as activator of other entities; the negative-out-degree would correspond to its characterization as inhibitor; finally, the positive-, respectively negative-, in-degree of a node would correspond to the level of characterization of the corresponding entity as activated, respectively inhibited, by other entities in the same DSN.
As a consequence, the ultimate goal of this study is to efficiently randomize a pathway (or a collection of interlinked pathways) in a way the functional characterization levels of its individual entities, i.e. the signed-directed degrees of all the nodes, are preserved.
F-rewiring of a directed signed network is reducible to the rewiring of two bipartite networks: reduction proof
Let us consider a directed signed network (DSN) G = (V , E), with λ(e) ∈ {−, +}, ∀e ∈ E, and a transforming function f (G) from the set of all the possible DSNs to the set of all the possible pairs of bipartite networks. Worthy of note is that the same node of G can be both a source (therefore belonging to the set S * ) for some edge in E, and a destination (therefore belonging to the set D * ) for some other edge in E. As a consequence f should also relabel the nodes (for example adding a subscript to the labels of the nodes in D * ). Here, for simplicity, we will neglect this relabeling.
In conclusion, the function f maps G to two bipartite networks (BNs) (B + , B − ) such that B + = (S + , D + , E + ) is the BN induced by the positive edges of G, where all the sources of these edges are included in the first node set S + , all the destinations in the second set D + , and two nodes across these two sets are connected by an undirected edge if they are connected in the original network G by a positive edge that goes from the node in the first set to that in the second one. The second bipartite network of the pair, B − , is similarly induced by the negative edges of G. An example of this transformation is shown in Fig. 1a.
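To make the transformation concrete, the following R sketch applies f and f −1 to a toy DSN stored as an edge list; the data-frame layout and column names are our own choices for illustration and are not part of the BiRewire interface:

    # a DSN as a data frame of triplets (source, target, sign), with sign in {"+", "-"}
    dsn <- data.frame(source = c("a", "a", "b", "c"),
                      target = c("b", "c", "c", "a"),
                      sign   = c("+", "-", "+", "-"),
                      stringsAsFactors = FALSE)

    # f(G): the two bipartite edge lists induced by the positive and negative edges;
    # sources and targets form the two node sets of each bipartite network
    b_pos <- dsn[dsn$sign == "+", c("source", "target")]
    b_neg <- dsn[dsn$sign == "-", c("source", "target")]

    # f^{-1}((B+, B-)): re-attach the sign labels and take the union of the edges
    dsn_back <- rbind(cbind(b_pos, sign = "+"), cbind(b_neg, sign = "-"))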
It can be shown that such a function f realizes a bijection between the set of all the possible DSNs and the set of all the possible pairs of BNs [24]. As a consequence its inverse f −1 is a function from the set of all the possible pairs of BNs to the set of all the possible DSNs: it converts each edge of the first BN of the pair into a positive directed edge (from its node in the source set to its node in the destination set) and each edge of the second BN into a negative directed edge. For simplicity, we assume that f −1 re-assigns to the nodes their original labels before constructing the node/edge sets of G, if they were relabeled by the function f. An example of this inverse transformation is shown in Fig. 1b. The following result states that F-rewiring G amounts to rewiring the two induced BNs: if R + and R − are rewired versions of B + and B − , respectively, then H = f −1 ((R + , R − )) is an F-rewired version of G. Proof. First of all we need to show that H is a randomized version of G, in other words that H is a directed signed network with the same node set and number of edges of G and the same signed-directed node degrees, but a different edge set.
To this aim let H = f −1 ((R + , R − )) = (W , F). Since a rewiring does not affect the node set of the transformed network, R + has the same node set as B + , and R − has the same node set as B − . On the other hand, B + and B − are the two bipartite networks induced by the positive and negative edges (respectively) of G. By construction, the union of their nodes gives V. Taken together, these observations imply that W = V. From the definition of f, B + contains the positive edges in E and B − the negative edges of E (whose terminal nodes have been possibly relabeled). From the definition of rewiring, the edge set of R + contains the same number of edges as B + but at least one edge not contained in B + . Similarly the edge set of R − contains the same number of edges as B − and at least one edge not contained in B − . Therefore, from the definition of f −1 , |F| = |E| and F contains at least two edges that are not included in E. This implies that F ≠ E.
As a conclusion G and H have the same set of nodes and number of edges but different edge sets.
Secondly we need to show that the signed degrees of all the nodes of H are equal to those of all the nodes in G.
Let us assume, by contradiction, that the positive-in-degrees of H are different from those of G. From the f −1 definition, this implies that R + contains at least one node in the destination set whose degree is different from that of its counterpart in B + . However, this contradicts R + being a rewired version of B + . With the same argument it is possible to prove that all the signed-directed node degrees of H are equal to those of G.
Switching-algorithm lower bound for bipartite networks of any size
To rewire a bipartite network B = (S, D, E), the switching-algorithm (SA) [21] performs a cascade of switching-steps (SS). In each of these SS two edges (a, b) and (c, d) are randomly selected from E and replaced with (a, d) and (c, b) if these two new edges are not already contained in E. In this case the SS under consideration is said to be successful.
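For concreteness, a single switching-step on a bipartite edge list can be sketched in R as follows; this is only an illustration of the step just described, not the internal implementation used by the package:

    # edges: a two-column matrix, one row per edge (source node, destination node)
    switching.step <- function(edges) {
      i <- sample(nrow(edges), 2)                 # pick two edges (a,b) and (c,d) at random
      a <- edges[i[1], 1]; b <- edges[i[1], 2]
      c <- edges[i[2], 1]; d <- edges[i[2], 2]
      key <- paste(edges[, 1], edges[, 2])
      # the step is successful only if the proposed edges (a,d) and (c,b) are not already in E
      if (!(paste(a, d) %in% key) && !(paste(c, b) %in% key)) {
        edges[i[1], 2] <- d
        edges[i[2], 2] <- b
      }
      edges                                       # returned unchanged if the step was unsuccessful
    }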
Underlying the SA is a Markov chain whose states are different rewired versions of the initial network G and a transition between states is realized by a successful SS.
In [20] we proved that, if a sufficiently large number of SS is executed, the SA can efficiently simulate samplings from the uniform distribution of all the possible bipartite networks with predefined node sets and prescribed node degrees. Therefore it can be used to obtain a rewired version of a network B that is, on average, no more similar to B than two bipartite networks B 1 and B 2 sampled from the real uniform distribution of all the possible bipartite networks with the same node sets and node degrees of B are to each other. To this aim, the number of SS to be performed before sampling the (k+1)-th rewired network must be large enough to assure that the algorithm has forgotten the k-th sampled rewired network (the starting network G for k = 0). Formally, the number of SS between two consecutive samplings must be at least equal to the burn-in time of the Markov chain underlying the SA, which is needed to reach a stationary distribution [25,26]. An example of this is shown in Fig. 2: the 5 plots show results from a simulation study in which the SA has been used to rewire a synthetic bipartite network of 50 + 50 nodes and an edge density of 20%, and rewired versions of this network have been sampled at different intervals of SS. A sampling interval of 1 SS produces sampled networks that are strongly related to each other (Fig. 2a). Gradually increasing the sampling interval (from 5 to 20 SS, Fig. 2b to d) reduces the sampled network similarities, but some local dependencies are maintained. At a sampling interval of 300 SS (Fig. 2e) the Markov chain underlying the SA has reached its stationary distribution, the sampled networks are completely unrelated and there are no dependencies. Therefore, for the bipartite network under consideration, a number of SS ≥ 300 is sufficient to simulate samplings from the uniform distribution of all the possible bipartite networks with 50 + 50 nodes and node degrees equal to those of the original network.

(Fig. 2 legend: the five panels, labeled a to e, correspond to the five sampling intervals; points represent sampled networks, arrows indicate the starting synthetic network, and colors indicate the sampling order. Point proximities reflect the corresponding network similarities quantified through the Jaccard index, and point coordinates have been obtained with a generalized multi-dimensional scaling procedure, t-SNE.)

An empirical bound for the minimal number of SS to be performed by the SA between two consecutive samplings has been proposed in [21] as being equal to 100 times the number of edges of the bipartite network to rewire. This makes rewiring moderately large networks computationally very expensive.
By analyzing the trend of similarity to the original network along the sample path of the Markov chain simulation implemented by the SA, in [20] we proposed a novel lower bound N (Eq. 1) to the number of SS needed to rewire large bipartite networks, where E is the set of edges of the network to rewire B = (S, D, E) and d = |E|/(|S||D|) is its edge density.
In [20] we show that this bound is much lower than the empirical bound of [21], and that our SA implementation and bound provide a massive reduction of the computational time required to rewire large bipartite networks (with thousands of nodes and tens of thousands of edges) with respect to other SA implementations [22] and that empirical bound.
Here we provide a generalization of the lower bound N, making the SA effective and computationally efficient in rewiring bipartite networks of any size. This is motivated by the observation that a DSN modeling a pathway (and the two bipartite networks induced by its positive and negative edges, respectively) can be composed of even a modest number of nodes and edges.
As shown in the supplementary data of [20] (from now on, equations from this paper will have GSD, for Gobbi supplementary data, as prefix), Eq. 1 follows from GSD-Equation 11 (page 20) and it is a simplified form of Eq. 2, where t = |S||D| is the total number of possible edges of the original network, d = |E|/t is its edge density, p r is the probability of a SS to be successful, and ε is the accuracy of the bound in terms of distance (quantified through the convergence metric that we used to monitor the Markov chain underlying the SA, based on the number of edges shared by the original network and its rewired version at the generic k-th SS, and defined in GSD-Equation 9, page 19) from the real fixed point x̄. Under the assumption of a uniform degree distribution 1 we showed that p r = (1 − d) 2 (GSD-Equation 4, page 16). As a consequence Eq. 2 can be rewritten as Eq. 3, which for ε = 1 gives Eq. 1. Equation 3 expresses the lower bound of the number of SS as a function that accounts for the network topology and the estimated distance of the Markov chain underlying the SA from its steady-state, according to the convergence metric used in [20]. In more detail, this distance is equal to |x (k) − x̄|, where x (k) is the number of common edges between the original network and its rewired version after k SS, and x̄ is the expected number of common edges between the original network and its rewired version after the Markov chain underlying the SA has reached its stationary distribution.
In our previous bound definition, ε was expressed in terms of a number of edges, and N was defined as in Eq. 1 in order to have |x (k) − x̄| ≤ 1 for k ≥ N.
For large bipartite networks, i.e. |E| > 10000, a value of ε = 1 guarantees a relative error δ < 0.01% of edges for a number of SS k ≥ N. However, for relatively smaller networks, for example when |E| = 100, a value of ε = 1 implies a substantial increase in the relative error to δ = 1%, making the estimated lower bound N increasingly suboptimal with respect to the estimated real fixed point.
For this reason here we redefine the lower bound N for the number of SS as a function of its relative error δ, which quantifies its sub-optimality with respect to the estimated real fixed point. Through the simple substitution ε = |E|δ, Eq. 3 can be rewritten as a bound (Eq. 4) that depends only on the level of accuracy δ, the density d of the original network and the probability p r of a successful SS. For uniformly distributed degrees 1 , i.e. p r = (1 − d) 2 , this bound reads as in Eq. 5. A value of δ = 0.00005 (corresponding to ε = 1 edge when |E| ∼ 20000) is used by default by our new implementation of the SA in the new version of the package BiRewire, but this parameter can also be set to a user-defined value, making our tool and bound suitable for the rewiring of bipartite networks of any size. Additionally, the choice of a suitable value for this parameter can be guided by visually inspecting the SA Markov chain convergence with a new dedicated function (described in the "Overview of the new functions included in BiRewire v3.0.0" section).
Convergence criteria for signed directed networks
In [20] we showed that the convergence criteria we used to estimate our lower bound N for the number of switchingsteps (SS) needed to rewire bipartite networks can be applied also to the more generic case of undirected networks.
To show the validity of this criteria for the F-rewiring of directed signed networks (DSNs), let us observe that the Jaccard Index (J) [27] used to assess the similarity between two DSNs with the same set of nodes and same number of edges, G = (V , E) and H = (V , F), is defined as J(G, H) = |E ∩ F| / |E ∪ F| = x/(2|E| − x), where x = |E ∩ F| is the number of common edges and the last equivalence holds because the two DSNs have the same number of edges. When estimated for bipartite networks, our N guarantees that the number of common edges between an initial network B and its rewired version at the N-th switching-step is asymptotically minimized. Proof. J(G, H) reaches a minimum when the number of common edges x between G and H reaches a minimum.
x is given by the sum of the numbers of common positive and negative edges across the two networks, namely x = x + + x − . Given that H = f −1 ((R + , R − )), x + is the number of common edges between B + and R + . Analogously, x − is the number of common edges between B − and R − . Since R + and R − are rewired versions of B + and B − computed through N + and N − switching-steps (minimizing x + and x − , respectively), x = x + + x − is also minimized.
Overview of the new functions included in BiRewire v3.0.0
The R-package BiRewire (http://bioconductor.org/packages/BiRewire/) was originally designed to efficiently rewire large bipartite networks [20]. We have performed a major update, by including functions to:
• read/write directed signed networks (DSN) from/to simple interaction format (SIF) files (functions birewire.load.dsg and birewire.save.dsg);
• perform the transformation f (and its inverse f −1 ) to derive the bipartite networks induced by the positive and negative edges of a DSN, and vice-versa (functions birewire.induced.bipartite and birewire.build.dsg);
• F-rewire a DSN by applying the switching-algorithm (SA) to the two corresponding induced bipartite networks, with numbers of switching-steps automatically determined for both networks individually using Eq. 3 (function birewire.rewire.dsg);
• sample K rewired versions of a network: this function runs K instances of the SA in cascade; each of these instances performs a number of switching-steps (SS) determined using Eq. 3. This function can take in input a bipartite network, an undirected network or a DSN (in this case Eq. 3 is used individually for the two bipartite networks induced by the positive and negative edges of the DSN, respectively) (birewire.sampler.* functions);
• monitor the convergence of the Markov chain underlying the SA on user-defined networks. This routine samples a user-defined number of networks at user-defined intervals of SS. For each of these intervals, it computes a Jaccard similarity [27] by pair-wisely comparing the sampled networks to each other; finally it plots the sampled networks in a plane where point proximities reflect the Jaccard similarities of the corresponding networks and point coordinates are computed through the generalized multi-dimensional scaling method t-SNE [28]; this function outputs the network coordinates of such scale reductions and produces the plots shown in Fig. 2. Also in this case the inputted graph can be a bipartite network, an undirected network or a DSN (birewire.visual.monitoring.* functions);
• perform an analysis of the trends of Jaccard similarity across SS. This function performs a user-defined number of independent runs of the SA, computing the mean value and a confidence interval of the observed pairwise Jaccard similarities between the obtained rewired networks. The result is a dataset containing the Jaccard similarity scores computed and sampled at user-defined intervals of SS, and a plot similar to that shown in Figs. 3a and 4a. This function takes in input a bipartite network, an undirected network or a DSN (birewire.analysis.* functions).
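As an illustration of how these functions fit together, a typical BiRewire3 session on a DSN stored in a SIF file might look roughly as follows; the file names are placeholders, and the exact call signatures are not spelled out in this section, so they should be checked against the package documentation rather than read as the definitive interface:

    library(BiRewire)

    # load a directed signed network (SIF format) and derive the induced bipartite networks
    dsg <- birewire.load.dsg("pathway.sif")      # hypothetical input file
    bip <- birewire.induced.bipartite(dsg)       # f(G): positive- and negative-edge BNs (optional, for inspection)

    # F-rewire the DSN: the SA is applied to both induced BNs,
    # with the number of switching-steps determined via Eq. 3
    dsg.rewired <- birewire.rewire.dsg(dsg)

    # write the F-rewired network back to a SIF file
    birewire.save.dsg(dsg.rewired, "pathway_rewired.sif")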
Worthy of note is that, besides supporting the analysis of DSNs, our package can also handle generic directed graphs; therefore, with BiRewire3 it is now possible to rewire any kind of unweighted network.
We have also developed a Cython wrapper of the corresponding C library for Python users. A first release (with some basic functions) can be found at https://github.com/andreagobbi/pyBiRewire.
Case study 1: BioNet
The R package BioNet [29] provides a set of methods to map gene expression data onto a large reference biological network, and to identify (with a heuristic method) a maximal scoring sub-network (MSS), which is a set of connected nodes (or module) with unexpectedly high levels of differential expression [30]. Several other methods moving along the same lines exist (among others, EnrichNet [6]). Here we focus on BioNet because it can be considered a typical example among these methods, and we show how BiRewire3 can be used to estimate the impact of the reference network topology and the functional characterization level (FCL), i.e. the signed-directed degree, of its nodes on the optimal module outputted by this tool.
The initial reference network used by BioNet (the Interactome) is a large undirected protein-protein-interaction network assembled from HPRD [31] and encompassing 9,392 nodes and 36,504 edges. In [29], the authors show an application of BioNet to gene expression data from a diffuse large B-cell lymphoma (DLBCL) patient dataset with corresponding survival data. After determining gene-wise P-values for differential expression and risk association, the authors aggregate them and fit a beta-uniform mixture model to the distribution of aggregated P-values, which yields a final score (accounting for both considered factors) for each gene: the higher this score, the more a gene is differentially expressed across the contrasted groups of patients. Then the method proceeds by mapping these scores onto the Interactome nodes and, applying a heuristic method [9], identifies a sub-network (referred to as a module) that is a sub-optimal estimate of the MSS. This module is shown in Fig. 3c, and the BioNet package vignette contains detailed instructions on how to reproduce this result.
To evaluate the impact of the FCLs of the Interactome nodes on the module outputted by BioNet when used on the DLBCL dataset, we generated 1,000 F-rewired versions of the Interactome with BiRewire3 and used each of them as initial reference network in 1,000 individual BioNet runs, using the DLBCL dataset as input.
To this aim, we first conducted a BiRewire3 analysis (using the dedicated function of our package) to determine the number of switching-steps (SS) to be performed by the switching-algorithm (SA) in order to F-rewire the Interactome. This function makes use of the convergence criterion we designed in [20], which is based on the estimated time, in terms of SS, at which the Jaccard similarity (JS) between the original network and its rewired version at the k-th SS reaches a plateau (Fig. 3a). In [20] we showed that this criterion is equivalent to other established methods for monitoring Markov chain convergence when the states are networks. In addition, its relatively simple formulation permits the analytical derivation of an estimated plateau time, i.e. our bound N. Nevertheless, our package also allows a visual inspection of the optimality of the estimated bound N, showing how independent the F-rewired versions of an initial network are when sampled at user-defined SS intervals as well as every N SS (Fig. 2).
These preliminary analyses resulted in a required number of SS equal to N = 170,491 (Fig. 3a) and showed that this number of SS is actually sufficient to generate unrelated F-rewired versions of the Interactome, thus to simulate samplings from the uniform distribution of all the possible networks with the same number of nodes and the same FCLs as the Interactome (Fig. 3b). Generating 1,000 F-rewired versions of the Interactome, sampled every N SS, required ∼ 2 hours on a 4-core 2.4 GHz computer with 8 GB memory.

Fig. 4 CellNOpt case study. a Analysis of the Jaccard index trend across switching-steps (SS) while rewiring the two bipartite networks induced by the positive (respectively negative) edges of the reference DSN (liver prior knowledge network (liver-PKN)) and estimation of the lower bounds for the number of switching-steps; b visual inspection of the switching-algorithm Markov chain convergence to verify the suitability of the estimated bounds (see Fig. 2 legend for further details); c comparison of the CellNOpt scores and the rewired scores; d empirical p-values of the CellNOpt scores across the entire family of models; e the liver-PKN used by CellNOpt as initial reference network; f the model outputted by CellNOpt when using the liver-PKN as initial reference network, with the frequency of inclusion of each node in a set of 1,000 models outputted by CellNOpt using F-rewired versions of the liver-PKN as reference networks superimposed
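As a sketch, this preliminary analysis and the generation of the 1,000 F-rewired Interactomes could look as follows in R. The `.undirected` suffixes, the argument names and the input representation are assumptions, since only the `birewire.analysis.*` and `birewire.sampler.*` families are named above.

```r
# Hypothetical sketch: estimate the switching-step bound N for the Interactome
# and sample 1,000 F-rewired versions separated by N switching-steps each.
library(BiRewire)

# 'interactome' is assumed to hold the HPRD-derived undirected network in the
# representation expected by the undirected BiRewire functions.
an <- birewire.analysis.undirected(interactome)      # Jaccard trend and bound N

birewire.sampler.undirected(interactome,
                            K    = 1000,                    # rewired copies
                            path = "rewired_interactomes")  # output directory
```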
Running 1,000 independent instances of BioNet, using each of these F-rewired Interactomes as the reference network and the DLBCL dataset as input, resulted in 1,000 different module solutions (rewired solutions). For each of the nodes included in the original BioNet module solution (Fig. 3c), we quantified the fraction of rewired solutions including it and investigated how this quantity related to the corresponding BioNet scores (Fig. 3d). As expected, we observed a significant correlation (R = 0.51, p = 0.001). In fact, as per the definition of the MSS, it is reasonable that nodes with high scores (such as NR3C1 and BCL2) tend to be included in the module outputted by BioNet regardless of their edges and degree in the reference Interactome. Similarly, nodes with large negative scores (such as CDC2 and JUN) are included in the module only because they link high-scored nodes, and it is expected that they do not tend to be included in the rewired solutions, as in this case the way they are interlinked to other nodes is crucial.
Nevertheless, a number of nodes (such as SMAD4, SMAD2 and PIK3R1) have modest scores but tend to be included very frequently in the rewired solutions. This hints that what drives the inclusion of such nodes in the BioNet module is their high FCL. As a confirmation of this, SMAD4, SMAD2 and PIK3R1 fall over the 99th percentile when sorting all the nodes in the Interactome (and included in the DLBCL dataset) based on their FCL (which in this case corresponds to their degree). This shows that the reference network has a positive impact on the module outputted by BioNet, and that at least some nodes are included in the solution because of their high FCL.
When extending this analysis to the nodes of the Interactome (included in the DLBCL dataset) that are not present in the module outputted by BioNet, we again observed an expected significant correlation (R = 0.51, p < 10^-16), and some nodes (such as JUP, MMP2 and ITGA6) with high scores that were frequently included in the rewired solutions (the fact that these nodes do not appear in the BioNet outputted module is due to the sub-optimality of the used heuristic). However, we also observed a large number of nodes (such as RPL13A, STK17A and IDH3A) that scored high but were relatively infrequently included in the rewired solutions. This hints that these nodes are penalized by their low FCL in the reference Interactome, thus showing that the reference Interactome also has a negative impact on the BioNet outputted module, and that at least some nodes are not included in the solution because of their low FCL.
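The inclusion-frequency analysis itself reduces to a count over the 1,000 rewired solutions followed by a correlation test; a generic R sketch with hypothetical object names is shown below.

```r
# 'module_nodes'    : nodes of the original BioNet module (or of the Interactome)
# 'rewired_modules' : list of 1,000 node sets, one per rewired solution
# 'node_scores'     : named vector of BioNet scores
# (all three object names are hypothetical)
inclusion_freq <- sapply(module_nodes, function(n)
  mean(sapply(rewired_modules, function(m) n %in% m)))

# Relate the inclusion frequency in the rewired solutions to the BioNet scores.
cor.test(inclusion_freq, node_scores[module_nodes])
```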
An indication of both these types of impact, together with diagnostic plots and statistics, would complement and complete the output of many valuable and widely used tools, such as BioNet.
Case study 2: CellNOpt
CellNOpt (www.cellnopt.org) is a tool used to train logic models of signal transduction starting from a reference directed signed network (DSN) called a prior knowledge network (PKN), describing causal interactions among signaling species (typically obtained from the literature), and a set of experimental data (typically phosphorylation), obtained under various perturbation conditions [23].
CellNOpt converts the PKN into a logic model and identifies the set of interactions (logic gates) that best explain the experimental data. This is performed through a set of Bioconductor packages supporting a number of mathematical formalisms from Boolean models to ordinary differential equations.
Through a built-in genetic algorithm, CellNOpt identifies a family of subnetworks of the reference DSN (from now on, models) together with the value of the objective function (the model score δ) quantifying to what extent each model is able to explain the experimental data (the lower this value, the better the fit of the model to the data). By default, the best model, with the lowest score denoted δ̂, is returned to the end-users. Note, however, that multiple models may be returned if they cannot be discriminated given the experimental evidence. Besides, to account for experimental noise, users may also provide a parameter called tolerance (in percentage) that will keep all models below a threshold defined as λ = δ̂(1 + tolerance).
Setting this tolerance parameter is non-trivial and depends largely on the experimental error. One idea would be to estimate this threshold by looking at the expected ability of F-rewired versions of the liver-PKN to explain the data when they are used as input to CellNOpt. In fact, even if the original local node properties are maintained, in each of these F-rewired networks the topology of the biological pathways interlinked in the liver-PKN is disrupted. As described before, a large score calculated by CellNOpt indicates a large disagreement between the data and the network logic behavior at the measured nodes. Therefore, the distribution of the δs outputted by CellNOpt when using these F-rewired networks gives an idea of the attainable baseline performances, which are not derived from biologically meaningful models but depend only on the FCLs (signed and directed node degrees) of the original liver-PKN.
Based on these considerations, here we show how BiRewire3 can be used to identify such a threshold as the maximal δ value whose deviance from expectation is statistically significant. Similarly to the previous case study, this expectation can be empirically estimated by running a large number of independent CellNOpt runs using F-rewired versions of the initial reference signaling network and the same experimental data, thus accounting for the effect of the node FCLs on both scores and outputted models. To this aim, we used the same reference PKN network and phosphoproteomic data used in [23], which has about 80 nodes and 120 directed and signed edges. This was a study on human liver cells, and hence the network is called liver-PKN hereafter. With the BiRewire3 package we generated (in less than 10 seconds, on a standard unix laptop) 1,000 F-rewired versions of the liver-PKN, visually inspecting (as in the previous case study) the optimality of our estimated lower bound N for the number of switching-steps (SS) to be performed by the switching-algorithm (SA) between one sampled F-rewired network and the following one (Fig. 4a,b). Subsequently, we ran 1,000 independent instances of CellNOpt (using the CellNOptR package [23], v1.16, available on Bioconductor at www.bioconductor.org/packages/CellNOptR/)
on each of these F-rewired liver-PKN networks and the same phosphoproteomic dataset (obtaining one rewired model per analysis), as well as a final run using the original liver-PKN network (obtaining a family of 1,000 different models).
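The rewiring side of this analysis can be sketched as follows. The `birewire.sampler.dsg` name and its arguments are assumptions (only the `birewire.sampler.*` family is named above), and `run_cellnopt_once` stands for a hypothetical wrapper around the CellNOptR training pipeline that returns the best score δ obtained with a given reference network.

```r
library(BiRewire)

# Load the liver-PKN as a directed signed network (file name hypothetical).
liver_pkn <- birewire.load.dsg("liver_pkn.sif")

# Sample 1,000 F-rewired versions of the DSN; Eq. 3 is applied separately to
# the bipartite networks induced by the positive and negative edges.
birewire.sampler.dsg(liver_pkn, K = 1000, path = "rewired_pkns")

# Hypothetical wrapper around CellNOptR: trains a logic model against the
# phosphoproteomic data and returns the best objective-function score.
rewired_scores <- sapply(list.files("rewired_pkns", full.names = TRUE),
                         run_cellnopt_once, data = "liver_phospho.csv")
```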
When comparing the two populations of CellNOpt scores obtained from these two analyses, we observed, as expected, a notably statistically significant difference (t-test p-value < 10^-16, Fig. 4c), indicating that in the F-rewired networks the topology of the pathways originally interlinked in the liver-PKN is actually disrupted. Subsequently, using the distribution of scores of the rewired models, we computed empirical p-values for the CellNOpt scores of the entire model family outputted by the final run (making use of the original liver-PKN).
For a given score δ_i corresponding to the i-th model of the family, an empirical p-value was set equal to the number of rewired models m such that δ_m ≥ δ_i, divided by 1,000 (the number of tested F-rewired liver-PKNs). More than 90% of the models in the outputted family had a CellNOpt score significantly divergent from expectation (p-value < 0.05), and the estimated score threshold guaranteeing this (or a greater) divergence from expectation, and thus a minimal impact of the initial liver-PKN FCLs, was equal to 0.06.
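In code, the empirical p-value defined above is a simple count; the sketch below assumes `rewired_scores` holds the 1,000 δ values from the F-rewired runs and `family_scores` the δ_i of the model family obtained with the original liver-PKN (both object names are hypothetical).

```r
# Empirical p-value for each model i of the family: the fraction of rewired
# models whose score is at least as large as delta_i.
empirical_p <- sapply(family_scores,
                      function(delta_i) mean(rewired_scores >= delta_i))

# Maximal score whose divergence from expectation is still significant,
# i.e. the threshold discussed above (0.06 for the liver-PKN analysis).
threshold <- max(family_scores[empirical_p < 0.05])
```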
Finally, and similarly to the analysis performed in the first case study, we quantified the tendency of each of the nodes included in the final merged CellNOpt model to be included in the rewired models, finding that also in this case this tendency is proportional to the nodes' FCL.
In summary, BiRewire3 could be effectively used to determine a score threshold on an analytical ground, based on which meaningful models could be selected from the family outputted by CellNOpt for further analyses, and finally to assemble a consensus model solution.
Additionally, it could be employed to evaluate the extent of the impact of the CellNOpt reference network on the topology of its outputted consensus model.
Discussion
BiRewire3 is a one-stop tool to rewire, in a meaningful and computationally efficient way, any type of unweighted network (undirected, directed, and signed) currently used to model different datasets and relations in computational biology (including presence-absence matrices, genomics datasets, pathways and signaling networks). It represents a significant and formally demonstrated advance with respect to its previous version [20], whose applicability was restricted to presence/absence matrices and undirected bipartite networks. We have previously shown that, thanks to an analytically derived lower bound on the number of steps of its underlying algorithm, the computational time requirements of BiRewire3 are vastly lower than those of other similar tools, reducing from months to minutes (on a typical desktop computer) when rewiring networks with tens of thousands of nodes and edge density of up to 20%. Additionally, the core algorithm underlying BiRewire3 is based on a Markov chain procedure that could easily be parallelized in future implementations to exploit the power of modern multicore computer architectures, thus reducing these time requirements even further.
Our package is available as free open source software on Bioconductor and, as we showed in our case studies, it can be easily combined into computational pipelines together with a wide range of existing bioinformatics tools aiming at integrating signaling networks with experimental data.
Conclusion
We have presented a computational framework, implemented in an R package, that could complement existing network-based tools. It will be useful for computing a wide range of constrained null models for testing the significance of the solutions of these tools, and for investigating how the topology of the used reference networks can potentially bias their results.
Moreover, the range of applicability of BiRewire3 goes beyond computational biology and includes all those fields making use of tools from network theory, from operations research to microeconomics and ecological research (an example of the application of BiRewire in a microeconomics and technology patent study can be found at http://arxiv.org/abs/1509.07285). ¹ Our proof also applies to non-uniform degree distributions, leading to the same conclusions for the case of directed signed networks. Here we use the uniform case for simplicity.
|
v3-fos-license
|
2022-03-31T15:44:34.802Z
|
2022-03-28T00:00:00.000
|
247828807
|
{
"extfieldsofstudy": [],
"oa_license": "CCBYSA",
"oa_status": "HYBRID",
"oa_url": "https://biarjournal.com/index.php/lakhomi/article/download/615/605",
"pdf_hash": "013f955fa762cceababc135b143238e76902c9ef",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44276",
"s2fieldsofstudy": [
"Linguistics"
],
"sha1": "7340301083c5e27675ebd977073cda2d35a73c0e",
"year": 2022
}
|
pes2o/s2orc
|
Meaning Used By Pranatacara in Javanese Wedding Ceremony “Panggih Manten”
Abstract: This study used the descriptive qualitative method. The qualitative method is a research procedure which results in descriptive data, including written and oral words, from the research objects. The data used in this study were taken from the speech utterances of the Pranatacara in the "Panggih Manten" wedding ceremony in Aeksongsongan. The researcher took the data from two Pranatacara and two Panggih Manten ceremonies. The researcher used the theory of the semantic field by Knowles and Moon to support the thesis. Several steps were undertaken during the data collection: selecting, focusing on the important things, simplifying, abstracting and transforming the data that appear in the transcription. The results showed 23 metaphorical meanings in total from the two Pranatacara of the Javanese Wedding Ceremony, or Panggih Manten. The dominant type of metaphorical meaning was the creative metaphor, which had the highest percentage, while the least frequent was the conventional metaphor, which had the lowest percentage. To analyse and discuss metaphors in any depth, we identified and considered three things: metaphor, meaning, and connection. The sequences of Panggih Manten consisted of the opening, balangan gantal, injak telur, sinduran, dulangan, kacar kucur, sungkeman and the closing.
I. Introduction
In order to build good communication, an understanding between the speaker and the listener is needed. The objective of this understanding is to grasp the indication of something that is known as meaning. Meaning is very important to understand. The listener has to comprehend the meaning of what the speaker says in order to avoid misunderstanding the words.
The study of meaning in linguistics is known as semantics. Yule (2006) stated that semantics is the study of the meaning of words, phrases, and sentences. Moreover, semantics is concerned with the meaning of syntactic units larger than the word. Geeraerts (2010) stated that lexical semantics as an academic discipline in its own right originated in the early nineteenth century, but that does not mean that matters of word meaning had not been discussed earlier. Meaning can be interpreted literally in order to gain knowledge of it.
Meanwhile, there is metaphor within semantics. In literature, metaphor, as one of the stylistic elements, does not convey merely ideas. According to Simpson (2004), a metaphor is a process of mapping between two different conceptual domains. The different domains are known as the target domain and the source domain. The target domain is the topic or concept that you want to describe through the metaphor, while the source domain refers to the concept that you draw upon in order to create the metaphorical construction. So metaphors can describe something through a source domain while the literal meaning remains. When a metaphor is drawn, there are two different meanings in it: the first is the original meaning of the sentence, and the second is the meaning mapped by the metaphor, which belongs to the imagination.
Each region has different customs and cultures. The wedding ceremony is one of the traditional ceremonies that is still implemented and continued in society. The implementation of traditional ceremonies, such as ceremonies of birth, marriage, pregnancy and death, cannot be separated from the role of language as their medium. In Javanese ethnic marriages, ritual discourse is used in nine stages of the Javanese wedding: the handover of the prospective groom, the marriage contract, "panggih manten", the performance of Javanese traditional ceremonies, ceremonies of respect, the carnival, giving advice to the bride and groom, and the closing. "Panggih manten" is a traditional ceremony of the meeting between the bride and the groom.
Apart from that, from this traditional Javanese marriage we can learn many of the cultural values held by the Javanese. It is not enough to know panggih manten as a ceremony; one must also understand the meaning of what is spoken by the pranatacara. Considering the statements above, the researcher was interested in conducting this research, entitled "Metaphorical Meaning Used by Pranatacara in Javanese Wedding Ceremony 'Panggih Manten'".
Semantics
In brief, semantics means the study of meaning. However, the word meaning has wide perceptions, and there is no general agreement among experts about the way in which it should be described. Chaer (2009) held that semantics is the study of the meaning of words or sentences. He said that semantics is derived from the Greek noun sema, meaning "sign" or "symbol", and the verb semaino, meaning "to mark" or "to symbolize".
Metaphor
The metaphor is a kind of figurative language which uses connotative meaning through comparison without using the word "like" or "as". The metaphor is considered difficult, especially in understanding its meaning; it depends on the background knowledge of the readers. It needs deeper attention since the comparison is conveyed implicitly. According to Lakoff and Johnson (2003), metaphor is for most people a device of the poetic imagination and the rhetorical flourish, a matter of extraordinary rather than ordinary language.
Conceptual Metaphor
It is the way we understand metaphor through the concept of mapping. The mapping processes of the two domains that participate in a conceptual metaphor have special names. Kovecses (2002) adds that the conceptual domain from which we draw metaphorical expressions to understand another conceptual domain is called the source domain, while the conceptual domain that is understood in this way is the target domain. These domains bring us to the comprehension of the concept of the metaphor and map onto each other. Source domains consist of common entities, attributes, processes and relationships, such as the human body, health and illness, animals, plants, building and construction, and movement and direction (Kovecses, 2002).
Metaphor Based on Semantic Field of Knowles and Moon (2006)
According to Knowles and Moon in their book Introducing Metaphor, metaphor is the use of language to refer to something other than what it was originally applied to, or what it 'literally' means, in order to suggest some resemblance or make a connection between the two things. The conventionality of metaphor, but with a more general classification, is also adopted by Knowles and Moon (2006), who distinguish between creative/novel and conventional metaphors. In addition, according to Knowles and Moon, and relevant to this research, metaphor can also communicate what the pranatacara think and feel about something, and can explain and articulate an idea or ideas in a more attractive, special way so that they are easily understood by the audience.
a. Creative Metaphor
The metaphorical ground in these metaphors is associated with specific connotations that are deliberately employed by the writer for certain purposes. Creative metaphors, according to Knowles and Moon, can be found in many types of texts, but literary metaphors are the most prominent ones.
b. Conventional Metaphor
Conventional metaphors, according to Knowles and Moon, are often associated with the cultural values of a certain community as they exemplify its "ideas, assumptions, and beliefs".
c. Elements
Elements concern more general patterns of metaphor in literature, which act as a background against which the metaphors under analysis are assumed to function and sometimes even stand out. According to Knowles and Moon, to analyse and discuss metaphors in any depth, we need to identify and consider three things: the metaphor (a word, phrase, or longer stretch of language); its meaning (what it refers to metaphorically); and the similarity or connection between the two. In traditional approaches to metaphor, including literary metaphor, these three elements have been referred to as, respectively, vehicle, topic, and grounds.
Panggih Manten
The wedding ceremony is an important event in every human life. Basically, a wedding is a rite of passage, an event that marks a person's transition from one life status (single) to another (married) (Purba, N. and Mulyadi, 2020). Tradition is something passed down from the heritage of the ancestors to the next generation in a relay performed by indigenous communities, and it has become deeply entrenched in the culture of life (Purba, N. 2020). Murtiadji (1993) says the panggih manten wedding ceremony is a meeting between the groom and the bride. This ceremony signifies that the effort to reach the most perfect level of life faces very many obstacles and obstructions. There are several steps in panggih manten's sequence: Balangan Gantal, Ngidak Tigan (treading on eggs), Sinduran/Disingepi sindur, Bobot Timbang, Ngombe rujak degan, Kacar Kucur, Dulangan, and Sungkeman.
Pranatacara
One of the activities that really requires expertise in rhetoric is acting as the master of ceremony, especially in guiding Javanese traditional wedding events. The pranata adicara is an event guide for the Javanese indigenous community. In Indonesia, this role can be called the MC (Master of Ceremony). Guides of Javanese traditional programs and of national events certainly have their own characteristics. Each host must of course be able to compose the words that are spoken so as to give a beautiful impression and be able to attract the attention of the speech partners.
III. Research Method
The researcher used the descriptive qualitative method. The qualitative method is a research procedure which results in descriptive data, including written and oral words, from the research objects, whether from society or books. According to Miles and Huberman (2014), qualitative research provides ways of discerning, examining, comparing, contrasting and interpreting meaningful patterns or themes. The research used the descriptive qualitative method because the data are in the form of words, that is, qualitative. The researcher's data sources were taken from the speech utterances of the Pranatacara in the "Panggih Manten" wedding ceremony in Aeksongsongan. The researcher needed data from two Pranatacara and two Panggih Manten ceremonies. The researcher used the theory of the semantic field by Knowles and Moon to support the thesis.
Metaphorical Meaning Used By Pranatacara In Javanese Wedding Ceremony "Panggih Manten"
From the analysis, metaphorical meaning was used by the Pranatacara in the two Javanese Wedding Ceremonies "Panggih Manten". According to Knowles and Moon in their book Introducing Metaphor (2006), there are creative and conventional metaphors. Metaphor is the use of language to refer to something other than what it was originally applied to, or what it 'literally' means, in order to suggest some resemblance or make a connection between the two things. The frequency of the metaphors found in the texts can be seen in Table 4.1 below. The types of metaphorical meanings found in the two Javanese Wedding Ceremonies of Panggih Manten in the Aeksongsongan sub-district are described in this part by providing examples from the data. The researcher used the theory of the semantic field by Knowles and Moon to support the thesis.
Wedding Ceremony
According to Knowles and Moon (2006), metaphors are analyzed and discussed in any depth by identifying and considering three things: the metaphor (a word, phrase, or longer stretch of language); its meaning (what it refers to metaphorically); and the similarity or connection between the two. In traditional approaches to metaphor, including literary metaphor, these three elements have been referred to as, respectively, metaphor, meaning, and connection. By analyzing each sentence, the elements are presented as follows. Based on the picture of one of the sequences of Panggih Manten, the pranatacara said: Penganten jaler nuangken was jane kalih telapak tangan penganten setri.
From the example above, there is something the groom does for the bride. This can be seen in the metaphor jane kalih ("yellow rice"), which shows the providing of a marriage's needs or a living.
Applying the metaphor-meaning-connection model, we can identify the following. The metaphor: yellow rice. The meaning: providing a marriage's needs or a living. The connection: the groom is providing a living for the bride as his responsibility to fulfill the marriage's needs.
Wedding Ceremony
Panggih manten, or the wedding ceremony, is a meeting between the groom and the bride. This ceremony signifies that the effort to reach the most perfect level of life faces very many obstacles and obstructions. The offerings, behaviors and equipment involved can be described as follows: the opening, balangan gantal, injak telur, sinduran, dulangan, kacar kucur and sungkeman.
a. The Opening
In this sequence, a symbol or proposition to redeem the bride is presented, so it is usually called the bride's ransom.
b. Balangan Gantal/ Betel Leaves
In this sequence, the procedure is that the groom takes the heart, or love, of his lover. In return, the woman shows her devotion to the husband. This procession also signifies an event that is fleeting but cannot be repeated.
c. Injak Telur/ Step on The Egg
In this sequence, with bare feet the groom steps on an egg placed on a tray until the yellow and white parts are crushed and become one. Next, the bride washes the groom's feet, which shows her devotion and expresses how wise her husband is.

d. Sindur / Sindur Cloth

In this sequence, the shoulders of both the bride and groom are covered with a sindur cloth by the bride's mother. They walk slowly towards the 'krobongan', followed by the father from behind.
e. Dulangan / Feeding
In this sequence, the parents feed the bride and groom to bring fortune. It is a symbol from the parents to the couple of obtaining fortune in this marriage.
f. Kacar-Kucur/ Showing Responsibility
In this sequence, the groom pours yellow rice into the bride's hands. The groom is providing a living for the bride as his responsibility to fulfill the marriage's needs.
g. Sungkeman/ Asking for Blessing
In this sequence, the procession shows the devotion of the bride and groom to both parents. Apologizing to both parents at the wedding is meant to ask for their prayers and blessings, in the hope that the groom can be a responsible husband.

h. The Closing or Praying

In this sequence, the couple who have officially become husband and wife in their own family close the ceremony with prayer.
Discussion
The data were taken from two Pranatacara and two Panggih Manten ceremonies. The researcher used the theory of the semantic field by Knowles and Moon to support the thesis. The frequency is 23 metaphorical meanings from the two types of metaphorical meaning. The percentage of creative metaphors in Panggih Manten was the highest, while the percentage of conventional metaphors was the lowest. The reason the creative metaphor was dominant is that the Pranatacara used new and unique metaphors to express their ideas in written/spoken form so that the utterances become easily understood by the audience.
According to Knowles and Moon (2006), elements concern more general patterns of metaphor in literature, which act as a background against which the metaphors under analysis are assumed to function and sometimes even stand out. We need to identify and consider three things: the metaphor (a word, phrase, or longer stretch of language); its meaning (what it refers to metaphorically); and the similarity or connection between the two.
In traditional approaches to metaphor, including literary metaphor, these elements are referred to as vehicle, topic, and grounds. In the sequence of balangan gantal, which contained metaphorical meaning, the utterance sipengantin nyerawat balangan gantal sageto sipenganten melempar kasih ing ikatan suci means in English that the bride and groom are throwing betel leaves to each other. The metaphor is expressed by the throwing of betel leaves; its meaning refers to the love in a sacred bond; and the connection is that the bride and groom are showing how sacred their love, or relationship, in the marriage is.
Panggih Manten, or the wedding ceremony, is a meeting between the groom and the bride. This ceremony signifies that the effort to reach the most perfect level of life faces very many obstacles and obstructions. The sequences consist of the opening, which presents the proposition to redeem the bride; balangan gantal, which presents the bride's devotion to the husband; injak telur, which presents how wise her husband (the groom) is; sinduran, which presents that the parents will always encourage the bride and groom; dulangan, which presents the parents feeding the bride and groom to bring fortune; kacar kucur, which presents the groom's responsibility to fulfill the marriage's needs; sungkeman, which presents the asking for the parents' prayers and blessings; and the closing, or praying, with the opening and balangan gantal being dominant.
V. Conclusion
In total, 23 metaphorical meanings from the two types of metaphorical meaning were used in the two Javanese Wedding Ceremonies, or Panggih Manten, in the Aeksongsongan sub-district. The percentage of creative metaphors in Panggih Manten was 65.21%. The percentage of conventional metaphors in Panggih Manten was 34.79%. The elements of the metaphorical meanings used by the Pranatacara in the Javanese Wedding Ceremony "Panggih Manten" have been analysed by the researcher.
According to Knowles and Moon, to analyse and discuss metaphors in any depth, we need to identify and consider three things: the metaphor, which is a word or phrase that has a metaphorical meaning; the topic or meaning, which is the metaphorical meaning intended by the writer, not the literal meaning; and the connection, which is the relationship between the literal meaning and the metaphorical meaning. Through the connection, or grounds, the meaning that a sentence wants to deliver and the kind of prototype it wants to transfer can be seen. One of the examples is the utterance "Its forehead is like marble" in the sequence of the opening. The metaphor is marble, the meaning is smooth, and the connection is how smooth the forehead of the bride is.
The realization of the two Javanese Wedding Ceremonies, or Panggih Manten, in the Aeksongsongan sub-district consisted of the opening, balangan gantal, injak telur, sinduran, dulangan, kacar kucur, sungkeman and the closing or praying, with the opening and balangan gantal being dominant.
Suggestion
By considering the conclusion mentioned above, the writer formulated some suggestions as follows. After analyzing the data and summarizing the conclusion, the researcher suggests that linguistics students who want to do research with a semantics approach deeply explore conceptual metaphor theory, for instance by observing other media or clues, such as events or ceremonies, to find the types of metaphor.
In the wedding ceremony, or Panggih Manten, the theory of conceptual metaphor is used to convey the concept of metaphor, which leads to understanding the meaning of the metaphor. Meanwhile, in the sequences of Panggih Manten we can also use this theory in order to know what the metaphorical meaning is actually talking about. Therefore, the writer hopes that there will be other researchers who will conduct research using conceptual metaphor as a theory.
|
v3-fos-license
|
2021-12-14T14:32:44.455Z
|
2021-12-01T00:00:00.000
|
245126292
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcmusculoskeletdisord.biomedcentral.com/track/pdf/10.1186/s12891-021-04946-7",
"pdf_hash": "e455903f63aaf46a5f5870ed6cfbf1b7e64f81bc",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44278",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "e455903f63aaf46a5f5870ed6cfbf1b7e64f81bc",
"year": 2021
}
|
pes2o/s2orc
|
What are risk factors for subsequent fracture after vertebral augmentation in patients with thoracolumbar osteoporotic vertebral fractures
Background: Due to its unique mechanical characteristics, the incidence of subsequent fracture after vertebral augmentation is higher in the thoracolumbar segment, but the causes have not been fully elucidated. This study aimed to comprehensively explore the potential risk factors for subsequent fracture in this region. Methods: Patients with osteoporotic vertebral fracture in the thoracolumbar segment who received vertebral augmentation from January 2019 to December 2020 were retrospectively reviewed. Patients were divided into a refracture group and a non-refracture group according to the occurrence of refracture. The clinical information, imaging findings (cement distribution, spinal sagittal parameters, degree of paraspinal muscle degeneration) and surgery-related indicators of the included patients were collected and compared. Results: A total of 109 patients were included, 13 patients in the refracture group and 96 patients in the non-refracture group. Univariate analysis revealed a significantly higher incidence of previous fracture, intravertebral cleft (IVC) and cement leakage, and greater fatty infiltration of the psoas (FIPS), fatty infiltration of the erector spinae plus multifidus (FIES + MF), correction of body angle (BA), BA restoration rate and vertebral height restoration rate in the refracture group. Further binary logistic regression analysis demonstrated that previous fracture, IVC, FIPS and BA restoration rate were independent risk factors for subsequent fracture. According to ROC curve analysis, the prediction accuracy of the BA restoration rate was the highest (area under the curve was 0.794), and the threshold value was 0.350. Conclusions: Subsequent fracture might be caused by the interplay of multiple risk factors. Previous fracture, IVC, FIPS and BA restoration rate were identified as independent risk factors. When the BA restoration rate exceeded 0.350, refractures were more likely to occur. Supplementary Information: The online version contains supplementary material available at 10.1186/s12891-021-04946-7.
However, more and more studies suggested this procedure might accelerate or facilitate subsequent fractures, which lead to renewed pain, reduced daily activity and repeated treatment [3,4].
Due to its unique anatomical location and mechanical characteristics, the thoracolumbar segment has a higher incidence of subsequent fracture after vertebral augmentation, but the causes have not been fully elucidated. While scholars have confirmed some risk factors, other factors, such as intravertebral cleft (IVC), cement distribution and leakage, and correction of kyphotic deformity, remain controversial to date. Furthermore, more recent studies suggested that paraspinal muscle atrophy might play a role in chronic low back pain and lumbar degenerative diseases, whereas the exact association between paraspinal muscle degeneration and subsequent fracture after vertebral augmentation remains largely unknown. In this context, we conducted this study to comprehensively evaluate the potential risk factors for subsequent fracture in the thoracolumbar segment, including the effect of the paraspinal muscles.
Study participants
This retrospective study was conducted in the orthopedic department of two hospitals, vertebral augmentation procedures were performed by four senior surgeons via bilateral transpedicular approach according to standard procedures. Patients with symptomatic OVCF in thoracolumbar segment (T10-L2) who treated with vertebral augmentation from January 2019 to December 2020 were retrospectively enrolled. The inclusion criteria were as follows: (1) patients>65 years old with single or multiple level acute OVCF in the thoracolumbar segment. (2) patients received percutaneous vertebroplasty (PVP) or percutaneous kyphoplasty (PKP) treatment. (3) patients with at least 6 months follow-up data. And the exclusion criteria included: (1) patients with OVCF in other segments. (2) patients received other treatments. (3) fractures caused by severe trauma or pathological fractures due to tumor, infection or bone metabolic disease. (4) patients with previous spinal surgery. (5) patients with incomplete follow-up data.
Data collection and image analysis
The included patients were assigned into refracture group and non-refracture group according to the occurrence of refracture during the follow-up. For each included patient, the following clinical information were collected: age, gender, previous fracture history, number and level of primary and refracture vertebrae, surgical technique, duration of follow-up. In addition, the presence of IVC, cement distribution (12 scores method) [5] and leakage, preoperative anterior height of fractured vertebrae and intact adjacent vertebrae above and below it, postoperative anterior height of cemented vertebrae, preoperative body angle (pre-BA), Cobb's angle (pre-CA), thoracolumbar kyphosis (pre-TLK), lumbar lordosis (pre-LL) and postoperative body angle (post-BA), the cross-sectional area (CSA) of vertebral body, the CSA and fatty infiltration (FI) of bilateral paraspinal muscles (psoas (PS) and erector spinae plus multifidus (ES + MF)) ( Fig. 1) at the superior endplate of L4 on preoperative T2-weighted axial image were obtained [6]. (The CSA and FI of paraspinal muscles, the vertebral height were measured using Image J V1.8, National Institutes of Health, USA, and the angles were measured using DICOM viewer Weasis, V1.2.4, Weasis Team) Of note, the vertebral height, pre-BA, pre-CA, post-BA and cement distribution were not collected in patients with multiple fractures. Based on the results of above parameters, the relative CSA (r-CSA) of paraspinal muscles (r-CSA PS and r-CSA ES + MF ), vertebral compression rate, vertebral height restoration rate, BA restoration rate and correction of BA were also calculated. (The methods used to measure and calculate these parameters were demonstrated in Supplemental Table 1 and Fig. 1, Supplemental Fig. 1).
After reaching an agreement, the above parameters were independently measured and calculated by a spine surgeon and a radiologist. The interobserver reliability was assessed via the intraclass correlation coefficient (ICC), and the result was excellent.
Statistical analysis
All statistical analyses were conducted using statistics software SPSS 23.0, and significant differences were indicated when p < 0.05. The Chi-square test (for categorical data), the Student's t-test (for normally distributed data) and the Mann-Whitney U-test (for non-normally distributed data) were used to compare the difference between the two groups. Variables with a statistical difference were entered into binary logistic regression analysis to identify independent risk factors, and ROC curve was used to predict the critical value.
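The analysis was run in SPSS; purely as an illustration, an equivalent workflow could be sketched in R as follows (the data frame and column names are hypothetical, and this is not the authors' actual code).

```r
# 'dat' is a hypothetical data frame with one row per patient: 'refracture'
# coded 0/1 and the candidate predictors as columns.
library(pROC)

# Binary logistic regression on variables significant in univariate testing.
fit <- glm(refracture ~ previous_fracture + IVC + FI_PS + BA_restoration_rate,
           data = dat, family = binomial)
summary(fit)                     # odds ratios via exp(coef(fit))

# ROC curve and optimal cut-off for the BA restoration rate.
roc_ba <- roc(dat$refracture, dat$BA_restoration_rate)
auc(roc_ba)                      # area under the curve
coords(roc_ba, "best")           # threshold maximizing Youden's index
```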
Demographic characteristics and imaging findings
Based on the inclusion and exclusion criteria, a total of 109 patients were included in this study; the duration of follow-up was 17.53 ± 6.47 months. There were 13 patients in the refracture group (age: 78.85 ± 7.18) and 96 patients in the non-refracture group (age: 76.51 ± 7.27). Univariate analysis revealed no significant differences in age, sex, number of fractures (single or multiple), surgical technique (PVP or PKP), cement distribution, vertebral compression rate, r-CSA of PS, r-CSA of ES + MF, pre-BA, pre-CA, pre-TLK, pre-LL and post-BA. However, the results showed a significantly higher incidence of previous fracture (P = 0.032), IVC (P = 0.022) and cement leakage (P = 0.011), and greater FIPS (P = 0.015), FIES + MF (P = 0.029), correction of BA (P = 0.004), vertebral height restoration rate (P = 0.018) and BA restoration rate (P = 0.002) in the refracture group (Table 1).
Binary logistic regression and ROC curve analysis
Based on the results of univariate analysis between the two groups, all statistically significant variables (previous fracture, IVC, cement leakage, FIPS, FIES + MF, correction of BA, vertebral height restoration rate, BA restoration rate) were included in the binary logistic regression analysis. Among these variables, we found that previous fracture, IVC, FIPS and BA restoration rate (all P < 0.05) were independent risk factors for subsequent fracture (Table 2). ROC curves were used to further determine the degree of influence of each risk factor, and the results showed that the prediction accuracy of the BA restoration rate was the highest (area under the curve was 0.794). By calculating the threshold values, we found that patients would be more likely to suffer a subsequent fracture after surgery when the BA restoration rate was > 0.350 (Fig. 2, Table 3).
Discussion
Subsequent fracture is a serious complication following vertebral augmentation, which carries a great burden for patients and society in view of its high incidence. Several studies demonstrated that previous fracture is an indirect reflection of a patient's bone quality [7,8]. In a retrospective study, Ji et al. found that previous fracture history was significantly correlated with subsequent fracture following primary OVCF [8]. In another study, Lindsay et al. reported that the risk of subsequent fracture was twofold higher after a non-spinal fracture and four times greater following a spinal fracture [7]. Our finding was consistent with previous work; patients with previous fracture are also the target population for the prevention of subsequent fracture after vertebral augmentation.
The presence of IVC is common in OVCF patients, especially in the thoracolumbar segment [9]. Although the relationship between IVC and subsequent fracture has received much attention, the results remain controversial [10,11]. In a retrospective cohort study, Li et al. reported no significant difference in the incidence of subsequent fracture between patients with or without IVC [12]. Conversely, Kim and Yu found a significantly higher incidence of IVC in patients with subsequent fracture [11,13]. A similar result was also observed in our study. It is likely that IVC indicates poorer blood supply and a higher risk of cement leakage [14,15], which might account for the increased risk of subsequent fracture. Cement leakage is relatively common but usually asymptomatic [16]. Some scholars held the view that cement leakage was generally of no clinical significance, as they did not find an association between cement leakage and subsequent fracture [17,18]. On the contrary, Bae et al. revealed a significantly higher incidence of cement leakage in the refracture group [19]. Rho et al. suggested that cement leakage was a primary predicting factor of subsequent fracture [20]. Moreover, Komemushi et al. indicated that the risk of subsequent fracture was 4.6 times higher in patients with cement leakage than in those without [21].
Our study again highlighted that efforts should be made to reduce the incidence of cement leakage during surgery.
The importance of the paraspinal muscles was once disregarded. Only recently has the critical role of the paraspinal muscles in maintaining spinal stability and alignment been gradually elucidated. Some studies indicated there might be an association between paraspinal muscle atrophy and the etiology and healing of OVCF [6,22,23]. Deng et al. reported that postoperative low back muscle exercise could significantly reduce refracture risk, which suggests a protective role of the paraspinal muscles [24]. In a multicenter cohort study of 153 patients with OVCF who received conservative treatment, Habibi et al. found that greater FI, but not CSA, of the paraspinal muscles was significantly associated with the occurrence of subsequent fracture [6]. Our finding further demonstrated that greater FI was also an important risk factor for subsequent fracture following vertebral augmentation. The underlying mechanism has not yet been fully elucidated; some researchers hypothesized that fatty infiltration of the paraspinal muscles might reduce muscle contractility and strength, which leads to sagittal imbalance and increased stress on vertebral structures [25].
Restoring the vertebral height and body angle to their original states was once considered the most desirable outcome [26]. However, studies on the subject reported controversial results. While Ning and colleagues did not detect a connection between vertebral height restoration rate and subsequent fracture [18], Kim and Yoo et al. noted that a greater vertebral height restoration rate contributed significantly to the risk of subsequent fracture after percutaneous vertebroplasty [27,28]. In terms of body angle correction, Takahashi et al. observed that the correction degree was significantly greater in the refracture group than in the non-refracture group [29]. Similarly, Lin et al. reported that greater correction significantly increased the risk of refracture [30]. Our study confirmed again that too much correction of the vertebral height and body angle might lead to a higher risk of subsequent fracture. Some researchers proposed that overcorrection of vertebral height and body angle would increase paravertebral soft tissue tension, which in turn increases the mechanical load on the already weakened vertebrae [26].
In light of previous research and our own, it is possible that the occurrence of subsequent fracture is generally not caused by a single risk factor, but rather by the interplay of multiple risk factors. Therefore, in patients with the above-mentioned risk factors, cement leakage and excessive correction should be avoided, and preventive measures should be taken, such as patient education, low back muscle exercise and anti-osteoporotic treatment.
This study had some limitations. Firstly, it was a two-center study; the surgeries were performed by different surgeons, and the anti-osteoporotic regimens were not completely consistent, which might influence the outcomes. Secondly, other potential parameters, such as postoperative physical activity and smoking and drinking status, were not evaluated. Thirdly, it was a retrospective study and the number of patients was limited. Therefore, further studies are needed to verify our findings.
|
v3-fos-license
|
2020-05-28T09:16:03.275Z
|
2020-04-01T00:00:00.000
|
219474006
|
{
"extfieldsofstudy": [
"Psychology",
"Physics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1088/1742-6596/1521/3/032002",
"pdf_hash": "1284030842bf959b5674f18d0ff4183f8c558f00",
"pdf_src": "IOP",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44279",
"s2fieldsofstudy": [
"Education"
],
"sha1": "b65e6526de03b50a391a11d0addc01de20f30635",
"year": 2020
}
|
pes2o/s2orc
|
Students’ difficulties in solving trigonometric equations and identities
This study aims to identify the types of difficulties experienced by high school students in solving trigonometric equations and identities. The method used in this research is the descriptive qualitative method, to describe the facts of students' difficulties in solving trigonometric equations and identities. The data collection techniques in this study were a respondents' ability test and interviews. The students involved in this study were 72 grade XI students of senior high schools in the city of Bandung. Based on the results of the data analysis, there are three aspects of students' difficulties in solving trigonometric equations and three aspects of students' difficulties in solving trigonometric identity problems. The difficulties of students in solving trigonometric equations are the difficulty in deciphering the form of the problem, the difficulty in factoring the quadratic trigonometric form, and the difficulty in using the basic trigonometric equations. The difficulties of students in solving trigonometric identity problems are the difficulty in applying general trigonometry formulas, the difficulty in describing each of the trigonometric comparison relationships, and the difficulty in performing algebraic calculations.
Introduction
Mathematics is a scientific discipline that underlies the development of modern technology and has an important role in advancing human thinking, so mastering and creating technology in the future requires a strong mastery of mathematics from an early age. An important part of learning mathematics is the process of learning mathematics itself. Jaworski states that the implementation of mathematics learning is not easy because students experience difficulties in learning mathematics [1]. These difficulties in learning mathematics are what cause students to have low abilities in the field of mathematics.
One of the materials in the field of mathematics studied at the high school level is trigonometry. Trigonometry is one of the materials in mathematics that students must understand to develop their mathematical understanding [2]. In learning trigonometry, some students often encounter difficulties caused by a lack of understanding of trigonometric concepts. Students tend to memorize the formula given by the teacher or written in the book without understanding its intent and content, so students often make mistakes in solving trigonometric problems [3]. One of the trigonometric topics considered difficult by students is the trigonometric equation and the proof of trigonometric identities, because it requires an understanding of the right concepts and high accuracy in their application [4-6]. This can be seen from the results of the research of [5], who gave students an identification test with the following question: determine the set of solutions of 2 sin²x = −sin x + 1, for 0° ≤ x ≤ 360°. One of the answers of a student who showed difficulties in answering this question can be seen in Figure 1.
Figure 1. A student's answer on solving a trigonometric equation

In Figure 1 above, the student tends to solve the question using algebra; he has not been able to connect algebraic reasoning with the trigonometric concepts he has learned. Trigonometric concepts are important in connecting algebraic and geometric reasoning [7]. Therefore, if students are unable to connect algebraic and geometric reasoning in learning trigonometry, they will have difficulty in solving trigonometric problems.
Another difficulty experienced by the student in Figure 1 is the difficulty in deciphering the form of the problem, understanding angles in trigonometry, and the difficulty of calculating/computing to find the set of solutions. These difficulties, if left unchecked, will cause low student learning outcomes. Therefore, these difficulties need to be identified and their causes known so that the right solution can be chosen for use in classroom learning.
Methods
The research method used is the descriptive qualitative method, namely research that describes the object of research based on facts as they appear or as they are [8]. Qualitative descriptive research seeks to describe all existing symptoms or conditions, namely the state of the symptoms as they were at the time the research was conducted [9]. The subjects of this study were 72 grade XI science students of senior high schools in the city of Bandung who had participated in learning the material on trigonometric equations and identities. The object of this study is the identification of students' difficulties in solving trigonometric equations and identities.
The data collection techniques in this study were a respondents' ability test and interviews. The data obtained from the test are the difficulties experienced by students. The data validity checking technique used is method triangulation, which is done by comparing test results and interview data. The data analysis used refers to the analysis of data according to [10], namely data reduction, data presentation, and conclusion drawing. Data reduction in this study is done by summarizing all the difficulties experienced by students in completing the test, then choosing the main causes and focusing on the important aspects of these difficulties. In presenting the data, all important information obtained from the data reduction is presented in the form of a chart. The chart is a form of data presentation designed to combine integrated information, so that the researcher can analyze what is happening and determine the next steps. Finally, conclusions are drawn on the focus of the study based on the results of the data analysis.
Result and Discussion
The researcher gave a test to the 72 grade XI science students on trigonometric equations and identities, consisting of two items, namely: (1) determine the set of solutions of 2 sin²x = −sin x + 1, for 0° ≤ x ≤ 360°; (2) prove that = . The test results show that the percentage of students who answered number 1 correctly was 18% and the percentage of students who answered number 2 correctly was 36%. The difficulty of students in completing question number 1 lies in the difficulty of deciphering the form of the problem, factoring the quadratic trigonometric equation, and using the basic trigonometric equation. The identification of student difficulties in question number 1 is presented in Figure 2 below.

Figure 2. Students' answers to question 1 (Student A and Student B)

Based on Figure 2, student A has difficulty in factoring the form of the quadratic trigonometric equation and in determining the angle x that satisfies the equation for sin x; he only writes the value of sin x, not the value of the angle x. In contrast, student B understood how to look for the angle x, but he made mistakes in factoring the quadratic trigonometric equation and had difficulty using the basic trigonometric equation. After interviews with these students, it emerged that student A did not understand the trigonometric concepts well; he did not understand that the value of x should be found using the basic trigonometric equation, rather than stopping at the value of sin x. The following interview excerpt provides evidence for this. The interview excerpt showed that although the student acknowledged that he understood the problem, he did not understand the concept of trigonometry correctly. In contrast, student B understood the concept of trigonometry, but he had difficulty in factoring the quadratic trigonometric form and in using the basic trigonometric equation; the interview excerpts with student B confirmed these difficulties. This is in accordance with the results of the study of [11], which states that the trigonometric equation is a material that is difficult to teach and difficult to learn. The difficulty can also arise because the trigonometric equation material is not liked or not desired by students, so learning the trigonometric equation material becomes more difficult. In addition, many concepts must be mastered by students before learning the trigonometric equation material; for example, for question number 1 above, the concept of factoring quadratic equations must be well mastered by students.
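For reference, a worked solution of question 1 is given below, assuming the intended equation is 2 sin²x = −sin x + 1 (consistent with the quadratic-factoring difficulty described above).

```latex
\begin{align*}
2\sin^2 x &= -\sin x + 1, \qquad 0^\circ \le x \le 360^\circ \\
2\sin^2 x + \sin x - 1 &= 0 \\
(2\sin x - 1)(\sin x + 1) &= 0 \\
\sin x = \tfrac{1}{2} \quad &\text{or} \quad \sin x = -1 \\
x &\in \{30^\circ, 150^\circ, 270^\circ\}
\end{align*}
```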
Furthermore, the students' difficulties in completing question number 2 lay in applying general trigonometric formulas, in expressing each of the trigonometric ratio relationships, and in carrying out the algebraic calculations. The students' difficulties in question number 2 are presented in Figure 3. Based on Figure 3, students had difficulty expressing each of the trigonometric ratio relationships, so that completing the proof of the identity became complicated. Interviews with these students showed that what made the problem difficult was translating the trigonometric forms: the student was unable to express the relationships between the trigonometric ratios needed to reach the proof. The following is an excerpt from the interview with student C.
I: What do you think of your answer to number 2? Is it already correct?
S(C): It has not been proven yet, ma'am. I keep simplifying the form, but the answer just goes back and forth.
I: Look at the second step of your work; it is in the form of the constants 1 and -1. You just need to look for tan x, right?
S(C): Yes ma'am; to look for tan x, I simplify the form again.
I: What can tan x be broken down into? Do you know?
S(C): I forgot, ma'am.

In the interview above, the student had difficulty rewriting the trigonometric expression in terms of tan x because he did not remember how tan x decomposes, so he kept looking for simpler forms instead of completing the proof. Many other students applied general trigonometric formulas incorrectly or made errors in the algebraic calculations. The most frequent mistakes students make in solving trigonometry problems are comprehension errors, transformation errors, and process-skill errors [12]. Most misconceptions occur when students do not understand how to approach a given trigonometric problem from the underlying concept, and students often misread what the question asks for. This may be due to a lack of emphasis by the teacher on the simplifications involved, and perhaps also because students only memorize trigonometric formulas. In line with the results of [11], one cause of students' difficulty in trigonometry is that their knowledge is only procedural; they have not mastered the conceptual knowledge. We therefore need to review how trigonometry is taught in the classroom and look for likely errors or misunderstandings before teaching, so that students' difficulties in solving trigonometry problems can be overcome.
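The identity in item 2 cannot be recovered from the available text, so the LaTeX sketch below proves a representative identity of the same type rather than the actual test item; it is meant only to illustrate the rewriting into tan x that student C could not recall.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% A representative (assumed) identity, not the actual test item
Prove that $\dfrac{1 - \cos 2x}{\sin 2x} = \tan x$.
\begin{align*}
\frac{1 - \cos 2x}{\sin 2x}
  &= \frac{2\sin^2 x}{2\sin x \cos x} && \text{double-angle formulas} \\
  &= \frac{\sin x}{\cos x}            && \text{cancel } 2\sin x \ (\sin x \neq 0) \\
  &= \tan x                           && \text{definition of } \tan x .
\end{align*}
\end{document}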
Conclusion
Based on the results and discussion, the students' difficulties in completing problems on trigonometric equations and identities can be identified as follows: (a) in solving trigonometric equations, students had difficulty unpacking the form of the problem, factoring the quadratic trigonometric equation, and using the basic trigonometric equation to solve it; (b) in proving trigonometric identities, students had difficulty applying general trigonometric formulas, expressing each of the trigonometric ratio relationships, and carrying out the algebraic calculations.
|
v3-fos-license
|
2017-08-15T05:12:03.708Z
|
1999-03-01T00:00:00.000
|
10096846
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://www.intechopen.com/citation-pdf-url/25687",
"pdf_hash": "07bf89b68c7ba50f8f9865d03f83ff5fc7b0aae4",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44280",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "044d636ea2e513c6708ff9bb4c249bfe06dd4c7f",
"year": 1999
}
|
pes2o/s2orc
|
Insecticide Resistance
Insecticide resistance is an increasing problem faced by those who need insecticides to efficiently control medical, veterinary and agricultural insect pests. In many insects, the problem extends to all major groups of insecticides. Since the first case of DDT resistance in 1947, the incidence of resistance has increased annually at an alarming rate. It has been estimated that there are at least 447 pesticide-resistant arthropod species in the world today (Callaghan, 1991). Many insects have also developed resistance to new insecticides with modes of action different from the main four groups. The development of resistance in the field is influenced by various factors: biological, genetic and operational. Biological factors include generation time, number of offspring per generation and migration. Genetic factors include the frequency and dominance of the resistance gene, the fitness of the resistant genotype and the number of different resistance alleles. These factors cannot be influenced by man. Operational factors, however, such as the choice of treatment, its persistence and the insecticide chemistry, can be influenced, and therefore the timing and dosage of insecticide applications should be managed with resistance in mind. Pesticide resistance is the adaptation of a pest population targeted by a pesticide, resulting in decreased susceptibility to that chemical. In other words, pests develop resistance to a chemical through natural selection: the most resistant organisms are the ones that survive and pass on their genetic traits to their offspring (PBS, 2001). Pesticide resistance is increasing in occurrence. In the 1940s, farmers in the USA lost 7% of their crops to pests, while since the 1980s the percentage lost has increased to 13%, even though more pesticides are being used (PBS, 2001). Over 500 species of pests have developed resistance to a pesticide (Anonymous, 2007); other sources estimate the number at around 1000 species since 1945 (Miller, 2004). Today, pests that were once major threats to human health and agriculture but were brought under control by pesticides are on the rebound. Mosquitoes capable of transmitting malaria are now resistant to virtually all pesticides used against them. This problem is compounded because the organisms that cause malaria have also become resistant to the drugs used to treat the disease in humans. Many populations of the corn earworm, which attacks many agricultural crops worldwide including cotton, tomatoes, tobacco and peanuts, are resistant to multiple pesticides (Berlinger, 1996). Despite many years of research on alternative methods to control pests and diseases in crops, pesticides retain a vital role in securing global food production, and this will remain the case for the foreseeable future if we wish to feed an ever-growing population.
Introduction
Insecticide resistance is an increasing problem faced by those who need insecticides to efficiently control medical, veterinary and agricultural insect pests. In many insects, the problem extends to all major groups of insecticides. Since the first case of DDT resistance in 1947, the incidence of resistance has increased annually at an alarming rate. It has been estimated that there are at least 447 pesticide-resistant arthropod species in the world today (Callaghan, 1991). Many insects have also developed resistance to new insecticides with modes of action different from the main four groups. The development of resistance in the field is influenced by various factors: biological, genetic and operational. Biological factors include generation time, number of offspring per generation and migration. Genetic factors include the frequency and dominance of the resistance gene, the fitness of the resistant genotype and the number of different resistance alleles. These factors cannot be influenced by man. Operational factors, however, such as the choice of treatment, its persistence and the insecticide chemistry, can be influenced, and therefore the timing and dosage of insecticide applications should be managed with resistance in mind. Pesticide resistance is the adaptation of a pest population targeted by a pesticide, resulting in decreased susceptibility to that chemical. In other words, pests develop resistance to a chemical through natural selection: the most resistant organisms are the ones that survive and pass on their genetic traits to their offspring (PBS, 2001). Pesticide resistance is increasing in occurrence. In the 1940s, farmers in the USA lost 7% of their crops to pests, while since the 1980s the percentage lost has increased to 13%, even though more pesticides are being used (PBS, 2001). Over 500 species of pests have developed resistance to a pesticide (Anonymous, 2007); other sources estimate the number at around 1000 species since 1945 (Miller, 2004). Today, pests that were once major threats to human health and agriculture but were brought under control by pesticides are on the rebound. Mosquitoes capable of transmitting malaria are now resistant to virtually all pesticides used against them. This problem is compounded because the organisms that cause malaria have also become resistant to the drugs used to treat the disease in humans. Many populations of the corn earworm, which attacks many agricultural crops worldwide including cotton, tomatoes, tobacco and peanuts, are resistant to multiple pesticides (Berlinger, 1996). Despite many years of research on alternative methods to control pests and diseases in crops, pesticides retain a vital role in securing global food production, and this will remain the case for the foreseeable future if we wish to feed an ever-growing population.

Fig. 1. Pesticide application can artificially select for resistant pests. In this figure, the first generation happens to include an insect with heightened resistance to a pesticide (red). After pesticide application, its descendants represent a larger proportion of the population because sensitive pests (white) have been selectively killed. After repeated applications, resistant pests may comprise the majority of the population (PBS, 2001).
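The generational selection process shown in Fig. 1 can also be illustrated numerically. The Python sketch below iterates the standard single-locus selection recursion for a resistance allele; the fitness values and starting frequency are assumptions chosen only to make the dynamics visible and are not data from this chapter.

# Minimal single-locus selection sketch for a resistance allele (illustrative
# only; the survival values below are assumed, not measured).

def next_allele_freq(p, w_rr, w_rs, w_ss):
    """Return the resistance-allele frequency after one generation of selection."""
    q = 1.0 - p
    mean_fitness = p * p * w_rr + 2 * p * q * w_rs + q * q * w_ss
    return (p * p * w_rr + p * q * w_rs) / mean_fitness

p = 0.001  # rare resistance allele before spraying begins
for generation in range(1, 16):
    # heavy spraying every generation: susceptible homozygotes survive poorly
    p = next_allele_freq(p, w_rr=1.0, w_rs=0.7, w_ss=0.05)
    print(f"generation {generation:2d}: resistance allele frequency = {p:.3f}")

With these assumed values the allele rises from 0.1% to a clear majority of the population within a handful of sprayed generations, mirroring the pattern cartooned in Fig. 1.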
Insecticides are applied to reduce the number of insects that destroy crops or transmit disease in agriculture, veterinary practice and public health. Insecticides are not always effective in controlling insects, since many populations have developed resistance to the toxic effects of the compounds. Resistance can be defined as an inherited ability to tolerate a dosage of insecticide that would be lethal to the majority of individuals in a normal wild population of the same species.
Insecticides are in common use in agriculture as well as around houseplants, gardens and other living spaces in an attempt to control the invasion of a seemingly endless array of insects. Insecticides are used to keep populations under control, but over time insects can build up resistance to the chemicals used; this is called insecticide resistance. Insecticide resistance is apparent when a population stops responding, or does not respond as well, to applications of insecticides. In recent years, many of the resistance mechanisms have been identified and detection methods have been developed. These mechanisms fall into four categories: a) increased metabolism to non-toxic products, b) decreased target-site sensitivity, c) decreased rates of insecticide penetration, and d) increased rates of insecticide excretion. Different assays are available to determine which of these mechanisms is present in any given population, and from such assays the structure of the resistance mechanisms can be inferred. There are several thousand species of insect in the world of particular nuisance to man, either as vectors of fatal and debilitating diseases or as destroyers of crops, and insecticide resistance is an increasing problem faced by those who need insecticides to efficiently control medical, veterinary and agricultural insect pests.
History of insecticide resistance
In 1914 A. L. Melander reported the first case of insecticide resistance. He studied the effectiveness of lime sulphur, an inorganic insecticide, against an orchard pest, the San Jose scale (Quadraspidiotus perniciosus), in the state of Washington. A treatment with lime sulphur killed all scales within one week in typical orchards, but 90 percent survived after two weeks in an orchard with resistant scales. Although few cases of insecticide resistance were recorded before 1940, the number grew exponentially following widespread use of DDT and other synthetic organic insecticides (http://science.jrank.org). Insects have evolved resistance to all types of insecticides, including inorganics, DDT, cyclodienes, organophosphates, carbamates, pyrethroids, juvenile hormone analogs, chitin synthesis inhibitors, avermectins, neonicotinoids, and microbials. In many insects, the problem extends to all major groups of insecticides. Since the first case of DDT resistance in 1947, the incidence of resistance has increased annually at an alarming rate. It has been estimated that there are at least 447 pesticide-resistant arthropod species in the world today (Callaghan, 1991). Many insects have also developed resistance to new insecticides with modes of action different from the main four groups, for example the neonicotinoids. Resistance occurs in thirteen orders of insects, yet more than 90 percent of the arthropod species with resistant populations are either Diptera (35 percent), Lepidoptera (15 percent), Coleoptera (14 percent), Hemiptera (in the broad sense, 14 percent), or mites (14 percent). The disproportionately high number of resistant Diptera reflects the intense use of insecticides against mosquitoes that transmit disease. Agricultural pests account for 59 percent of harmful resistant species, while medical and veterinary pests account for 41 percent. Many species have numerous resistant populations, each of which resists many insecticides. Statistical analyses suggest that for crop pests, resistance evolves most readily in those with an intermediate number of generations (four to ten) per year that feed either by chewing or by sucking on plant cell contents.
Resistant pest species outnumber resistant beneficial species such as predators and parasitoids by more than twenty to one.This pattern probably reflects limited attention devoted to resistance in beneficials as well as biological differences between beneficials and pests.Available evidence contradicts the hypothesis that natural enemies evolve resistance less readily because intrinsic levels of detoxification enzymes are lower in predators and parasitoids than in pests.An alternative hypothesis with more support is that natural enemies evolve resistance less readily because they suffer from food limitation following insecticide sprays that severely reduce abundance of their prey or hosts.According to Georghiou (1986), pesticide resistance occurs in at least 100 species of plant pathogens, 55 species of weeds, 5 species of rodents, and 2 species of nematodes.This article focuses on resistance to insecticides in more than 500 species of insects and mites.Sukhoruchenko and Dolzhenko (2008), presents the results of long-term monitoring of insecticide resistance in populations of agricultural pests in Russia.Over the last 45 years, resistance developments were recorded for 36 arthropod pest species in 11 agricultural crops and pastures in relation to nearly all commonly used plant protection products.Development of group, cross and multiple resistance has been revealed in populations of many economically important pests.Toxicological and phenotypical (for Colorado potato beetle) methods have been devised to monitor the development of pesticide resistance.Based on experience over the last century, systems aimed at preventing the development of pest resistance to insecticides and acaricides are elaborated.These systems are based on resistance monitoring and using plant protection measures which minimize the toxic pressure on agroecosystems.
Mechanisms of insecticide resistance in insects
There are several ways insects can become resistant to crop protection products, and pests often exhibit more than one of these mechanisms at the same time.
Behavioral resistance: Resistant insects may detect or recognize a danger and avoid the toxin. This mechanism of resistance has been reported for several classes of insecticides, including organochlorines, organophosphates, carbamates and pyrethroids. Insects may simply stop feeding if they come across certain insecticides, or leave the area where spraying occurred (for instance, they may move to the underside of a sprayed leaf, move deeper in the crop canopy or fly away from the target area) (www.irac-online).

Penetration resistance: Resistant insects may absorb the toxin more slowly than susceptible insects. Penetration resistance occurs when the insect's outer cuticle develops barriers which can slow absorption of the chemicals into their bodies. This can protect insects from a wide range of insecticides. Penetration resistance is frequently present along with other forms of resistance, and reduced penetration intensifies the effects of those other mechanisms.
Metabolic resistance: Resistant insects may detoxify or destroy the toxin faster than susceptible insects, or quickly rid their bodies of the toxic molecules.Metabolic resistance is the most common mechanism and often presents the greatest challenge.Insects use their internal enzyme systems to break down insecticides.Resistant strains may possess higher levels or more efficient forms of these enzymes.In addition to being more efficient, these enzyme systems also may have a broad spectrum of activity (i.e., they can degrade many different insecticides).
Altered target-site resistance:
The site where the toxin usually binds in the insect becomes modified to reduce the insecticide's effects. This is the second most common mechanism of resistance.

In summary, there are four major mechanisms of resistance in insects:
1. Increased metabolism to non-toxic products
2. Decreased target site sensitivity
3. Decreased rates of insecticide penetration
4. Increased rates of insecticide excretion
Of these four categories, the first two are by far the most important.

Metabolic resistance: The normal enzymatic metabolism of the insect is modified to increase insecticide detoxification or to prevent activation of insecticides. The enzymes responsible for detoxification of xenobiotics in living organisms are transcribed by members of large multigene families of esterases, oxidases, and GSTs. Glutathione transferases (GSTs) are a diverse family of enzymes found ubiquitously in aerobic organisms. They play a central role in the detoxification of both endogenous and xenobiotic compounds and are also involved in intracellular transport, biosynthesis of hormones and protection against oxidative stress. Interest in insect GSTs has primarily focused on their role in insecticide resistance. GSTs can metabolize insecticides by facilitating their reductive dehydrochlorination or by conjugation reactions with reduced glutathione, to produce water-soluble metabolites that are more readily excreted. In addition, they contribute to the removal of toxic oxygen free radical species produced through the action of pesticides. Annotation of the Anopheles gambiae and Drosophila melanogaster genomes has revealed the full extent of this enzyme family in insects (Enayati et al., 2005). Perhaps the most common resistance mechanisms in insects are modified levels or activities of esterase detoxification enzymes that metabolize (hydrolyze ester linkages) a wide range of insecticides. These esterases comprise six families of proteins belonging to the α/β hydrolase fold superfamily. In Diptera, they occur as a gene cluster on the same chromosome. Individual members of the gene cluster may be modified in instances of insecticide resistance, for example by a single amino acid change that converts the specificity of an esterase into an insecticide hydrolase, or by existing as multiple gene copies that are amplified in resistant insects (the best-studied examples are the B1 and A2-B2 amplicons in Culex pipiens and C. quinquefasciatus (Brogdon and McAllister, 1998)). The cytochrome P450 oxidases (also termed oxygenases) metabolize insecticides through O-, S-, and N-alkyl hydroxylation, aliphatic hydroxylation and epoxidation, aromatic hydroxylation, ester oxidation, and nitrogen and thioether oxidation. The cytochrome P450s belong to a vast superfamily. Of the 62 families of P450s recognized in animals and plants, at least four (families 4, 6, 9 and 18) have been isolated from insects. The insect P450 oxidases responsible for resistance have belonged to family 6, which, like the esterases, occurs in Diptera as a cluster of genes. Members of the cluster may be expressed as multiple (up to five) alleles. Enhanced levels of oxidases in resistant insects result from constitutive overexpression rather than amplification. The mechanisms of oxidase overproduction in resistance are under extensive investigation and appear to result from both cis- and trans-acting factors, perhaps associated with the phenomenon of induction (Brogdon and McAllister, 1998).

Altered target site: The site of action has been altered to decrease sensitivity to toxic attack. Alterations of the amino acids responsible for insecticide binding at its site of action cause the insecticide to be less effective or even ineffective. The target of organophosphorus (OP) insecticides (e.g., malathion, fenitrothion) and carbamates (e.g., propoxur, sevin) is acetylcholinesterase in nerve synapses, and the target of organochlorines (DDT) and synthetic pyrethroids is the sodium channels of the nerve sheath. DDT-pyrethroid cross-resistance may be produced by single amino acid changes (at one or both of two known sites) in the axonal sodium channel insecticide-binding site. This cross-resistance appears to produce a shift in the sodium current activation curve and cause low sensitivity to pyrethroids. Similarly, cyclodiene (dieldrin) resistance is conferred by single nucleotide changes within the same codon of a gene for a γ-aminobutyric acid (GABA) receptor. At least five point mutations in the acetylcholinesterase insecticide-binding site have been identified that singly or in concert cause varying degrees of reduced sensitivity to OP and carbamate insecticides.
Physical resistance mechanisms:
The pickup or intake of the toxic agent is slowed or reduced by modifications to the insect exoskeleton, or the rate of excretion of the toxic compound is increased.
Insecticide resistance detection techniques
The mode of action of the insecticides, the duration of the life cycle, clutch size and availability of hosts determine the rate of evolution of resistance. Documenting the dynamics of resistance plays another important role in the approach to its mitigation. Reliable, quick and effective techniques to distinguish between susceptible and resistant individuals are necessary (Gunning, 1993 and Brown, 1981). There are several phenogenetic methods available to diagnose resistance in populations of pest species, which enable the assessment of how shifts in the composition and structure of a population caused by pesticides may affect its development geographically and over time. Among these, easy-to-use toxicological methods have gained the most recognition worldwide. They enable the determination of levels of population susceptibility to the pesticides used, in relation to the ratio of resistant and susceptible genotypes. In 2004, under the aegis of the Commission on resistance, a method manual was published: 'Monitoring the resistance to pesticides in populations of arthropod pests'. The methods included in this manual enable scientists to evaluate the development of resistance in populations of 37 species of insects and mites of great practical importance for agricultural practice and medicine. At present, researchers are trying to identify easy-to-see visual morphological characters which could be used for the diagnosis of resistance. In order to achieve this, adults from the populations under investigation are sampled and the fractions of different morphotypes (morphs) are determined. Each morphotype recognized is then tested from the viewpoint of its susceptibility to the toxicants used (Benkovskaya et al., 2000; Vasilyeva et al., 2004, 2005; Fasulati, 2005). The frequency of occurrence of different morphs in the Colorado potato beetle has been shown to be related to their susceptibility to pyrethroids. This has enabled a rapid method to be devised for revealing resistance to pyrethroids in populations of the pest immediately after the appearance of overwintered adults in potato crops (Sukhoruchenko et al., 2006). The above method allows potato growers to rationally schedule the use of these pesticides in seasonal application charts.
Insecticide resistance detection methods
The primary mechanisms of resistance are decreased target site sensitivity and increased detoxification through metabolism or sequestration.Target sites are the molecules in insects that are attacked by insecticides.Decreased target site sensitivity is caused by changes in target sites that reduce binding of insecticides, or that lessen the damage done should binding occur.Metabolism involves enzymes that rapidly bind and convert insecticides to nontoxic compounds.Sequestration is rapid binding by enzymes or other substances with very slow or no processing.Reduced insecticide penetration through the cuticle, and behavioral changes that reduce exposure to insecticide are also mechanisms of resistance.Different mechanisms can occur within an individual insect, sometimes interacting to provide extremely high levels of resistance.Resistance can be determined by using conventional standard bioassay methods published by International Resistance Action Committee (IRAC) and biochemical, immunological and molecular methods.
1) Conventional Detection Methods
The standard method of detection is to take a sample of insects from the field and rear them through to the next generations. Larvae or adults are tested for resistance by assessing their mortality after exposure to a range of doses of an insecticide. For susceptible and field populations, LD50 or LC50 values are calculated using probit analysis, and the results are compared with those from standard susceptible populations. The details of these methods differ somewhat between pest species; the methods are published by the Insecticide Resistance Action Committee (IRAC). The other traditional method of detecting insecticide resistance is to expose individual insects to a single diagnostic dose for a set time period in a chamber impregnated with the insecticide or on a filter paper impregnated with the insecticide. These tests only give an indication of the presence and frequency of resistance, and limited information can be gained as to the resistance mechanism. Evolution of resistance is most often based on one or a few genes of major effect. Before a susceptible population is exposed to an insecticide, resistance genes are usually rare because they typically reduce fitness in the absence of the insecticide. When an insecticide is used repeatedly, strong selection for resistance overcomes the normally relatively minor fitness costs associated with resistance when the population is not exposed to the insecticide.
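To make the probit step concrete, the Python sketch below fits a probit dose-mortality curve by maximum likelihood and reads off the LC50 as the dose at which predicted mortality is 50%. The dose-mortality numbers are invented placeholders, and the use of SciPy here is an illustrative choice rather than the procedure prescribed by IRAC.

# Hedged sketch of an LC50 estimate from bioassay data using a probit model
# fitted by maximum likelihood. The counts below are invented placeholders.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

doses = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # insecticide concentrations
exposed = np.array([50, 50, 50, 50, 50])       # insects treated per dose
dead = np.array([4, 11, 24, 38, 47])           # observed mortality

log_dose = np.log10(doses)

def neg_log_likelihood(params):
    intercept, slope = params
    p = norm.cdf(intercept + slope * log_dose)  # probit response curve
    p = np.clip(p, 1e-9, 1.0 - 1e-9)            # avoid log(0)
    return -np.sum(dead * np.log(p) + (exposed - dead) * np.log(1.0 - p))

fit = minimize(neg_log_likelihood, x0=[0.0, 1.0], method="Nelder-Mead")
intercept_hat, slope_hat = fit.x

# 50% mortality occurs where intercept + slope * log10(dose) = 0
lc50 = 10.0 ** (-intercept_hat / slope_hat)
print(f"estimated LC50 = {lc50:.2f} (same units as the dose column)")

A resistance ratio can then be obtained by dividing the LC50 of a field population by the LC50 of a standard susceptible strain.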
2) Biochemical detection of insecticide resistance
Biochemical assays and techniques may be used to establish the mechanism involved in resistance. When a population is well characterised, some of the biochemical assays can be used to measure changes in resistance gene frequencies in field populations under different selection pressures.
3) Immunological Detection Methods:
This method is available only for specific elevated esterases, in collaboration with laboratories that have access to the antiserum. No monoclonal antibodies are as yet available for this purpose. An antiserum has been prepared against the E4 carboxylesterase in the aphid Myzus persicae. An affinity-purified IgG fraction from this antiserum has been used in a simple immunoassay to discriminate between the three common resistant variants of M. persicae found in UK field populations (Devonshire et al., 1996).
4) Detection of monooxygenase (cytochrome P450) based insecticide resistance
The levels of oxidase activity in individual pests are relatively low, and no reliable microtitre plate or dot-blot assay has been developed to measure P450 activity in single insects. The P450s are also a complex family of enzymes, and it appears that different cytochrome P450s produce resistance to different insecticides.
Management of insecticide resistance
A resistance monitoring programme should no longer rely on testing the response to one insecticide, with the intention of switching to another chemical when resistance levels rise above the threshold which affects disease control. Effective resistance management depends on early detection of the problem and rapid assimilation of information on the resistant insect population so that rational pesticide choices can be made. After a pest species develops resistance to a particular pesticide, how do you control it? One method is to use a different pesticide, especially one in a different chemical class or family of pesticides that has a different mode of action against the pest. Of course, the ability to use other pesticides in order to avoid or delay the development of resistance in pest populations hinges on the availability of an adequate supply of pesticides with differing modes of action. This method is perhaps not the best solution, but it allows a pest to be controlled until other management strategies can be developed and brought to bear against the pest. These strategies often include the use of pesticides, but used less often and sometimes at reduced application rates. The goal of resistance management is to delay the evolution of resistance in pests. The best way to achieve this is to minimize insecticide use. Thus, resistance management is a component of integrated pest management, which combines chemical and non-chemical controls to seek safe, economical, and sustainable suppression of pest populations. Alternatives to insecticides include biological control by predators, parasitoids, and pathogens. Also valuable are cultural controls (crop rotation, manipulation of planting dates to limit exposure to pests, and use of cultivars that tolerate pest damage) and mechanical controls (exclusion by barriers and trapping). Because large-scale resistance experiments are expensive, time consuming, and might worsen resistance problems, modeling has played a prominent role in devising tactics for resistance management. Although models have identified various strategies with the potential to delay resistance, practical successes in resistance management have relied primarily on reducing the number of insecticide treatments and diversifying the types of insecticide used. For example, programs in Australia, Israel, and the United States have limited the number of times and periods during which any particular insecticide is used against cotton pests. Resistance management requires more effective techniques for detecting resistance in its early stages of development. Pest resistance to a pesticide can be managed by reducing the selection pressure exerted by this pesticide on the pest population. In other words, the situation in which all the pests except the most resistant ones are killed by a given chemical should be avoided. This can be achieved by avoiding unnecessary pesticide applications, using non-chemical control techniques, and leaving untreated refuges where susceptible pests can survive [17, 18]. Adopting the integrated pest management (IPM) approach usually helps with resistance management. When pesticides are the sole or predominant method of pest control, resistance is commonly managed through pesticide rotation. This involves alternating among pesticide classes with different modes of action to delay the onset of, or mitigate existing, pest resistance [19].
|
v3-fos-license
|
2020-05-06T14:49:45.776Z
|
2020-05-06T00:00:00.000
|
218513036
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcgeriatr.biomedcentral.com/track/pdf/10.1186/s12877-020-01559-y",
"pdf_hash": "386ffdc1f10e76377f2e3bbf95b972d28c520b01",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44283",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "386ffdc1f10e76377f2e3bbf95b972d28c520b01",
"year": 2020
}
|
pes2o/s2orc
|
Pacemaker detected active minutes are superior to pedometer-based step counts in measuring the response to physical activity counseling in sedentary older adults
Background In patients with permanent pacemakers (PPM), physical activity (PA) can be monitored using embedded accelerometers to measure pacemaker detected active hours (PDAH), a strong predictor of mortality. We examined the impact of a PA Counseling (PAC) intervention on increasing activity as measured by PDAH and daily step counts. Methods Thirteen patients (average age 80 ± 6 years, 84.6% women) with implanted Medtronic PPMs with a ≤ 2 PDAH daily average were included in this study. Patients were randomized to Usual Care (UC, N = 6) or a Physical Activity Counseling Intervention (PACI, N = 7) groups. Step count and PDAH data were obtained at baseline, following a 12-week intervention, then 12 weeks after intervention completion. Data were analyzed using independent t-tests, Pearson’s r, chi-square, and general linear models for repeated measures. Results PDAH significantly differed by time point for all subject combined (P = 0.01) but not by study group. Subjects with baseline gait speeds of > 0.8 m/sec were responsible for the increases in PDAH observed. Step counts did not differ over time in the entire cohort or by study group. Step count and PDAH significantly correlated at baseline (r = 0.60, P = 0.03). This correlation disappeared by week 12. Conclusion(s) PDAH can be used to monitor PA and PA interventions and may be superior to hip-worn pedometers in detecting activity. A significant increase in PA, regardless of treatment group, suggests that patient awareness of the ability to monitor PA through a PPM increases PA in these patients, particularly in patients with gait speeds of < 0.8 m/sec. Trial registration ClincalTrials.gov NCT03052829. Date of Registration: 2/14/2017.
Background
The benefits of habitual physical activity [PA], activities of at least moderate intensity defined as ≥3 metabolic equivalents (METs)], are well-recognized. Emerging information from large data sets strongly suggests high levels of sedentary behavior, defined as activities < 1.5 METs (e.g. seated activities such as computer work) increases the risk of diabetes, cardiovascular disease, and death, independent of the amount and intensity of PA [1][2][3][4][5]. Morbidity and mortality associated with non-adherence to PA per guidelines established by US Department of Health and Human services (DHHS) is estimated at $117 billion annually [6][7][8]. The increased risk of sedentary behavior appears to be mediated at least in part by reduced insulin sensitivity, impaired lipid metabolism, increased vascular inflammation, and increased thrombotic tendencies [9][10][11][12][13][14]. Aging is associated with sedentary behavior and only 25% of the adults aged > 50 years are able to achieve PA goals per DHHS guidelines [15]. Patients with permanent pacemakers (PPM) can be a target population for risk modification strategies to increase the PA levels. Pacemaker recipients are typically older [16]. Demographic trends show that the average age of PPM recipients is increasing with greatest increase seen in the rate of placements in those ages 75 and above [17]. Pacemaker recipients also have a higher prevalence of coronary artery disease (CAD) [18,19]. Physical activity counseling (PAC) can be used as an effective strategy to increase activity level and reduce the risk of morbidity and mortality associated with chronic diseases [20]. Feedback mechanisms using devices like pedometers and accelerometers can be useful for tracking physical activity quantity and intensity as well as motivating patients to increase their activity levels [21].
The internal accelerometer embedded in Medtronic pacemakers registers, stores, and reports total "active time" based on a threshold activity intensity level of approximately 70 steps/min (estimated to be > 1.5 METs). The accelerometers in the pacemakers are useful for sensing the activity level and facilitate adaptive rate responsiveness to meet the physiological demands of the patient [22,23]. This implanted accelerometer, combined with the regular follow-up required for appropriate changes to the pacemaker settings in these individuals, provides an excellent opportunity to determine the impact of sedentary behavior on mortality and cardiovascular events. We recently reviewed the medical records of 96 individuals who underwent de novo Medtronic EnRhythm™ PPM implantation for sinus nodal dysfunction or complete heart block. Following a 6-month blanking period post implantation to allow for patient acclimation to their PPM and early programming changes, accelerometer data obtained from interrogations were abstracted and averaged over a 1-year period. Individuals were categorized as having failed to reach (n = 40) or having met/exceeded (n = 54) the activity threshold for ≥2 h/day. Of those who failed to achieve the activity threshold for ≥2 h/day, 35% died (14/40) vs. 7.4% (4/54) of the group who achieved this threshold. Survival analyses demonstrated a significantly greater mortality for those with increasing sedentary time (P < 0.001 by log rank test). Following adjustment for sex, prevalent CAD, and LVEF < 50%, low activity remained a significant risk factor for death.
Overall, these data suggest an easy to implement, pointof-care PA intervention designed to reduce inactive time as measured by Medtronic pacemaker accelerometer data, that could potentially reduce risk in patients with implanted permanent pacemakers. However, prior to a larger, outcomes-based study, there is a need to establish that active time, as measured by the pacemaker accelerometer, tracks changes in PA with an intervention. Our pilot study tested whether a point of care method that combines informing at-risk patients of our published findings and their own active time amounts, combined with an intervention to increase moderate intensity activity in daily living will result in detectable increases in pacemaker-measured active minutes. We compared these findings to physical activity as measured by an externally worn pedometer, a commonly used tool for measuring physical activity in clinical studies.
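The device's own algorithm is proprietary, but the thresholding idea behind the roughly 70 steps/min criterion described above can be sketched as follows; the per-minute data in this Python example are simulated for illustration only.

# Illustrative sketch of "active time": count minutes whose step rate exceeds
# roughly 70 steps/min and report the daily total in hours. The real device
# algorithm is proprietary; the data here are simulated.
import random

ACTIVE_THRESHOLD_STEPS_PER_MIN = 70

def daily_active_hours(steps_per_minute):
    active_minutes = sum(1 for s in steps_per_minute if s >= ACTIVE_THRESHOLD_STEPS_PER_MIN)
    return active_minutes / 60.0

random.seed(1)
# one simulated day: mostly sedentary, with two 45-minute walks
day = [random.randint(0, 20) for _ in range(24 * 60)]
for start_minute in (9 * 60, 17 * 60):          # walks at 09:00 and 17:00
    for minute in range(start_minute, start_minute + 45):
        day[minute] = random.randint(75, 110)

print(f"pacemaker-style active hours for the day: {daily_active_hours(day):.2f}")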
Subject recruitment
All study procedures were reviewed and approved by the Medical College of Wisconsin's Institutional Research Board. Under a HIPAA waiver of authorization, medical records from individuals attending Froedtert and Medical College of Wisconsin's Electrophysiology Clinic were screened for potential enrollment. Figure 1 illustrates study enrollment. The electrophysiology providers for subjects meeting inclusion and exclusion criteria were contacted to introduce the study to the potential subjects and obtain approval for the study team to contact the potential subject for further details. Study inclusion criteria included the following: age > 55 years, presence of a Medtronic Azure, Advisa, Revo, or EnRhythm PPM (to ensure in-device accelerometers reporting pacemaker detected active hours (PDAH) with an identical algorithm), ability to ambulate 650 steps over 10 min, LVEF ≥50% on their most recent echocardiogram, and an average PDAH of ≤2 h over the three-month period prior to enrollment (as estimated from the 12-month graphical output from the device interrogation as previously reported) [24]. Subjects were excluded if they had a life expectancy of less than 1 year at the time of enrollment/ implantation, known history of cognitive impairment or inability to follow study procedures, or post-pacemaker implantation follow-up at a non-study center.
Study procedures
Screening, randomization, and intervention
The study screening visit included a detailed medical history, including a medication history. Potential subjects had their height and weight measured and their heart rate and blood pressure measured in triplicate and averaged. A walk test was administered to assure the potential subject could walk well enough (at least 650 steps in 10 min) to be included in the study. Gait speed for each subject was calculated by using the amount of time it took for a subject to take 650 steps and using published age- and sex-specific normative values for step length for older adults [25]. Individuals passing the screening visit were subsequently randomized to either a physical activity counseling intervention (PAC) or usual care (UC). The PAC employed a 5 A's (Assess, Advise, Agree, Assist, Arrange) model previously reported to be effective for other health behaviors [26]. Subjects randomized to the PAC arm met immediately following randomization with an intervention expert on the study staff and were advised of their activity levels. This included a review of the average active hours over the past 3 months along with a review of our previously reported data on the association of lower pacemaker-measured active time with increased mortality [24]. PAC subjects migrated through the 5 A's intervention model for 12 weeks. The intervention began with an initial educational visit. This visit consisted of instruction on the use of objective, uploadable enhanced step-count monitors (Omron HJ-112, Kyoto, Japan) and on access to and use of the specially designed, web-mediated, individually tailored physical activity and health web platform. The web platform leverages strategies (frequent feedback, realistic goal-setting, rewarding, and self-regulation) proven to successfully integrate lifestyle-based physical activity into the daily lives of older adults [27][28][29][30][31]. PAC subjects were also sent weekly information on cognitive and behavioral strategies to increase health-enhancing lifestyle practices and physical activity through the interactive website. Generally, the intervention was designed to encourage PAC subjects to increase steps by 10% per week as measured by their daily pedometer-based step counts, which they recorded on a calendar supplied to them. The weekly web-mediated interactions were phased. Phase 1 (weeks 1-6) was designed to provide a cognitive understanding of the benefits associated with physical activity, current physical activity recommendations that are associated with healthful behaviors, and objective self-awareness of their current physical activity levels, obtained through body-worn uploaded step count data. Phase 2 (weeks 6-12) continued to build upon the educational information and required each subject to intrinsically set daily physical activity goals for themselves. On a weekly basis, subjects uploaded their physical activity information and were subsequently provided with graphical representations of daily steps and how such values correspond with their intrinsically set goals. At this stage of the program, each subject was either in compliance with set goals (defined as meeting physical activity goal targets 5 out of 7 days), or they were not in compliance. Subjects who successfully achieved their weekly goal were congratulated by the software and given guidance for setting the goals for the ensuing weeks.
If the subject failed to reach their goal, the software attempted to ascertain the barriers associated with the inability to reach the goal and gave pre-specified motivational messages offering strategies for succeeding based on the identified barriers. In addition, PAC subjects received a bi-weekly telephone check-in carried out by a trained behavioralist who reviewed their activity, offered support and advice, and solicited feedback. The 12-week length of the intervention period was selected as this is the period of time that has previously been established to demonstrate favorable effects on vascular structure and function known to be associated with favorable reductions in cardiovascular risk in older adults [32][33][34].
At the end of the 12-week active intervention, PAC subjects entered a 12-week maintenance phase during which time no calls were made to the subjects by the study team nor was any information on increasing physical activity shared with the subjects by the study team. A twelve-week maintenance period was chosen to allow for the study team to determine near term ability of the intervention to increase activity levels in this at-risk population. Subjects in the UC arm did not receive any intervention. They were mailed a pedometer and a step count calendar to record their steps for the one-week periods corresponding to the beginning of the study, the 12th week in the study, and the 24th week in the study (time points corresponding to the start and end of the PAC groups intervention and maintenance periods).
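As a sketch of the gait-speed estimate used at the screening visit described above, the calculation below multiplies the 650 steps by a normative step length and divides by the measured walk time; the step-length value is an assumed placeholder rather than the published age- and sex-specific table the authors used.

# Sketch of the screening gait-speed estimate: distance is approximated as
# steps taken times a normative step length, divided by the walk time.
# The 0.60 m step length below is an assumed placeholder value.
def estimated_gait_speed_m_per_s(steps, walk_time_s, step_length_m):
    distance_m = steps * step_length_m
    return distance_m / walk_time_s

speed = estimated_gait_speed_m_per_s(steps=650, walk_time_s=9.5 * 60, step_length_m=0.60)
print(f"estimated gait speed: {speed:.2f} m/s")  # compare with the 0.8 m/s cut-off used later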
Pacemaker derived active hours (PDAH) extraction
PDAH was obtained from pacemaker interrogations performed during clinical visits or trans-telephonic transmission of pacemaker information, corresponding to weeks 1, 12, and 24. PDAH was calculated as an average of the daily active hour time over the 3-month period prior to each measurement timepoint, estimated as previously described and validated.

Statistical analyses
SPSS 24 and SigmaStat 12.5 were employed for data analyses. Data were analyzed on a per protocol basis given that the goal of this pilot study was to determine how well PDAH tracked activity over time rather than PACI efficacy. Baseline characteristics were compared between groups using unpaired t-tests, chi-square, or Fisher's Exact test as appropriate based on variable type and number of events. Differences in step counts and PDAH over the study period were investigated using general linear models for repeated measures, with the randomization group assignment as the between-subjects variable and a three-factor within-subjects comparison representing the three measurement time points, with the Tukey test applied for post-hoc comparisons if significance of the overall models was detected. Additional analyses were carried out comparing those with a gait speed above versus below 0.8 m/sec at baseline, a gait speed cut-off associated with overall frailty and increased mortality [35,36]. Correlations between step count measurements and PDAH measurements were performed using Pearson's r test. P < 0.05 was considered significant.
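The authors ran these analyses in SPSS and SigmaStat; a rough open-source analogue on simulated data is sketched below, where a mixed model with subject-level random intercepts stands in for the repeated-measures general linear model and Pearson's r is computed for the baseline step-count/PDAH comparison. All numbers are simulated, and the substitution of a mixed model is an assumption of this sketch, not the authors' exact procedure.

# Rough open-source analogue of the described analyses, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
rows = []
for i in range(13):                                   # 13 completers
    group = "PACI" if i < 7 else "UC"
    base = rng.uniform(0.5, 2.0)                      # baseline daily active hours
    for week, bump in zip((1, 12, 24), (0.0, 0.5, 0.3)):
        rows.append({
            "id": f"s{i}",
            "group": group,
            "week": str(week),
            "pdah": base + bump + rng.normal(0.0, 0.2),
            "steps": 2000 + 1500 * base + rng.normal(0.0, 400),
        })
df = pd.DataFrame(rows)

# time (within-subject) by group (between-subject) effect on PDAH, with a
# random intercept per subject standing in for the repeated-measures design
model = smf.mixedlm("pdah ~ C(week) * C(group)", df, groups=df["id"]).fit()
print(model.summary())

# baseline correlation between pedometer step counts and PDAH
baseline = df[df["week"] == "1"]
r, p_value = pearsonr(baseline["steps"], baseline["pdah"])
print(f"baseline Pearson r = {r:.2f}, P = {p_value:.3f}")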
Subject demographics
Overall enrollment data are summarized in Fig. 1. The study was initially designed to enroll 30 subjects but was stopped secondary to challenges with enrollment. One hundred forty-four individuals fit our inclusion and exclusion criteria based on IRB-compliant pre-screening of the electronic medical record. We were given permission to approach 72 potential subjects about the study by their providers. Of these subjects, a total of 21 subjects agreed to study screening. Two subjects failed screening and were not enrolled. Six subjects dropped out following randomization due to health issues for them or their significant others not related to the study protocol, leaving a total of 13 subjects (N = 7 in the PACI arm, N = 6 in the UC arm) who completed the study protocol. Subject characteristics for the entire study cohort and the cohort by randomized study group are presented in Table 1. The UC group was significantly younger than the PACI group (P = 0.01), but otherwise overall attributes were roughly similar despite small numbers. While the left ventricular ejection fraction was statistically significantly lower in the UC group than the PACI group (P = 0.02), the average left ventricular ejection fractions were within the normal range in both groups. There were no significant differences between groups with respect to calculated gait speed (P = 0.85).
Results of the intervention
PDAH and step counts as recorded by the study subjects for weeks 1, 12, and 24 of the study periods are presented in Tables 2, 3, and Fig. 2.
Over the 12-week intervention period, PDAH increased by 35.1% in the PACI group and 32.5% in the UC group. PDAH significantly increased over time, and this increase did not significantly differ between groups (P = 0.01 for time, P = 0.69 for time x study group interaction). Post-hoc analyses determined that the combined study groups had significantly greater PDAH during the 12-week interventional study period than during the three-month period prior to beginning the intervention (P = 0.005). No significant differences were seen between the pre-study period and the three-month maintenance phase (P = 0.15). There was a trend toward a decrease in PDAH during the three-month maintenance phase compared to the 12-week intervention period (P = 0.052).
No differences were seen between any time point by step count (P = 0.08 for time, P = 0.19 for time x study group interaction). Multiple imputation techniques were used to account for the missing step count data points (two individuals in the PAC group did not turn in week 12 step count data and one individual in the PAC group did not turn in week 24 step count data). The data analyzed following multiple imputation did not differ from the raw data (data not shown).
Three subjects in the PAC group and one subject in the UC group used walking assist devices at least intermittently. The average gait speed of these subjects was not significantly different than those who did not use any assist devices (0.89 ± 0.10 vs. 0.84 ± 0.18 m/sec, P = 0.57). The results for these subjects was not significantly different from those who did not use assist devices in the pattern of activity time and step counts throughout the study and did not affect the overall results (data not shown).
Analysis of Results Based on Gait Speed
To determine whether baseline gait speed impacted our results, we stratified our study cohort into two groups with a cut-off gait speed of 0.8 m/sec. Three subjects in the PAC group and two subjects in the UC group had baseline gait speeds ≤0.8 m/sec. There was a significant change in PDAH over time (P = 0.01) with a significant interaction between time and gait speed (P = 0.02). As shown in Table 4, those subjects with baseline gait speeds > 0.8 m/sec showed significant improvements in PDAH at the end of the 12 week intervention period which remained improved 12 weeks following the cessation of the intervention. Subjects with a baseline gait speed of > 0.8 m/sec significantly increased PDAH from baseline by week 12 of the intervention (P < 0.001) and maintained that increase 12 weeks following the end of the intervention phase (P = 0.01). There was no significant drop in PDAH between weeks 12 and 24 in those with baseline gait speed > 0.8 m/sec (P = 0.16). PDAH was significantly higher in those with gait speed > 0.8 m/ sec compared to those ≤0.8 m/sec at week 12 (P = 0.007) and there was a strong trend toward greater PDAH at week 24 in those in the faster gait speed group (P = 0.06). No changes were observed over the 24-week study period in those in the lower gait speed group (P > 0.94 for all comparisons of PDAH between all 3 time points in the lower gait speed group).
Similar differences were not detected by pedometer-based step counts (P = 0.23 for changes in step counts over time, P = 0.93 for time/gait speed interaction, Table 5). Reanalysis of the PDAH data by study group (PAC vs. UC, Table 6) showed a pattern similar to the overall study with a significant increase in PDAH with time regardless of study group (P = 0.002 overall, P = 0.70 for study group/time interaction).
Correlations between step counts and PDAH
The correlations between step counts and PDAH count at each study time point are presented in Table 7. While PDAH correlated reasonably well with step count prior to beginning the interventional study (r = 0.60, P = 0.03), PDAH did not correlate with step counts at the week 12 and week 24 timepoints (r = 0.17, P = 0.96 and r = − 0.04, P = 0.90 for week 12 and 24 timepoints, respectively). Given the significant correlation between step count and PDAH prior to the intervention, we performed additional correlations between PDAH at this time point and clinical variables including age (r = 0.36, P = 0.24), systolic blood pressure (r = 0.53, P = 0.06), diastolic blood pressure (r = 0.36, P = 0.23), LV ejection fraction (r = − 0.10, P = 0.75), LV end-systolic dimension (r = 0.04, P = 0.91), and LV enddiastolic dimension (r = − 0.02, P = 0.94). None of these measured significantly correlated with PDAH.
Discussion
In this small pilot study, we found that PDAH was able to capture increases in activity levels with enrollment and participation in the study, and that these increases were independent of the intervention in this study. While numerically still low, the observed increases were significant, with activity levels increasing by approximately 33% from baseline in each study arm. We were unable to visualize any significant changes in physical activity over the study period using externally worn pedometer-captured step counts. As discussed below, the reasons for this discrepancy are likely multifactorial and driven by the unique challenges to accurate step counting by pedometer in our older study population (average age 80 ± 6 years) as well as study subjects participating in activities in which wearing a pedometer was not feasible. The finding of no correlation between step counts and PDAH following the intervention and maintenance periods supports the hypothesis that the increase in PDAH was not detected by step count measurements. In addition, we found that participating in the study resulted in significant increases in PDAH only in those with a baseline gait speed > 0.8 m/sec. Overall, these data suggest that implanted pacemakers with embedded accelerometers can track increases in activity levels in this challenging study population, and may do so more reliably than externally worn pedometers. In addition, the data suggest physical activity levels in this population can be influenced by provider attention to overall physical activity, particularly in those with a gait speed > 0.8 m/s.

Fig. 2 PDAH and Step Count Results for each study period. a PDAH results showed an overall increase in PDAH comparing the 12-week intervention period to baseline that did not significantly differ between the PACI and UC arms. b Step Count results showed no differences in measured step counts over the time course of the study and no differences in step counts between the PACI and UC groups. See Table 2 and the results section for additional details. PDAH-Pacemaker-Derived Active Hours. PACI-Physical Activity Counseling Intervention. UC-Usual Care. Data presented as mean (SEM)

Prior work from our group demonstrates that PDAH is a strong predictor of mortality in patients with pacemakers [24]. An increased prevalence and severity of frailty has also been associated with reduced activity levels as measured by implanted cardiac devices including pacemakers [37]. These data from pacemaker-based studies are consistent with prior work in those with reduced left ventricular ejection fractions evaluating devices placed for resynchronization and/or protection from lethal arrhythmia [38]. Our data extend these findings by demonstrating that an intervention designed to increase physical activity can also be detected by implanted pacemakers, which may be superior to externally worn devices given their lack of susceptibility to human error and the reduced sensitivity of external pedometers at low gait speeds (more common in this population) for detection of steps and activity [39][40][41]. These advantages of an implanted device add to the significant advantages of device-quantified PA, including its ability to monitor circadian rhythms and to better characterize PA architecture and intensity than self-reported activity levels, even in older adults [42][43][44].
We found that physical activity as detected by PDAH increased during the 12-week intervention group regardless of study arm. This suggests our study cohort's activity levels may have been influenced in part by having a care provider pay attention to their activity level. The increase in activity, while not detected by the external pedometers, may still also be an effective mechanism for feedback to patients to increase activity levels. Our findings also suggest that the PACI could potentially be improved with better tailoring of the intervention to the target population. Some subjects reported that the amount of walking suggested by the intervention was not feasible based on their orthopedic concerns and preferred activities such as swimming for these reasons. Others voiced concerns that the locations suggested by the PACI (e.g. local malls, sports centers, gyms) were too difficult to travel to on a regular basis, particularly during inclement weather. These issues may be in part unique to the older population in this study relative to other physical activity studies and suggest the superiority of implanted devices in tracking the types physical activities in which this older population engages.
Interestingly, we found that participation in the study led to significant increases in activity levels almost exclusively in those with a gait speed > 0.8 m/sec. Gait speed is well known as a powerful predictor of mortality, with lower gait speeds associated with increased frailty [35,36]. The threshold of 0.8 m/sec we selected has been shown in multiple studies to stratify mortality risk [35,36]. A gait speed of 0.8 m/sec is associated with median life expectancy for both men and women, with gait speeds ≥ 1.0 m/sec consistently associated with better-than-median survival [35,36]. Our data suggest that activity interventions targeting those with PDAH under 2 h per day are likely to be most effective in those with a gait speed > 0.8 m/sec, while those with slower gait speeds may need other interventions to improve strength and/or mobility before attempts to increase overall active time.
A major challenge with the current study was recruitment and subject retention. In screening and discussing the study with potential subjects, we found that individuals with implanted pacemakers, preserved left ventricular ejection fractions, and less than 2 active hours per day averaged approximately 80 years of age. Those who were contacted and declined to enroll commonly cited personal health issues, lack of time, the need to care for another family member, already having too many appointments to track, or a lack of self-efficacy regarding increasing their activity levels. The six individuals who dropped out of the study following randomization cited new unrelated health issues or feeling overwhelmed with logging steps and participating in study activities. For future projects of this nature working with similar age groups and populations, these issues merit significant consideration to increase enrollment and retention of study subjects.
This study has some limitations. The sample size is small. However, the study was designed a priori as a pilot study for feasibility, and it achieved its goal, with 80% power to detect a difference in PDAH in the intervention group of the size detected in this study, assuming a standard deviation of 0.45 at α = 0.05. While these data demonstrate that changes in physical activity can be tracked by the specific pacemakers specified in our enrollment criteria, we cannot yet generalize these findings to other pacemakers with different accelerometer algorithms. In addition, whether improvements in PDAH are associated with reduced future adverse cardiovascular events and/or mortality remains unknown. We also cannot determine whether the increases in activity seen following the full 24 weeks (including 12 weeks without active intervention in the PAC group) would be sustained over longer periods of time. Balanced against these limitations are the unique study population enrolled in this study and the novel findings that, in this population, PDAH appears superior to pedometer measurements in quantifying improvements in physical activity and that paying attention to physical activity levels in those with relatively preserved gait speed can result in significant improvements in activity levels.
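As a rough illustration of the power statement above, the following sketch (Python with statsmodels, our choice of tooling rather than the authors') backs out the smallest between-group difference in PDAH detectable with 80% power at α = 0.05 given a standard deviation of 0.45. The per-group sample size is a hypothetical placeholder, since the exact n used in the authors' calculation is not restated here.

```python
from statsmodels.stats.power import TTestIndPower

sd = 0.45          # standard deviation of PDAH quoted in the text
alpha = 0.05
power = 0.80
n_per_group = 7    # hypothetical per-group size, for illustration only

# Solve for the standardized effect size (Cohen's d) detectable at this power,
# then rescale by the standard deviation to express it in active hours/day.
d = TTestIndPower().solve_power(effect_size=None, nobs1=n_per_group,
                                alpha=alpha, power=power, ratio=1.0)
print(f"Minimum detectable difference ~ {d * sd:.2f} active hours/day")
```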
Given the critical importance of maintaining physical activity in older adults to preserve and enhance muscle strength, mental acuity, and physical health, these data may be particularly helpful in encouraging and monitoring PA in older adults with implanted devices [45][46][47][48].
Conclusions
Overall, we found that a 12-week intervention to increase physical activity could increase activity, as measured by pacemaker-based accelerometers, by approximately one-third. In addition, this increase occurred regardless of the intensity of the intervention, suggesting that increased attention to physical activity in this patient population could lead to increased physical activity. Our findings also suggest that further tailoring a physical activity intervention to the type of study population enrolled here, as well as targeting sedentary patients with gait speeds > 0.8 m/sec for physical activity interventions, may yield greater benefits than seen in this study. PDAH also appears superior to pedometer-based step counts for measuring changes in activity in this study population, likely for the multiple reasons previously cited related to subject-specific characteristics unique to older adults. Further work will be necessary to delineate how best to encourage increased physical activity in this population and to improve methods for subject recruitment and retention prior to larger studies examining the efficacy of following PDAH to reduce mortality.
Authors' contributions MEW obtained funding for the study. MEW, SJS, and MB conceived the study design, offered insights on manuscript revisions, and helped execute the study. VKP wrote the initial draft of the manuscript and helped with execution of the study protocol. JF helped with study conception and data collection and offered insights on manuscript revisions. KWA analyzed the data and offered insights on manuscript revisions. AA, BCH, and ST helped with study protocol execution and offered insights on manuscript revisions. All authors have read and approved the manuscript.
Funding
This work was funded by an investigator-sponsored research grant from Medtronic, Inc. (CR-3476) to Dr. Widlansky. Medtronic read the initial draft of the manuscript and offered suggestions on the interpretation of the accelerometry but had no other role in the generation of this manuscript. Specifically, the funding source had no role in the study design, data collection, data analysis, or writing of the manuscript. Dr. Widlansky is additionally supported by HL125409, HL128240, HL144098, and R38HL143561, and an AHA Strategically Focused Research Network grant (Hypertension Network). Drs. Tyagi and Puppala were supported by T32GM089586. Dr. Hofeld is supported by R38HL143561. Dr. Strath is supported by CA215318 and R21HD094565. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Ethics approval and consent to participate All study procedures were reviewed and approved by the Medical College of Wisconsin's Institutional Review Board. All participants supplied written informed consent.
Consent for publication
Not applicable.
Antioxidant activity of selected plant extracts for palm oil stability via accelerated and deep frying study
Antioxidants are organic compounds that help prevent lipid oxidation and improve the shelf life of edible oils and fats. Currently, synthetic antioxidants are used as oil-stabilizing agents; however, synthetic antioxidants have been associated with various health risks. As a result, natural antioxidants from sources such as most parts of the olive plant, green tea, sesame, and medicinal plants play an important role in retarding lipid oxidation. In this study, palm oil was continuously fried at 180 °C for 6 days using Lepidium sativum (0.2% w/v) and Aframomum corrorima (0.3% w/v) seed extracts as antioxidants. The herbal extract additive groups significantly maintained oil quality during frying compared with the normal control and the food sample-containing group, and the L. sativum extract conferred greater oil stability than the A. corrorima extract. In contrast, the frying oil without herbal extract showed significant changes in physicochemical properties such as iodine value, acid value, free fatty acid, total polar compounds, density, moisture content, and pH during repetitive frying, indicating deterioration. The antioxidant activity of the plant extracts was outstanding, with IC50 values in the range of 75–149.9 μg/mL compared with the standard butyl hydroxy anisole, which had IC50 values in the range of 74.9 ± 0.06–96.7 ± 0.75 μg/mL. The total phenolic and flavonoid contents were 128.6 ± 0.00 mg GAE/g and 130.16 ± 0.01 mg QE/g for L. sativum, and 127.0 ± 0.00 mg GAE/g and 105.76 ± 0.02 mg QE/g for A. corrorima, respectively. The significant effect of the plant extracts on the degradation of the oil and the formation of free fatty acids was confirmed by Fourier transform infrared spectroscopy. The results of this study revealed that the ethanolic crude extracts of L. sativum and A. corrorima are potential natural antioxidants to prevent the degradation of palm oil.
Introduction
Lipid oxidation is a major cause of degradation during the storage and processing of edible fats, oils, and fat-containing products. It alters critical quality control parameters for fats and oils [1]. It also causes a variety of physical and chemical changes, resulting in significant decomposition [2,3]. Oxidation is a principal cause of quality deterioration and promotes rancidity and food degradation [4]. Oxidation reactions generate free radicals that set off chain reactions [3,5]. Furthermore, oxidative stress causes several fatal diseases in humans, such as cancer. Frying is a popular and long-standing culinary technique used to prepare meals all around the world [2,5]. The frying process degrades oils, causes them to lose their nutritional content, and produces toxic chemicals that are harmful to one's health.
Ethanol was selected for further extraction. The powdered plant materials (100 g each) were soaked in ethanol (1 L) for 24 h at room temperature using the maceration technique. The solvent extracts were filtered and concentrated using a rotary evaporator at 60 °C and 90 rpm to a solid consistency and then dried at room temperature. Finally, the crude extracts were packed in air-tight glass bottles with proper labels and kept in a refrigerator at 4 °C until used for the next experiment [23]. Qualitative phytochemical screening of the plant crude extracts for saponins, alkaloids, steroids, tannins, flavonoids, phenolics, terpenoids, glycosides, and quinones was carried out using standard methods reported in a previous similar study [14].
Antioxidant activity of Lepidium sativum and Aframomum corrorima seed extracts
DPPH assay
The DPPH radical-scavenging activity of the plant extracts was determined by adding various concentrations of the test extracts to 2.9 mL of a 0.004% (w/v) ethanol solution of DPPH. After 30 min of incubation at room temperature, the absorbance was measured at 517 nm against a blank [24]. The IC50 values (the concentration of sample required to scavenge 50% of free radicals) were calculated from the regression equation. Butylated hydroxy anisole (BHA) was used as a positive control, and all tests were performed in triplicate. The DPPH free radical inhibition (I%) was calculated using equation (1):
I (%) = [(Acontrol − Asample)/Acontrol] × 100 (1)
where A is absorbance.
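For readers who want to reproduce this calculation outside a spreadsheet, the following minimal Python sketch walks through the same steps: percentage inhibition is computed from control and sample absorbances, a linear regression of inhibition against concentration is fitted, and the IC50 is read off as the concentration giving 50% inhibition. The absorbance values below are illustrative placeholders, not data from this study, and the standard (Acontrol-based) inhibition formula and a simple linear fit are assumptions.

```python
import numpy as np

# Illustrative (hypothetical) readings at 517 nm
concentrations = np.array([25, 50, 75, 100, 125])    # extract concentration, ug/mL
a_control = 0.820                                     # DPPH blank without extract
a_sample = np.array([0.62, 0.51, 0.41, 0.33, 0.27])   # DPPH + extract

# Percentage inhibition, I% = (A_control - A_sample) / A_control * 100
inhibition = (a_control - a_sample) / a_control * 100

# Linear regression of inhibition on concentration: I = m*c + b
m, b = np.polyfit(concentrations, inhibition, 1)

# IC50 is the concentration at which I = 50%
ic50 = (50 - b) / m
print(f"IC50 of this hypothetical extract ~ {ic50:.1f} ug/mL")
```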
Ferric ion (Fe3+) reducing antioxidant power assay
The reducing power assay was performed using the method described in a previous similar study, with minor modifications [25]. Aliquots of 0.2 mL of various concentrations of the extracts (25–125 μg/mL) were mixed separately with 0.5 mL of phosphate buffer (0.2 M, pH 6.6) and 0.5 mL of 1% potassium ferricyanide. The mixture was incubated in a water bath at 50 °C for 20 min. After cooling to room temperature, 0.5 mL of 10% trichloroacetic acid was added, followed by centrifugation (769.23 g) for 10 min. The supernatant (0.5 mL) was collected and mixed with 0.5 mL of distilled water. Ferric chloride (0.1 mL of 0.1%) was added, and the mixture was left at room temperature for 10 min. The absorbance was measured at 700 nm, and BHA was used as a positive control. The ability of the extracts to reduce Fe3+ to Fe2+ was calculated using equation (2):
Reducing power (%) = [(Acontrol − Asample)/Asample] × 100 (2)
where A is absorbance.
Hydrogen peroxide scavenging assay
The ability of the extracts to scavenge hydrogen peroxide (H2O2) was determined using a slightly modified method from a previous report [26]. Aliquots of 0.1 mL of the extracts (25–125 μg/mL) were transferred into Eppendorf tubes, and their volume was made up to 0.4 mL with 50 mM phosphate buffer (pH 7.4), followed by the addition of 0.6 mL of H2O2 solution (2 mM). The reaction mixture was vortexed, and after 10 min of reaction time its absorbance was measured at 230 nm. BHA was used as the positive control, and the ability of the extracts to scavenge H2O2 was calculated using equation (3):
H2O2 scavenging activity (%) = [(Acontrol − Asample)/Asample] × 100 (3)
where A is absorbance.
Phosphomolybdenum assay
The phosphomolybdenum assay was conducted by treating 0.1 mL aliquots of sample solutions of different concentrations with 1 mL of reagent solution (0.6 M sulfuric acid, 28 mM sodium phosphate, and 4 mM ammonium molybdate) [27]. The tubes were incubated at 95 °C in a water bath for 90 min. The samples were cooled to room temperature, and their absorbance was recorded at 765 nm. BHA was used as the positive control, and the scavenging ability of the extracts was calculated using equation (4), where A is absorbance.
Assay for total phenolics
The total phenolic content (TPC) was determined using the Folin-Ciocalteu reagent [28]. Briefly, 0.01 g of the crude extract was dissolved in 10 mL of ethanol and vortexed until the mixture became a homogeneous stock solution. From the stock solution, 0.2 mL of the supernatant was mixed with 0.8 mL of distilled water. Then 0.1 mL of Folin-Ciocalteu reagent was added and left for 3 min at room temperature. Next, 0.8 mL of 20% (w/v) Na2CO3 was added to the mixture, which was incubated for 2 h in the dark. The absorbance was measured using a UV-Vis spectrophotometer at 765 nm. Gallic acid was used as a standard, and the absorbance y obtained for each plant sample was used in the equation y = 0.0098x − 0.2228 (R2 = 0.9991), where x is the standard concentration (Fig. 1A in the attached supplementary material). The value obtained for x was then substituted for C1 in the equation C = C1 × V/m, where C is the total phenolic content in mg GAE/g, C1 is the concentration of gallic acid established from the standard curve, V is the volume, and m is the mass of the extract used.
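As a worked example of the calculation just described, the short Python sketch below inverts the reported gallic acid standard curve to obtain C1 from a measured absorbance and then applies C = C1 × V/m. The absorbance value is a placeholder, and the unit handling (C1 in μg/mL, V in mL, m in g, with a final μg-to-mg conversion) is our reading of the procedure rather than something stated explicitly in the text.

```python
def total_phenolic_content(absorbance, volume_ml=10.0, extract_mass_g=0.01):
    """Convert a Folin-Ciocalteu absorbance (765 nm) into mg GAE/g of extract.

    Uses the gallic acid standard curve reported in the text,
        y = 0.0098 * x - 0.2228   (y = absorbance, x = gallic acid in ug/mL),
    followed by C = C1 * V / m.
    """
    c1 = (absorbance + 0.2228) / 0.0098          # ug/mL gallic acid equivalents
    c = c1 * volume_ml / extract_mass_g          # ug GAE per g of extract
    return c / 1000.0                            # mg GAE per g of extract

print(total_phenolic_content(absorbance=1.04))   # placeholder absorbance reading
```

With these defaults, an absorbance of about 1.04 gives roughly 129 mg GAE/g, close to the 128.6 mg GAE/g reported for L. sativum, which suggests the assumed unit handling is consistent with the authors' calculation.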
Assay for total flavonoid
The total flavonoid content (TFC) was assessed by the aluminum chloride colorimetric method [29]. Briefly, 0.01 g of the crude extract was dissolved in 10 mL of ethanol and vortexed until it became a homogeneous stock solution. Then 0.2 mL of the extract supernatant was mixed with 0.15 mL of 5% NaNO2 and incubated in the dark for 6 min at room temperature. Next, 0.15 mL of 10% (w/v) AlCl3 was added to the mixture, which was kept in the dark for 6 min at room temperature. After that, 0.8 mL of 10% (w/v) NaOH was added, and the mixture was incubated in the dark for 15 min at room temperature. The absorbance was measured using a UV-Vis spectrophotometer at 510 nm. Quercetin (in 80% (v/v) ethanol) was used as a standard, and the absorbance y obtained for the plant sample was used in the equation y = 0.066x − 0.0142 (R2 = 0.9991) obtained from the standard curve (Fig. 1A in the attached supplementary material).
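The flavonoid calculation follows the same pattern with the quercetin curve; a brief sketch is given below. Only the inversion of the reported curve is shown, because the text does not restate the final conversion to mg QE/g for this assay, and the placeholder absorbance is not a measurement from this study.

```python
def quercetin_equivalent_concentration(absorbance):
    """Invert the quercetin standard curve y = 0.066 * x - 0.0142
    to recover x, the quercetin-equivalent concentration read off the curve."""
    return (absorbance + 0.0142) / 0.066

print(quercetin_equivalent_concentration(absorbance=0.85))  # placeholder absorbance
```

Expressing the result as mg QE/g of extract then proceeds as for the phenolic assay, with whatever dilution factors apply to this protocol.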
Deep frying protocol
The oil was fried for 6 h per day for a total of 6 days. Deep frying was carried out in a stainless steel electrical open fryer (10 L oil capacity) [31]. The treatments were conducted simultaneously in Group I (oil without any additives), Group II (normal oil with 0.02% BHA), Group III (normal oil with 0.2% w/v L. sativum extract and food), Group IV (normal oil with 0.3% w/v A. corrorima extract and food), and Group V (normal oil with food). A sample taken before frying represented day 0. The remaining oil was heated to 180 ± 2 °C and allowed to equilibrate at this temperature for 30 min. About 14 batches of 80 g of food were fried each day for 2.5 min per batch at 30-min intervals over the 6 h of frying. Approximately 100 mL of oil was collected from each fryer into amber bottles at the end of each day. All oil samples were flushed with slow bubbles of nitrogen from the bottom of the bottles and stored at −20 °C prior to physical and chemical analysis. Rapid measurements were taken after each cycle, once there was no moisture (bubbling) in the frying oil. The effect of repetitive frying on the physicochemical parameters of the oil samples was evaluated [32].
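To make the schedule and the treatment arms easier to scan, the sketch below simply encodes the protocol described above as a small Python configuration; the key names are our own, but the values come from the text.

```python
FRYING_PROTOCOL = {
    "oil": "palm oil",
    "temperature_C": 180,        # maintained at 180 +/- 2 degrees C
    "days": 6,
    "hours_per_day": 6,
    "batches_per_day": 14,       # ~14 batches of food per day
    "batch_mass_g": 80,
    "fry_time_min": 2.5,         # per batch
    "interval_min": 30,
    "daily_oil_sample_mL": 100,  # drawn into amber bottles, N2-flushed, stored at -20 C
}

TREATMENT_GROUPS = {
    "I":   "oil without any additives (normal control)",
    "II":  "oil + 0.02% BHA (positive control)",
    "III": "oil + 0.2% w/v L. sativum extract + food",
    "IV":  "oil + 0.3% w/v A. corrorima extract + food",
    "V":   "oil + food only",
}

total_batches = FRYING_PROTOCOL["days"] * FRYING_PROTOCOL["batches_per_day"]
print(f"{total_batches} frying batches per treatment over the study")  # 84
```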
Physicochemical and quality assessment of deep frying oils
The physicochemical characteristics of the deep-frying oil were investigated for each cycle. The physicochemical properties, such as acid value, refractive index, iodine value, saponification number, moisture content, and pH, were analyzed by the standard protocol of oil analysis, the AOAC official method (969.17). Several methods for determining the quality of deep-frying oils have been developed based on physical and chemical parameters. Oxidation parameters such as free fatty acids (FFAs), peroxide value (PV), iodine value (IV), conjugated dienes (CD), and conjugated trienes (CT) are the major parameters used to assess frying oil deterioration [33]. The density and anisidine value of the oil were determined by methods reported in the previous literature [34,35]. Conventional analytical methods, including titrimetric and spectrophotometric techniques, were adopted for the physicochemical analysis, following the guidelines of the official methods of the American Oil Chemists' Society (1998) [33,36].
Statistical analysis
The commercial statistical package SPSS (version 25) was used for data analysis; the plant extract antioxidant activities and the physicochemical parameters of the oil were measured in triplicate. Statistical differences and homogeneity among the groups were assessed using one-way ANOVA. The normality of the data was verified using the kurtosis statistic. The analysis of variance for individual parameters was followed by a Tukey post hoc test to identify which groups differed from the others on the basis of mean values, with multiple comparisons assessed at a confidence level of 95% (p < 0.05).
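The authors worked in SPSS; for readers who prefer open tooling, an equivalent analysis can be sketched in Python. The example below runs a one-way ANOVA followed by Tukey's HSD on hypothetical triplicate measurements of a single oil parameter across the five treatment groups; the numbers are placeholders, not study data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical triplicate values of one oil parameter for the five groups
data = {
    "I":   [3.9, 4.1, 4.0],
    "II":  [2.1, 2.2, 2.0],
    "III": [2.3, 2.4, 2.2],
    "IV":  [2.6, 2.5, 2.7],
    "V":   [4.5, 4.6, 4.4],
}

# One-way ANOVA across the groups
f_stat, p_value = stats.f_oneway(*data.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey HSD post hoc comparisons at alpha = 0.05
values = np.concatenate(list(data.values()))
labels = np.repeat(list(data.keys()), [len(v) for v in data.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```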
Total phenolics and flavonoid content
The TPC of the ethanolic seed extracts of L. sativum and A. corrorima were 128.6 ± 0.00 and 127.0 ± 0.00 mg GAE/g, respectively (Table 2). There was also a significant amount of total flavonoid content (TFC) in both L. sativum and A. corrorima, estimated at 130.16 ± 0.01 and 105.76 ± 0.02 mg QE/g, respectively.
Antioxidant activity
The antioxidant activity of the plant extracts was evaluated using the DPPH free radical scavenging assay, the hydrogen peroxide inhibition assay, the phosphomolybdenum assay, and the ferric reducing power assay (Table 2). The inhibitory concentration (IC50) for each assay was calculated from the regression equation. A lower IC50 value indicates a greater potential to scavenge free radicals. In most assays, the IC50 value of BHA (the positive control) was lower than those of the corresponding plant extracts. However, in the hydrogen peroxide scavenging assay, the IC50 values of the L. sativum (75.9 ± 0.31 μg/mL) and A. corrorima (77.3 ± 0.58 μg/mL) extracts were lower than that of BHA (83.4 ± 0.26 μg/mL), indicating that the plant extracts had stronger antioxidant activity than BHA in this assay.
Free radical scavenging activity
The percentage free radical scavenging activity of the ethanolic extracts of L. sativum and A. corrorima increased with increasing concentration (25–125 μg/mL). For the scavenging activity, the hydrogen-donating ability of the extracts toward the DPPH free radical was evaluated. Both plant extracts showed increasing free radical scavenging activity (%) with increasing concentration (Fig. 1a). The highest scavenging activity of the ethanolic extracts was recorded at 125 μg/mL for L. sativum (66.03 ± 0.77%), which is comparable to the positive control BHA (71.38 ± 0.83%). Moreover, the DPPH quenching ability of A. corrorima increased significantly with increasing concentration (Fig. 1a).
Reducing power assay
The reducing power of the extracts was measured for concentrations up to 125 μg/mL and showed a significant increase as the concentration increased (Fig. 1b). Of the two tested plants, the L. sativum seed extract possessed the higher free radical reducing activity (57.89 ± 0.25%) compared with A. corrorima (49.68 ± 0.76%) at 125 μg/mL. However, both plant extracts had a lower ability to reduce free radicals than BHA (58.68 ± 0.39%) (Fig. 1b).
Phosphomolybdenum assay
The free radical inhibition of the plant extracts increased with concentration (Fig. 1c). The antioxidant activity of the L. sativum extract (60.30 ± 0.15%) was significantly higher than that of the A. corrorima extract (44.72 ± 0.36%) at 125 μg/mL. In addition, the molybdenum ion reduction by the ethanolic extract of L. sativum showed radical scavenging activity comparable to that of the positive control BHA (79.39 ± 0.69%).
Hydrogen peroxide scavenging assay
The hydrogen peroxide scavenging activity of the L. sativum and A. corrorima seed extracts was investigated over the concentration range 25–125 μg/mL, as shown in Fig. 1d. At 125 μg/mL, the ethanolic extracts of L. sativum and A. corrorima seeds displayed strong H2O2 scavenging activities of 70.60 ± 0.72% and 71.86 ± 0.63%, respectively, compared with 82.03 ± 0.69% for the positive control BHA.
Accelerated oxidative study
To evaluate the effect of frying during the accelerated oxidative study, different concentrations of plant extract (0.1–0.4%) were investigated and compared with the positive control BHA over 0–72 h. For the optimization of the plant extract concentration, the physicochemical properties of the frying oil, such as acid value, saponification value, iodine value, and peroxide value, are depicted in supplementary Fig. 3A. Furthermore, the total polar compound, conjugated diene, and conjugated triene levels of the frying oil during the accelerated oxidative study are depicted in supplementary Fig. 4A. Based on the optimization, L. sativum (0.2%) and A. corrorima (0.3%) showed a protective effect comparable to that of the positive control BHA. Therefore, these two concentrations of plant extract were selected as oil stabilizers for the deep-frying protocol.
Deep frying study
Following the accelerated oxidative study, in which the concentration of plant extract was optimized based on the physicochemical analysis of the frying oil at various concentrations, the deep-frying protocol was carried out for six consecutive days. From the optimization process, the L. sativum (0.2% w/v) and A. corrorima (0.3% w/v) extracts were chosen for the deep-frying study.
Saponification value, acid value and free fatty acid content
The SV, AV, and FFA contents of the oils during repetitive frying were evaluated and are presented in Table 3. The saponification value of the deep-frying oil increased with the number of frying cycles. At baseline there was no significant difference between the normal control (Group I) and the other groups, but significant variations in SV between the groups were recorded from the initial day of frying. On day 1, there was a significant difference between the normal control and the plant extract-containing groups (P < 0.05), but no significant difference between the positive and normal control groups: the SV of group I differed significantly from groups III, IV, and V (P < 0.05) but not from the positive control (group II). At day 2, the SV of the normal control group differed significantly from groups IV and V (P < 0.05), whereas the positive control group differed significantly from groups III, IV, and V, and the SV of group V differed significantly from all other groups (P < 0.05). Furthermore, the SV of group I on days 3, 4, 5, and 6 differed significantly from all other groups. The acid value (AV) of the repetitively fried oil was followed for six continuous days, and a significant increase in the AV of the palm oil over the frying period was observed for every group. Initially, the AV of group V (1.30 ± 0.01) was higher than those of the normal control, the positive control, and the plant extract additive groups (P < 0.05). After the first day of frying, however, the AV of the normal control group was significantly higher than those of the plant additive groups, while its differences from the positive control and the food sample-containing group (group V) were not significant. After one day, the AV of each frying oil differed significantly, and significant increases in groups I and V were observed compared with the positive control and the plant antioxidant-containing groups. The FFA (%) content of the oil in each group increased with the frying period. After day 1, the FFA (%) content of the frying oil with the food sample was significantly higher than those of the positive control and the plant antioxidant additive groups, although there was no significant difference between the normal control and group V. From day 1 through day 6, the FFA (%) of the normal control group differed significantly from all other groups.
Table 3 note: AC = A. corrorima, LS = L. sativum, SV = saponification value, AV = acid value, FFA = free fatty acid. Experimental data are means of triplicate measurements; differences were considered significant at P < 0.05.
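Acid value and FFA content are closely linked quantities. A common convention, which we assume here because the text does not spell out the conversion it used, expresses FFA as percent oleic acid, giving FFA% ≈ AV/1.99 for an acid value in mg KOH/g; a minimal sketch:

```python
def ffa_percent_as_oleic(acid_value_mg_koh_per_g):
    """Approximate free fatty acid content (% as oleic acid) from the acid value.

    Assumes the usual conversion FFA% = AV * 282.5 / 561.1 ~= AV / 1.99,
    where 282.5 g/mol is the molar mass of oleic acid and 56.11 g/mol that of KOH
    (the extra factor of ten comes from the mg and percent conversions).
    """
    return acid_value_mg_koh_per_g / 1.99

print(ffa_percent_as_oleic(4.82))  # an example acid value -> ~2.4% FFA as oleic acid
```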
Peroxide value, p-anisidine value and total oxidation
In this study, the PV, p-AV, and TOTOX values of the frying oil in each group increased throughout the study period (Table 4). During the initial period, there was a significant difference between the normal control group and all other groups (P < 0.05). The p-AV of the oil increased irregularly with frying time in every group, with the highest p-AV values observed in groups I and V throughout the frying periods. The PV of the frying oil in the positive control was significantly lower than in the other groups. After day 1, the PV of the frying oil rose sharply with increasing frying time. Group V had the highest p-AV of the frying oil, from the first (6.06 ± 0.02) to the final (36.22 ± 0.31) day of frying. The higher peroxide value after 6 days in the frying oil containing a food sample (Sambussa) (37.00 ± 0.95 meq O2/kg) is an indication of a higher degree of oxidation. The TOTOX results are also compiled in Table 4. During the initial period of frying, the TOTOX of the normal control group differed significantly from those of groups II, III, and V (P < 0.05).
Table 4 note: AC = A. corrorima, LS = L. sativum, PV = peroxide value, p-AV = p-anisidine value, TOTOX = total oxidation. Experimental data are means of triplicate measurements; differences were considered significant at P < 0.05.
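The TOTOX index referred to here is conventionally computed from the peroxide and p-anisidine values as TOTOX = 2·PV + p-AV; since the text does not state the formula explicitly, that convention is assumed in the short sketch below, which plugs in the day-6 figures quoted above for group V.

```python
def totox(peroxide_value, p_anisidine_value):
    """Total oxidation (TOTOX) index, assuming the usual definition 2*PV + p-AV."""
    return 2.0 * peroxide_value + p_anisidine_value

# Day-6 values quoted for the food-containing oil (group V): PV = 37.00 meq O2/kg, p-AV = 36.22
print(totox(37.00, 36.22))  # ~110, far above the <=4 guideline for good-quality oil cited in the discussion
```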
Total polar compounds (TPc) and iodine value (IV) of frying oil
The TPc of the oil in the food sample-containing group (group V) was higher than in the other groups (Table 5). A significant difference was observed between the normal control and the rest of the groups (P < 0.05) in the initial and day-1 periods of frying. However, there was no significant difference between the TPc of the plant extract additive groups and the normal control group (P > 0.05) at day 2, nor was there significant variation between the positive control and the plant extract additive groups at that time point. Nevertheless, the 0.2% w/v additive significantly decreased the TPc of the frying oil compared with the 0.3% w/v additive group, and the plant extract additive groups significantly decreased the TPc of the oil compared with the normal control and the frying oil with food (group V). The IV of the frying oil decreased throughout the study period. There was no significant difference between the normal control, positive control, and plant extract additive groups on the initial day of frying (P > 0.05), although there was a significant difference between the normal control and the frying oil containing food. Similarly, up to day 3 (Table 5), the IV of the frying oil in the positive control, the 0.2% w/v L. sativum, and the 0.3% w/v A. corrorima additive groups were comparable. However, after day 3 there was a significant difference between the 0.3% w/v A. corrorima additive and the positive control group.
Conjugated dienes and conjugated trienes
The conjugated diene and conjugated triene levels of the frying oil were investigated using UV-visible spectroscopy in terms of the extinction coefficient (K). All samples exhibited a steady increase in absorbance at 232 nm and 270 nm, indicating increasing formation of both conjugated dienes and conjugated trienes during repeated frying (Fig. 2).
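The K232 and K270 values mentioned here are usually obtained as specific absorptivities, K = A/(c·d), with A the absorbance at the stated wavelength, c the oil concentration in g/100 mL of solvent, and d the cuvette path length in cm. Because the working formula is not given in the text, that standard definition is assumed in the sketch below, and the absorbance readings are hypothetical.

```python
def specific_extinction(absorbance, conc_g_per_100ml=1.0, path_cm=1.0):
    """Specific absorptivity K = A / (c * d), the usual basis for K232 and K270."""
    return absorbance / (conc_g_per_100ml * path_cm)

# Hypothetical readings for an oil solution of 1 g per 100 mL in a 1 cm cuvette
k232 = specific_extinction(absorbance=2.50)   # conjugated dienes
k270 = specific_extinction(absorbance=0.45)   # conjugated trienes
print(f"K232 = {k232:.2f}, K270 = {k270:.2f}")
```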
Effect of pH, density and moisture content
The pH, density, moisture content, and refractive index of the oil were evaluated throughout deep frying and are depicted in Fig. 3. The pH of a cooking-quality vegetable oil is normally close to neutral, usually ranging from 6.7 to 6.9 (Fig. 3a). The density of the frying oil was also evaluated during the deep-frying study (Fig. 3b) and increased with the frying cycle for all samples. The density of the food sample-containing group was greater than that of the other groups, reaching 0.89 ± 0.35, whereas at the end of frying the density was lower in the positive control (0.82 ± 0.34) and the A. corrorima extract (0.3%) additive group (0.88 ± 0.97) (Fig. 3b). The percentage moisture content of the deep-frying oil showed considerable variation between the groups throughout the study period, as depicted in Fig. 3c. The moisture content of the oil containing food samples (group V) and of the normal control (group I) was higher than that of the plant extract additive groups, while the moisture content of the positive control group was lower than that of the food and plant extract additive groups. The highest moisture content, 43.09 ± 0.99, was recorded in group V at day 6 of frying. The RI value of the oil was in the range of 1.432–1.462 throughout the deep-frying study (Fig. 3d).
Fourier transform infrared spectroscopy of oil after the 6th day of frying
After heating and frying, the level of oxidation was assessed using FTIR spectroscopy. In this study, five samples (one for each group) were investigated. The FTIR spectra of the fried palm oil, recorded at room temperature, showed significant differences in the bands between groups (Fig. 4).
Discussion
In this study, the effect of the plant extracts on the stability of palm oil was evaluated, with the palm oil continuously fried at 180 °C for 6 days. The antioxidant activity, TPC, and TFC of the L. sativum and A. corrorima seed extracts are discussed here. The antioxidant and oil-stabilizing potential of the plants is likely due to the secondary metabolites found in the extracts. Secondary metabolites are natural products primarily produced by bacteria, fungi, and plants; they are low-molecular-weight molecules with a range of biological activities, including antioxidant and antimicrobial activity [37,38]. The phytochemical screening results of the plant extracts revealed the presence of various secondary metabolites that help stabilize the oil during frying. Ethanol proved to be a better solvent for extracting various secondary metabolites from L. sativum than the other solvents tested. This is consistent with a previous study [37], which reported that L. sativum is rich in alkaloids, glycosides, phenols, terpenoids, flavonoids, and other secondary metabolites. Similarly, the ethanolic crude extract of A. corrorima tested positive for phenols, flavonoids, tannins, and glycosides. Secondary metabolites, particularly phenols and flavonoids, have been shown to have significant radical scavenging activity [38]. The antioxidant activity of the plant extracts can thus be attributed to the bioactivity of these secondary metabolites [14]. Flavonoids, which are phenolic compounds present in medicinal plants, exhibit antioxidant activity [39].
Phytochemicals such as alkaloids, flavonoids, and terpenes are essential in antioxidant, analgesic, neuroprotective, antimicrobial, and antimalarial actions [14,38]. They also serve as anticancer and antidiabetic agents [24]. In general, the phytochemical screening results suggest that the plant seed extracts might have promising medicinal applications, since tannins, terpenoids, saponins, phenols, and flavonoids are among their major phytochemicals [14]. The successful screening of phenolic and flavonoid compounds can be influenced by a number of factors, including sample size, storage conditions, weather, extraction method, the presence of interfering substances, and the solvent [38,39]. However, no single solvent or mixture of solvents has been shown to effectively extract phenolic compounds from these two species.
Phenolic hydroxyl groups have a remarkable ability to scavenge free radicals [28]. Flavonoids, in turn, are biologically important compounds with a broad spectrum of activities, including antioxidant, anticancer, anti-inflammatory, anti-allergic, and anti-angiogenic effects. The TPC of the methanolic and ethanolic extracts of L. sativum has previously been reported as 94.48 ± 1.82 and 86.48 ± 0.22 mg GAE/g, respectively [14]. The variation from the ethanolic extract in the present study might be due to the maturation period, geographical location, and method of extraction. The total phenol content of a methanolic extract of L. sativum seed has also been reported at 46 mg GAE/g [40]; the variation might be due to the solvent, the method of extraction, and the geographical location of the study plant [41]. Furthermore, the total phenolic content of the A. corrorima seed extract was comparable to that of the L. sativum extract. This indicates that the extracts were responsible for the free radical scavenging activity associated with oxidative stability [42,43].
The primary mechanism underlying the antioxidant activity of phenolic compounds is their redox properties, which allow them to absorb and neutralize free radicals, quench singlet and triplet oxygen, and decompose peroxides [14]. The TPC of the A. corrorima seed extract was considerably higher than previously reported in the literature: the TPC of an A. corrorima hydrodistillation extract was 3.98 ± 0.27 mg GAE/g for the seed and 1.32 ± 0.07 mg GAE/g for the pod [17]. The disagreement might be due to the method of extraction, the solvent used, the plant seed harvesting period, the different geographical distribution of the plant, and other environmental factors [23]. The principal antioxidants or free radical scavengers in plants are correlated with phenolic compounds [44]. The results of this study indicate that the two plant extracts might have strong radical scavenging activity owing to their high phenolic content. The bioactivity of phenolic compounds can be associated with their ability to chelate metals, inhibit lipoxygenase, and scavenge free radicals. Moreover, the TFC of L. sativum has previously been reported as 37.63 ± 2.14 mg QE/g [14]; the disagreement with the present value might be due to the geographical distribution of the plant, the method of extraction, and the solvent used [39]. Numerous studies have found that flavonoids found in herbs contribute significantly to their antioxidant effects [45]. Flavonoids are extremely powerful scavengers of most oxidizing species, including singlet oxygen and various free radicals [14,46]. The TFC of a hydro-methanolic extract of A. corrorima was 19 ± 0.4 mg QE/g, which is lower than the value obtained in the current study [46]. Phenolic compounds are major secondary metabolites comprising a large group of biologically active compounds; owing to their redox properties, phenolics act as antioxidants and reducing agents.
The antioxidant activity of the plant seed extracts was evaluated using the DPPH, ferric reducing power, phosphomolybdenum, and hydrogen peroxide scavenging assays. Increasing the concentration of plant extract increased the percentage of free radical inhibition. The antioxidant activity of L. sativum was strongly correlated with that of the positive control, which might be due to the presence of various phytochemicals such as flavonoids and phenolic compounds [22]. The antioxidant activity of the different extracts correlated significantly with their total phenolic content, suggesting that L. sativum seeds could be used in food supplement preparations or as a food additive, for caloric gain or to protect against oxidation in nutritional products [14]. The percentage inhibition of the A. corrorima ethanolic extract was lower than those of L. sativum and the positive control. This might be due to a lower concentration of flavonoids and phenolic compounds, since these compounds have been reported to scavenge free radicals, superoxide, and hydroxyl radicals via hydrogen or electron transfer [47].
Plant-derived flavonoids possess antidiarrheal, antimicrobial, antioxidant, and anti-inflammatory properties [38]. Polyphenolic compounds and flavonoids form complexes with bacterial cell walls and exert biological functions [48]. Moreover, the antioxidant capacity can be attributed to the extract's chemical composition and polyphenol content [49]. The radical scavenging activity of the plant extracts might be due to the secondary metabolites (phenols, tannins, flavonoids, alkaloids, etc.) responsible for the reduction of molybdenum ions [12,49]. Hydrogen peroxide occurs naturally at low concentrations in air, water, the human body, plants, microorganisms, and food; it can decompose rapidly to generate hydroxyl radicals that initiate lipid peroxidation [50]. The better H2O2 scavenging activity exhibited by the L. sativum and A. corrorima ethanolic extracts might therefore be attributed to the presence of phenolic groups and other secondary metabolites that can donate electrons to hydrogen peroxide, thereby neutralizing it into H2O [40,51]. In general, increasing the concentration of plant extract increased the percentage of free radical inhibition.
The saponification value represents the number of saponifiable units (acyl groups) per unit weight of oil [52,53]. A high SV indicates that the oil contains a higher proportion of low-molecular-weight fatty acids, and vice versa [52]. The SV, expressed in milligrams of potassium hydroxide per gram of oil (mg KOH g−1), is used to estimate the average molecular weight of the oil [53]. The saponification value of the deep-frying oil increased with the number of frying cycles, with significant variation between the groups on each day of frying. These findings are in close agreement with a previous report, which stated that at an elevated cooking temperature of 350 °C the SV increased to 250 mg KOH per 100 g of oil and more FFA was produced during frying [54]. Furthermore, the results are supported by a similar previous study stating that a high SV is associated with a high level of short-chain fatty acids and a higher glycerol content [55].
The acid value of the frying oil rose as the frying cycle lengthened. Compared with the positive control and the plant antioxidant additive groups, groups I and V showed significant increases. In a previous report, the highest mean AV of 4.82 mg KOH g−1 was recorded on the fifth day of frying [56]; the difference from the present AV values might be due to the type of oil used for frying. The AV of oil generally rises with increased frying time, and on the sixth day of frying the food sample-containing group had a higher AV than the other groups. The increase in AV could be attributed to the moisture content of the fried product, which accelerates the hydrolysis of the oil; water is known to promote the hydrolysis of triacylglycerols to form FFA [57]. The FFA (%) content of the oil increased with frying time, and FFA levels rose as the number of frying cycles increased, both for heating and for frying. In addition, the plant extract-containing groups and the positive control group showed significantly lower FFA (%) contents than group V. This might be due to the transfer of water from the food sample to the oil, which would accelerate the hydrolysis of triglycerides. The increase in AV and FFA is caused by the cleavage and oxidation of double bonds to form carbonyl compounds, which are then oxidized to low-molecular-weight fatty acids during frying [58]. The study also found that the plant extract additive groups and the positive control significantly inhibited FFA enhancement in the frying oil.
Peroxide values represent the primary reaction products of lipid oxidation, which can be measured by their ability to liberate iodine from potassium iodide [57]. PV is the most widely used test for determining the state of oxidation in fats and oils; it indicates the rancidity or degree of oxidation of the fat or oil, but not its stability [57,59]. Carbonyl compounds such as aldehydes are generated during secondary lipid oxidation and can react with the p-anisidine reagent (0.25% in glacial acetic acid), forming a yellow-colored solution. Significant decreases in the PV of the oil were observed after the addition of the plant extracts compared with the normal control and the frying oil with food (group V) throughout the study period, indicating that the plant extracts prevented the oxidation of the oil upon frying, probably because of the bioactive secondary metabolites responsible for preventing oil oxidation. Moreover, the herbal extracts significantly limited the degradation of the oil during extended frying (6 days), even in the presence of a food sample. In a previous report, the PV increased during the first 20 frying cycles at 160 °C and then decreased [60]; the variation from the present results might be due to the frying cycle and oil type. Based on the amendments made to the Malaysian Food Act 1983 by the Food (Amendment) (No. 3) Regulations 2014, the maximum PV of cooking oil is 10 meq O2/kg of oil (Food Act, 1983). The higher the peroxide value, the more oxidized the oil. Additionally, the process of oil breakdown is significantly influenced by the water content or humidity. Moreover, high frying temperatures make peroxides unstable; they quickly break down into dimers and volatile compounds [61]. Therefore, the greater PV in the oil containing fried food reflects the deterioration and degradation of the oil.
Anisidine analysis is an appropriate method for evaluating secondary lipid oxidation. The p-AV of frying oil is an indication of organic peroxides decomposing into secondary products, including alcohols, carboxylic acids, aldehydes, and ketones. The quality of the oil can be determined by evaluating the absorbance of the solution at 350 nm [62]. Aldehydes formed during oxidative degradation are secondary decomposition products, and the non-volatile portion of the carbonyls remains in the frying oil [63]. Higher p-AV values of the frying oil were observed throughout the study, indicating the formation of primary and secondary oxidation products. The plant extract additive groups significantly decreased the p-AV of the frying oil compared with the normal control and the food sample group (group V); a lower p-AV indicates that less rancid oil is produced [61]. The plant extracts and the positive control significantly retarded the oxidation of the frying oil compared with the food sample and normal control groups throughout the study period. In a previous study, Pandanus amaryllifolius leaf extract similarly decreased the p-AV throughout the frying period, which was attributed to the secondary metabolites in the plant extract [30]. Other studies have shown that thermal degradation of the aldehydes formed at higher temperatures results in a lower accumulation of these compounds in the oil at higher frying temperatures [64]. The present results do not agree well with the earlier report of Felix A. et al. (1998), which stated that the maximum p-AV was reached on the second day of frying for both frying temperatures and then decreased consistently until the end of the frying time; the disagreement might be due to oil replenishment in the previous study, the different oil types used in the frying cycle, and the temperature. The TOTOX value of the frying oil was evaluated to capture both primary and secondary oxidation. The TOTOX index is a good indicator of the total deterioration of fats and oils: the lower the TOTOX value, the better the frying oil quality [57,63]. Oxidation proceeds very slowly at the initial stage, taking time to reach a rapid increase in oxidation rate, and the TOTOX value is a common approach to determining the resistance of edible oils to oxidative rancidity. After the initial days of frying, the TOTOX value showed a significant difference between the plant extract-containing groups and the normal control and group V (frying with food) throughout the study period (P < 0.05). The TOTOX values of all the oils were far greater than the proposed limit, an indication of oil oxidation; the variation might be due to the oil type, the frying condition, and the frying cycle. A similar study using Moringa oleifera extract showed lower p-AV and TOTOX values than either soybean oil or palm olein heated at 185 °C for 30 h. Since TOTOX is correlated with PV and p-AV, the oil containing plant extract significantly reduced the TOTOX value compared with the normal control and the frying oil containing food. The TOTOX value gives a more complete description of the oxidative condition of the cooking oil after repeated frying; the lower the TOTOX value, the better the oil quality, and good-quality vegetable oils have TOTOX values of ≤ 4 [59].
The measurement of total polar compounds is useful in estimating heat abuse in frying oils [5]. The evaluation of total polar compounds has been characterized as one of the best indicators of the overall quality of oils, providing critical information about the total amount of newly formed compounds having a higher polarity than triacylglycerols [64]. The formation of total polar compounds, which indicates oil deterioration, is strongly related to the primary and secondary oxidation that takes place during frying [65]. The results of this study revealed that the total polar compounds of groups I and V exceeded the standard limit after five days of frying, whereas the plant extract additive groups and the positive control group remained below the standard limit throughout the study period. This indicates that the untreated oil should not be reused after five days of frying, while the plant extracts had a positive effect on the reusability of the oil even after six days of frying, presumably because the plant antioxidants protect the oil from thermal degradation [59]. When the amount of total polar components reaches 25%, the oil is considered thermally degraded and should be replaced with fresh oil [64,66]. The IV is a direct determination of the unsaturation level of the oil: iodine halogenates the double bonds of the unsaturated fatty acids. Frying commonly leads to a reduction in unsaturation, and thus to a decrease in double bonds. The IV of the frying oil in the positive control, the 0.2% w/v L. sativum, and the 0.3% w/v A. corrorima additive groups were comparable; however, after day 3 there was a significant difference between the 0.3% w/v A. corrorima additive and the positive control group. A decrease in the IV over the frying cycles is consistent with the loss of double bonds as the oil becomes oxidized. Oils such as olive, soybean, and sunflower have been reported to have lower iodine values [67]. However, the addition of plant extract did not appear to reduce this aspect of oxidation as the cycles progressed, compared with the normal control and the food sample-containing group. As a result, the reduction in the iodine value of the oil up to the sixth frying day was caused by complex physicochemical changes in the oil, which resulted in an unstable character susceptible to oxidative rancidity. These results are in line with a previous report stating that the decrease in the IV of oils after frying reflects relatively higher oxidation [2]. Furthermore, the current study is consistent with previous research by Pineda et al. [68], who observed a decrease in the IV of olive oil, high-oleic sunflower oil, and sunflower oil during frying, which might be caused by a decline in the unsaturation of the oil samples.
The increase in oxidation rate can also be observed in the change of the specific absorptivities at 232 and 270 nm, which measure the contents of conjugated dienes (CDs) and conjugated trienes (CTs). K232 is associated with the generation of primary oxidation products (conjugated dienes), and K270 is used to follow further fat oxidation (conjugated trienes), with the parameter values varying depending on the oxidation conditions. Conjugated dienes and trienes are a good measure of the primary oxidation of the oil [57], as the double bonds in lipids change from non-conjugated to conjugated upon oxidation [69]. The CD and CT levels of the frying oil increased with the number of frying cycles throughout the study period. In a previous study, the CD of three oil samples increased with longer frying cycles at 160 °C [68], and another report likewise found that the CD value of sesame oil increased throughout the frying period [63]. The CD and CT levels of groups I and V were greater than those of the positive control and the plant extract additive groups, and the A. corrorima seed extract additive group showed lower CD and CT levels than the L. sativum extract additive group. In general, the lower CD and CT of the plant extract additive groups indicate the potential antioxidant activity of the plant extracts, which helps stabilize the oil during repetitive frying. The formation of hydroperoxides from polyunsaturated fatty acids leads to conjugation of the pentadiene structure, which causes the absorption of UV radiation at 230–234 nm for conjugated dienes. When hydrogen abstraction occurs at the two active methylenes on C-11 and C-14, two pentadienyl radicals are produced, resulting in a mixture of conjugated dienes and trienes; this leads to an increase in UV absorption at 270 nm attributable to conjugated trienes, in addition to the 232 nm absorption of CD. High extinction coefficients (K232 and K270) are an indication of advanced oil deterioration [59].
The greatest reductions in pH values were observed in groups I and V, which might be due to the degradation and hydrolysis of the oil to form FFAs. The formation of FFAs during thermal treatment is an important dynamic of vegetable oils that may be related to the decrease in pH [70]. The plant extract additive groups significantly limited the reduction in pH compared with the normal control and food control groups, indicating that the oil was stabilized by the plant antioxidants. The pH value of the frying oil decreased with increasing frying cycles (days) in every group. The greater density recorded in group V might be due to the formation of high-molecular-weight polymeric compounds upon frying [55] and to the transfer of material from the food samples to the oil, which increased with each frying cycle. The moisture content of the oil increased with the length of the frying cycle, which can be attributed to the mass transfer that occurs during frying, including water loss from the food, oil absorption, and heat transfer [35,71]. The water content in food and oil accelerates the hydrolysis of the oil, while also providing some protection against oxidation during frying. The higher moisture content in group V might be due to the accumulation of water from the food sample in the oil and the exposure of the oil to food moisture and air humidity from the environment, which could facilitate rancidity and oxidative stress in the oil [35,71]. Thus, the longer the frying cycle, the longer the oil is exposed to the humidity of the environment. In summary, both plant additive groups significantly inhibited the rise in moisture content compared with the normal and food sample-containing groups.
The RI is a parameter related to molecular weight, fatty acid chain length, degree of unsaturation, and conjugation. The refractive index can be used as a quality control technique to detect adulteration in edible oils [72]. The RI is affected by the content of saturated and unsaturated fatty acids. Increasing the frying cycle decreased the refractive index value, indicating that the unsaturated portion of the oil was removed and more saturation was formed during frying. This result does not agree with a previous report, which stated that an RI increase is believed to be related to the high saturated fatty acid content and the non-hydrogenation of palm oil, making it less resistant to heat [55]; the deviation might be due to the reaction conditions and the frying cycle. However, the present study is consistent with a similar report illustrating that RI values decrease as the temperature is increased [73]. This might be due to the formation of trans fatty acids upon oxidation, which affects the RI value; trans acids formed during hydrogenation affect refractive index values but not iodine values. There was no statistical variation in the RI value between the groups, and neither the plant additive groups nor the positive control significantly prevented the change in RI compared with the normal control and the oil with food throughout the study period. Nevertheless, the RI value of the plant extract additive groups remained nearly constant compared with the other groups except the positive control, indicating that the antioxidant potential of the plant extracts inhibits the physicochemical changes of deep frying.
The plant extracts and the positive control affected the positions of the FTIR bands, which shifted when the proportions of fatty acids changed. The FTIR spectra of the normal control and the food sample-containing group showed strong OH bands at 3300–3500 cm−1 (Fig. 4a and e), indicating that the triglyceride molecules were degraded and FFA was formed during frying. In contrast, there was no significant variation between the positive control (Fig. 4b) and the plant-treated groups. The percentage transmittance of the food sample additive and normal control groups was higher, indicating decreasing absorbance, which can be attributed to oil hydrolysis and degradation into FFA; continuous frying of corn and mustard oil samples has similarly been reported to increase the transmittance and hydrolyze the triglyceride molecules [74]. The FTIR spectra also showed intense bands in the region of 2950–3000 cm−1 assigned to sp3 C–H stretching of the terminal methyl group of the fatty acid chain, and a medium peak at 2800 cm−1 assigned to aliphatic CH2 stretching. The carbonyl (C=O) stretching of the ester functional group is assigned to the medium peak at 1600–1700 cm−1; the shift of this peak to a higher wavenumber observed in groups I and V might be due to the effect of frying. The strong, intense peaks at 980–1100 cm−1 indicate C–O stretching of the ester group, and the medium peaks at 1480 cm−1 indicate C–H bending of sp3 carbon (alkane) [75,76].
Conclusions
The quality of the palm oil used in this study was significantly affected by accelerated and deep frying, as revealed by the physicochemical parameters. The plant extract additive groups showed a significant improvement in oil quality throughout the study period. The FTIR spectra of the frying oil revealed the formation of free fatty acids in the normal control and food sample-containing groups, whereas the positive control and plant extract-treated groups significantly retarded oil degradation and maintained oil quality. Therefore, the optimum concentrations of L. sativum (0.2% w/v) and A. corrorima (0.3% w/v) extracts are recommended to restaurants and street food vendors as alternative antioxidants. Moreover, the organic compounds responsible for retarding oil degradation and oxidation need to be identified. Further studies on other medicinal plants should be conducted to enhance oil stability and to investigate potential substitutes for synthetic antioxidants.
Preparation of a New Borehole Sealing Material of Coal Seam Water Infusion
To improve the borehole sealing effect of coal seam water infusion, especially for coal seams with low permeability and high rigidity, this study investigated the performance testing and optimization of two cement-based sealing materials. The borehole sealing of such coal seams requires high-pressure water infusion. Results show that when the water-cement ratio is 0.4 and the amount of fiber expansive agent is 10%, the new borehole sealing material displays microexpansion. In addition, the 1-day compressive strength reaches 16 MPa. This result satisfies the material compressive strength requirement under 30 MPa high-pressure water infusion, and the sealing performance is also excellent. According to the scanning electron microscopy analysis of the new and traditional borehole sealing materials, the surface of the new borehole sealing material shows no holes and possesses good compactness. The sealing effect is superior to that of other traditional sealing materials and can satisfy the sealing requirement of coal seam water infusion. The new borehole sealing material is of considerable significance for the improvement of the water infusion effect.
Introduction
As one of the major resources in China, coal accounts for over 70% of primary energy [1]. The considerable amount of dust generated during coal mining threatens the physical health of mining workers and the safe production of mines to a substantial degree [2,3]. Water infusion is the most efficient method of dust prevention at the coal face. The key process of coal seam water infusion is borehole sealing, whose quality directly influences the effect of coal seam water infusion [4,5]. Nevertheless, during coal mine drilling, the fissure network inside boreholes develops further.
The stress field of the roadway surrounding rock in underground coal mines also exerts a remarkable influence on fissure development. Thus, borehole sealing is a significant challenge.
Optimization of the borehole sealing material determines the success of borehole sealing [6]. Currently, in underground coal mine production, various borehole sealing materials are used; these mainly include clay materials, high-water materials, polymer materials, and cement-based sealing materials. Clay material is easy to operate and convenient for borehole sealing at low cost. However, the rigidity of clay material should be moderate because either high softness or high hardness leads to a poor sealing effect. As a new type of special cement composite material, high-water material displays a high condensation rate, quick development of compressive strength, and microexpansion. Nonetheless, given its multi-component composition, high-water material is costly. As a typical polymer material, polyurethane shows the advantages of high expansibility, a high borehole sealing rate, and convenience, but it displays weak cementing power and low compressive strength. Polyurethane can also be toxic and expensive. With a long history of application and investigation, cement-based borehole sealing material also exhibits many advantages, such as a wide range of raw material sources, low cost, and simple operation. Moreover, cement-based borehole material demonstrates special advantages in the practice of borehole sealing; for example, its excellent mechanical properties can support the borehole wall and resist the disturbances caused by geological factors in mining activities. Under the condition of pressurized borehole sealing, cement slurry can permeate the fissures of the borehole wall and seal leaky fissures effectively. Therefore, cement-based borehole sealing material is widely used in the practice of coal mining borehole sealing [7,8].
In spite of these advantages, cement-based borehole sealing material suffers from shrinkage-cracking. The compressive strength of this material also develops slowly, and the setting time is long. In view of these drawbacks, local and international scholars have carried out many studies. Some scholars found that the addition of a certain amount of fly ash into cement-based borehole sealing material reduces the material hydration heat, restrains shrinkage-cracking, and improves the material's synthetic performance. According to Termkhajornkit et al. [9] and Atiş [10], the addition of a certain amount of fly ash into concrete helps restrain shrinkage considerably and improves the compressive strength. Nath and Sarker [11] and Chindaprasirt et al. [12] studied the durability of fly ash and cement slurry comprehensively in terms of compressive strength, shrinkage, chloride ion adsorption, and permeability. They also identified that appropriate addition of fly ash contributes to the improvement of cement slurry durability. Nevertheless, the influence of fly ash on cement slurry is complex, and fly ash overdose results in negative effects. Hence, synthetic performance, inconvenient operation, and low strength during the early stage should be taken into account. Lim et al. [13] believed that sand gradation also exerts a significant effect on the properties of cement slurry; the strength and durability of solidified fine sand-cement slurry are better than those of coarse sand under a high water-cement ratio. Cement mortar is cheap and easy to operate, but it easily suffers from shrinkage and cracking. Ni et al. [14] explored the microcharacteristics of a borehole sealing composite material composed of polyurethane and expansive cement. Ge et al. [15] proposed a borehole sealing material that is a mixture of cement, early-strength water-reducing admixture, polypropylene fiber, and water.
The material shows a low shrinkage rate and high compressive strength, but the required borehole sealing compressive strength is reached only at least 3 days after sealing, so the construction period is extended. Zhai et al. [16] analyzed the sealing performance of a flexible gel sealing material. This material exhibits excellent compactness, stability, fluidity, and permeability. However, given its multiple components and complicated preparation, the material cost is high.
Although local and international scholars have conducted a large amount of research on cement-based borehole sealing materials, the required performance and cost cannot be satisfied simultaneously [17][18][19]. The present study uses 32.5R Portland cement and 52.5R sulfoaluminate cement as major ingredients. A fiber expansive agent and an early-strength water-reducing admixture are also added to the cement. Material optimization against the problems of cement borehole sealing, such as low early strength, high shrinkage rate, and poor impact resistance, is investigated. According to the experimental field result of Ge et al. [20], plugging against a 30 MPa water infusion pressure requires a borehole sealing material compressive strength of 14.4 MPa; the preset value of the compressive strength should therefore be at least 14.4 MPa. Additionally, the required expansion property is microexpansion. To satisfy these two performance requirements, the borehole sealing performance is explored through tests under different water-cement ratios and additive amounts. The optimum material proportion is determined so that a new type of borehole sealing material can be developed.
Test
2.1. Raw Materials. On the basis of the performance requirements of borehole sealing, two different kinds of cement are selected as the major ingredients of two sealing materials (materials 1 and 2).
The major ingredients of materials 1 and 2 are ordinary 32.5R Portland cement and 52.5R sulfoaluminate cement, respectively. These cements were both purchased from China Gezhouba Group Cement Co. Ltd. (Hubei, China). The fiber expansive agent and early-strength water-reducing admixture are the minor ingredients of material 1. Only the fiber expansive agent is selected as the minor ingredient of material 2.
The fiber expansive agent (Shanxi Qinfen Building Material Co., Ltd.) is a compound of calcium silicate and polypropylene fiber; the design amount of this admixture is 8.0%-12.0% of the gel material amount.
The performance parameters of this expansive agent are shown in Table 1. The early-strength water-reducing admixture (Qingdao Hongsha Admixture) is composed of calcium lignosulfonate and fly ash, and its design amount is 2%-8% of the gel material amount.
Test Plan.
To obtain the borehole sealing material proportion that satisfies the abovementioned requirements, the borehole sealing material performances, such as expansion performance and compressive strength, are evaluated under different water-cement and admixture ratios. The water-cement ratios of material 1 are 0.4, 0.5, and 0.6 in sequence.
The fiber expansive agent and early-strength water-reducing admixture are added in combinations of composite ratios. The water-cement ratios of material 2 are 0.4, 0.45, and 0.5 in sequence, and the design amounts of the two kinds of fiber expansive agent are consistent with each other. The amounts of fiber expansive agent and early-strength water-reducing admixture are shown in Table 2.
Expansion Performance Test.
In the expansion performance test, well-prepared slurry (300 ml) is poured into a 500 ml beaker. Five test specimens are prepared for each composition ratio, as shown in Figure 1. Under laboratory curing conditions of 25-30 °C, the volume is read every 2 min and recorded as V1, V2, V3, V4, ..., Vn sequentially, where Vn is the final volume. The recorded Vn is the average of the five specimens.
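To make the recorded quantities concrete, the following minimal Python sketch (ours, not from the paper) computes the average final volume Vn over the five specimens and the corresponding relative expansion with respect to the 300 ml of poured slurry; the specimen readings are hypothetical.

```python
# Hypothetical volume readings (ml) for the five specimens of one composition ratio.
final_volumes_ml = [305.0, 304.0, 306.0, 305.5, 304.5]
initial_volume_ml = 300.0  # slurry poured into the beaker, as described above

average_vn = sum(final_volumes_ml) / len(final_volumes_ml)
expansion_ratio = (average_vn - initial_volume_ml) / initial_volume_ml

print(f"average final volume Vn: {average_vn:.1f} ml")
print(f"relative expansion: {expansion_ratio * 100:.2f} %")  # positive = microexpansion
```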
Uniaxial Compressive Strength Test.
The wall of the mold (70.7 mm × 70.7 mm × 70.7 mm) is coated with a layer of release agent, and the well-stirred slurry is poured into the mold. A vibrating table is used to densify the slurry, and the mold is then placed in a curing box at 95% relative humidity and 20 °C until the scheduled age. Subsequently, the sample is demolded. An electrohydraulic servo pressure testing machine is used to assess the uniaxial compressive strength of the sample after 1-day curing at a loading speed of 0.3 MPa/s. The test is terminated when the sample breaks. The test equipment is shown in Figure 2.
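As a brief illustration of how the reported strengths relate to the machine readings, the sketch below (an assumption on our part, not the authors' software) converts a peak load into a uniaxial compressive strength for the 70.7 mm cube specimens; the 80 kN peak load is hypothetical and chosen so that the result matches the 16 MPa reported later for the optimized material.

```python
def compressive_strength_mpa(peak_load_kn: float, side_mm: float = 70.7) -> float:
    """Uniaxial compressive strength of a cube specimen from the peak load."""
    loaded_area_mm2 = side_mm * side_mm             # loaded face of the 70.7 mm cube
    return peak_load_kn * 1000.0 / loaded_area_mm2  # N/mm^2 is numerically MPa

print(f"{compressive_strength_mpa(80.0):.1f} MPa")  # ~16.0 MPa for an 80 kN peak load
```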
Microstructure Comparisons.
A JSM-6510LV high- and low-vacuum scanning electron microscope is used to observe the structure and fissures of the borehole sealing material during the reaction process. The microscope magnification ranges from 5× to 300,000×. The instrument is shown in Figure 3.
The scanning electron microscope is used to observe and analyze the microstructure of polyurethane, cement mortar, and the new borehole sealing material. The test procedure is as follows: (1) Prepare two groups of polyurethane, cement mortar, and new borehole sealing material specimens in beakers and record them as groups A and B. Place them in the curing box at 30 °C and 101.325 kPa, group A for 1 day and group B for 7 days. (2) To observe the microstructure of the specimens, polish each well-cured sample into a cylinder (10 mm radius and 1 mm thickness). (3) Soak the sample in alcohol for 5 min to remove dust from the sample surface, then purge the sample with an ear-washing bulb to ensure that the surface is free from dust, and afterward spray gold onto the sample. (4) Observe the material microstructure with the scanning electron microscope.
Expansion Performance Test.
For material 1, the slurry with a water-cement ratio of 0.4 shows high viscosity and low fluidity, thereby causing difficulty in pumping. When the water-cement ratio increases to 0.6, slurry bleeding occurs. Consequently, a large amount of free water is exuded on the slurry surface. The setting and hardening of the material are affected, and the water retention capacity of the material decreases. A large quantity of free water evaporates after setting, which reduces the material volume. As a consequence, the water-cement ratio of material 1 is set to 0.5. The variation of expansion performance with admixture content is shown in Figure 4. The expansion performance and compressive strength of material 2 are evaluated when the water-cement ratio of the slurry is 0.4, 0.45, and 0.5 and the mixing amount of fiber expansive agent is 8%, 9%, 10%, 11%, and 12%. The variation of expansion performance with water-cement ratio and amount of admixture is shown in Figure 4.
According to Figure 4 (the legend indicates the early-strength water-reducing agent content), under a fixed water-cement ratio and an invariable amount of early-strength water-reducing agent, the final volume of material 1 increases with the amount of fiber expansive agent. When the early-strength water-reducing agent content is 7%, the final expansive volume of material 1 increases the fastest. Thus, an improved condition is created for the expansion of the material into the surrounding fissures after grouting. Material composition ratios above the dotted line expand the material slightly. To further optimize the composition ratio of material 1, the compressive strength test results should be analyzed to draw the final conclusion.
Figure 5 shows that the final volume of material 2 decreases with increased water-cement ratio. Under the same water-cement ratio, the final volume of material 2 increases with the amount of fiber expansive agent. The water-cement ratio is the main factor affecting the expansion performance of material 2. When the water-cement ratio is 0.5, the favorable fluidity of material 2 is conducive to grouting, but the microexpansion requirements are unsatisfied. When the water-cement ratio is 0.4 or 0.45, adjustment of the amount of fiber expansive agent helps expand material 2. Consequently, the compressive strength of material 2 is evaluated under water-cement ratios of 0.4 and 0.45 so as to further optimize the composition ratio of material 2. Given the long setting time of 32.5R cement, the compressive strength after 1 day of setting is considerably less than the final strength. At least 3 days of setting time is needed after borehole sealing to satisfy the strength requirement of water infusion, so the construction period would be delayed. Figure 7 shows that the compressive strength at a 0.4 water-cement ratio is constantly higher than that at 0.5 when the amount of fiber expansive agent of material 2 is less than 12%. Furthermore, many fissures are present on the material surface when the water-cement ratio is 0.45. Decreased compactness is harmful to borehole sealing. Therefore, the selected water-cement ratio of the new material is 0.4. When the amount of fiber expansive agent is 10%, the compressive strength is the highest at 16 MPa, which is higher than the preset value of 14.4 MPa. The preset value requirements of the compressive strength of borehole sealing are fully satisfied.
Compressive Strength Test.
According to the expansion performance and compressive strength tests of materials 1 and 2, material 2 is selected as the new borehole sealing material. When the water-cement ratio is 0.4 and the amount of fiber expansive agent is 10%, material 2 expands slightly and its 1-day compressive strength reaches 16 MPa.
The material compressive strength requirement under 30 MPa water infusion pressure and the site construction needs are both satisfied. Thus, this composition ratio is the optimum composition.
Microstructure Comparisons.
The microstructure comparisons of the new material, polyurethane, and cement mortar are shown in Figure 8.
Figure 8(a) shows that the interior structure of polyurethane presents a honeycomb-like reticular formation with large interior pore space when magnified 50 times. A cavity array is formed, and the diameter of each cavity is 0.1-0.5 mm. Interconnecting holes between adjacent cavities also occur, which result in poor overall compactness. Figure 8(c) presents the magnified picture (500×) of cement mortar after 1 day of setting. Many pores can be observed on its surface, and the compactness is poor. Figure 8(e) illustrates the magnified picture (500×) of the new material.
The surface of the new material is compact without any holes or fissures. Consequently, leakage can be restrained effectively when water passes through the material in the process of coal seam water infusion. In addition, the influence of the borehole sealing material on the water infusion effect can be prevented. As shown in Figure 8(b), macroholes are formed on the surface of polyurethane 7 days later. Hence, the PU compactness is poor. Figure 8(d) shows that the cement mortar surface is uneven with many pores and poor compactness after 7 days of curing. The surface of the new material after 7 days of curing is more compact, with a better borehole sealing effect, than that of the new material after 1-day curing (Figure 8(f)).
Conclusion
To solve the problems of borehole sealing and microfissure sealing through the application of material composite technology, research is conducted on the development of laboratory materials, performance tests, and comparisons with traditional borehole sealing materials. The following conclusions are obtained: (1) Cement type exerts a significant effect on the sealing performance of cement-based borehole sealing materials. Therefore, the cement should be selected carefully before determining the cement-based borehole sealing material.
(2) 52.5R sulfoaluminate cement is selected as the major ingredient and the fiber expansive agent as the minor ingredient to produce a new kind of borehole sealing material. Performance test results demonstrate that when the water-cement ratio is 0.4 and the amount of fiber expansive agent is 10%, the optimum expansion performance and compressive strength are obtained. The compressive strength after 1 day reaches 16 MPa, thereby satisfying the compressive strength requirement of the borehole sealing material under 30 MPa water infusion pressure.
(3) The microstructures of the new and traditional materials are observed and analyzed with a scanning electron microscope. Magnified images show that the surface of the new material is compact without interconnecting holes. Moreover, polyurethane forms a honeycomb-like reticular formation and cavity array, and the large pore space of cement mortar results in poor compactness. The expansion performance, compressive strength, and compactness of the new borehole sealing material are all superior to those of traditional borehole sealing materials. The new material provides a new approach for underground coal seam water infusion.
Table 1: Fiber expansive agent performance parameter.
A comprehensive survey of fingerprint presentation attack detection
Nowadays, the number of people that utilize either digital applications or machines is increasing exponentially. Therefore, trustworthy verification schemes are required to ensure security and to authenticate the identity of an individual. Since traditional passwords have become more vulnerable to attack, the need to adopt new verification schemes is now compulsory. Biometric traits have gained significant interest in this area in recent years due to their uniqueness, ease of use and development, user convenience and security. Biometric traits cannot be borrowed, stolen or forgotten like traditional passwords or RFID cards. Fingerprints represent one of the most utilized biometric factors. In contrast to popular opinion, fingerprint recognition is not an inviolable technique. Given that biometric authentication systems are now widely employed, fingerprint presentation attack detection has become crucial. In this review, we investigate fingerprint presentation attack detection by highlighting the recent advances in this field and addressing all the disadvantages of the utilization of fingerprints as a biometric authentication factor. Both hardware- and software-based state-of-the-art methods are thoroughly presented and analyzed for identifying real fingerprints from artificial ones to help researchers to design securer biometric systems.
INTRODUCTION
As the utilization of digital devices and applications continues to grow, the need for more complex solutions to authenticate legitimate users is becoming essential. In this context, biometric-based authentication systems are being increasingly employed as they can be found almost everywhere, including in smartphones, laptops and so on. Fingerprint recognition is based on the Galton points [ Figure 1], named after Sir Francis Galton, who in the late nineteenth century used the so-called Galton points to categorize the attributes of a finger that are utilized to identify a person. Later on, in the late 1960s, with the mechanization of fingerprint matching and the advancement of computer science, these points were renamed to minutiae points and became the standard in the feature extraction stage of fingerprint systems [2] . Fingerprint recognition has some of the lowest false rejection rates (FRRs) and false acceptance rates (FARs) compared to other biometric authentication systems [3] . In 2015, it held 58% of the global market share of biometric authentication systems [3] . Although fingerprints are one of the most utilized biometric factors, security weaknesses still arise. To enhance security, multi-fingerprint systems have been proposed. These systems use more than one finger to produce the biometric identity of a user.
As more biometric systems are utilized, presentation attacks (PAs) are also increasing. PAs can be defined as "presentation to the biometric data capture subsystem with the goal of interfering with the operation of the biometric system" [4] . Artificial fingerprints can be created using low-cost hardware and software, meaning that a skilled person who wants to compromise the security of a system is very likely to succeed.
Security attacks on fingerprint authentication systems can be classified into three categories: (1) attacks with the use of PA instruments at the sensor; (2) attacks on a module of the system; and (3) attacks on the communication channel between the modules of the system. Many types of security violations on fingerprint scanners occur during the communication between the modules of the authentication scheme. Attackers try to modify the data on the communication channel between the sensor and the feature extraction module. Data modification also happens when they circumvent the channel between the feature extraction module and the matching module or between the database and the matching module. In this kind of attack, the attackers modify the data of the channel and manipulate the routines of the various modules of the system. Moreover, perpetrators can insert fake data that the system recognizes as genuine. This attack takes place directly at the modules of the authentication system, i.e., the data storage, the signal processing and the comparison decision [Figure 2], by modifying the data. The aforementioned vulnerabilities can be counteracted with the use of software- or hardware-based encoding and decoding techniques on the communication channels between the modules of the fingerprint authentication system [5].
Liveness detection refers to the analysis of the features of a finger to determine whether the input finger is live or not. Presentation attack detection (PAD) is the automated determination of a PA [4], while liveness detection is a sub-category of PAD. The term PAD will be utilized throughout this survey, although authors in the presented state-of-the-art research may use the previous terms, i.e., liveness detection or "anti-spoofing attacks". A taxonomy of the state-of-the-art methods is shown in Figure 3. To detect whether a fingerprint is artificial, additional hardware is used to detect the heartbeat, blood pressure, skin impedance and other biometrics of the fingerprint like odor. Nevertheless, these approaches are usually expensive. To tackle this, researchers are using software-based methods to extract PAD features directly from the sample acquired by the main sensor. These approaches are based on the fact that certain characteristics of live fingerprints cannot be duplicated. These distinct features can be extracted and selected with the use of further analysis of the acquired sample. There are two kinds of methods that are employed, i.e., "dynamic" methods that perform feature extraction by investigating multiple frames of the acquired sample and "static" methods that use a single image of the impression of the fingertip [6].

Figure 1. A fingerprint image [1].
Figure 2. Types of security attacks on a fingerprint authentication system [4].
This survey presents a comprehensive literature review of PAD methods. The performance and a quantitative analysis of the methods presented in the following sections are also given, using the metrics utilized in the discussed studies, such as the accuracy, fake error rate and so on.
Kundargi et al. [7] refer only to textural feature-based fingerprint PAD methods, while we make an overall presentation. Comprehensive surveys are given in Refs. [8,9] but these are now outdated. The detailed surveys in Refs. [10][11][12] only provide the PAD methods proposed for LivDet competitions [13]. This work (1) provides a detailed literature review of state-of-the-art fingerprint PAD methods; (2) is focused on, but not limited to, the last decade; (3) refers to all PAD categories (i.e., both software- and hardware-based approaches) and not only to specific ones (e.g., only texture- or hardware-based approaches); and (4) is up to date, including current research trends, such as deep learning techniques.

Figure 3. Presentation attack detection methods, inspired by [8,9].
All discussed studies were retrieved by Google Scholar. Initially, the search terms "fingerprint presentation attack detection" and "fingerprint liveness detection" were given and this resulted in the retrieval of 17,200 and 6830 records, respectively. To lessen the number of retrieved studies, a new search was performed under the limitation that the search term should be part of the title of the publication, thereby making them relevant to the subject of interest. The search queries were: (1) allintitle: "fingerprint liveness detection", which resulted in 199 publications; (2) fingerprint "presentation attack", which retrieved 51 publications; and (3) allintitle: "fingerprint spoof detection", which resulted in 31 publications.
A thorough search was also made regarding the LivDet competitions from 2009 to 2019 [13] , as the most relevant events for PAD. Furthermore, a recent survey [14] (included in Ref. [15] ) was considered and taken into account. By combining the aforementioned search results and by reading the abstract of each publication to determine whether it was relevant to our research, we eventually reached a total of more than 190 publications, which are discussed in the following sections.
The main contributions of this article can be summarized as follows: (1) A comprehensive review is performed regarding state-of-the-art methods for fingerprint PAD.
(2) The literature review presents both hardware-and software-based methods, thereby updating other similar works [8,9] and those where only one category was examined [7] .
(3) The software-based methods include the latest trends, such as deep learning approaches.
(4) A quantitative and qualitative analysis of the datasets that were utilized in the relevant literature is presented. (5) In addition to the presentation of the methods according to the taxonomy in Figure 3, tables are given for each category that summarize the most important features of each method. (6) The research challenges and potential research directions are outlined.
The rest of the review is organized as follows. In TERMS, DEFINITIONS AND EVALUATION METRICS, the terms, definitions and evaluation metrics are presented, while DATASETS refers to the most utilized and publicly available datasets. HARDWARE-BASED PRESENTATION ATTACK DETECTION presents the hardware-based methods, while the software-based methods are given in SOFTWARE-BASED PRESENTATION ATTACK DETECTION. A discussion along with potential research directions is presented in DISCUSSION. Finally CONCLUSIONS outlines the derived conclusions from this review.
TERMS, DEFINITIONS AND EVALUATION METRICS
In this section, we report the terms and evaluation metrics utilized by authors in their publications. It must be noted that some of them are no longer utilized, as they no longer exist in the ISO/IEC 30107-3 standard [16]. Nevertheless, since they were utilized in previous studies, it was deemed appropriate to report them. The most common evaluation metrics are the accuracy, maximum accuracy (ACC), equal error rate (EER), average classification error (ACE) [Equation (1)], attack presentation classification error rate (APCER) and bona fide presentation classification error rate (BPCER). These estimators were used on the ATVS [17], the LivDet 2009-2019 [13] [Figure 4], the MSU FPAD [18] and the Precise Biometrics Spoof-Kit dataset [18]. The terms and evaluation metrics are defined as follows:
-Attack presentation/presentation attack: "presentation to the biometric data capture subsystem with the goal of interfering with the operation of the biometric system".
-Presentation attack instrument (PAI): "biometric characteristic or object used in a presentation attack".
-PAI species: "class of presentation attack instruments created using a common production method and based on different biometric characteristics".
-Bona fide presentation: "interaction of the biometric capture subject and the biometric data capture subsystem in the fashion intended by the policy of the biometric system".
-Bona fide presentation classification error rate: "proportion of bona fide presentations incorrectly classified as presentation attacks in a specific scenario".
-Attack presentation classification error rate: "proportion of attack presentations using the same PAI species incorrectly classified as bona fide presentations in a specific scenario".
The accuracy and average classification rates can be described as:
-Accuracy: rate of correctly classified genuine (live) and fake fingerprints given a threshold of 0.5.
-True detection rate (TDR): the percentage of PA samples correctly detected.
-False detection rate (FDR): the percentage of bona fide samples incorrectly classified as PA.
-False live rejection (FLR): the percentage of spoof (PA) samples that are misclassified as live.
-Equal error rate: rate at which FerrLive and FerrFake are equal.
-False acceptance rate: percentage of imposters accepted by the system.
-False rejection rate: percentage of genuine users rejected by the system.
-Spoof, spoofing and anti-spoofing: terms used in the literature instead of PAI, PA and PAD before the adoption of the ISO/IEC 30107-3 standard.
Although APCER and BPCER are the standard metrics in ISO/IEC 30107-3 [16] , ACER is also mentioned due to its utilization in LivDet competitions.
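To make the relationship between these rates explicit, the following minimal Python sketch (ours, for illustration only) computes APCER, BPCER and an ACE-style average from ground-truth labels and binary decisions; note that, strictly, the ISO/IEC 30107-3 APCER is reported per PAI species, which is omitted here for brevity.

```python
def pad_error_rates(labels, decisions):
    """labels/decisions: 1 = bona fide, 0 = attack (PA)."""
    bona_fide = [d for l, d in zip(labels, decisions) if l == 1]
    attacks = [d for l, d in zip(labels, decisions) if l == 0]
    bpcer = sum(1 for d in bona_fide if d == 0) / len(bona_fide)  # bona fide called attack
    apcer = sum(1 for d in attacks if d == 1) / len(attacks)      # attack called bona fide
    ace = (apcer + bpcer) / 2.0   # average classification error, as commonly reported
    return apcer, bpcer, ace

apcer, bpcer, ace = pad_error_rates([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 1, 0])
print(f"APCER={apcer:.2f}, BPCER={bpcer:.2f}, ACE={ace:.2f}")
```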
DATASETS
Fingerprint PAD is a binary classification problem, i.e., the acquired samples are classified as bona fide or artificial. The datasets that train and test the classifiers play an important role. For this reason, it is important to refer to the structure and the characteristics of the most common and publicly available datasets. The procedure to create a dataset is to capture samples from unique individuals and afterwards to create the artificial ones. The resulting dataset is then split into a training and a test set, which are utilized to train and test the proposed classifiers, respectively.
To create the artificial fingerprint images, two different approaches are deployed, namely, the cooperative and non-cooperative methods. In the cooperative method, the subject pushes their finger into a malleable material, e.g., plastic, or wax, to create a mold, which is then filled with a material, such as gelatin, Play-Doh or silicone. In the non-cooperative method, an attacker typically enhances a latent fingerprint left on a surface, e.g., a CD, photographs it and finally prints the negative image on a transparency sheet.
A dataset needs to be both qualitatively and quantitatively good. The quantitative part concerns the number of existing bona fide and artificial samples. The more samples in the dataset, the better it should be. However, in addition to the absolute number of bona fide/artificial samples, a key factor is the distribution of samples in each one of the two categories. Ideally, datasets must be balanced. A balanced dataset is a dataset where each output class, bona fide and artificial in this case, is represented by the same number of input samples. As deep learning methods have been extensively utilized in recent years, the number of samples in the dataset, along with their distribution (number of bona fide/artificial samples), are important factors in the design of efficient classification algorithms.
Another qualitative element of a dataset is the number of subjects that were used to capture the bona fide samples and the number of visits that were needed to capture the bona fide samples. In some cases, the bona fide samples were captured during one visit, while in others in multiple visits. It is important to include a large number of unique subjects, as well as multiple visits for the training bona fide samples [19] .
Moreover, the number of optical sensors (scanners), their resolution and the size of the captured images are other factors that need to be addressed. Good quality of the captured images is a prerequisite for good system performance, but it also allows for subsequent reliable processing of the data. Furthermore, the method with which the images were acquired should also be considered. Bona fide samples with wet and dry fingers and with high and low pressures should be present in the dataset.
Finally, it must be noted that an efficient algorithm should recognize artificial samples from unknown, i.e., not present in the training set, materials. Thus, the proposed algorithms are evaluated against overfitting and how well they generalize unseen data (artificial samples of unknown materials). For this reason, the test set should be adjusted accordingly.
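A common way to adjust the test set in this manner is a leave-one-material-out split, sketched below under our own assumptions about the dataset layout (a list of path/material pairs with made-up names): all artificial samples of one material are withheld from training so that it acts as an unknown PAI species at test time.

```python
def leave_one_material_out(pa_samples, held_out_material):
    """Split artificial (PA) samples so one material is unseen during training."""
    train = [s for s in pa_samples if s[1] != held_out_material]
    test = [s for s in pa_samples if s[1] == held_out_material]
    return train, test

pa_samples = [("fake_001.png", "gelatin"), ("fake_002.png", "silicone"),
              ("fake_003.png", "play-doh"), ("fake_004.png", "gelatin")]
train_pa, unknown_pa = leave_one_material_out(pa_samples, "silicone")
print(len(train_pa), "PA samples for training,", len(unknown_pa), "unknown-material PA samples for testing")
```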
The most common datasets that were utilized from the presented state-of-the-art methods are the ATVS, LivDet 2009-2019, MSU-FPAD and the Precise Biometrics Spoof-Kit Dataset. However, although the aforementioned datasets are considered as benchmarks, especially the ones introduced in the LivDet competitions, there are many proposed methods that utilize custom datasets [20][21][22] . The qualitative and the quantitative elements of the most common publicly available datasets are presented in Table 1.
HARDWARE-BASED PRESENTATION ATTACK DETECTION
Hardware-based PAD methods require explicit sensors to be integrated into the fingerprint biometric system to detect whether signals, such as the fingerprint temperature [23] , pulse oximetry [24] , blood pressure [25] or odor [26] , are real or not. Biometric systems that make use of these hardware sensors capture both the subject's fingerprint and one or more of the signals to authenticate the user. Thus, authentication is more accurate, PAs are prevented and the FAR will be higher. However, on the contrary, the system becomes more complex and expensive.
The skin temperature is considered normal when it lies between 26 and 37 °C. However, there are people whose blood circulation is problematic and this fact finally leads to a larger deviation in skin temperature. Moreover, the environmental conditions, e.g., temperature, that exist during sample acquisition must be considered in order to make the biometric system operational under different conditions. Therefore, the temperature range must become larger, but this inevitably increases the likelihood that the system will be deceived.
Baldisserra et al. [26] proposed an odor-based PAD system. The acquisition of the odor is made of chemical sensors that detect the characteristic pattern of an odor. The achieved EER measured during the experiments was 7.48%. The conducted experiments showed that when the odor sensors were exposed to skin or gelatin, the voltage decreased, while it increased when the sensors were exposed to silicone or latex. However, the drawback of the method was that some artificial fingers, such as gelatin, show analogous sensor responses to real fingers and therefore the biometric system can be fooled.
Pulse oximetry measures the oxygen saturation of hemoglobin (%SpO2) by calculating the red and near-infrared light absorption characteristics of oxygenated and deoxygenated hemoglobin.
Blood pressure was also proposed [25] as a biosignal capable of discriminating between bona fide and artificial fingerprints. Normal adult blood pressure is in the range of 80 mmHg, when the heart relaxes (diastolic pressure), to 120 mmHg, when the heart beats (systolic pressure). Lower values mean that the person suffers from hypotension. Conversely, when the systolic blood pressure is equal to or above 140 mmHg and/or a diastolic blood pressure is equal to or above 90 mmHg, the person suffers from hypertension. Critical pressure values are 140 mmHg for the diastolic blood pressure and 300 mmHg for the systolic blood pressure. Therefore, diastolic and systolic blood pressure values range from 80 to 140 mmHg and from 120 to 300 mmHg, respectively [27] . Thus, blood pressure values outside these ranges may indicate an artificial fingerprint. The limitation of this method relies on the case that if an attacker "wears" an artificial fingerprint on their finger, the measured blood pressure value may lie within the accepted range. Therefore, an attacker with hypertension could bypass this PAD method.
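The threshold logic described above can be illustrated with the toy check below (our simplification, not the scheme of Ref. [25]): a measurement is flagged as implausible, and hence possibly an artificial fingerprint, when it falls outside the quoted systolic and diastolic ranges.

```python
def blood_pressure_plausible(systolic_mmhg: float, diastolic_mmhg: float) -> bool:
    """True if both values lie within the physiological ranges quoted in the text."""
    return 120 <= systolic_mmhg <= 300 and 80 <= diastolic_mmhg <= 140

print(blood_pressure_plausible(125, 85))  # True: plausible live finger
print(blood_pressure_plausible(60, 40))   # False: may indicate a presentation attack
```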
Another approach to detect whether or not an artificial material was utilized is the deployment of optical coherence tomography (OCT) [28] . OCT is an imaging technique that allows some of the subsurface characteristics of the skin to be imaged and extracts relevant features of multilayered tissues up to a maximum depth of 3 mm.
Cheng and Larin [29] proposed the adoption of OCT by acquiring averaged B-scan slices to reduce speckle noise and form a one-dimensional curve that represented the distribution of light into the skin. Afterwards, they applied autocorrelation analysis to detect repeating structures. Homogeneous signals yield high absolute autocorrelation coefficients, while inhomogeneous signals yield autocorrelation coefficients close to zero. The authors assumed that real human skin exhibits inhomogeneity while artificial fingerprints do not.
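The homogeneity criterion can be sketched with a few lines of NumPy (our simplified reading of the idea, not the authors' implementation): a lag-1 autocorrelation coefficient close to zero suggests the inhomogeneous structure expected of real skin, while a large absolute value suggests a homogeneous, possibly artificial, layer.

```python
import numpy as np

def lag_autocorrelation(depth_profile, lag=1):
    """Autocorrelation coefficient of a 1D OCT depth profile at the given lag."""
    x = np.asarray(depth_profile, dtype=float)
    return float(np.corrcoef(x[:-lag], x[lag:])[0, 1])

# Synthetic examples: a smooth repetitive signal vs. an irregular one.
homogeneous = np.sin(np.linspace(0, 4 * np.pi, 200))
inhomogeneous = np.random.default_rng(0).normal(size=200)
print(abs(lag_autocorrelation(homogeneous)))    # close to 1
print(abs(lag_autocorrelation(inhomogeneous)))  # close to 0
```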
Cheng and Larin [30] extended the previous work and instead of obtaining a B-scan slice, they acquired a number of lateral scans to create a volumetric representation of the finger. Using this method, they were able to visually analyze the topography of the fake layer and the underlying real fingerprint to detect PAs.
Other similar works utilized a frequency-domain OCT system [31] , a spectral-domain OCT system [32] , an en face and time-domain OCT system [33] and a swept source OCT system [34] as PAD methods. These techniques include the ability to detect differences between fake layers placed on a finger and the real finger below, to map eccrine sweat glands on the fingertip, to detect additional layers placed on top of the skin and to extract reliable subsurface information from a real finger, which are not present in artificial fingerprints.
In addition to the aforementioned OCT-based PAD methods, other research has utilized short-wave infrared (SWIR) images, special lighting microscopes and terahertz time-domain spectroscopy (TDS).
Hussein et al. [35] proposed a novel hardware-based method based on two sensing modalities, i.e., MS illumination in the SWIR spectrum (wavelength range from 1200 to 1550 nm) and laser speckle contrast imaging (LSCI). The authors evaluated the effectiveness of both modalities by developing a touchless prototype fingerprint imaging system that was designed to capture images in the visible domain for verification, and in the SWIR domain and LSCI for FPAD. The capture device was used to collect data from 778 finger samples (551 bona fide and 227 PA), covering 17 different attack species. To evaluate the effectiveness of the capture device, the authors utilized a patch-based convolutional neural network on the two sensing modalities and the results were promising.
Tolosana et al. [36] proposed a novel fingerprint presentation attack detection method based on convolutional neural networks (CNNs) and SWIR multi-spectral images. Based on an analysis of the intra-and interclass variability, two SWIR wavelengths and their combination were selected as input for the network. The experimental evaluation yields a BPCER of 0% (i.e., a highly convenient system) and an APCER of 0% simultaneously (i.e., highly secure). Although the results are excellent, more experiments should be made on a larger database, comprising more PAIs and more bona fide samples, in order to further test the performance of the algorithm for both known and unknown attacks.
Gomez-Barrero et al. [37] introduced another PAD scheme that utilized SWIR spectral images of the finger and of its interior using LSCI technology. For the classification of the fingerprint as bona fide or artificial, a weighted-sum fusion of several features and classifiers, depending on the input of the scheme, was used. The evaluation of their method on a custom dataset that included unknown PAs showed a BPCER of less than 0.1% and an APCER of ~3%.
Tolosana et al. [38] , in an extension of their work in [36] , introduced a novel capture device capable of acquiring fingerprint samples in the SWIR spectrum. They experimented with three CNN architectures: (1) a residual CNN both trained from scratch and pretrained; (2) VGG19 [39] ; and (3) MobileNet [40] . The optimal performance was exhibited with the fusion of the residual CNN trained from scratch and the VGG19 architecture on a dataset comprised of 4700 samples and with the assumption that five PAI species were not used in the training and were considered as unknown PAI. This architecture achieved an APCER of ~7% for a BPCER of 0.1% if user convenience was the top priority and a BPCER of 2% for any APCER under 0.5% when security took precedence.
Goicoechea-Telleria et al. [41] proposed a low-cost PAD subsystem using special lighting microscopes with only one wavelength (575 nm) with a filter (610 nm) and took only the red channel. This resulted in low APCER and BPCER values, i.e., an APCER of 1.78% and a BPCER of 1.33% at 70% training. Moreover, all iterations of classifying the 480/510 nm wavelength of the blue channel have shown a BPCER of 0.00%. Furthermore, it was discovered that Play-Doh artefacts were very easily detected with this approach. Although the aforementioned results are promising, the system has to be thoroughly tested with a larger dataset.
To facilitate the exploration of novel fingerprint PAD techniques involving both hardware and software, Engelsma et al. [42] designed and prototyped a low-cost custom fingerprint reader, known as RaspiReader, with ubiquitous components. RaspiReader has two cameras for fingerprint image acquisition. One camera provides high-contrast frustrated total internal reflection (FTIR) fingerprint images and the other outputs direct images of the finger in contact with the platen. Using both of these image streams, the discriminative color local binary patterns (CLBP) from both raw images were extracted which, when fused together, matched the performance of state-of-the-art PAD methods (CNN). Moreover, fingerprint matching experiments between images acquired from the FTIR output of RaspiReader and images acquired from a commercial off-the-shelf (COTS) reader verified the interoperability of the RaspiReader with existing COTS optical readers.
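For readers unfamiliar with LBP-style texture descriptors, the sketch below extracts a plain grayscale local binary pattern histogram with scikit-image as a simplified stand-in for the color LBP features mentioned above; the file name and parameter values are our own placeholders, not those of RaspiReader.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern
from skimage.io import imread

def lbp_histogram(image_path, points=8, radius=1):
    """Normalized histogram of uniform LBP codes, usable as a PAD feature vector."""
    image = imread(image_path)
    if image.ndim == 3:
        image = rgb2gray(image)
    codes = local_binary_pattern(image, points, radius, method="uniform")
    n_bins = points + 2  # number of distinct "uniform" codes
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# features = lbp_histogram("fingerprint_sample.png")  # feed to an SVM or similar classifier
```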
Pałka and Kowalski [43] used a TDS setup in a reflection configuration for the non-intrusive detection of fingerprint PAs. More specifically, the authors studied the interaction of terahertz radiation with the friction ridge skin of finger pads and with artificial samples. Moreover, five common PA materials were used and their complex refractive indices were determined. It was proved that both the reflected time signals and the reflectance spectra of the imitations differ significantly from the living fingers of 16 people. Based on the conducted analysis, two PAD methods were proposed. The first method was based on a time-frequency feature analysis and achieved a TDR of 87.9% and an FDR of 3.9%. The second method was based on a deep learning classifier applied to reflectance spectra, with a second criterion based on reflected signals in the time domain. The second method with five-fold cross validation provided excellent classification with a TDR of 98.8%. The second method was also validated using the cross-material scenario and achieved slightly lower results with TDRs of 98.7% and 93.2% for silicone, latex and plasticine samples (Group I) and gelatin, Play-Doh and water-based samples (Group II), respectively.
Spinoulas et al. [44] explored the effectiveness of PAD schemes in front-illumination imaging using shortwave-infrared, near-infrared and laser illumination and back-illumination imaging using near-infrared light. Their architecture utilized a memory efficient fully convolutional neural network (FCN). The effectiveness of the FCN was first validated on the LivDet 2015 dataset. They concluded that in the case of unknown PAIs, front-illuminated multi-spectral images (visible, NIR and SWIR) presented the best performance, either individually or in fusion with other modalities used in this study to capture the fingerprint image.
Other hardware-based solutions, such as the capture means of biological signals of life, like blood flow and pulse rate detection [45] and electrocardiogram (ECG) [46] or electroencephalogram (EEG) signals [47] , are also discussed in the literature. However, all the biological signals either require expensive capture equipment or in some cases [48] may add a time delay to the user authentication process.
PAD methods make biometric systems securer and resistant to attacks. Nevertheless, due to their limitations and given the fact that they are not immune to PAs, software-based solutions have been proposed to enhance security and avoid any modifications to the hardware.
SOFTWARE-BASED PRESENTATION ATTACK DETECTION
Software-based PAD methods utilize algorithms to detect artificial fingerprints once the sample has been acquired by the sensor. A typical PAD follows the procedure of capturing the sample, preprocessing and decision, as shown in Figure 5.
It must be noted that in many cases, the feature extraction from the captured data and the classification can be performed by the same system, e.g., by a convolutional neural network.
The categorization of software-based PAD approaches presented here was inspired by Refs. [8,9]. Software-based PAD methods can be divided into two major categories: dynamic and static.
Dynamic methods
Dynamic PAD methods utilize dynamic features. These features change over time and for their extraction, a time-series sequence of images or video of the fingerprint is required. This is opposed to static methods that classify bona fide or artificial presentations according to the data acquired from a single image. Dynamic methods present the ability to detect and measure vitality signs that can distinguish between bona fide and artificial fingerprints. These signs include the perspiration phenomenon, skin elasticity deformation and the displacement of blood that occurs when a finger is under pressure. Table 2 summarizes the discussed methods.
Abhyankar and Schuckers [20] proposed a PAD method based on the perspiration phenomenon and utilized two samples captured in a time series of 0 and 5 s. The coefficients of Daubechies wavelet analysis using the zoom-in property were utilized to reflect the perspiration pattern. A threshold was applied to the first difference of the information in all the sub-bands. A dataset consisting of 30 live, 30 artificial and 14 cadaver fingerprint samples was utilized, where half of the data were used for training and the other half for evaluation. The proposed method achieved an FLR of 0% and an FSA of 0% at threshold levels of 44.55, 40.75 and 31.6 for an optical scanner, a capacitive DC scanner and an opto-electrical scanner, respectively.
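A rough flavor of such wavelet-based features is given by the sketch below (our own simplification, assuming the PyWavelets package and a db4 wavelet): it computes the energy of each detail sub-band of a fingerprint image, and comparing these energies between the 0 s and 5 s captures would quantify the perspiration-induced change; the thresholds and sub-band selection of Ref. [20] are not reproduced.

```python
import numpy as np
import pywt

def subband_energies(fingerprint_image, wavelet="db4", level=3):
    """Energies of the detail sub-bands of a 2D wavelet decomposition."""
    coeffs = pywt.wavedec2(np.asarray(fingerprint_image, dtype=float), wavelet, level=level)
    energies = []
    for detail_bands in coeffs[1:]:        # skip the approximation coefficients
        for band in detail_bands:          # horizontal, vertical, diagonal details
            energies.append(float(np.sum(band ** 2)))
    return energies

print(subband_energies(np.random.rand(64, 64)))  # placeholder image for demonstration
```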
Antonelli et al. [49] proposed a PAD method based on skin elasticity/distortion. This method requires that the user moves their finger while applying force to the scanner in order to maximize skin distortion. The proposed method was evaluated on a dataset comprising of ten image sequences of each finger (thumb and forefinger of the right hand) of 45 volunteers and ten image sequences of 40 artificial fingers, and achieved an EER of 11.24%.
Jia et al. [50] introduced a novel PAD method based on skin elasticity. This method was based on two features that represent skin elasticity acquired from a two-image sequence. The utilized features were the correlation coefficient of the fingerprint area, the average image signal intensity and the standard deviation of the fingerprint area extension in the x and y axes. The classification was accomplished with Fisher's linear discriminant analysis. This method achieved an EER of 4.78% on a dataset comprised of 770 image sequences.
Zhang et al. [51] proposed a PAD method based on finger skin elasticity analysis and utilized the thin-plate spline model. This method exhibited an EER of 4.5% on a dataset of image sequences recorded from 120 artificial fingers from 20 volunteers and the corresponding real fingers.
DeCann et al. [52] proposed a PAD method that quantified the perspiration phenomenon and it was based on a time-series sequence of acquired fingerprint samples with a 1 s interval. It used region labeling on the first image capture whilst the second capture was comprised of three images (absolute, positive and negative). A neural network was utilized for classification. A dataset of 1526 bona fide and 1588 artificial fingerprints from 150 volunteers was used for evaluation and the proposed method achieved an EER of 4.5%.
Table 2. Summary of the discussed dynamic PAD methods.

Ref. | Year | Dataset | Method | Results
Abhyankar and Schuckers [20] | 2004 | Own dataset | Detection of perspiration phenomenon | FLR of 0% and FSA of 0% at threshold levels of 44.55, 40.75 and 31.6 for the optical, capacitive DC and opto-electrical scanners, respectively
Antonelli et al. [49] | 2006 | Own dataset | Skin elasticity/distortion | EER of 11.24%
Jia et al. [50] | 2007 | Own dataset | Skin elasticity | EER of 4.78%
Zhang et al. [51] | 2007 | Own dataset | Skin elasticity | EER of 4.5%
DeCann et al. [52] | 2009 | Own dataset | Detection of perspiration phenomenon | EER of 4.5%
Nikam and Agarwal [53] | 2009 | Own dataset | Detection of perspiration phenomenon | Classification accuracy of 97.92% for live and 99.10% for artificial samples
Abhyankar and Schuckers [54] | 2009 | Own dataset | Detection of perspiration phenomenon | EER of 0.03% when incorporated into the verifinger scanner; a 13.82% improvement over the scanner's EER performance
Abhyankar and Schuckers [55] | 2010 | Own dataset | Detection of perspiration phenomenon | Classification rate of 93.7%
Marcialis et al. [56] | 2010 | Own dataset | Detection of active sweat pores | Qualitative interpretation of the results
Husseis et al. [59] | 2020 | Own dataset | Extraction of eight global measures that included intensity, contrast, and randomness | BPCER of 18.1% at 5% APCER for the thermal sensor; BPCER of 19.5% at 5% APCER for the optical sensor
Husseis et al. [60] | 2021 | Own dataset | Extraction of five spatiotemporal features | BPCER of 3.89% at 5% APCER for the thermal sensor; BPCER of 1.11% at 5% APCER for the optical sensor

Abhyankar and Schuckers [54] proposed a PAD method that detected the perspiration changes along the fingerprint ridges from fingerprint samples captured at 0 and 2 s. The measures used to classify a fingerprint as artificial or not were the Daubechies wavelet transform, the multiresolution analysis and the wavelet packet transform, which were used to isolate the changes in perspiration based on the total energy distribution. A dataset of sequences of images of 58 live, 50 artificial and 28 cadaver fingerprints was utilized. The method exhibited an EER of 0.03% when it was incorporated into the commercially available "verifinger" matcher. The "verifinger" matcher without this incorporation achieved an EER of 13.85%.
Abhyankar and Schuckers [55] utilized a time-series fingerprint image, captured with a 2-time interval in order to use the perspiration phenomenon to classify fingerprints as bona fide or not. This method was based on the detection of the signal changes of singularity points that were found with the use of wavelets. The method was evaluated on a dataset of 58 bona fide, 50 artificial and 28 cadaver time-series fingerprint samples and achieved a 93.7% correct classification rate.
Marcialis et al. [56] proposed a PAD method based on the detection of sweat pores. This was a two-step procedure that involved the time-series capture of two samples in a 5 s interval. The authors utilized the difference in the number of pores of each region of interest (ROI) between the captured samples (four features), along with the average Euclidean distances among pores in the second sample (three features). Therefore, a seven-dimensional feature vector was formed, which was utilized to train a K-NN classifier and a multi-layer perceptron (MLP). The experimental results on their own dataset consisting of 8960 bona fide and 7760 artificial fingerprint samples showed that analyzing the location of pores for PAD is a promising technique.
Memon et al. [57] developed a system that can detect active sweat pores to determine whether a fingerprint is bona fide or artificial. Two fingerprint images with a 2 s time interval were utilized in conjunction with an image processing algorithm that depends on high pass and correlation filtering, followed by binarization. The efficiency of this method is negatively correlated with the threshold value of binarization. Conversely, the discrimination ability is positively correlated with the threshold. The algorithm was tested on the NIST4 [58] and BFBIG-DB1 [57] databases.
Husseis et al. [59] used eight global measures that included intensity, contrast and randomness. These features were extracted from fingerprint videos. For the evaluation of the proposed method, 792 bona fide presentations and 2772 attack presentations were collected from thermal and optical sensors and used as a dynamic dataset. An SVM, linear discriminant analysis (LDA) and ensemble learning were used for classification. This PAD method achieved a performance of 18.1% BPCER for the thermal subset and 19.5% BPCER for the optical subset at 5% APCER when an SVM was used for classification.
Husseis et al. [60] , in an extension of their previous work [59] , utilized videos of fingerprints to extract five spatiotemporal features that allowed them to detect PAIs. An SVM with a second-degree polynomial kernel was used for classification. This method achieved a BPCER of 1.11% for an optical sensor and a BPCER of 3.89% for a thermal sensor at 5% APCER for both sensors on their own dataset.
Static methods
Static PAD methods rely on features extracted from a single fingerprint image. These features are unique and do not change over time. Depending on the technique or the type of features or the type of classification method used (e.g., neural networks, deep learning and so on), static methods can be further divided into those that utilize anatomical or physiological features, image quality features, textural features, neural networks, fusion of features and generalization efficient/wrapper methods. The latter category describes the methods that focus on performance against PAIs not seen in training. Some of them can be combined with the ones from the first five categories to improve the overall system performance, especially against PAI species made with unknown materials.
Anatomical or physiological features
This section describes state-of-the-art PAD methods, where anatomical or physiological features, such as sweat pores and perspiration, are utilized to determine if a fingerprint is real or artificial. The detection schemes are mainly focused on sweat pores [ Figure 6] or perspiration. The term sweat pore describes tiny openings in the skin where sweat reaches the surface from their respective glands below. Perspiration is a phenomenon where sweat starts from the pores and scatters along ridges. Thus, regions between pores become darker. Modern PAD methods exploit this property and by observing numerous samples, they can capture a perspiration pattern that indicates whether a finger is real or not. Table 3 summarizes the presented methods.
Ref. | Year | Dataset | Method | Results
Tan and Schuckers [21] | 2010 | Own dataset | Perspiration; ridge signal and valley noise analysis | EER of 0.9%
Espinoza and Champod [61] | 2011 | Own dataset | Pores of the skin | 21.2% FAR and 8.3% FRR
Memon et al. [57] | 2011 | NIST4, BFBIG-DB1 | Active sweat pores | Qualitative interpretation of the results
Marasco and Sansone [62] | 2012 | LivDet 2009 | Feature set combining (1) residual noise, (2) first-order statistics, (3) the intensity distribution and (4) individual pore spacing | ACE of 12.5%
Pereira et al. [63] | 2012 | Own dataset | Combination of the feature sets proposed in Refs. [64,65] | Improved performance by 33.56%; ACE of 4.17% for the SVM and 4.27% for the MLP (single attempt for acceptance scenario)
Marcialis et al. [67] | 2012 | LivDet 2009 | Features that are encountered in the production of artificial fingers | Promising results
Johnson and Schuckers [68] | 2014 | LivDet 2011, 2013 | Perspiration-based presentation attack detector | EER of 12% on the LivDet 2011 and 12.7% on the LivDet 2013
Lu et al. [69] | 2015 | LivDet 2011, ATVS | Statistical pore distribution features | ACE of 7.11% on the LivDet 2011 and 11.4% on the ATVS
Tan and Schuckers [21] proposed a detection scheme based on perspiration that utilized ridge signal and valley noise analysis. Gray level patterns in the spatial, frequency and wavelet domains, in combination with classification trees and neural networks, were used for the detection. The proposed scheme achieved an EER of 0.9% on their own dataset comprised of 644 bona fide and 570 artificial fingerprint samples.
Espinoza and Champod [61] proved that the pores of the skin of a fingerprint can be used as a feature that can discriminate bona fide and artificial fingerprints. The discriminative factor in their PAD scheme was the difference between the total number of pores in bona fide and artificial fingerprint samples. Their method achieved a 21.2% FAR and an 8.3% FRR using their own dataset.
Marasco and Sansone [62] utilized a feature set combined of the residual noise of the fingerprint image to detect the coarseness of the artificial fingerprint, first-order statistics based on the gray level of each pixel, the intensity distribution to detect PAIs and the individual pore spacing, which is unique to every human. An SVM, decision tree, MLP and Bayesian classifier were chosen as classifiers, depending on the best performance per sensor. This method outperformed other approaches, exhibiting an ACE of 12.5% on the LivDet 2009 dataset, and offered significant gains in speed.
Pereira et al. [63] used a combination of the feature sets proposed in Refs. [64,65]. The 17-dimensional feature vector was reduced with the sequential forward selection technique [66]. An SVM and an MLP were used as classifiers. The authors concluded that the proposed feature set improved performance by 33.56% and that the ACE increased as more attempts for authentication were allowed. The best performance was reached in the single-attempt acceptance scenario, with an ACE of 4.17% for the SVM and 4.27% for the MLP classifier. Furthermore, the SVM performed better in general as a classifier, except on biometric samples acquired from elderly people, where the MLP performed better.
Marcialis et al. [67] proposed the utilization of features that are encountered in the production of artificial fingers. They utilized the Fourier power spectrum of the fingerprint image and concluded that these features can be relevant for discriminating bona fide fingerprints from artificial ones. Moreover, the fusion of features extracted from PAIs and bona fide presentations showed a significant improvement in performance. Their method was evaluated on the LivDet 2009 dataset and showed promising results.
Johnson and Schuckers [68] proposed a perspiration-based PAD. After detecting the pores, a small area surrounding each pore was inspected to ascertain the perspiration activity. An SVM classifier with a radial basis function kernel was utilized and the method was evaluated on the LivDet 2011-2013 datasets. Experimental results showed that the method performed well, especially when combined with other PAD techniques. More specifically, the best approach exhibited an EER of 12% on the LivDet 2011 and an EER of 12.7% on the LivDet 2013.
Lu et al. [69] proposed a method where, after the extraction of pore information using a Mexican hat (Mexh) wavelet transform and adaptive Gaussian filters, five statistical pore distribution features were utilized. These features were the pore number (total amount of pores), pore density, mean pore space, variance and the variation coefficient. An SVM was used for classification. This technique exhibited an ACE of 7.11% on the LivDet 2011 and an ACE of 11.4% on the ATVS datasets. Table 3 summarizes the aforementioned PAD techniques.
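Once pore locations have been extracted, the five distribution statistics used in Ref. [69] reduce to simple computations over the set of pore coordinates. The sketch below is one possible reading of these features, assuming pairwise pore distances are used for the spacing statistics; the original work may define them differently (for example, over nearest-neighbor distances).

```python
import numpy as np
from scipy.spatial.distance import pdist

def pore_distribution_features(pore_xy, roi_area):
    """Pore statistics in the spirit of the five features of Ref. [69].

    pore_xy: (N, 2) array of detected pore coordinates in pixels.
    roi_area: area of the fingerprint region of interest in pixels.
    """
    n_pores = len(pore_xy)                      # pore number
    density = n_pores / roi_area                # pore density
    if n_pores < 2:
        return np.array([n_pores, density, 0.0, 0.0, 0.0])

    spaces = pdist(pore_xy)                     # pairwise pore distances
    mean_space = spaces.mean()                  # mean pore space
    variance = spaces.var()                     # variance of pore spacing
    variation_coeff = spaces.std() / (mean_space + 1e-8)  # variation coefficient
    return np.array([n_pores, density, mean_space, variance, variation_coeff])
```

The resulting five-dimensional vectors could then be fed to an SVM, as in the original pipeline.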
Image quality features
In this section, we present the state-of-the-art PAD methods that seek to find detectable differences between the scans of a living finger and those of an artificial one. Measures such as the continuity, clarity and strength of valleys and ridges are utilized for anti-spoofing. Typically, features extracted from ridge-valley structures are utilized, but other features have been proposed as well, as shown in Table 4. The advantages of these methods are their simplicity, low computational complexity and fast response times [14].
Tan [70] proposed a PAD method that utilized noise analysis along the valleys in the ridge-valley structure of fingerprint images. Wavelet decomposition was utilized to acquire statistical features in multiresolution scales. Decision trees and neural networks were used for classification, while two datasets were used for evaluation. The first one was comprised of 58 live, 80 artificial and 25 cadaver samples, whilst the second included 28 bona fide and 28 artificial fingerprints. This method exhibited a correct classification rate from 90.9% to 100% depending on the technology of the scanner.
Galbally et al. [71] proposed a PAD method that utilized ten different quality features of the image that depend on the ridge strength, continuity and clarity. LDA was used as a classifier. This method achieved an ACE of 6.56% on the LivDet 2009 dataset.
Ref. | Year | Dataset | Features | Results
Tan [70] | 2008 | Own dataset | Noise analysis along the valleys in the ridge-valley structure of fingerprint images | Correct classification rate from 90.9% to 100% depending on the technology of the scanner
Galbally et al. [71] | 2009 | LivDet 2009 | Ten different quality features of the image that depend on ridge strength, ridge continuity and ridge clarity | ACE of 6.56%
Lee et al. [72] | 2009 | Own dataset | The standard deviation of the fractional Fourier transform of a line detected when a fingerprint image was transformed into the spatial frequency domain | Error rate of 11.4%
Jin et al. [73] | 2011 | BERC | Fusion of spectral band, middle ridge and valley line | Classification error rate of approx. 6%
Galbally et al. [75] | 2012 | LivDet 2009, ATVS | Image quality features based on ridge strength, directionality, continuity and clarity | ACEs of 12.5% and 5.4%, respectively
Galbally et al. [76] | 2014 | LivDet 2009 | 25 image quality features | APCER of < 13% and BPCER of ≤ 14%
Lee et al. [72] proposed a novel PAD method that measured the standard deviation of the fractional Fourier transform of a line, which was detected when a fingerprint image was transformed into the spatial frequency domain. This transformation was accomplished with the use of a two-dimensional fast Fourier transform. For a dataset of 3750 bona fide and artificial fingerprint samples in total, this method exhibited an error rate of 11.4% when a certain region was utilized after the fractional Fourier transform.
In Ref. [73] , a method based on the fusion of the spectral band, middle ridge and valley line was proposed. SVMs and quadratic classifiers were used for the classification. This system was tested on the Biometrics Engineering Research Center dataset [74] and exhibited better security than other fingerprint recognition schemes. It also has the advantage that it uses only one fingerprint. A classification error of ~6% was exhibited with the utilization of all three proposed features.
In Ref. [75], image quality features that depend on the ridge strength, directionality, continuity and clarity, as well as the integrity of the ridge-valley structure or the estimated verification performance, were measured to classify a fingerprint as artificial or not. The classification was performed by an LDA classifier. More specifically, their method showed ACEs of 12.5% and 5.4% on the LivDet 2009 and ATVS datasets, respectively. Compared to other similar methods, the fact that only one sample is needed makes the sample acquisition process faster and less invasive.
In another work of Galbally et al. [76] , 25 image quality features were utilized to discriminate bona fide samples from artificial ones. Their method was tested on the LivDet 2009 dataset and achieved an APCER of < 13% and a BPCER of ≤ 14%.
Sharma and Dey [77] proposed an architecture that utilized five novel and eight existing quality features that depend on the ridge-valley shape and are sensor independent. Thus, a 13-dimensional feature vector was formed, and sequential forward floating selection (SFFS) and a random forest feature selection algorithm were deployed to select the optimal feature set. An SVM, a random forest and a gradient boosted tree were utilized for classification. This approach showed ACEs of 5.3% on LivDet 2009, 7.80% on LivDet 2011, 7.4% on LivDet 2013 and 4.2% on LivDet 2015.
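Ridge strength and clarity measures of the kind used in Refs. [71,77] are typically built from local gradient statistics. The sketch below shows one common formulation, an orientation certainty measure derived from the eigenvalues of the block-wise gradient covariance matrix; it illustrates the family of features rather than the exact definitions used in those papers.

```python
import numpy as np

def orientation_certainty(block):
    """Ridge-strength measure for one image block (higher = clearer ridges).

    block: 2-D float array containing a small fingerprint patch.
    Returns a value in [0, 1] based on the eigenvalues of the gradient
    covariance matrix: coherent ridge flow gives one dominant eigenvalue.
    """
    gy, gx = np.gradient(block.astype(float))
    # Elements of the 2x2 gradient covariance matrix.
    a = np.mean(gx * gx)
    b = np.mean(gy * gy)
    c = np.mean(gx * gy)
    trace = a + b
    if trace < 1e-8:
        return 0.0
    # Eigenvalue gap relative to the trace measures orientation coherence.
    gap = np.sqrt((a - b) ** 2 + 4.0 * c ** 2)
    return gap / trace

def mean_ridge_strength(img, block_size=16):
    """Average the block-wise certainty over the whole fingerprint image."""
    h, w = img.shape
    scores = [orientation_certainty(img[i:i + block_size, j:j + block_size])
              for i in range(0, h - block_size + 1, block_size)
              for j in range(0, w - block_size + 1, block_size)]
    return float(np.mean(scores)) if scores else 0.0
```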
Textural features
Researchers have found that the textural features [ Figure 7] of the fingertip, such as smoothness and morphology, can be used to distinguish real fingerprints from artificial ones [78,79] . Methods belonging to this category make use of such textural features. The extracted features are then presented as input to a classifier, which in most cases is an SVM. These methods are described below and summarized in Table 5.
Nikam and Agarwal [22] utilized curvelet features, such as energy, co-occurrence and fused signatures, to discriminate bona fide samples from artificial ones. To limit the dimensionality of the feature vector, they applied an SFFS algorithm. An ensemble of three independent classifiers, namely AdaBoost.M1, an SVM and an alternating decision tree, combined through the "majority voting rule", was used for classification. Their method was tested on their own dataset comprising 185 bona fide and 240 artificial samples and on the FVC 2004 [80] dataset comprised of only bona fide samples. With the fusion of energy and co-occurrence signatures, the proposed technique achieved a 99.29% correct classification rate, which outperformed the wavelet and power spectrum PAD techniques.
Ghiani et al. [81] proposed a new feature set, extracted by the deployment of the textural analysis of the acquired image spectrum, known as rotation invariant local phase quantization (LPQ). This method exhibited an average EER of 12.3% on the LivDet 2011 dataset.
Gragnaniello et al. [82] performed textural classification by utilizing the Weber local descriptor (WLD). A linear kernel SVM classifier was used for classification. The classifier was trained on discriminative features built from the joint histograms of the differential excitation and orientation of every pixel of the acquired sample. This method presented good performance and it was further improved with the integration of other LPQ descriptors, especially if the latter relied on different image attributes. The ACE concerning the WLD was 2.95%, while when the WLD and LPQ were combined, the ACE was 1.14% on the LivDet 2009 dataset. Experimental results on the LivDet 2011 dataset showed an ACE of 15.33% for the WLD, while the ACE for the fusion of the WLD and LPQ descriptors was 7.86%.
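A compact sketch of the core WLD computation, differential excitation from the 3 x 3 neighborhood and gradient orientation accumulated into a joint histogram, is given below. The bin counts, value ranges and normalization are illustrative choices rather than the exact parameters of Ref. [82].

```python
import numpy as np

def wld_histogram(img, exc_bins=8, ori_bins=8):
    """Joint histogram of WLD differential excitation and orientation.

    img: 2-D float grayscale array. Returns a flattened, L1-normalized
    (exc_bins * ori_bins)-dimensional feature vector.
    """
    img = img.astype(float)
    center = img[1:-1, 1:-1]
    # Sum of differences between the 8 neighbors and the center pixel.
    neigh_sum = sum(img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0)) - 8.0 * center
    # Differential excitation: ratio of local change to the center intensity.
    excitation = np.arctan2(neigh_sum, center + 1e-8)
    # Gradient orientation from vertical and horizontal neighbor differences.
    v10 = img[:-2, 1:-1] - img[2:, 1:-1]
    v11 = img[1:-1, :-2] - img[1:-1, 2:]
    orientation = np.arctan2(v10, v11 + 1e-8)

    hist, _, _ = np.histogram2d(
        excitation.ravel(), orientation.ravel(),
        bins=[exc_bins, ori_bins],
        range=[[-np.pi / 2, np.pi / 2], [-np.pi, np.pi]])
    hist = hist.ravel()
    return hist / (hist.sum() + 1e-8)
```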
Jia et al. [83] developed a descriptor known as multi-scale block local ternary patterns that depend on the average value of the pixel blocks. The differences amongst the pixels and the threshold constitute the ternary pattern. This method showed an ACE of 9.8% on the LivDet 2011 dataset.
Pereira et al. [84] proposed spatial surface coarseness analysis (SSCA). SSCA is based on wavelet analysis of the fingerprint surface with the addition of spatial features. A polynomial kernel SVM was used for classification. SSCA exhibited an ACE of 12.8% on the LivDet 2011 dataset.
Ghiani et al. [85] introduced a novel descriptor known as binarized statistical image features (BSIFs). This descriptor encodes the local fingerprint texture on a feature vector. This method was evaluated on the LivDet 2011 dataset and achieved an EER of 7.215% when a 7 × 7 window size was used in conjunction with a 4096-dimensional feature vector.
Zhang et al. [88] proposed a method that depends on wavelet analysis and LBP. Wavelet analysis was applied to produce the denoised image and the residual noise image. The LBP histograms were constructed based on the residual noise and denoised images. An SVM based on a polynomial kernel was deployed for classification. This approach offered ACEs of 11.47% on the LivDet 2011 and 11.02% on the LivDet 2013.
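A minimal sketch of this kind of pipeline, assuming PyWavelets and scikit-image are available, is shown below: the detail coefficients are soft-thresholded to obtain a denoised reconstruction, the residual noise is the difference with the original, and uniform LBP histograms of both images are concatenated into a feature vector that could then feed a polynomial-kernel SVM. The wavelet family, threshold rule and LBP parameters are assumptions, not those of Ref. [88].

```python
import numpy as np
import pywt
from skimage.feature import local_binary_pattern

def lbp_hist(img, P=8, R=1):
    """Uniform LBP histogram of a grayscale image."""
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2))
    return hist / (hist.sum() + 1e-8)

def wavelet_lbp_features(img, wavelet="db4", level=2):
    """Concatenated LBP histograms of the denoised image and the residual noise."""
    img = np.asarray(img, dtype=float)
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Soft-threshold the detail coefficients to obtain a denoised reconstruction.
    denoised_coeffs = [coeffs[0]]
    for detail in coeffs[1:]:
        thr = np.median(np.abs(detail[-1])) / 0.6745  # rough noise estimate
        denoised_coeffs.append(tuple(pywt.threshold(d, thr, mode="soft") for d in detail))
    denoised = pywt.waverec2(denoised_coeffs, wavelet)[:img.shape[0], :img.shape[1]]
    residual = img - denoised  # residual noise image
    return np.concatenate([lbp_hist(denoised), lbp_hist(residual)])
```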
Gottschlich et al. [89] proposed a descriptor known as histograms of invariant gradients. Fingerprint discrimination was based on multiple histograms of invariant gradients, computed from spatial neighborhoods within the fingerprint. The best variation of the proposed method achieved an ACE of 12.2% on the LivDet 2013 dataset.
Jiang and Liu [90] used co-occurrence matrices computed from image gradients for feature extraction. The image was first quantized to decrease the dimensionality and increase the usefulness of the feature vector. Afterwards, the image differences were calculated from adjacent quantized pixels along the horizontal and vertical axes. These differences were then truncated to a range bounded by a specific threshold. Finally, the truncated differences were used to produce the co-occurrence matrices, which were utilized as features. An SVM was trained on two datasets and the proposed method achieved ACEs of 6.8% and 10.98% on the LivDet 2009 and 2011 datasets, respectively.
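The sketch below follows the steps just described (quantization, adjacent-pixel differences, truncation, co-occurrence counting). The number of quantization levels, the truncation threshold and the choice of neighboring difference pairs are illustrative rather than the exact settings of Ref. [90].

```python
import numpy as np

def gradient_cooccurrence_features(img, levels=16, T=2):
    """Co-occurrence matrix of truncated differences of a quantized image."""
    img = np.asarray(img, dtype=float)
    q = np.floor(img / (img.max() + 1e-8) * (levels - 1)).astype(int)
    # Differences of adjacent quantized pixels, truncated to [-T, T].
    dh = np.clip(q[:, 1:] - q[:, :-1], -T, T)
    dv = np.clip(q[1:, :] - q[:-1, :], -T, T)

    size = 2 * T + 1
    comat = np.zeros((size, size))
    # Co-occurrence of horizontally adjacent difference values.
    a, b = dh[:, :-1] + T, dh[:, 1:] + T
    np.add.at(comat, (a.ravel(), b.ravel()), 1)
    # Co-occurrence of vertically adjacent difference values.
    a, b = dv[:-1, :] + T, dv[1:, :] + T
    np.add.at(comat, (a.ravel(), b.ravel()), 1)
    return (comat / comat.sum()).ravel()
```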
Gragnaniello et al. [91] developed a new local descriptor known as the local contrast phase. After the analysis of the acquired sample in the spatial and frequency domains, information on the local amplitude contrast and local behavior of the image was gathered depending on the selected transform coefficients. The two-dimensional contrast-phase histogram crafted from the information collected in the previous stage was used as a feature vector. A linear-kernel SVM classifier was utilized and an ACE of 5.7% was found for the LivDet 2011 dataset.
Gottschlich [92] introduced a novel local image descriptor known as the convolution comparison pattern. This descriptor utilized rotation invariant image patches to compute the discrete cosine transform (DCT) and the comparison of pairs of two DCT coefficients to obtain binary patterns, which are summarized into histograms, comprised of the relative frequencies of pattern occurrences. The feature vector was acquired by the concatenation of multiple histograms and the classification was performed by an SVM. This descriptor, with the use of a specific configuration, achieved an accuracy of 93% on the LivDet 2013 dataset.
Dubey et al. [93] used low level gradient features collected with the utilization of speeded-up robust features and the pyramid extension of the histograms of oriented gradient in conjunction with textural features acquired with the use of Gabor wavelets. This architecture exhibited an EER of 3.95% on the LivDet 2011 and achieved an ACE of 2.27% on the LivDet 2013 dataset.
Yuan et al. [94] proposed the use of the angular second moment, entropy, inverse differential moment and correlation to form the feature vector. These parameters were used as textural features and were extracted from eight difference co-occurrence matrices.
Kumpituck et al. [98] proposed the wavelet-based local binary pattern, which is based on the utilization of the LBP for capturing the local appearance of the sub-band images. Prior to the utilization of the LBP, the fingerprint image was decomposed by the two-dimensional discrete wavelet transform. An SVM was used for classification. The proposed method achieved an ACE of 9.95% on the LivDet 2009-2013 datasets.
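The four statistics used by Yuan et al. [94] are standard gray-level co-occurrence matrix (GLCM) descriptors. The sketch below computes them with scikit-image over a plain GLCM rather than the eight difference co-occurrence matrices of the original work (which would require constructing difference images first); the distances and angles are illustrative defaults.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture_features(img):
    """ASM, entropy, inverse difference moment (homogeneity) and correlation."""
    img = np.asarray(img, dtype=float)
    img8 = np.uint8(255 * (img - img.min()) / (img.max() - img.min() + 1e-8))
    glcm = graycomatrix(img8,
                        distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    asm = graycoprops(glcm, "ASM").mean()
    idm = graycoprops(glcm, "homogeneity").mean()   # inverse difference moment
    corr = graycoprops(glcm, "correlation").mean()
    # Entropy is not provided by graycoprops, so compute it directly.
    p = glcm.reshape(256 * 256, -1)
    entropy = float(np.mean(-np.sum(p * np.log2(p + 1e-12), axis=0)))
    return np.array([asm, entropy, idm, corr])
```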
Xia et al. [99] proposed a method that utilized co-occurrence matrices constructed from image gradients. Second- and third-order co-occurrence matrices were used to train an SVM classifier. When third-order co-occurrence matrices were utilized as features, the proposed architecture achieved ACEs of 6.2% on the LivDet 2009 and 6.635% on the LivDet 2011.
González-Soler et al. [100] proposed a method based on the bag of words algorithm. After extracting the scale invariant feature transform (SIFT) features, they encoded them with a spatial histogram of visual words. The classification was performed by an SVM through a feature map. This architecture achieved an ACE of 4.7% on the LivDet 2011 dataset.
Kundargi and Karandikar [101] proposed a LBP texture descriptor with a wavelet transform that utilized the textural characteristics that differ in bona fide and artificial samples due to variations at the gray level of the image. The fingerprints were classified by linear and RBF kernel SVM classifiers. This method offered an ACE of 8.3% on the LivDet 2011 dataset.
Jiang and Liu [102] introduced a method that utilized the uniform local binary pattern in three-layer spatial Gaussian pyramids. This method achieved a 21.205% ACE on the LivDet 2011 dataset.
Mehboob et al. [103] proposed a novel descriptor known as the combined Shepard magnitude and orientation. This method extracts the global features of the fingerprint by computing the relation between the perceived Shepard magnitude and initial pixel intensities in the spatial domain. To achieve this, the fingerprint is considered as a two-dimensional vector. The descriptor first constructs perceived spatial stimuli by combining the logarithmic function of initial pixel intensities and the Shepard magnitude (SM) in the spatial domain. Next, the phase information (CO) is computed in the frequency domain. Finally, the SM and CO are concatenated and represented as a two-dimensional histogram. The rotation invariant version of LPQ was utilized for characteristic orientation computation. An SVM was used for classification and average error rates of 5.8%, 2.2% and 5.3% on the LivDet 2011, 2013 and 2015 datasets were achieved, respectively.
Xia et al. [104] also suggested a novel local descriptor entitled the Weber local binary descriptor (WLBD) that consists of two components that were used to extract intensity-variance and orientation features. The first is the local binary differential excitation module that captures the spatial structure of the local image patch and the second is the local binary gradient orientation module that is designed to extract gradient orientation from center-symmetric pixel pairs. The output of these components formed a discriminative feature vector [ Figure 9] that was used as the input for an SVM classifier. The WLBD achieved ACEs of 9.67% on the LivDet 2015 dataset, 1.89% on the LivDet 2013 dataset and 5.96% on the LivDet 2011 dataset.
In Ref. [105], a method utilizing guided filtering and hybrid image analysis was proposed. After performing ROI extraction and guided filtering to acquire the denoised image, the co-occurrence of adjacent LBPs (CoALBP) [106] descriptor was utilized to form the feature vector from the original and the denoised image. An SVM with an RBF kernel was used and the proposed method exhibited average accuracies of 94.33% on the LivDet 2011 and 98.08% on the LivDet 2013 datasets. Moreover, the method was 4.5 times faster than deep learning methods in terms of computation time.
Kumar and Singh [107] proposed a fingerprint authentication system with a PAD module, utilizing supervised learning with minutiae extraction and classification with an SVM. Features like homogeneity, contrast, energy, entropy and mean last histogram were extracted. The proposed module achieved an average performance of 96.06% on the FVS [108] and ATVS datasets.
Neural networks
Neural networks have been used for PAD with great success. Most of these methods share common steps, namely the segmentation of the fingerprint foreground from the background and the extraction of local image patches that contain the ROI. In recent years, researchers have utilized deep learning methods to detect PAs. Deep neural networks like CNNs have been extensively utilized for several security tasks like steganalysis [109,110], but nowadays they are also used for fingerprint PAD. CNNs are utilized either as feature extractors or to perform the classification themselves. There are also proposed methods in the literature that use transfer learning. Transfer learning is the reuse of a pre-trained neural network on a new problem, i.e., it exploits the knowledge gained from a previous task to improve the generalization to another. Other methods exploit either generative adversarial networks (GANs) or restricted Boltzmann machines for PAD. Table 6 summarizes the state-of-the-art methods that utilize a neural network (shallow or deep) as a feature extractor or classifier.
Menotti et al. [111] proposed a detection technique that utilized neural networks. One of their approaches utilized an optimized convolutional neural network and an SVM for classification. A second approach was based on the backpropagation algorithm for filter optimization. They concluded that the combination of the two approaches performed better and achieved an ACC of 98.97% on the LivDet 2013.
Nogueira et al. [112] proved that pretrained CNNs achieve high accuracy in PAD. In their work, they analyzed the false fingerprint detection performance of VGG [39] and AlexNet [113] [ Figure 10] that were trained on natural images and further tuned with fingerprint samples. The LivDet 2009, 2011 and 2013 datasets were used and the proposed CNN-VGG showed an ACE of 2.9% and won the LivDet 2015 competition.
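Fine-tuning an ImageNet-pretrained network for two-class PAD typically amounts to swapping the final classification layer and retraining on fingerprint data. The PyTorch sketch below illustrates this general recipe; the choice of VGG-16, the frozen feature extractor and the optimizer settings are assumptions for illustration, not the exact training setup of Ref. [112].

```python
import torch
import torch.nn as nn
from torchvision import models

def build_finetuned_vgg(num_classes=2, freeze_features=True):
    """ImageNet-pretrained VGG-16 adapted to bona fide vs. attack classification."""
    model = models.vgg16(weights="IMAGENET1K_V1")    # natural-image pretraining
    if freeze_features:
        for p in model.features.parameters():
            p.requires_grad = False                  # reuse the convolutional filters
    # Replace the last fully connected layer with a two-class head.
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)
    return model

model = build_finetuned_vgg()
optimizer = torch.optim.SGD((p for p in model.parameters() if p.requires_grad),
                            lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
# The usual training loop over fingerprint patches/images is omitted here.
```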
Kim et al. [114] proposed an architecture based on a deep belief network (DBN) with multiple layers of a restricted Boltzmann machine, trained on a set of bona fide and artificial samples. The proposed DBN, when trained with augmented data, achieved an average accuracy of 97.10% on the LivDet 2013 dataset.
Marasco et al. [115] experimented on the effectiveness of CNNs as PAD methods. They tested three CNNs that achieved the best accuracy on the LivDet 2013 dataset, namely, CaffeNet (96.5%), GoogLeNet (96.6%) and Siamese (93.1%). They also indicated that CNNs exhibited the ability to successfully adapt to PAD problems (if they were pre-trained on the ImageNet dataset [116]), with changes in the area under the curve in the range of -3.6% to +4.6%.
Pala and Bhanu [117] proposed a method based on a triplet of CNNs. Their method employs a variant of the triplet objective function that considers representations of fingerprint images, where the distance between feature points is used to discriminate artificial and bona fide samples. Their architecture scored an ACE of 1.75% on the LivDet 2009, 2011 and 2013 datasets.
Chugh et al. [18] proposed a method that was based on CNNs and minutiae information. For the training of the CNN, aligned and centered local patches (96 × 96 pixels) were utilized. They defined a "spoofness" score, which is the output of the softmax layer of the trained CNN. The "spoofness" score had a range of 0 to 1, where 1 denoted that the sample was artificial. They also crafted the Fingerprint Spoof Buster, which is a graphical user interface that permits the visual evaluation of the fingerprint by the human operator of the scanner. The proposed CNN was evaluated on the LivDet 2011, 2013 and 2015 datasets, as well as the MSU-FPAD and Precise Biometrics Spoof-Kit datasets. Experimental results showed a significant reduction in error rates, with an APCER lower than 7.3% and a BPCER of 1%.
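At inference time, a patch-based detector of this kind has to aggregate the per-patch network outputs into one fingerprint-level decision. The snippet below sketches the simplest such rule, averaging the patch softmax probabilities into a global "spoofness" score and thresholding it; the mean aggregation and the 0.5 threshold are assumptions rather than the exact rule of Ref. [18].

```python
import numpy as np

def spoofness_score(patch_probs, threshold=0.5):
    """Aggregate patch-level softmax outputs into a fingerprint-level decision.

    patch_probs: (N, 2) array of per-patch softmax probabilities,
                 column 1 being the probability that the patch is artificial.
    Returns the average spoofness in [0, 1] and the binary decision.
    """
    score = float(np.mean(patch_probs[:, 1]))   # 1 denotes an artificial sample
    return score, score >= threshold
```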
Jung and Heo [118] proposed a method in which the CNN was trained directly on fingerprints, used a squared-error loss layer and had no fully connected layer. The proposed architecture presented a higher average accuracy, of 95.3%, on the LivDet 2015 dataset compared to other existing methods.
Pinto et al. [119] utilized deep learning and concluded that it achieves very good performance in PAD tasks. The main drawback was the poor performance of deep learning when it encountered PAs performed with instruments that were either not included at all, or not included with a satisfactory number of samples, in the training data. They also concluded that the best reliability is obtained when models are based on both "handcrafted" and "data-driven" solutions.
Park et al. [120] proposed a patch-based PAD method that utilized a fully convolutional neural network, the so-called Fire module of SqueezeNet, which has fewer parameters and lower resource demands, requiring only 2.0 MB of memory. This method utilized an optimal threshold, as opposed to the voting method, which decreased the misdetection rate. This architecture achieved an ACE of 1.35% on the LivDet 2011, 2013 and 2015 datasets when the training was carried out with data augmentation and the patch size was 48 × 48 pixels.
Park et al. [121] proposed a convolutional network known as the patch-based CNN because it utilizes patches (small ROIs) of the fingerprint image. They used three categories for classification, i.e., live, artificial and background. The proposed architecture, when used on 48 × 48 pixel patches, achieved an ACE of 1.43% on the LivDet 2011, 2013 and 2015 datasets in a radically reduced execution time.
Zhang et al. [122] proposed a lightweight residual convolutional neural network (Slim-ResCNN) that requires less processing time and is robust to overfitting. Local patch extraction was based on statistical histograms and the center of gravity. The proposed method exhibited an overall accuracy of 95.25% on the LivDet 2017 dataset.
Yuan et al. [123] proposed a real-time fingerprint PAD method based on autoencoders. Automatic feature extraction was performed with the use of a stacked autoencoder. Unsupervised learning was used for pretraining, while the detection was performed with the use of supervised training. A Softmax classifier was utilized for classification. This method achieved ACEs of 19.62% on the LivDet 2013 and 18% on the LivDet 2015.
Pereira et al. [124] tackled the PAD generalization problem, especially for PAs performed with materials not seen in training, with the use of an adversarial training methodology. The proposed method was evaluated with an MLP classifier and a CNN. The regularized CNN approach achieved an APCER of 0.60% on the LivDet 2015 dataset.
Uliyan et al. [125] utilized discriminative restricted Boltzmann machines (DRBMs) in combination with a deep Boltzmann machine (DBM) to extract deep features from the acquired samples. A K-NN classifier was used for classification. The proposed DRBM-DBM architecture exhibited an ACE of 3.6% on the LivDet 2013 dataset.
Zhang et al. [126] proposed a lightweight CNN known as FLDNet to achieve improved PAD on new materials and to minimize complexity. To address the issue of global average pooling, an attention pooling layer was used. Moreover, a novel block structure (Block D&R) was introduced where the residual path is integrated into the original dense block. FLDNet exhibited ACEs of 1.76%, over all sensors, on the LivDet 2015 dataset and 0.25% on the LivDet 2013 dataset.
Jian et al. [127] proposed a densely connected convolutional network (DenseNet) along with a genetic algorithm utilized for network optimization. The genetic algorithm can automatically optimize DenseNet by finding the optimal structure from the solution space. The proposed model achieved 98.22% accuracy on a testing set containing the LivDet 2009-2015 datasets.
Fusion of features
In this category, approaches that combine features from the different aforementioned categories are presented. Moreover, hybrid methods that utilize both static and dynamic features will be discussed [128-134]. These methods exploit the advantages and minimize the drawbacks of the aforementioned PAD methods, as these were presented in Dynamic methods, Anatomical or physiological features, Image quality features, Textural features and Neural networks. A summary of the presented methods is given in Table 7.
Derakhshani et al. [128] proposed a method that detected the perspiration phenomenon and it was based on static and dynamic features acquired from two fingerprint samples captured with a time interval of 5 s. One static and four dynamic features were used as input to a back-propagation neural network, which was utilized for classification. The dataset for training and testing was comprised of 18 sets of fingerprint samples from live individuals, 18 from cadavers and 18 from spoof materials. On this dataset, the proposed scheme achieved an accuracy of 100%.
Parthasaradhi et al. [129] proposed a method that depends on static features and on the changes due to perspiration in fingerprint images taken at 0, 2 and 5 s. By deploying a weight decay method during the training of a neural network classifier, they concluded that there was a significant improvement in performance. On an image sequence dataset of 33 live subjects, 14 cadaver fingers and 33 artificial samples, their method achieved an FLR of 0% and an FSA in the range of 0%-18.2% depending on the sensor technology.
Ref. | Year | Dataset | Feature extraction | Results
Derakhshani et al. [128] | 2003 | Own dataset | Detection of perspiration phenomenon | Accuracy of 100%
Parthasaradhi et al. [129] | 2004 | Own dataset | Detection of perspiration phenomenon | FLR of 0% and FSA in the range of 0%-18.2% depending on the sensor technology
Parthasaradhi et al. [130] | 2005 | Own dataset | Detection of perspiration phenomenon | FLR in the range of 6.77%-20% and FSA in the range of 5%-20% for optical; FLR in the range of 0%-26.9% and FSA in the range of 4.6%-14.3% for capacitive; FLR in the range of 6.9%-38.5% and FSA in the range of 0%-19% for electro-optical
Tan and Schuckers [131] | 2005 | Own dataset | Static and dynamic features acquired from intensity histograms | FLR of 0% and FSA of 8.3% for optical sensor; FLR of 6.7% and FSA of 0% for capacitive sensor; FLR of 7.7% and FSA of 5.3% for electro-optical sensor
Tan and Schuckers [132] | 2006 | Own dataset | Static and dynamic features acquired from intensity histograms | Accuracy in the range of 90% to 100% for some scanners
Jia and Cai [133] | 2007 | Own dataset | Detection of perspiration and skin elasticity | EER of 4.49%
Plesh et al. [134] | 2019 | Own dataset | Detection of perspiration, skin elasticity and displacement of blood | Mean APCER of 3.55% at 1.0% BPCER, mean APCER of 0.626% at 0.2% BPCER and standard deviation of 1.96% at 1.0% BPCER
Nogueira et al. [136] | 2014 | LivDet 2009, 2011, 2013 | CNN and LBP for feature extraction | ACE of 4.71%
Yuan et al. [137] | 2019 | LivDet 2013 | BP neural network | ACE of 6.78%
Agarwal and Chowdary [139] | 2020 | LivDet 2011 | Stacking and bagging ensemble learning approaches | Stacking average accuracy of 80.76%; bagging average accuracy of 75.12%
Anusha et al. [140] | 2020 | LivDet 2011, 2015, 2017 | Global image and local patch features extracted with DenseNet and attention modules | Average accuracies of 99.72%, 99.16% and 99.52%, respectively
Li et al. [144] | 2020 | LivDet 2011, 2013, 2015 | Fusion of SIFT, LBP and HOG features | ACEs of 4.6%, 3.48% and 4.03%, respectively
In an extension of their previous work [129], Parthasaradhi et al. [130] used several classification methods, a shorter time window and a more diverse dataset, and included other fingerprint sensor technologies. One static and six dynamic measures were used for classification and all classifiers achieved approximately 90% accuracy. More specifically, on a dataset of 75 samples per scanner, the proposed method achieved an FLR in the range of 6.77%-20% and an FSA in the range of 5%-20% for an optical scanner, and an FLR of 0%-26.9% and an FSA of 4.6%-14.3% for a capacitive scanner. Finally, the same method exhibited an FLR of 6.9%-38.5% and an FSA of 0%-19% for electro-optical scanners.
Tan and Schuckers [131] proposed a PAD method that depends on the static and dynamic features acquired from intensity histograms of the 0 and 5 s images of a fingerprint. On a dataset of sequences of images (30 bona fide, 40 artificial and 14 cadaver fingerprints) captured by three different scanners, their method exhibited an FLR of 0% and an FSA of 8.3% for an optical sensor. The same method exhibited an FLR of 6.7% and an FSA of 0% for a capacitive sensor, and an FLR of 7.7% and an FSA of 5.3% for an electro-optical sensor.
Tan and Schuckers [132] extended their previous work [131] , by reducing the time between capturing fingerprint samples to 2 s instead of 5 s. On a dataset of 58 live, 50 artificial and 25 cadaver fingerprints, the augmented method showed an accuracy of 90%-100% for some scanners by using a classification tree.
Jia and Cai [133] proposed an extension of their previous work [50]. In this study, five features were utilized. Two of them represented skin elasticity, whilst the other three represented perspiration. This method computed two static features, while the remaining three were dynamic features. The proposed scheme was tested on a dataset comprising 770 image sequences and achieved an EER of 4.49%.
Plesh et al. [134] used a sensor with time-series and color sensing capabilities to capture a grayscale static image and a time-series color capture simultaneously. A dynamic color capture has the ability to measure signs such as perspiration, skin elasticity deformation and the displacement of blood that occurs when a finger is pressed. In their work, the second (dynamic) capture was utilized for classification with two methods. Initially, static-temporal feature engineering was utilized and then the InceptionV3 CNN [135] trained on ImageNet was used for classification. The classification performance was evaluated with the use of a fully connected DNN utilizing solely static or dynamic features and a fusion of the two feature sets. On a custom dataset comprising over 36,000 image sequences and a state-of-the-art set of PA techniques, the approach that utilized the fusion of both static and dynamic features achieved a mean APCER of 3.55% at a 1.0% BPCER operating point, a mean APCER of 0.626% at a 0.2% BPCER operating point and a standard deviation of 1.96% at 1.0% BPCER.
Nogueira et al. [136] evaluated the efficiency of two feature extraction techniques, combined with data augmentation, on an SVM classifier. After the preprocessing step, the two feature extraction techniques, i.e., a CNN and LBP, were applied and randomized PCA was utilized for dimensionality reduction. The proposed CNN, with the use of augmented data, exhibited an ACE of 4.71% on the LivDet 2009, 2011 and 2013 datasets.
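A minimal scikit-learn sketch of this kind of fusion pipeline is shown below: pre-extracted CNN and LBP feature matrices are concatenated, reduced with randomized PCA and classified with an SVM. The component count, kernel choice and the names X_cnn, X_lbp and y are illustrative placeholders, not values from Ref. [136].

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def build_fusion_classifier(n_components=100):
    """SVM on PCA-reduced, fused feature vectors (e.g., CNN + LBP features)."""
    return make_pipeline(StandardScaler(),
                         PCA(n_components=n_components, svd_solver="randomized"),
                         SVC(kernel="rbf", probability=True))

# X_cnn and X_lbp are (n_samples, d1) and (n_samples, d2) feature matrices
# extracted beforehand; y holds the bona fide (0) / attack (1) labels.
# clf = build_fusion_classifier()
# clf.fit(np.hstack([X_cnn, X_lbp]), y)
```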
Yuan et al. [137] suggested a backpropagation neural network that utilized gradient values calculated by the Laplacian operator. They also presented a system that used the methods shown in Ref. [94] and Ref. [138] to create more productive input data and category labels. This architecture offered an ACE of 6.78% on the LivDet 2013 dataset.
Agarwal and Chowdary [139] proposed the utilization of stacking and bagging ensemble learning approaches in fingerprint PAD. The suggested algorithms considered the similarities of the datasets utilized for PAD. Moreover, they are adaptive because they adjust to the features extracted from bona fide and artificial fingerprint samples. The proposed algorithms achieved better performance, in terms of accuracy and false positive rate, than the best individual base classifier. More specifically, the stacking average accuracy was 80.76%, while the bagging average accuracy was 75.12% on the LivDet 2011 dataset.
Anusha et al. [140] proposed an architecture that utilized global image and local patch features. It used LBP and Gabor filters in the preprocessing stage to extract features using DenseNet [141]. For the extraction of local patch features, a second DenseNet was used in combination with a channel and spatial attention network module [142]. For patch discrimination, a novel patch attention network was proposed. This network was also used for feature fusion. This method showed average accuracies of 99.52%, 99.16% and 99.72% on the LivDet 2017, 2015 and 2011 datasets, respectively.
Agarwal and Bansal [143] proposed the fusion of pore, perspiration and textural features for PAD. The dimensionality of the extracted feature vector was reduced with the use of a stacked autoencoder pretrained in a greedy layer-wise manner. A Softmax classifier trained in a supervised manner was utilized for classification. In terms of performance, this method achieved ACEs of 0.1866% on the LivDet 2013 and 0.3233% on the LivDet 2015 dataset.
Li et al. [144] used features extracted with three algorithms, i.e., SIFT, LBP and HOG. As a result, the benefits of these algorithms were combined and the overall performance was increased. A fusion rule was developed to fuse the features and the produced feature vector was then classified by an SVM. This method showed ACEs of 4.6% on the LivDet 2011 dataset, 3.48% on the LivDet 2013 dataset and 4.03% on the LivDet 2015 dataset.
Sharma and Selwal [145] proposed a method that utilized majority ensemble voting based on three local and adaptive textural image features. The features were acquired with a new LBP variant descriptor called the local adaptive binary pattern (LABP), combined with features gathered with the use of the CLBP and BSIF descriptors. This method achieved ACERs of 4.11%, 3.19%, 2.88% and 2.97% on the LivDet 2009, 2011, 2013 and 2015 datasets, respectively.
Generalization efficient/wrapper methods
In this category, PAD methods are presented that focus on the efficiency against PAIs made with materials not used during training. Moreover, the performance of methods that were evaluated on novel PAI materials is reported. Some of the presented PAD methods can be used as add-ons or wrappers to any PAD method in order to improve the performance against PAIs of unknown materials. A summary of the presented methods is given in Table 8.
Rattani and Ross [146] proposed the creation of a novel material detector that detects PAIs made of novel materials. These samples were then used to automatically retrain and update the PAD method. To keep the computational complexity low, the automatic adaptation procedure was executed while the presentation attack detector was offline. This scheme was evaluated on the LivDet 2011 dataset and exhibited an average correct detection rate of up to 74% and an up to 46% improvement in presentation attack detection performance when the adaptive approach was utilized.
Jia et al. [147], in order to address the issue of the lack of knowledge of the materials used for artificial fingerprints, suggested a one-class SVM with negative examples (OCSNE). The OCSNE showed an ACE of 23.6% on a version of the LivDet 2011 dataset modified according to the needs of the evaluation, which was better than an SVM.
Rattani et al. [148] proposed to handle PAD as an open set recognition problem. The authors claimed that their approach is useful because, during deployment, novel materials different from the ones that the system was trained on may be used to construct artificial fingerprints. In their work, a Weibull-calibrated SVM (W-SVM) was used as a novel material detector and as a PAD. They also developed a scheme to automatically adapt the detector to PAIs made of novel materials; on the LivDet 2011 dataset, this approach showed a 44% improvement in performance compared to other methods.
Ref. | Year | Dataset | Method | Results
Rattani and Ross [146] | 2014 | LivDet 2011 | Automatic adaptation to novel materials by the use of a novel material detector | Average correct detection rate of up to 74% and an up to 46% improvement in performance
Jia et al. [147] | 2014 | Modified version of the LivDet 2011 | One-class SVM with negative examples | ACE of 23.6%
Rattani et al. [148] | 2015 | LivDet 2011 | Weibull-calibrated SVM | 44% improvement in performance compared to other methods
Sequeira and Cardoso [149] | 2015 | LivDet 2013 | Semi-supervised classification based on a mixture of Gaussians models | ACE of 8.35%
Nogueira et al. [112] | 2016 | LivDet 2011, 2013 | CNN-VGG evaluated against PAIs not seen in training | Average ACEs of 16.1% on the LivDet 2011 and 5.45% on the LivDet 2013
Gajawada et al. [151] | 2019 | LivDet 2015 | Universal Material Translator with a generative adversarial network | BPCER1000 of 21.96% on unknown PAIs
Chugh and Jain [152] | 2019 | MSU-FPAD | A deep convolutional neural network that utilized local patches centered and aligned using fingerprint minutiae | Average generalization performance of TDR = 75.24% when the leave-one-out method was used; TDR of 97.20% with an FDR of 0.2% when all PAIs were used in training
Sequeira and Cardoso [149] evaluated several classification methods and concluded that semi-supervised classification based on a mixture of Gaussians models yields better results. Moreover, they proposed the isolation of the fingerprint from the background by adding an automatic segmentation stage to the detection algorithms. The best method they evaluated exhibited an ACE of 8.35% on the LivDet 2013 datasets.
Nogueira et al. [112] tested their PAD method against attacks with PAIs not seen in training, and their method based on a CNN-VGG achieved average ACEs of 16.1% on the LivDet 2011 dataset and 5.45% on the LivDet 2013 dataset.
Ding and Ross [150], in their work based on performance metrics on the LivDet 2011 dataset, showed that using an ensemble of one-class SVMs based on descriptors that utilize different features achieves better accuracy than binary SVMs and also competes in performance with other state-of-the-art PAD algorithms that are robust to fabrication materials. Another advantage of this method is the limited number of artificial fingerprints required for training. The proposed method achieved an average correct detection rate of 86.1% on known PAIs and of 84.7% on unknown materials, which is higher than the automatic adaptation method presented in [148]. Pala and Bhanu [117] also evaluated their PAD scheme against unknown attacks and their method achieved average ACEs of 10.05% on the LivDet 2011 and 3.35% on the LivDet 2013.
Gajawada et al. [151], to improve the efficiency of any PAD method, especially against new materials, developed the Universal Material Translator (UMT) as a deep learning augmentation wrapper. They proposed the synthesis of artificial samples by utilizing only a small part of them. Along with the UMT, they also used a GAN. Although the authors believe that the combination of a UMT and a GAN produces better results, GANs insert certain artefacts and noise into the generated images that are detectable and negatively affect the performance of the classifiers. Their method was tested on the LivDet 2015 dataset and demonstrated a BPCER1000 of 21.96% on unknown PAIs.
Chugh and Jain [152] experimented on the efficiency of the so-called spoof buster, a PAD wrapper developed in Ref. [18], which can be used on top of any PAD method to improve generalization and efficiency, especially against PAIs not seen in training. The spoof buster used a deep convolutional neural network that utilized local patches centered and aligned using fingerprint minutiae. The MSU-FPAD dataset was utilized and their method achieved a weighted average generalization performance (TDR) of 75.24% when the leave-one-out method was used, as opposed to a TDR of 97.20% with an FDR of 0.2% when all PAIs were used in training.
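The leave-one-out protocol referred to here holds out one PAI material at a time, trains on the rest and measures how well attacks of the unseen material are flagged. A hedged sketch of such an evaluation loop is given below, using a plain SVM as a stand-in classifier and reporting the fraction of held-out attacks detected (a TDR-like figure); the actual detector and metrics of Ref. [152] differ.

```python
import numpy as np
from sklearn.svm import SVC

def leave_one_material_out(X, y, materials):
    """Generalization check against unseen PAI materials.

    X: feature matrix; y: labels (0 = bona fide, 1 = attack);
    materials: per-sample PAI material name ('' for bona fide samples).
    For each material, the classifier is trained without it and evaluated
    on how many of the held-out attacks it correctly flags.
    """
    X, y, materials = np.asarray(X), np.asarray(y), np.asarray(materials)
    results = {}
    attack_materials = sorted(set(materials[y == 1]))
    for held_out in attack_materials:
        train = materials != held_out            # drop the unseen material
        test = materials == held_out             # attacks of that material only
        clf = SVC().fit(X[train], y[train])
        # Fraction of unseen-material attacks correctly detected.
        results[held_out] = float(np.mean(clf.predict(X[test]) == 1))
    return results
```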
Engelsma and Jain [153] proposed the utilization of three GANs on a training set that contained only bona fide samples. Their method showed improvement in cross-material performance, compared to one class or binary state-of-the-art classifiers, on their own dataset that contains 5531 artificial samples (from 12 materials) and 11,880 bona fide samples. For this dataset, it achieved a BPCER500 of 50.20%.
The Slim-ResCNN [122] also demonstrated, to some extent, robustness against attacks with new materials and presented an accuracy of 96.82% on the LivDet 2015 dataset, whose testing sets consisted of artificial fingerprints made of unknown materials.
Park et al. [121] also evaluated their proposed method on the LivDet 2015 and concluded that it is effective against attacks made with unknown materials. The patch-based CNN exhibited an ACE of 1.9%.
Grosz et al. [154] suggested the use of adversarial representation learning in DNNs. Their proposal can be added to any CNN that uses 96 × 96 aligned minutiae-centered patches for training, along with the utilization of a style transfer network wrapper. Their method achieved a TDR of 92.94% at an FDR of 0.2% on the LivDet 2011 and 2015 datasets and on the MSU-FPAD dataset.
Zhang et al. [126] also tested their proposed method against attacks from new materials and achieved an ACE of 3.31% on the LivDet 2015.
González-Soler et al. [155] suggested three techniques for PAD. Initially, the pyramid histogram of visual words was utilized for extracting the local features of the fingerprint with the use of dense SIFT descriptors. Afterwards, the feature vector was formed with the utilization of three methods: (1) bag-of-words; (2) Fisher vector (FV); and (3) vector locally aggregated descriptors. A linear SVM was used for classification. The FV encoding achieved the best detection accuracy on the LivDet 2019 competition. Fusion of the three encodings achieved even better performance and yielded a BPCER100 in the range of 1.98%-17% in the presence of unknown PAI species.
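Of the three encodings, the bag-of-words variant is the simplest to sketch: dense SIFT descriptors are extracted on a regular grid, assigned to a visual vocabulary learned with k-means and pooled into a normalized histogram that a linear SVM can consume. The grid step, descriptor size and vocabulary size below are illustrative assumptions, not the settings of Ref. [155].

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def dense_sift(img, step=8, size=8):
    """Dense SIFT descriptors on a regular grid (img: uint8 grayscale array)."""
    sift = cv2.SIFT_create()
    h, w = img.shape
    kps = [cv2.KeyPoint(float(x), float(y), float(size))
           for y in range(step, h - step, step)
           for x in range(step, w - step, step)]
    _, desc = sift.compute(img, kps)
    return desc

def bow_encode(desc, codebook):
    """Hard-assignment bag-of-words histogram of one fingerprint's descriptors."""
    words = codebook.predict(desc)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-8)

# The visual vocabulary is learned once from descriptors pooled over the
# training set, e.g.:
# codebook = KMeans(n_clusters=256).fit(np.vstack(all_training_descriptors))
```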
Chugh and Jain [156] proposed the utilization of the Universal Material Generator (UMG) for the performance improvement of any PAD method against unknown materials [ Figure 11]. The UMG is a CNN trained on the characteristics of the known materials that artificial fingerprints are made of, in an effort to synthesize artificial samples of unknown materials. The UMG improved the performance by increasing the TDR from 75.24% to 91.78% at an FDR of 0.2% and the average cross-sensor presentation attack detection performance from 67.60% to 80.63% on the LivDet 2017 dataset.
DISCUSSION
Biometric authentication systems have been widely used in recent years and therefore PAD has become crucial. This literature review revealed the immense research in the field of fingerprint PAD. Hardware-based approaches rely on the detection of signals that confirm that the subject of the recognition process is a genuine one. Although hardware-based approaches present higher performance and reliability, they are intrusive and require extra capturing hardware added to the sensor of the fingerprint recognition scheme, which comes at great expense and in some cases adds a time delay to the verification process [48]. These are the main reasons that a relatively small number of hardware-based solutions, in contrast to software-based methods, can be found in the literature. Furthermore, software PAD methods have the potential to protect against security vulnerabilities that are not categorized as PAs, e.g., attacks utilizing modified samples at the communication channel between the feature extraction module and the sensor [14].
Software-based PAD methods can be added to any fingerprint recognition system, without any extra cost and without modifying the sensor. The criteria of the taxonomy of software-based approaches presented in this study were the type of technique or the kind of features used for the information extraction from the fingerprint prior to classification. Every category of this taxonomy presents advantages, disadvantages and research opportunities.
PAD methods that utilize dynamic features exhibited a promising level of accuracy. Their main drawback is the high time consumption caused by the time interval between the capture of samples and the extraction of the dynamic features. This fact makes them unsuitable for real-time authentication, as noted by Nikam and Agarwal [157]. Another drawback is user inconvenience, since some methods in this category depend on certain movements of the user's finger. These are the main reasons why dynamic PAD solutions are very rarely met in recent literature.
PAD methods that exploit anatomical or physiological features rely on the unique characteristics of sweat pores and the perspiration phenomenon, which have substantial strength and discriminative power. These methods in some cases exhibit computational simplicity, but they also present high intra-class variability and usually require more than one image of the fingerprint, thus making the process slower and increasing the computation time. According to Nikam and Agarwal [157], perspiration-based methods are not efficient for real-time authentication. Researchers have worked mainly in the direction of rectifying these disadvantages, but research interest has focused on approaches based mainly on CNNs and texture descriptors due to the advantages they exhibit. Recent works on PAD methods that rely solely on anatomical or physiological features are rare, but as shown by the work of Agarwal and Bansal [143], they are considered good supplementary methods to approaches that fuse features.
PAD methods based on image quality features rely on the diversity between bona fide and artificial samples due to the coarseness of the surface of the fingerprint. This diversity exists on account of the agglomeration that happens during processing of the materials used for artificial samples because of the large organic molecules of these materials [71] . The properties used for image quality-based PAD methods present different strengths, discriminative power and weaknesses. The main advantages of these methods rely mostly on their simplicity and the low computational complexity, thereby achieving fast response times [14] . The major disadvantages are that the classifier's efficiency depends on the kind of PA [158] and the performance depends on environmental, scanner and user related (usability) conditions [70,74] . Although image quality PAD methods have been used commercially, researchers' attention has been focused mainly to local texture descriptors and CNNs that present superior performance. Nevertheless, their computational simplicity makes them ideal for approaches that utilize the fusion of features.
Local texture descriptors are straightforward, rapid and can be implemented in real-time fingerprint recognition schemes. Local descriptors are largely unaffected by photometric, rotational and geometric effects. This fact is one of the reasons why these descriptors achieved superior performance in PAD approaches [28]. Due to their accurate target localization, they generally perform better in cross-dataset and cross-material experiments than their neural network counterparts, which perform best in cross-sensor experiments [159]. There are three major drawbacks of these methods: (1) the performance of the descriptors relies on the type of sensor used for the acquisition of the sample [160]; (2) a large dataset is required for training [158]; and (3) local texture descriptors generalize poorly against PAs accomplished with the use of materials not encountered in training [148].
Ghiani et al. [78] conducted experiments with features based on pore detection, ridge wavelets and several textural features. They concluded that the best method was LBP, although it has the disadvantages of being sensitive to image rotation and the need for longer computational time due to its long histogram.
According to González-Soler et al. [161], DSIFT-based encoding achieves the best performance against unknown materials. González-Soler et al. [79] also reported that gradient-based features, such as black saturation, white saturation, lack of continuity, unwanted noise and ridge distortions, achieve the best performance among other descriptors, and that the fusion of gradient and textural features in particular presents even better performance.
Local feature descriptors are a very active field of PAD. There are many opportunities for research, especially concerning the creation of new feature descriptors. According to Sharma and Dey [162] , methods for the accurate extraction of the contrast and orientation of the fingerprint image, which is mandatory for the design of a new descriptor, can be found in the literature.
CNNs provide an excellent solution to image recognition and have been used in many fields beyond computer vision, such as information security. CNN-based PAD methods provide promising accuracy, but there are two drawbacks that limit their usage in commercial fingerprint recognition systems. The first is that these methods are sensor and material dependent. This fact makes them susceptible to PAs with unknown materials or with different capture devices. The cause of this limitation could be that the learning methods of these approaches utilize several filters that rely on known attacks and combine convolutional, pooling and fully connected layers, which do not generalize well [79]. The second drawback is that the requirements of these methods regarding memory and computational time are high [162], making them unsuitable for usage in low-resource environments, such as smartphones. This is the reason why researchers that use deep learning methods are noticeably turning to solutions that require less processing time.
The use of local patches instead of the whole fingerprint image is also widespread. Local patches are small regions of interest of the fingerprint image. Another major limitation of these methods lies in the number of training patterns. CNNs are complex algorithms that need tens of thousands, in some cases millions, of training patterns to perform and generalize well. Therefore, new datasets or mixtures of datasets should be utilized. A different approach could be the augmentation of the datasets to provide more bona fide and artificial samples. Nogueira et al. [136] suggested the augmentation of the dataset by the artificial creation of images that present uneven illumination and random noise. They also proposed that different classifiers should be trained for different transformation types. Another possible solution could be the adoption of other deep learning approaches like one-shot learning [163].
Raja et al. [164] concluded that handcrafted textural features achieve the best performance on capacitive sensors, whereas naturally learned features achieve optimal performance on thermal and optical sensors. They also suggested the use of deep learning-based approaches with the utilization of large data sets for the creation of new reliable PAD methods. Pinto et al. [119] suggested that reliable PAD models should rely on both "hand-crafted" and "data-driven" solutions.
In general, feature fusion approaches exhibit superior accuracy compared to their single-feature counterparts. Furthermore, their performance is competitive with state-of-the-art PAD methods. In Ref. [18], answers to the major challenges that feature fusion methods face are provided. The authors concluded that fusion is more effective at the feature level than at the decision level. Furthermore, proper transformation of the different views into a common latent space is the best method for the harmonization or normalization of the features used for fusion. Moreover, subspace transformation is best suited for reducing the dimensionality of the classification space. Finally, deep learning methods are suitable for automatically learning the way diverse features aggregate.
A common limitation of all the aforementioned PAD categories is their poor generalization against unknown materials. This is the main reason that new approaches (discussed in Generalization efficient/wrapper methods) have been proposed by researchers. These approaches focus on increasing performance against PAIs not seen in training and, in the case of wrappers (add-ons), can be used on top of any PAD method with a positive impact on performance. The drawback of the latter type of approach is the increase in computational time and memory usage. Nogueira et al. [112] suggested that the PAD generalization error and the performance drop in the presence of PAIs not used in training are mostly due to new sensors rather than new materials. Tuveri et al. [165] also reported the low level of interoperability among different sensors, which stems from the influence of each sensor's unique characteristics on the image properties and, more specifically, on the corresponding feature space.
Marasco and Sansone [166] concluded that PAD methods that rely on multiple PAD features are more robust against PAs realized with the use of materials not used in training. Finally, Marasco et al. [115] noted that CNNs exhibit the ability to successfully adapt to PAD problems with the use of pre-training on ImageNet.
User authentication systems that rely on a single biometric trait suffer from vulnerabilities due to poor data quality and scalability. Multimodal biometrics utilize data acquired from different sources, resulting in better performance and reliability [167] . This is the direct outcome of the fusion of data from different sources, which makes it possible to extract more distinctive features than those extracted by unimodal systems [168] . Furthermore, these systems acquire data from different sensors, making them more robust against illumination conditions and other sensor-related factors that have a negative impact on performance [169] . A key factor in multimodal systems is the level at which the fusion of the information is accomplished. Fusion at the extraction level does not present advantages, since there is a significant amount of data to be fused. Fusion at the matcher score level has attracted significant attention from researchers because of its simplicity. Finally, the best authentication performance is expected when fusion is performed at the decision level. The disadvantage of this approach is that the extracted feature vectors may be incompatible [170] . In the literature, methods have been proposed in which the fingerprint is used in conjunction with other biometric traits, such as face and iris recognition [171,172] , face and speech [173] , and face [174] .
Moreover, ECG signals were used along with the fingerprints with promising results, as discussed in HARDWARE-BASED PRESENTATION ATTACK DETECTION [45,46,175-177] . Other cognitive factors, like EEGs, may also be utilized in conjunction with fingerprints. However, the aforementioned biometric traits require explicit, and sometimes expensive, capture equipment.
Another authentication scheme that presents a high level of security is the n-factor authentication scheme, where n denotes the number of combined factors. The factors may be knowledge, possession or inherence based. In the knowledge category, factors like personal ids, usernames or identification numbers are included. Possession-based factors include one-time password tokens, ID cards and smart cards. Inherence factors include any biological trait [178] . These systems utilize cryptography to improve security.
He and Wang [179] proposed that the user should use their smart card, input their password and id, and then utilize their personal biometric impression. This system employed curve cryptography to further improve security. Qiu et al. [180] proposed a similar authentication scheme that utilized the "fuzzy-verifiers" [181] and "honeywords" [182] techniques along with chaotic maps for mobile lightweight devices. In n-factor authentication schemes, the fingerprint is one of the most used biometric traits [183-189] .
By analyzing Tables 2-8, we reach the following conclusions:
- The majority of the presented state-of-the-art methods make use of the LivDet datasets. Therefore, they can be considered benchmarks.
- There is a clear shift in the research, especially since 2015, toward textural features and deep learning methods, especially CNNs.
- The datasets utilized by the authors of the presented publications consist of only a few thousand samples. The distribution of training and test sets for each of the most utilized datasets is shown in Table 1.
Finally, Table 9 presents a comparative analysis of the aforementioned PAD techniques, highlighting the advantages and disadvantages of each category.
Research challenges and potential research directions
The presentation of the research methods proposed in the literature highlighted the current trends along with the advantages and disadvantages of each PAD category. The research challenges and potential research directions that emerged from this review are presented in this section.
The first challenge concerns the data used to train the classifiers. The majority of the presented methods are sensor and dataset dependent, i.e., the training and test sets come from the same sensor and the same dataset. "Good" data result in better training and eventually better classification results. A "good" dataset should be balanced and composed of samples acquired from different sensors (the more the better). A thorough analysis of Table 1 reveals that only bona fide and artificial samples are present in the datasets. Nevertheless, as science evolves, the possibility that someone attacks a biometric system with natural samples, i.e., transplanted hands, fingerprints made of natural skin, or plastic surgery results, becomes higher. Thus, the models should also be trained with datasets comprising these biological materials. However, there is a significant lack of such data, and therefore research in this field, i.e., biological presentation attack detection, is limited. Hence, sensor and dataset interoperability remain major unresolved problems that have not been given much attention yet.
Regarding the methods mainly utilized by researchers, it is obvious that during the last five years there has been a shift toward deep learning methods, especially CNNs. These methods provide promising accuracy, but the drawbacks that limit their usage in commercial fingerprint recognition systems must be addressed. CNNs are sensor and material dependent. The only effective way to overcome the first drawback is to create and utilize datasets comprising samples acquired from more subjects and more sensors, and containing biological materials. Moreover, the required memory and the computational complexity are high [162] , which makes these methods unsuitable for low-resource environments, such as mobiles or other wearable devices. On this basis, the generalization of the methods should be improved. Nevertheless, the more data we use, the more time is needed to train the deep networks. Thus, new approaches like transfer learning and one-shot or zero-shot learning are attracting more attention due to the fewer computational resources they require.
Another interesting research direction is auxiliary supervision. As mentioned in DATASETS, fingerprint PAD is considered a binary classification problem. Nevertheless, the methods proposed in the literature show poor generalization, i.e., poor performance on unseen data. To tackle this, auxiliary learning [190] could be helpful. Recent research showed that auxiliary supervision with end-to-end learning provides better anti-spoofing [191] .
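To make the idea concrete, the following is a hedged PyTorch sketch of one common way to combine the primary binary PAD loss with an auxiliary target trained end to end; the auxiliary target, network layout, and loss weight are hypothetical illustrations and are not taken from [190] or [191].

```python
# Hypothetical sketch of auxiliary supervision for PAD: the network is trained
# on the binary bona fide / attack label plus an auxiliary regression target;
# names, targets, and the 0.5 weight are illustrative assumptions.
import torch
import torch.nn as nn

class AuxSupervisedPAD(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.cls_head = nn.Linear(16, 1)   # primary task: bona fide vs. attack
        self.aux_head = nn.Linear(16, 1)   # auxiliary target (assumed available)

    def forward(self, x):
        feats = self.backbone(x)
        return self.cls_head(feats), self.aux_head(feats)

model = AuxSupervisedPAD()
bce, mse = nn.BCEWithLogitsLoss(), nn.MSELoss()
x = torch.randn(8, 1, 32, 32)                  # batch of fingerprint patches
y_cls = torch.randint(0, 2, (8, 1)).float()    # binary PAD labels
y_aux = torch.rand(8, 1)                       # auxiliary targets (illustrative)
logits, aux = model(x)
loss = bce(logits, y_cls) + 0.5 * mse(aux, y_aux)  # weighted end-to-end objective
loss.backward()
```

The intuition behind such designs is that forcing the shared backbone to predict an additional attack-related signal regularizes the learned features, which may help generalization to unseen data.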
It must also be noted that biometric sensors are no longer found only in access control systems. Nowadays, the majority of mobile and other wearable devices make use of such sensors. The data acquired from these sensors (GPS, accelerometer, gyroscope, magnetometer, microphone, NFC, and heart rate monitors) can be utilized to distinguish a person. However, since these data are heterogeneous, methods that analyze them and extract a concrete representation of them should be developed.
Multimodal biometrics is another research field that requires more attention, especially when one or more of the modalities concern the so-called cognitive biometrics. Cognitive biometrics concerns bio-signals, like the EEG, ECG, or the electrodermal response, which are generated by the brain, the heart, and the nervous system, respectively. These modalities, when combined with the physiological ones, i.e., fingerprint, iris, and so on, may provide enhanced security.
Finally, a major concern, and simultaneously a research challenge that needs to be addressed, is the privacy of the individual. The acquired samples, regardless of their type (behavioral/cognitive) or the way they have been obtained, may be used either to contravene the privacy of the subject or to profile them. It is therefore crucial to create relevant legislation describing how such data should be handled without violating the privacy of the individual. Furthermore, it is also vital that researchers ensure that the methods they propose consider privacy issues.
CONCLUSIONS
A comprehensive review of presentation attack detection methods has been presented. Both approaches to presentation attack detection, i.e., hardware and software based, were thoroughly analyzed, and a taxonomy of PAD methods was revisited. This literature review highlighted all recent advances in this field and pointed out areas for future research to help researchers design more secure biometric systems.
The absorptive capacity of incubated enterprises and innovation actions in the context of agribusiness in Paraíba, Brazil
Purpose: This article analyzes the absorptive capacity and innovation actions of companies incubated in the Agribusiness Incubator of Cooperatives, Community Organizations, Associations, and Rural Settlements of the Semiarid Region of Paraíba (Iacoc). Originality/value: The value of this research lies in validating the literature and the effectiveness of the process by which incubators foster incubated enterprises, whose transfer of resources/knowledge can generate innovation. We present the theoretical relationship between absorptive capacity and innovation actions in the context of incubated enterprises, inspiring management practices for different types of innovation, including in specific contexts such as agribusiness. Design/methodology/approach: We carried out a qualitative multicase study whose analysis focused on six projects incubated at Iacoc. Documentary research and semi-structured interviews with the leading managers were used, and the ATLAS.ti software processed the data. Subsequent content analysis framed the data into four main categories: absorptive capacity combined with product, marketing, process, and organizational innovations. Findings: Results show that the analyzed projects presented advancements from the absorptive capacity realized, with the generation of innovations in management and planning, people management, the direction of production techniques, product design characteristics, and access to markets. We argue that the theoretical association of the fields of absorptive capacity and innovation in the context of incubated enterprises can trigger substantial gains for the thematic area and for the actors of innovation ecosystems: the capture of knowledge by companies, innovation actions, the transformation of those involved in the initiatives, and socioeconomic impact.
INTRODUCTION
For an adequate holistic view, companies need to know how to deal with the knowledge they have and the knowledge they envision acquiring, and how innovative they must be to sustain themselves. In this regard, absorptive capacity, resulting from the initial contribution of Cohen and Levinthal (1990), is an important vector, as it acts as the ability to recognize the value of new information, assimilate it, and direct it, initially foreseen only for commercial purposes. Therefore, it is assumed that absorptive capacity can be an innovation enhancer (Apriliyanti & Alon, 2017; Cassol et al., 2017; Engelman & Schreiber, 2018; Zhang et al., 2015). Zahra and George (2002) argue that absorptive capacity has two dimensions: potential absorptive capacity (Pacap) and realized absorptive capacity (Racap). Potential absorptive capacity makes the company receptive to acquiring and assimilating external knowledge, but it does not guarantee its exploitation. Realized absorptive capacity, in turn, is the company's capacity to explore and transform the acquired knowledge to develop new practices. On this understanding, this research adopts the realized absorptive capacity, recognizing that the companies went through the previous stages of potential absorptive capacity and were able to exploit the acquired knowledge well.
McCann and Folta (2018) point out that, although other elements can lead to different innovative performances, the absorptive capacity can be an essential driver in understanding the differences in the asymmetrical use of knowledge and its application in innovations. It is a crucial vector for innovation theories and a strong predictor of innovation and knowledge transfer within firms (Koch & Strotmann, 2008;Zou et al., 2018).
Innovation is now considered an activity that starts with the development of an initial element and continues until its transformation into a commercially valuable component accepted in the social system (Schumpeter, 1997). The minimum requirement to define innovation is that the product, process, marketing, or organizational method be new or significantly improved for the company (Organisation for Economic Co-operation and Development [OECD] & Eurostat, 2018).
The Organization for Economic Cooperation and Development (OECD & Eurostat, 2018), through the Oslo Manual: Guidelines for the Collection and Interpretation of Data on Innovation, distinguishes four types of innovation: product, process, marketing, and organizational. Thus, for innovation classification, we adopt the Oslo Manual typology to guide the analysis of this research, since the types of innovation drawn from the 4th edition of the Manual offer a more precise formulation for use with companies (OECD & Eurostat, 2018). Studies that analyze the link between absorptive capacity and innovation are increasing in the literature and have presented contributions on incubated companies (Cassol et al., 2017; Engelman & Schreiber, 2018). In this type of business, the absorptive capacity of resources is the vector for promoting innovation. In any case, absorptive capacity proves to be flexible enough to be applied to different units of analysis and the most diverse fields of research.
In their study, Cassol et al. (2017) found that knowledge is better used through absorptive capacity, significantly contributing to innovations in incubated enterprises. Despite the growth in research dedicated to the theme, such studies are generally theoretical or, when empirical, no evidence was identified that associates absorptive capacity with the adoption of innovative practices in incubated rural enterprises, which justifies this research. These are projects directly related to community organizations, the management of natural resources, and the socioeconomic context of communities, which benefit from innovative practices and sometimes need help to capture knowledge, carry out actions, and expand their activities with the dynamics of innovation.
Given the above, the objective of this research is to analyze the absorptive capacity and the resulting innovation in companies incubated in the Agribusiness Incubator of Cooperatives, Community Organizations, Associations, and Rural Settlements of the Semiarid Region of Paraíba (Iacoc), an institution linked to the Study Program and Actions for the Semiarid Region (Peasa) of the Federal University of Campina Grande (UFCG) and the Paraíba Technological Park (PaqTcPB).
To achieve the objective of this research, a multicase study was carried out with a descriptive qualitative approach, whose collection sources were semi-structured interviews and document analysis. The relevance of this research lies in the validation of the results and effectiveness of actions to foster incubators as a tool for promoting family-based agribusiness ventures in the semiarid region, with possible use in other contexts. Empirically, the processes presented here are capable of inspiring management models for these initiatives.
The ventures chosen for empirical exploration are justified by their innovative characteristics associated with the absorptive capacity experienced in the company/incubator relationships. We discuss absorptive capacity as a predecessor to product, process, organizational, and marketing innovations. Data were compiled and processed through content analysis supported by the ATLAS.ti software.
Structurally, in addition to this introduction, this research has a theoretical foundation ("Absorptive capacity and innovation"), methodological aspects, results, and conclusions.
ABSORPTIVE CAPACITY AND INNOVATION
Organizational knowledge needs to be absorbed and managed through processes that identify, select, organize, share, disseminate, and apply this knowledge in problem-solving, corporate learning, product and service innovation, strategy development, and decision-making (Ali et al., 2018). Driving these processes is the so-called absorptive capacity: the ability to transfer experiences, information, and perceptions from experts to innovation practices (Ferreras-Méndez et al., 2016).
As a concept, absorptive capacity (Acap) was coined by Cohen and Levinthal (1990), who defined it as the firm's ability to identify, assimilate, and explore knowledge from the environment. The authors argue that the term can be understood as recognizing the value of new information, incorporating it, and directing it for commercial purposes (Cohen & Levinthal, 1990). Zahra and George (2002) add two conceptual dimensions to the discussion: the Pacap, the ability to acquire and assimilate external knowledge without guaranteeing its exploitation, and the Racap, the company's ability to explore and transform the acquired knowledge for the development of new practices.
According to the proposals developed by Lane et al. (2006), Vega-Jurado et al. (2008), Murovec and Prodan (2009), Flatten et al. (2011), Moré et al. (2014), Ferreras-Méndez et al. (2016), and Apriliyanti and Alon (2017), organizations should seek mechanisms to develop their absorptive capacity internally. These studies suggest that high absorptive capacity is associated with a better chance of successfully applying new knowledge for commercial purposes, resulting in innovation and good business performance.
One way to understand the innovative process is to know how absorptive capacity occurs and how the company develops routines and procedures to internalize and apply internal and external knowledge innovation (Ávila, 2022;Mura et al., 2013;Wang & Hu, 2020). In this sense, innovation is fundamental to economic growth (Schumpeter, 1997) and the primary source of differentiation and competitive advantage for organizations (Brown, 2008), including in the long term (Buchele et al., 2015). Defining innovation is broad. It is everything that differentiates and creates value, essential for a good performance, competitiveness, and survival of companies (Zapata-Cantu et al., 2020). It is a process in which knowledge is acquired, shared, and assimilated to create new knowledge that incorporates products and services (Harkema, 2003), methods and procedures (Brewer & Tierney, 2010), and social and environmental contexts (Harrington et al., 2016). According to the Oslo Manual (OECD & Eurostat, 2018), innovation refers to implementing a new or significantly improved product (good or service), a process, a new marketing method, or a new organizational method in business practices in the workplace or external relations.
There are different taxonomies of innovation. Classifications related to innovation were emphasized in the studies by Spieth and Lerch (2014), Zhou et al. (2017), and Yoon et al. (2018), which focused on the organizational learning dimension as an influence on innovation. Other paths include classification as incremental innovation (which builds on what already exists), radical innovation (which produces a total change over the past), or semi-radical innovation, which lies between radical and incremental innovation (Castaneda, 2015; Macedo et al., 2015; Torugsa & Arundel, 2016). In contrast, the classification by Chesbrough (2012), Belso and Diez (2018), and Kremer et al. (2019) into closed (internal) or open (external) innovation is based on the origin of the source of innovation.
The OECD (2018), through the Oslo Manual, defines four types of innovation that encompass a wide range of changes in the activities of companies: product innovations, process innovations, organizational innovations, and marketing innovations, which are described in Table 1. Product innovation is the introduction of a good or service that is new or significantly improved in terms of its characteristics or intended uses. Process innovation is the implementation of a new or significantly improved production or distribution method. Marketing innovations are aimed at better meeting the needs of consumers, opening new markets, or repositioning a company's product in the market to increase sales. Furthermore, organizational innovations in business practices include implementing new methods for organizing routines and procedures for conducting work (OECD & Eurostat, 2018).
Although innovations from developed countries are used as a common source, the Oslo Manual is quite comprehensive and flexible and has become a reference for research in the commercial sector, betting on the usefulness of its content so that companies can enjoy their concepts, adopt them, discuss them or use them as a reference for their innovation initiatives (OECD & Eurostat, 2018).
Finally, innovation is among the main attributes for survival and better business performance. New ventures are aware of this reality, seeking to absorb knowledge and insert disruptive and incremental innovations into the market to reach more customers and achieve remarkable success (Cassol et al., 2017; Rocha et al., 2019). Ventures that increase their involvement in knowledge sources tend to increase their innovative capacity (Belso & Diez, 2018; Kremer et al., 2019).
Having understood that absorptive capacity is an essential vector for innovation theories and a strong predictor of innovation and knowledge transfer within companies, the next step will be the methodological design used in this study.
METHODS
Following a qualitative approach and based on the case study strategy according to Yin (2015), with additional support from Lakatos and Marconi (2007), this study comprises multiple cases and analyzes the absorptive capacity and the resulting innovation in companies incubated in the Iacoc. This agribusiness incubator aims to offer support to leverage the potential of, and promote, enterprises in the rural environment of the semiarid region of Paraíba.
The choice of cases was made by incubation seniority, considering the minimum incubation time of one year and the estimated time the enterprise manages to adapt its routine to the incubation process. Iacoc coordinators brokered access to the six chosen ventures, including some recently graduated businesses.
Data were collected through semi-structured interviews, whose script was elaborated from the theoretical construction under analysis and validated by three experts, with questions about the genesis of absorptive capacity and its consequences, the types of innovation, and the processes that interrelate these two phenomena in the studied context. The interview script was composed of two parts: general knowledge questions about the interviewees, and questions derived from the four previously defined categories, which combine the realized absorptive capacity (Zahra & George, 2002) with product, marketing, process, and organizational innovations (OECD & Eurostat, 2018), together with a pre-analysis of data for the optimal targeting of collection.
The corpus built had six interviewees (E1, E2, E3, E4, E5, and E6), who authorized the disclosure of their names, chosen from their performance in the enterprises and general knowledge about the processes existing in them. For the composition of the primary data, document analysis was carried out, made possible by the incubator and the companies, and specific interactions with the respondents during the data analysis phase to clarify various aspects. Data from the interviews are described in Table 2.
Given the context of the Covid-19 pandemic, the interviews took place through virtual platforms, mediated by tools such as Skype, Zoom, and Google Meet, in December 2020. The interviews were then transcribed and analyzed, constituting the most relevant source of data for the study. Document analysis was carried out to complement the understanding and guide the search for information about each company within the scope of this study, through websites, technical reports, brochures, various newsletters, and other institutional documents. As for data analysis, content analysis was performed following Bardin (2011), which consists of three stages: 1. pre-analysis; 2. exploration of the material; and 3. treatment of results, inference, and interpretation. We also adopted a support tool, the ATLAS.ti software, which, due to its flexibility, significantly contributes to qualitative analysis: it ensures greater systematicity of the data, structuring them to help organize the analysis categories and form the networks, i.e., associations in which the existing connections between coded information can be visualized (Sampieri et al., 2014). The numbering appears next to the citations (coded in the analysis). Respondents authorized the disclosure of their names in the survey results. The letter G corresponds to the number of references to the code referred to in the citation networks. The letter D refers to density and does not imply analysis.
RESULTS AND DISCUSSION
The Iacoc, linked to the PaqTcPB and the Peasa, works to strengthen the agricultural sector through actions to encourage the development of productive agribusiness ventures in the semiarid region of Paraíba. Iacoc received the certification of the Brazilian Reference Center for Support to New Enterprises (Cerne), a methodology developed in partnership with the Brazilian Support Service for Micro and Small Enterprises (Sebrae) and the National Association of Entities Promoting Innovative Enterprises (Anprotec), to create a platform of solutions to expand the incubator's capacity to generate successful innovative ventures. Next, each category of analysis based on absorptive capacity and the effects on each type of innovation is discussed.
Absorptive capacity and product innovations category
This category refers to the absorptive capacity and product innovations comprising the subcomponents: goods, services, the capture of knowledge and its combinations/applications, and product design characteristics (OECD & Eurostat, 2018).
We started the analysis of the absorptive capacity and product innovation category with the breakdown of the interviewees' statements in accordance with the subcomponents of the types of innovation (OECD & Eurostat, 2018), whose citations are displayed in Figure 1. The statements demonstrated aspects that confirm the elements present in the Oslo Manual.
Figure 1
Absorptive capacity and product innovation category (interview excerpts, translated from Portuguese and coded in ATLAS.ti)
- "Yes, we began developing new products besides the cake, like bread, cookies, crackers, and toasts, using tips from Iacoc's food specialist."
- Acap ID services: no excerpts (see discussion below).
- Acap ID capture of knowledge and its combinations/applications, Dapaz: "It was a turning point because we used to do it with our limitations, and the Iacoc opened our eyes to entrepreneurship. We didn't know we were an enterprise and never imagined that it was good enough to be one, and that we could change so many lives socially and economically."
- Izabel: "We went through a course that taught us to get to the level of designing we have today in our pieces, and since then, we have tried to get better as time goes by. Before, we used to work with rustic-looking pieces, and with the knowledge we got from Iacoc, we learned a technique that produces a shiny, smooth finishing to our pieces."
Source: Elaborated by the authors using the ATLAS.ti software (2021).
Absorptive capacity is a driver for taking advantage of the knowledge and application in innovations (McCann & Folta, 2018). In this sense, we highlight the absorptive capacity realized (Zahra & George, 2002) of incubated enterprises to transform the resources/knowledge assimilated at Iacoc, resulting in product innovation by introducing a new good that significantly differs from previous products or processes (OECD & Eurostat, 2018). The interviewees report this fact: "At first, we only produced honey, and just when the group became incubated at Iacoc, it started to produce cake and fruit pulp" (E4); "As we manufacture pans, they suggested making thermal gloves to add value to the piece" (E1).
"We innovated our product because in the beginning we worked with a recipe and we did not take into account the values of all the ingredients [...]." It is noteworthy that product innovation introduces a good or service (OECD & Eurostat, 2018). The Services subcategory provided in the Oslo Manual does not apply to the researched context. There were no elements evidenced in the services subcategory due to the type of activity performed by the incubated enterprises; service providers or intangibles were not found in the contexts under analysis. The researched incubated businesses offer tangible goods, materialized during their production process, whose ownership is transferred to the buyer.
The capture of knowledge and its applications occur with enterprises' absorptive capacity to transfer experiences, information, and perceptions from specialists to innovation practices (Ferreras-Méndez et al., 2016). In the entrepreneurs' speeches, this is evidenced (Figure 1) mainly in the mentions of the promotion of personal and local development and the advancement of business development, generating a better quality of life for the rural environment of the semiarid region of Paraíba. All demonstrate that they capture knowledge and transform it into innovation, enhancing the social and economic environment and the transformation of enterprises.
In this sense, the absorptive capacity realized by the incubated ones constitutes the company's capacity to explore and transform the acquired knowledge to develop new personal and professional practices. The following excerpts describe the interviewees' feelings: Iacoc is a partner who lends a hand, and it teaches how to grow as a dignified human being and as an enterprise (E4).
So, only after the knowledge acquired at Iacoc did we learn to have confidence in ourselves, and it was transforming to know that we were capable of producing a good product (E4).
Today we feel very victorious, despite the difficulties, we learned a lot at Iacoc and applied it every day, which makes all the difference for our enterprise (E2).
As for the product design characteristics, we observe the absorptive capacity and the resulting innovation in changes in the form and appearance of the enterprises' product, comprising substantial changes in the product design, resulting in innovation regarding its characteristics (OECD & Eurostat, 2018). This fact is reported by respondent E4: Another thing that we learned and applied after Iacoc is that, before, we only made 500g pulp, and we learned how to make it in small packages of 100ml and thus serve other audiences. Regarding honey, before Iacoc, we only worked with liquid honey; now, we work with honey in combs, which are more profitable because it adds more value. Moreover, about the cakes, we only worked with large, deformed cakes, and we learned how to produce and standardize small cakes with a visually better shape. Therefore, in this type of innovation, in the ventures incubated at Iacoc, there is the presence of the absorptive capacity carried out and three subcomponents provided for by the Oslo Manual (OECD & Eurostat, 2018), articulated in various elements, from the appearance of a new good to significant changes of its characteristics. Thus, we observe that entrepreneurs reinforced absorptive capacity and its relationship with innovation, focusing on the organizational learning dimension, stating that applied knowledge influences innovation (Spieth & Lerch, 2014; Yoon et al., 2018; Zhou et al., 2017).
Regarding product innovation, which is the most visible type of innovation, agribusiness-derived products undergo significant changes in their manufacturing, packaging, and distribution processes, adding greater value and incorporating innovation through the technical expertise that comes from the absorption of knowledge from the incubator.
Absorptive capacity and process innovation category
The absorptive capacity and process innovations were explored in this category, comprising the subcomponents: production, distribution and logistics, and communication and information system (OECD & Eurostat, 2018). The related elements are arranged in the figure below.
Figure 2
Absorptive capacity and process innovation category (interview excerpts, translated from Portuguese)
- Acap IC production, Izabel: "I consider that there was an innovation of our products concerning the optimization of time, through the installation of a rotation system, for example, the construction of a door where there wasn't one, aiming to facilitate our work, which was very important to make production more agile. I also see innovation in transmitting new techniques in the production system."
- Rose: "We also learned about logistics by efficiently planning the transportation and stocking of our products from the first point to the consumer."
- Acap IC communication and information system, Dapaz: "Iacoc did the spreadsheets and taught us how to use them. We had an extension program with UFCG which managed to get a computer to the agroindustry, and we learned about an information system, a software that taught us everything about spreadsheets. So, the teachers would place the spreadsheets on this computer and teach us how to fill them."
Source: Elaborated by the authors using the ATLAS.ti software (2021).
Following the process innovation concerning production (OECD & Eurostat, 2018), the enterprises implemented through the absorptive capacity realized (Zahra & George, 2002) a new or significantly improved production or distribution method. The absorptive capacity and resulting in process innovations in the projects encompassed significant changes in new or substantially improved techniques, equipment, and software in auxiliary support activities, such as purchasing and accounting (OECD & Eurostat, 2018), as reported: I see Iacoc as a way to improve and innovate. It brought us innovation in various segments; starting with professional training, knowledge of cash control, inventory, division of tasks, and standardization, we rediscovered our production and expansion capacity innovation of our business. I can assure you that there was a remarkable transformation and all of them contributed positively to our business development (E1).
From the absorption of resources/knowledge, the incubated enterprises implemented new production methods to reduce material and time waste. It is possible to see that production has been significantly improved for the incubated companies (OECD & Eurostat, 2018). In some businesses, the control of fruit pulp processing began with guidance from nutritionists; for others, standardization was implemented in their production.
In the bakery, the cake used to be made at home, and each member of the association had a recipe. From the absorptive capacity, a change occurred that resulted in adherence to standardization and the elimination of unnecessary and inappropriate ingredients through the guidance of a food specialist, improving dough yield, flavor, and quality. These innovations were incorporated with difficulty by some businesses due to the entrenchment of customs and practices. As one of the entrepreneurs describes: It was a change in the production method. Furthermore, it is not easy to convince a group of people and show all the procedures needed if you do not have someone's help to have the training, qualification, and experience. For example, changing the recipe, because people have the culture that the milk cake needs certain products, it is not easy to change this mentality and show that you don't need all those ingredients. It is a tremendous job, if you do not have support to take your hand and show you the right path, it is difficult, and that was the support we felt from Iacoc and UFCG (E3).
Also in this sense of process innovations in production, it is possible to mention the example of one of the projects located in an area with high solar incidence, which introduced changes in techniques and equipment in its activities. A project was implemented by Iacoc with experts from the agronomy course at UFCG and the Semiarid Renewable Energy Committee (Cersa). The goal was to adopt an on-grid photovoltaic system, consisting of equipment to convert solar energy into electricity, which started to supply the business with power; today, it works 100% on solar energy. The reuse of water and the installation of a biodigester to transform food leftovers and animal feces into biogas also started during incubation. "Iacoc contributed to this issue of the use of natural resources, and we started to understand how important sustainability is for the environment and our financial economy" (E3).
In this context, we observe that the incubated ventures become innovative companies, characterized as companies that, during the analyzed period, develop innovative strategies, create products, and improve processes, or a combination of these (OECD & Eurostat, 2018). Furthermore, the pulp industry had a 70% growth in production and sales during incubation through the transformation of knowledge into innovative production methods. We can also illustrate the support of UFCG for innovation in the production of the incubated enterprises, as one of the entrepreneurs points out: Honey needs to be tracked, with batches and cities; it needs all the control required by the Ministry of Agriculture. Iacoc fully encourages this quality control from the beginning of production. From the laboratory [...]. Regarding innovativeness in terms of distribution and logistics, we observed in the interviewees' speeches a lack of knowledge about the importance of distribution logistics, an activity focused on planning the storage, circulation, and distribution of products to the final customer. One of the entrepreneurs succinctly describes: "We were talking about production, but without even knowing what logistics is, so we learned to control the stock properly, take care of storage and organize transport to meet delivery deadlines" (E2).
In the field of communication and information systems, the implementation of new or significantly improved information and communication technologies is considered a process innovation if it aims to improve the efficiency or quality of activity through the hardware functions, software, telecommunications, and automation, facilitating business processes (OECD & Eurostat, 2018). In the words of entrepreneurs, this subcategory is less robust. Therefore, this type of process innovation, computer-aided implementation, is a resource/knowledge that is less absorbed and transformed by the incubated enterprises. However, it reflects innovation to a greater or lesser degree while presenting substantial improvement in processes when absorbed.
Absorptive capacity and organizational innovation category
This category presents the absorptive capacity and organizational innovations and their subcategories: administration and management, business practices, distribution of responsibilities, and external relations. Organizational innovation is implementing a new organizational method in the company's business practices, its workplace, or its external relations (OECD & Eurostat, 2018).
We started the category analysis by breaking down the interviewees' statements in agreement with the subcomponents, whose citations are displayed in Figure 3. The statements showed aspects that confirm the elements present in the Oslo Manual.
Figure 3
Absorptive capacity and organizational innovation category (interview excerpts, translated from Portuguese)
- Acap IO administration and management, Dapaz: "Iacoc provided many courses like business planning, production management, technological innovation, marketing, and organizational innovation. We learned how to use a management practice that is widely used, the SWOT model (strengths, weaknesses, opportunities, and threats), which is a management tool that evaluates the degree of competitiveness of a company in comparison to its competitors. These are precisely the characteristics that are analyzed in the model."
- Rose: "Through Iacoc's courses, with the teachers, we were awakened to the importance of entrepreneurship and management practices, things that were unknown to us until then. We were producers; we didn't see ourselves as entrepreneurs or business people. We thought we were just a group of men and women who wanted to work, produce, and make a little money. We had management classes about the division of tasks. Iacoc gave each one their own responsibility: the ones responsible for production, cash flow, equipment, and cleaning of the space. They gave each one a role. Iacoc helped by giving every member of the staff their own responsibility."
- Rodrigo: "We have a few partnerships with other companies so that when everything is regulated we can export to China and the USA."
Source: Elaborated by the authors using the ATLAS.ti software (2021).
In the scenario of the incubated enterprises under analysis, we understand that management knowledge and business practices were practically non-existent. They had a limited view of their management, having commercialization as their primary objective. Respondents did not perform the activities as a set of actions necessary to manage an organization in all its areas, promoting integration between them and the best use of available resources to achieve the planned objectives.
In this sense, the enterprises highlighted that they learned management and entrepreneurship from the incubation. They reported that their activities improved significantly, with several positive changes in the administrative routine (OECD & Eurostat, 2018), as evidenced by the interviewees' statements: Another thing, we were terrified of facing innovation, scared of innovating in general, and afraid of not working out. Nevertheless, with the training, we see that we can innovate, and we are introducing new products to our internal management. We are growing precisely because we have this knowledge. After all, if not, we were stagnant, stopped. As I said, we learned to acquire more confidence in managing our business through Iacoc (E4). Iacoc showed the importance of good business practices with the five senses. And we applied this quality tool, the 5S, which allowed for a reduction in waste and better use of time. We had no management organization, which made the difference (E5).
Regarding absorptive capacity and organizational innovations in terms of distribution of responsibilities, we observed innovations in associations and cooperatives during incubation, in the organization of the workplace involving the implementation of new methods to distribute responsibilities and decision-making power among employees in the division of existing work within the company's activities (OECD & Eurostat, 2018). As the respondent E4 reinforces: If it weren't for Iacoc, we wouldn't have direction; we wouldn't have evolved in several aspects, such as the issue of product costs, an organization within the company's management, because we worked like that, everything was very disorganized. There was no division of work; heavy work was left to some and not others.
As for absorptive capacity and organizational innovation concerning external relations, entrepreneurs highlighted that they gained access to markets that had previously been limited. They also stressed that before incubation they did not have partners, and therefore the application of acquired knowledge opened up the possibility of expanding their market share. As reported by the respondent: We entered a partnership with the Cooperative of Rural Producers of Family Farming on the Coast of Sul Paraibano (Coopasa) during the incubation period. We started our story within Iacoc because no one knew anyone, and we already left there doing business; we still have partnerships today. So, the main contributions to external relations innovations were partnerships with trade, partnerships with other cooperatives, and market access (E2).
It is possible to observe that the incubated entrepreneurs in the food area served only the school public, through the Food Acquisition Program (PAA) and the National School Feeding Program (PNAE), with the sales of the associations/cooperatives directed to these programs and to the community.
Given the evidence, there is absorptive capacity in the incubated enterprises, understood as the ability to recognize the value of new information, assimilate it, and direct it to commercial purposes, and as support for the construction of competitive advantage based on innovation (Cohen & Levinthal, 1990; Zahra & George, 2002). We observe, then, that all the knowledge captured and absorbed influences changes in management practices and, consequently, triggers organizational innovations and consequent restructuring in all sectors of the incubated companies.
Absorptive capacity and marketing innovation category
This category refers to absorptive capacity and marketing innovations comprising the subcomponents: marketing, sales, after-sales support, product placement, packaging, promotion, and pricing. It involves implementing a new marketing method with significant changes in packaging, product positioning, advertising, pricing, or new sales channels (OECD & Eurostat, 2018). The related elements are arranged in Figure 4.
Figure 4
Absorptive capacity and marketing innovation category (interview excerpts, translated from Portuguese)
- Acap IM marketing, sales, and post-sales support, Dapaz: "Concerning marketing innovation, I can say we didn't have a website, it was created with Iacoc, which motivated our ranking in the market. If you Google Fontes de Sabor, we are the first ones to appear. We now have a website, Twitter, and Instagram, which helps with promotions, sales, and even post-sales because we have direct contact with our customers."
- Acap IM positioning and product packaging, Julio: "An example of innovation in our products is the packaging. We used to work with simple packaging, not a lot of design, and it was Iacoc themselves who created our new package, which made a huge difference. When you get a project and you put it into practice, then you can see that it was really missing the design factor. They also innovated our brand. With the knowledge provided by Iacoc, we realized we needed to raise our price, and it was a concern because we thought that the customers would find it overpriced. We had to decide between raising the quality and the price or stopping our production. We decided to follow their instructions, and alongside came the clients' acceptance. In fact, it only got better. There are places where we used to deliver trays with 15 muffins, and now we deliver 20."
Source: Elaborated by the authors using the ATLAS.ti software (2021).
Exploring this group, in terms of marketing, sales, and after-sales, we noticed in the unanimous speech of the entrepreneurs the mention of the creation and dissemination of the brand and the consequent learning and enhancement of after-sales. Although the businesses have different activities, we observed a significant alignment in the speech of those interviewed about the marketing plan as the main innovative result.
This example of knowledge absorption and application in marketing innovation common to all enterprises highlights that, although other elements can lead to different innovative performances, absorptive capacity can be an important driver to understanding the differences in asymmetrical use of knowledge, as well as its application in innovations (McCann & Folta, 2018). The results of other forms of innovation: product innovation, process innovation, and organizational innovations (OECD & Eurostat, 2018), vary according to the distinctive aspects and the absorptive capacity of each company.
All interviewees reported that before incubation, they did not have a visual identity, such as labels, promotional material, and website: "[...] we did not have any action aimed at the dissemination of our products, everything came from the knowledge acquired by Iacoc. We started to have a visual identity at Iacoc" (E1). In line with the company's ability to explore and transform the knowledge acquired to develop new practices (Zahra & George, 2002), they described that after the creation and dissemination of the brand, they showed an increase in product acceptance and recognition by consumers. They demonstrated that the incubator's commercialization, formalization, and organization aspects provided visibility for the business. As the respondents corroborate: After Iacoc, today we sell to the entire community, which is a vast community; we have around 80 families, with aggregated small communities. We deal in the community three days a week, we sell for school lunches, both to the PAA and the PNAE, we have already placed some of the cookies in the city's supermarkets, and we are also selling to the of São Domingos, a neighboring town. We already put it on in the open market, but due to the pandemic, we are not going. But in the incubation process, we reach new markets (E3).
As for the absorptive capacity of marketing innovation in terms of packaging, promotion, and prices, the following were highlighted: I remember a fair where the general coordinator of Iacoc was with us, and we were selling honey for R$12.00 a comb, and she said it was to sell for R$15.00 because everything we were using in the production process was quality, the jar, the label (E4).
There was also the issue of the new label inserted in our products that gave more visibility, so we can compete equally in terms of packaging and price, as we see in the market (E2).
Furthermore, the interviewed entrepreneurs reported that, during incubation, they obtained the seal of the Federal Inspection Service (SIF) and adopted good manufacturing practices, standards, and differentiated packaging with nutritional labeling. We observed that they applied their knowledge to innovation in market placement by becoming adept at online channels, such as a website, Instagram, and Facebook, carrying out promotions, and setting fair prices for their products.
Finally, we understand knowledge absorption as transferring experience and knowledge to innovative business processes (Oyemomi et al., 2016). This concept is directly reflected in the post-incubation relationship between incubated companies and their customers, which is now managed with more assertiveness, planning, and control, given the knowledge absorbed and an innovative attitude towards the market.
After explaining the results obtained through the analysis, the following topic will bring together the main conclusions found in the research.
Discussion
Associations/cooperatives continually seek mechanisms to develop their absorptive capacities, applying knowledge from external sources, adapting them to their internal needs, and seeking new or better results in products and processes. In this sense, there is a notable absorptive capacity carried out by enterprises in the rural area of the semiarid region of Paraíba, and, as a result, a set of innovative practices were adopted.
In Paraíba, the work developed by the incubator is aimed at low-income communities in situations of social vulnerability, generating the possibility of marketing groups to transform themselves into associations, cooperatives, and micro-enterprises and promoting inclusion and community development. In this scenario, companies incubated by Iacoc are formed by micro and small companies dedicated to agribusiness, whose activities fall into any of the following areas: Crop production, horticulture, and floriculture, production of certified seeds and seedlings, livestock, fishing, aquaculture, beekeeping, alternative poultry farming, food product manufacturing, crafts, and beverage production.
In this context, the appropriation of knowledge is naturally asymmetric. However, entrepreneurs capture expertise and transform it into innovation, resulting in enhancing the social and economic environment and the transformation of professionals. In this sense, the incubator has enabled significant changes in business dynamics and the incubated companies' social context (see Table 3).
Table 3
Findings on the connection between absorptive capacity and innovation
- Product innovation: The products are improved in the manufacturing, packaging, and distribution processes, innovating by absorbing the incubator's knowledge.
- Process innovation: Although scarcer, computer-aided implementation is an absorbed resource/knowledge and reflects innovation in processes.
- Organizational innovation: Knowledge is absorbed and influences management practices and organizational innovations, with consequent restructuring in all sectors of the incubated companies.
- Marketing innovation: The post-incubation relationship between incubated companies and their customers is managed with more assertiveness, planning, and control, given the absorbed knowledge and an innovative attitude towards the market.
Source: Elaborated by the authors.
The categories' elements reflect the literature in the fields of absorptive capacity and innovation. The absorptive capacity and product innovations category, for example, translates into the introduction of a new good that differs significantly from previous products. In the process innovation category, we noticed significant changes in techniques and equipment in auxiliary activities, while the organizational innovation category brings to the involved projects a new organizational method that involves the company's business practices, the organization of its workplace, and its external relations. Finally, the marketing innovation category reflects the direction of the resources absorbed from the incubator toward meeting the needs of consumers. In this sense, all the foreseen types of innovation are found in the contexts under analysis.
FINAL CONSIDERATIONS
This article analyzed the absorptive capacity and innovation actions of companies incubated in the Iacoc. The study makes a theoretical contribution by bringing the literature on absorptive capacity closer to innovation outcomes in the context of incubated enterprises, using the innovation classification provided in the Oslo Manual (OECD & Eurostat, 2018), and thereby opens a new path for field studies.
As a managerial contribution, we present practices of incubated enterprises that can inspire management models for innovation in other contexts.
|
v3-fos-license
|
2021-12-19T17:09:08.908Z
|
2021-12-15T00:00:00.000
|
245300505
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://cjs.sljol.info/articles/10.4038/cjs.v50i4.7943/galley/6414/download/",
"pdf_hash": "c2e5124c6376fb311eaf7be960698658a44783cb",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44294",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"sha1": "c0b470eccc7776f9f1b453dbd8c00f1088b5af62",
"year": 2021
}
|
pes2o/s2orc
|
Molecular phylogeny-based identification of Colletotrichum endophytica and C. siamense as causal agents of avocado anthracnose in Sri Lanka
Avocado (Persea americana) is a sub-tropical fruit with high nutritional value and numerous health benefits. Among the postharvest fungal diseases that affect ripe avocados, anthracnose is one of the most destructive diseases worldwide, causing significant postharvest fruit losses and limiting shelf life. Over 15 Colletotrichum species have been reported as causing avocado anthracnose in avocado-growing countries around the world. In the present study, 35 Colletotrichum isolates were obtained from ripe avocados showing anthracnose symptoms, collected from the Central and North Western Provinces of Sri Lanka. Fifteen randomly selected isolates were subjected to DNA sequence analysis using the ITS, TUB2, and GAPDH regions. Species affiliations and identities of the resulting sequences were determined through similarity-based searches of the NCBI GenBank Database. Based on the combined phylogenetic analysis of the three gene regions, nine and six isolates were identified as C. endophytica and C. siamense, respectively, both belonging to the C. gloeosporioides species complex. Of the two species, C. endophytica is reported as a causal agent of avocado anthracnose for the first time.
INTRODUCTION
Avocado (Persea americana Mill, Lauraceae) is native to Central America and southern Mexico and is believed to have originated about 12,000 years ago, based on archeological evidence. The avocado is botanically classified into three races, West Indian (WI), Mexico (XX), and Guatemalan (G). Systematic studies have classified more than 500 cultivars worldwide, and there is great variability in fruit traits not only between races but also among cultivars within races. The peel of some cultivars (e.g., Hass) changes from green to black or purple. The pericarp, which is the fruit tissue proper excluding the seed, comprises the rind (exocarp), the fleshy edible portion (mesocarp), and a thin layer next to the seed coat (endocarp) (Biale and Young, 1971).
Avocado is a fruit with high nutritional value and numerous beneficial health effects (Meyer and Terry, 2010). The fruit is a rich source of fats, particularly of monounsaturated fatty acids. The most abundant fatty acid is oleic acid, which is known to reduce inflammation, a risk factor for cardiovascular diseases, and to have beneficial effects on cancer (Yoneyama et al., 2007). The health benefits of avocados are due to the presence of numerous bioactive phytochemicals (Adikaram et al., 1992; Tabeshpour et al., 2017). The fruit contains rare sugars of high carbon number and is relatively rich in certain vitamins, dietary fibre, and minerals. The fruit has a high oil content and low sugar, and is hence recommended as a high-energy food for people with diabetes. Avocado is a climacteric fruit, with a marked rise in respiration rate, followed by a decline.
Genus Colletotrichum is composed of plant pathogens of worldwide importance, particularly causing anthracnose in several tropical fruit species. Anthracnose disease (Sivanathan and Adikaram, 1989), caused by Colletotrichum species, and the stem-end rot (Madhupani and Adikaram, 2017) incited by several fungal pathogens, including Lasiodiplodia theobromae, are major constraints to the avocado industry, causing heavy fruit losses after harvest and limiting their marketing potential and shelflife.
Anthracnose disease was believed for decades to be caused by Colletotrichum gloeosporioides (Sivanathan and Adikaram, 1989) and C. acutatum (Hartill, 1991). More recent molecular studies have revealed the association of over 15 Colletotrichum species with the anthracnose disease in both avocado-growing and avocado-marketing countries of the world. Among them, the most significant number of species recorded from a single country was in Israel, where multi-locus phylogenetic analyses using ITS, act, ApMat, cal, chs1, GAPDH, GS, HIS3, and TUB2 gene markers identified eight previously described species, C. aenigma, C. alienum, C. fructicola, C. gloeosporioides sensu stricto, C. karstii, C. nupharicola, C. siamense, C. theobromicola, and a novel species, C. perseae, as causing avocado anthracnose, confirming their pathogenicity (Sharma et al., 2017). Talhinhas et al. (2002) were the first to carry out multilocus-based phylogenetic analysis for Colletotrichum species. Using multiple sequence alignment, past phylogenetic analyses have revealed that the genus Colletotrichum comprises eleven species complexes and 23 singletons, where the C. gloeosporioides species complex comprises C. gloeosporioides s.s. and 51 closely related species (Weir et al., 2012; Jayawardena et al., 2020). Similarly, C. acutatum is now considered a species complex consisting of 41 species that include C. acutatum s.s. and its close relatives (Jayawardena et al., 2020).
The present study re-evaluated the Colletotrichum species associated with avocado anthracnose in Sri Lanka by a multigene DNA sequence approach, using 35 isolates from diseased fruits collected in two major avocado-producing provinces, and also examined the semi-systemic nature of internal symptom development.
Isolation of Colletotrichum
Ripe avocado fruits showing characteristic symptoms of anthracnose disease were collected from wholesale fruit stores or retail outlets in two main avocado-producing and distributing areas, Kandy (Central Province), and Kurunegala (North-Western Province) Districts, of Sri Lanka, over two fruit seasons in 2015 -2016. Diseased fruits were brought in sealed polythene bags to the Plant Pathology laboratory at the Department of Botany, University of Peradeniya, Sri Lanka.
Colletotrichum was isolated from anthracnose lesions on 35 infected avocado fruits. Segments (5 × 5 mm 2 ) of infected tissues, cut from the advancing margin of anthracnose lesions in the fruit peel, were surface sterilized in 1% sodium hypochlorite (Clorox, USA) for 1 -3 min followed by rinsing twice in sterile distilled water (SDW). The excess liquid in tissue segments was removed by placing them on sterile filter papers. Tissue pieces (4 per plate) were aseptically transferred onto PDA medium, supplemented with 50 µg mL -1 tetracycline to suppress bacterial growth. The plates were incubated at 28 ℃ for 5 -7 days. The 35 isolates obtained were sub-cultured by transferring discs (6 mm diameter) of mycelium onto fresh PDA plates and allowed to grow at 28 ℃ for 14 days.
Preparation of mono-conidial cultures
A suspension of conidia of each isolate was prepared by suspending the mycelium scraped from 10 -day old cultures in sterile distilled water (SDW) and filtering through sterile glass wool. A loop-full of each suspension was streaked over thin tap water agar plates. After incubation the plates for 18 h at 28 ℃, a small piece of agar with a single germinated conidium, located by moving the objective lens (× 25) of a light microscope (Olympus CX 22) along the streak line, was cut and transferred onto fresh PDA. The plates were incubated for seven days. Pure cultures were maintained in microcentrifuge tubes (1.5 mL) containing 800 µL sterile PDA at 15 ℃ (Prihastuti et al., 2009) to be used in subsequent studies.
DNA extraction, PCR amplification and sequencing
Fifteen isolates, selected randomly from the initial 35 isolates, were used for molecular studies. DNA was extracted using the protocol described by Živković et al. (2010). Aerial mycelium (0.5 g), scraped from seven days old cultures, using a sterile inoculation loop, was placed in a sterile microcentrifuge tube (1.5 mL) containing 300 µL of extraction buffer (0.2 M Tris-HCl, 0.25 M NaCl, 25 mM EDTA, and 2% SDS, pH 8.5) and crushed well. Uncapped tubes were then placed in a boiling water bath for 5 min and allowed to cool to 25 ℃. Aliquots (200 µL) of phenol, equilibrated with the extraction buffer (vol/vol), and chloroform (200 µL) were added. The tubes were vortexed for 2 -3 min and centrifuged at 7,647 g for 5 min. The supernatant was transferred into a new 1.5 mL microcentrifuge tube containing 200 µL of chloroform and vortexed for 30 s followed by centrifugation at 7,647 g for 15 min. The supernatant was pipetted out into a new 1.5 ml tube and 200 µL of ice-cold isopropanol was added. Tubes were inverted several times for DNA to precipitate and centrifuged at 7,647 g for 15 min. The pellet was retained and washed with 400 µL of ice-cold ethanol and centrifuged at 7,647 g for 5 min. The pellet was air-dried for 10 min and re-suspended in 50 µL in low-TE buffer (10 mM Tris-HCl and 0.1 mM EDTA, pH 8.5) to dissolve DNA and stored at -20 ℃.
All PCR amplifications were carried out, as described by Weir et al. (2012). The PCR products were sequenced for both directions using Applied Biosystems, 3500 Genetic Analyzer at the Department of Molecular Biology and Biotechnology, Faculty of Science, University of Peradeniya, Sri Lanka.
Pathogenicity test
Anthracnose lesions in ripe avocado fruits collected in the study were examined, and the symptoms were recorded. Isolates of C. endophytica and C. siamense were grown on pure culture. Freshly harvested fruits of uniform size, devoid of blemishes or any disease symptoms, were chosen for artificial inoculation. Suspensions of conidia of an isolate each of C. endophytica and C. siamense were prepared by scraping mycelium, suspending them in sterile distilled water and filtering through glass wool. The concentration of conidia was adjusted to 1 x 10 6 mL -1 . Four drops (20 µL) of conidia from each isolate were applied on to four equally distanced sites along the fruit surface, from the stem-end to the blossom-end. Six replicate fruits were used for each isolate. The fruits treated with drops of SDW were maintained as controls. Inoculated and control fruits were incubated in separate trays, lined with moistened tissues, and covered with glass plates, at 28 -30 ℃. The fruits were examined daily and the symptoms, when appeared, were compared with those of the original diseased fruits in which the disease was initially observed. The pathogens were reisolated from symptomatic fruits on PDA. Morphological features of the colonies and, asexual reproductive stages of the isolate, were compared with those of the original isolates used for inoculation.
Data analysis
The species affiliations and identities were determined through similarity-based searches of the NCBI GenBank Database (http://www.ncbi.gov). Based on the identifications that resulted from the BLAST search, a combined phylogenetic analysis for ITS, TUB2, and GAPDH was conducted, including the authenticated sequences of the members belonging to the C. gloeosporioides complex obtained from GenBank (Table 1). Bayesian inference analysis was performed for the combined matrix. The best-fitting substitution model was determined with jModelTest v.2 (Darriba et al., 2012) using the Akaike information criterion; the General Time Reversible nucleotide substitution model was selected. Bayesian inference was conducted to obtain posterior probabilities using MrBayes ver. 3.2.6 (Huelsenbeck and Ronquist, 2001; Ronquist and Huelsenbeck, 2003), with Markov chain Monte Carlo chains run for 10,000,000 generations and a sampling frequency of every 1,000 generations. The initial 25% of samples from each run were discarded as burn-in. A majority-rule consensus tree was calculated from the remaining trees to obtain the posterior probabilities for each node. The resulting tree was visualized and edited in FigTree ver. 1.4.3 (Rambaut and Drummond, 2016). Colletotrichum hippeastrum (isolate CBS 241.78) was used as the outgroup. All the sequences generated during the study and used in the multi-gene analyses were deposited in GenBank, and the accession numbers are given in Table 1.
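As a rough illustration of how the similarity-based GenBank search step described above could be scripted, the following Python sketch uses Biopython's remote BLAST interface; the input file name is hypothetical, and the sketch is only a stand-in for the searches reported here, not the authors' actual workflow.

```python
# Minimal sketch (not the authors' pipeline): query each isolate sequence
# against NCBI GenBank with blastn and report the best hit as a first-pass
# species affiliation. "isolates.fasta" is a hypothetical input file.
from Bio import SeqIO
from Bio.Blast import NCBIWWW, NCBIXML

for record in SeqIO.parse("isolates.fasta", "fasta"):
    handle = NCBIWWW.qblast("blastn", "nt", record.format("fasta"))
    blast_record = NCBIXML.read(handle)
    if blast_record.alignments:
        top_hit = blast_record.alignments[0]
        hsp = top_hit.hsps[0]
        identity = 100.0 * hsp.identities / hsp.align_length
        print(f"{record.id}\t{top_hit.title}\t{identity:.1f}% identity")
    else:
        print(f"{record.id}\tno hit")
```

Such first-pass hits would still need to be confirmed by the multi-locus Bayesian analysis described above, since ITS similarity alone usually resolves isolates only to the species complex level.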
Isolation of pathogen
Colletotrichum infections in ripe fruits appeared as blackish brown, and circular lesions of different sizes with slightly irregular margins scattered over the peel of ripe fruit. Salmon colored, sticky conidia masses, resembling slimy droplets, were seen in the centre of older lesions (Figure 1). Lesions enlarged in size widening their diameter, up to 3 -5 cm or more. Multiple infections in closer proximity tended to coalesce forming larger diseased areas.
Thirty five isolates were obtained from the anthracnose lesions in fruits collected from different locations in the Central Province where avocados are mostly produced and, also from the North Western Province of Sri Lanka. All 35 isolates produced oblong conidia, and the colonies of majority of the isolates consisted of pink conidial masses. The isolates were identified to the Genus Colletotrichum from their cultural and conidial characteristics.
Phylogenetic analyses
The combined data set for ITS, TUB2, and GAPDH sequences consisted of 1286 bp. The phylogenetic tree that resulted from the Bayesian analysis is given in Figure 2. All members of the C. gloeosporioides complex formed a monophyletic group, while all the Sri Lankan Colletotrichum isolates identified as C. siamense and C. endophytica formed a separate monophyletic clade. However, both clades received low support (posterior probabilities of 0.84 and 0.73, respectively) and remain unresolved.
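For readers who want to rebuild a combined matrix of this kind, the short Python sketch below concatenates per-locus alignments into a single supermatrix; the file names are hypothetical, and the sequences are assumed to be pre-aligned with identical taxon labels present in every locus file.

```python
# Sketch of building a concatenated ITS + TUB2 + GAPDH matrix for Bayesian
# analysis. Assumes one aligned FASTA per locus with matching taxon names;
# taxa missing from any locus would need to be padded with gaps beforehand.
from Bio import SeqIO

locus_files = ["ITS_aligned.fasta", "TUB2_aligned.fasta", "GAPDH_aligned.fasta"]
concatenated = {}

for path in locus_files:
    for rec in SeqIO.parse(path, "fasta"):
        concatenated.setdefault(rec.id, []).append(str(rec.seq))

with open("combined_matrix.fasta", "w") as out:
    for taxon, parts in concatenated.items():
        out.write(f">{taxon}\n{''.join(parts)}\n")
```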
Morphological characteristics of Colletotrichum siamense
The colonies on PDA first appeared white and turned pale yellow to grey with time. Aerial mycelium was greyish white, dense, wooly, or cottony with very few conidial masses at the center. Sectoring was observed in some cultures. Conidia were cylindrical with slightly rounded ends and sometimes tapering towards one end and measured 20.8 -30.4 µm × 7.0 -8.4 µm. Appressoria were ovoid or irregularly lobed, 9.2 -11.1 µm in diameter. Some cultures produced both ovoid and lobed appressoria while others produced only ovate. Appressoria colour ranged from brown to dark brown (Figure 4).
Morphological characteristics of Colletotrichum endophytica isolate
Colonies on PDA first appeared white, and the centre of the colony of some cultures became grey to ash colour with time. Aerial mycelium at the periphery was white, dense, wooly or cottony with numerous conidial masses. Some isolates produced sectoring after sub-culturing. Conidia were cylindrical with slightly rounded ends. Length and breadth of the conidia varied from 20.7 -32.6 µm (length) and 6.9 -9.9 µm (breadth). Appressoria were ovoid or irregularly lobed. All cultures produced both ovoid and lobed appressoria. Appressoria were initially pale brown in colour and later turned dark brown (Figure 5).
Table 1: Accession numbers of authenticated sequences of the C. gloeosporioides species complex obtained from the GenBank and the sequences generated from the present study, used for phylogenetic analysis.
Pathogenicity test
The two Colletotrichum species could be repeatedly isolated from diseased avocados in the study, indicating their consistent presence in infected fruits. Healthy fruits, artificially inoculated with the two fungi, developed typical anthracnose symptoms that were observed originally, 6 -7 days after inoculation. The control fruits did not develop any disease symptoms. Morphological characteristics of the colony and conidia of the two fungi re-isolated were similar to those of the original isolates used for inoculation.
DISCUSSION
Colletotrichum gloeosporioides (Sivanathan and Adikaram, 1989) and C. acutatum (Silva-Rojas and Ávila-Quezada, 2011) have been believed for decades to be the pathogens causing anthracnose disease in avocado and certain other tropical and sub-tropical fruit species. The two species show morphological similarities, with the conidial morphology of C. acutatum being the only distinguishable, but often inconsistent, character between them. Morphology and the development of reproductive structures have been utilized by taxonomists in the characterization of the genus Colletotrichum and its teleomorph, Glomerella. The variability of morphological characters with changing environmental and growth conditions makes them unreliable as taxonomic criteria. Molecular-based methods are presently considered more advantageous than morphological features for species-level identification within the genus Colletotrichum (Weir et al., 2012). In general, morphological differences among isolates within the genus Colletotrichum do not correlate with molecular differences.
The present study used multigene sequence analysis with two coding genes, TUB2 and GAPDH, and the nuclear ITS region, which together contribute a higher resolving ability for species-level identification of the Colletotrichum species causing avocado anthracnose in Sri Lanka. Two species, C. endophytica and C. siamense, were identified as causal agents. The ITS region has been useful only for the identification of Colletotrichum isolates to the species complex level (Prihastuti et al., 2009). Colletotrichum endophytica, belonging to the C. gloeosporioides species complex, was isolated as an endophyte in Pennisetum purpureum. Colletotrichum endophytica was later reported as causing anthracnose disease in Camellia sinensis and chili in China, black pepper in India (Chethana et al., 2015), and more recently in mango, also in southern China (Li et al., 2019).
TUB2 sequences, generated for C. endophytica in the present study and deposited in the GenBank, would therefore be a valuable source of reference sequence material for future studies of Colletotrichum. The present study did not encounter either C. gloeosporioides or C. gigasporum that were previously identified to be associated with the avocado anthracnose in Sri Lanka (Hunupolagama et al., 2015).
Interestingly, the authenticated isolate of C. endophytica [CAUG28 (Diao et al., 2017)] was also grouped, within the C. endophytica clade, together with the Sri Lankan isolates. However, the authenticated C. siamense, which is also an ex-type (ICMP_18578) Weir et al. (2012), grouped in the main clade with the other species of the C. gloeosporioides complex, separate from the Sri Lankan C. siamense isolates. Based on multi-locus phylogenetic analyses, eight previously described species and a novel species (C. perseae) were identified as avocado anthracnose pathogens in Israel (Sharma et al., 2017). In addition, several more Colletotrichum species were reported causing anthracnose disease in avocado from countries worldwide, raising the total number of species to over fifteen. The inconsistency of the Colletotrichum species reported warrants further studies on the avocado-Colletotrichum pathosystem in avocado producing and marketing countries of the world.
Colletotrichum siamense was first described as a causal agent of coffee berry anthracnose from Northern Thailand (Prihastuti et al., 2009). The species was later recorded on many hosts across tropical and subtropical regions without any host specificity, peach (Yang et al., 2009), mango (Phoulivong et al., 2010;Udayanga et al., 2013), custard apple, Cerbera sp., figs, and papaya (Rampersad, 2011;Udayanga et al., 2013) and Pongamia pinnata (Dwarka et al., 2016). The understanding of C. siamense is still in a state of confusion. The present study identified variable cultural, conidial and appressorial characters within C. siamense suggesting that C. siamense might not be a single species. Molecular analysis of 85 Colletotrichum isolates from fruit crops in India, using ApMat marker, resolved C. siamense to be a species complex (Sharma et al., 2015). However, Liu et al. (2016), following a molecular analysis based on Genealogical Concordance Phylogenetic Species Recognition (GCPSR) and coalescent methods, concluded that C. siamense sensu lato is a single species rather than a species complex.
The present phylogenetic analyses have shown a separation of the Sri Lankan C. siamense isolates from the authenticated C. siamense, which may support the idea that C. siamense might well be a species complex (Sharma et al., 2015) rather than a single species. Similarly, the authenticated C. endophytica together with the Sri Lankan C. endophytica isolates are placed outside the main C. gloeosporioides complex, indicating the diversity of these species at the molecular level. The separation of the Sri Lankan Colletotrichum isolates from the rest of the species of the C. gloeosporioides complex indicates that the Sri Lankan isolates are genetically diverse. While studying the DNA sequence alignments, a notable feature of all 15 Sri Lankan isolates was an INDEL of six base pairs, either CACACG or CACATG, in the GAPDH region that was unique to the local isolates (Figure 3). This INDEL was also shared by C. perseae, a novel species that was recently identified and described as causing avocado anthracnose disease in Israel (Sharma et al., 2017).
The present study reports C. endophytica for the first time from avocado anthracnose disease. This would necessitate new disease management strategies for avocado anthracnose, since C. endophytica is new to avocado. It may also increase its importance as a quarantine pathogen (Yan et al., 2015). These findings reiterate once again the importance of accurate identification of causal agents in designing disease management strategies.
CONCLUSION
In conclusion, the present study identified C. endophytica and C. siamense as pathogens of avocado anthracnose, and this is the first report of C. endophytica from avocado. The study also reports for the first time the semi-systemic nature of symptom development in the disease.
|
v3-fos-license
|
2024-03-22T15:52:57.027Z
|
2024-03-01T00:00:00.000
|
268592959
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2076-2615/14/6/948/pdf?version=1710846507",
"pdf_hash": "cde013130238bb09ab0c6f891714ffb1de80ce39",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44295",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"sha1": "aee4a5e793db8cbfdbddf5aac760c74cfcd5956c",
"year": 2024
}
|
pes2o/s2orc
|
Effects of pH on Olfactory Behaviours in Male Shore Crabs, Carcinus maenas
Simple Summary Climate change potentially threatens biodiversity, and in a changing environment, it is vitally important that we learn to understand how animals react to the predicted changes. In marine organisms, the sense of smell governs almost all essential behaviours animals exhibit, from finding food and detecting a predator to finding a mating partner. Interpreting animal behaviour when exposed to odour is a complex task, as many factors from seasonality to individuality, fitness, social status, and even weather and water chemistry influence an individual’s response. Here, we examine the impacts of reducing seawater pH levels predicted for the end of the century upon decision-making in male shore crabs when exposed to the female reproductive odour, the female sex pheromone. There is a significant alteration in the responsiveness of male crabs, with large, more sexually active males taking significantly less time to detect and react to females but then showing less sexual mating activity once reaching the female odour. This disruption of olfactory communication can potentially impact the mating and reproductive success of this globally distributed species, showing that even coastal crustaceans that are known to be hardy and able to survive substantial stressors are potentially at risk from altered seawater chemistry associated with climate change. Abstract The effects of climate change are becoming more apparent, predominantly concerning the impacts of ocean acidification on calcifying species. Many marine organisms rely on chemical signals for processes such as foraging for food, predator avoidance, or locating mates. The process of how chemical cues in marine invertebrates function, and how this sensory mode is affected by pH levels, is less researched. We tested the impact of reduced pH (7.6), simulating end-of-the-century predicted average ocean pH, against current oceanic pH conditions (8.2), on the behavioural response of male shore crabs Carcinus maenas to the female sex pheromone bouquet consisting of Uridine–diphosphate (UDP) and Uridine–triphosphate (UTP). While in current pH conditions (8.2), there was a significant increase in sexual interactions in the presence of female pheromone, males showed reduced sexual behaviours at pH 7.6. The crab weight–pH relationship, in which larger individuals respond more intensely sexually in normal pH (8.2), is reversed for both the initial detection and time to locate the cue. These results indicate that lowered pH alters chemical signalling in C. maenas also outside the peak reproductive season, which may need to be taken into account when considering the future management of this globally invasive species.
Introduction
Chemical communication is known as the language of life within the complex aquatic environment.Due to reduced visibility in turbid waters and the structurally complex habitat making visual cues less effective, organisms have adapted to use a wide variety of chemical signals and cues to instigate primary behaviours key to their survival [1].Crustaceans have been extensively researched in relation to feeding stimulants, with many Animals 2024, 14, 948 2 of 14 using low molecular weight metabolites like amino acids that are released from injured animals [2,3].This complex mechanistic pathway used for communication is under threat from anthropogenic influences, including heavy metal pollution and climate change.
Over the last few centuries, the human population has rapidly expanded, causing many environmental issues, including a huge loss in biodiversity [4] and increased demand for dwindling marine resources, thus threatening food security [5,6]. This exponential growth has led to ecosystems changing locally and globally. The marine environment has been over-exploited [7], and the demand for fossil fuels has continued to grow, driving atmospheric CO2 concentrations to over 410 ppm [8]. Ocean acidification, caused by the increasing amount of CO2 within the atmosphere being absorbed by the oceans, could substantially alter marine ecosystems and have devastating impacts on the species within them [9].
The change in oceanic chemistry and temperature is proving to have a significant effect on our marine organisms, reducing their success in key behaviours such as feeding [10], predator/prey interactions, homing [11], and reproduction.The recent IPCC report 2022 [8] has shown that unprecedented and irreversible change, over hundreds and maybe thousands of years, has occurred in our climate, and without drastic action, the impacts of global warming will have long-reaching and severe impacts on the state of our oceanic environments.Water surface temperatures, by 2100, are predicted to increase 5-7 times more than compared to increases seen in the previous 50 years, with pH levels declining to 7.6 pH units by 2081-2100.The impacts of Ocean Acidification, caused by the decrease in pH levels, alter the chemical structure and function of molecules, impacting communication success [12].
The shore crab Carcinus maenas is a common inhabitant of various coastal habitats throughout Europe [13] and is used widely as a test organism in ecotoxicology [14,15].Although C. maenas is native to areas around Europe and North Africa, in recent decades, it has invaded North America, Australia, parts of South America, and South Africa [16].This crustacean is a successful invader due to its high tolerance to environmental perturbation [17] and is accustomed to environmental abiotic fluctuations due to its natural habitat, and many have made adaptations to counteract these stressors [18].Therefore, shore crabs are widely used to study the impacts of environmental stressors on an organism's physiology and the subsequent adaptations made to survive these extremes [16].Shore crabs, like most marine invertebrates, rely on chemicals to communicate and evaluate their environment as well as coordinate key behaviours like feeding, reproduction, and predator detection [19].
Behavioural studies on sensory systems generally use one cue in isolation in clean, static tanks.However, in the environment, animals are exposed simultaneously to a variety of signals, such as the presence of prey/food, potential predators, and competitors, as well as mating partners.The best-studied potentially conflicting signalling systems are predator-prey interactions, where a range of impacts on olfactory cue-driven behaviours have been detected [20].Since the chemical nature of many marine signals/cues is not elucidated yet, most studies on olfactory disruption also rely on unknown chemical cues from conditioned water or macerated food that cannot be quantified, making the interpretation of animal responses challenging.Interactions of competing signals, such as foraging for food vs. attraction to mating partners, also depend on the physiological state of the animals tested.In Carcinus maenas, this depends on the seasonality of the reproductive phase as well as social interactions [20].Evaluating the readiness of individual animals to respond to an olfactory cue is essential to fully understand the impacts of stressors upon animal behaviour, but few studies address such complexities in their methods or data interpretation.This complexity in the interpretation of animal behaviour and the lack of quantifiable cues have been major contributors to the significant repeatability problem of previous studies on fish olfaction and ocean acidification, leading to controversial discussions about the impacts of OA on animal behaviour [21][22][23].
In this study, we examined how lower pH may affect a shore crab's reaction to both natural and synthetic versions of the known female sex pheromone and food cues. We utilised flow-through Y-shaped olfactometers with simulated additions at current pH (8.15) and predicted future (7.6) pH values. Examining multiple sensory signals simultaneously enables comparison of responses to competing signals at controlled concentrations of known, quantifiable chemical cues. Using marked individuals allows the inclusion of animal characteristics such as sex, size, and weight as confounding variables to explain consequences for population dynamics resulting from pH change-driven shifts in behaviour. As such, we address here a gap in the recent olfactory disruption literature [12,24].
Materials and Methods
Over 300 Carcinus maenas were collected by hand from natural seawater ponds surrounding the University of Algarve Marine Station (Ramalhete, grid coordinates: 37.006767945043585, -7.96741479703442), so transport time was minimal and no casualties occurred. Male crabs were transferred into a large flow-through tank filled with water of pH 8.1, where they were kept under pH-controlled conditions for one week prior to experiments in 1.5 m × 1.5 m × 0.8 m (height) tanks holding 1500 L, and they were fed with defrosted mussels (Mytilus edulis). The culture conditions were selected to mimic those in the estuary at the field station. The Ramalhete Marine Station (CCMAR, Faro, Portugal) was equipped with a constantly measuring, direct CO2-control system adjusting pCO2. Two independent systems were used to control the two treatments: the control tank was kept at pH 8.1 ± 0.015 and the reduced pH tank at 7.6 ± 0.008. For a measured total alkalinity of 2500 µmol/kg SW, the pCO2 was calculated as 537 and 1922 µatm in the pH 8.1 and 7.6 water, respectively (CO2calc software v1.2.0). Seawater parameters were recorded daily at 2.30 p.m. (mean temperature: 20.16 °C ± 1.05 °C, mean salinity: 35.67 PSU ± 0.26 PSU, mean dissolved oxygen: 7.55 mg/L ± 0.16 mg/L; for details of methods, see [25,26]). The tanks contained tubes for the crabs to shelter in. The seawater was natural seawater pumped directly from the estuary at Ramalhete and cleaned via fluidised sand filters.
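The reported pCO2 values can be cross-checked from pH and total alkalinity with an open carbonate-chemistry package; the sketch below uses PyCO2SYS purely as a stand-in for the CO2calc tool used by the authors, so the argument names and the pH-scale setting are assumptions that should be verified against the installed version (PyCO2SYS defaults to the total pH scale, whereas the paper reports pH on the NBS scale).

```python
# Rough cross-check of the reported pCO2 values from total alkalinity and pH.
# PyCO2SYS is used here only as a stand-in for CO2calc v1.2.0; its default
# pH scale (total) differs from the NBS scale used in the paper, so the
# numbers will not match exactly.
import PyCO2SYS as pyco2

for ph in (8.1, 7.6):
    result = pyco2.sys(
        par1=2500,      # total alkalinity, umol/kg-SW
        par2=ph,        # pH
        par1_type=1,    # 1 = total alkalinity
        par2_type=3,    # 3 = pH
        salinity=35.67,
        temperature=20.16,
    )
    print(f"pH {ph}: pCO2 approx {float(result['pCO2']):.0f} uatm")
```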
Females were transferred into a smaller (500 L) flow-through tank with pH 8.1 water and were kept under control conditions.Males and females were tested towards the end of the Carcinus reproductive season (late October) when most, but not all males should still be sexually active, and both sexes may also respond to feeding cues [2].The carbonate chemistry of all water samples was determined from pH (measured with an Orion 8103SC pH electrode calibrated on the National Bureau of Standards (NBS) scale).The crabs were not fed throughout this experiment.
Sodium carboxymethylcellulose (medium viscosity, Sigma-Aldrich, Gillingham, U.K. C4888) was used to make gel with added mussel juice and/or synthetic pheromone/feeding cue compounds to achieve the test concentrations, as outlined below.Negative control gels were made with natural seawater, positive controls used crushed and 0.2 µm-filtered mussel juice, and two test gels used the synthetic feeding stimulant Glutathione 10 −4 M [27] and the sex pheromone UDP/UTP 10 −4 M at a ratio of 4:1 [28].All chemicals were obtained from Sigma-Aldrich.These gels were freeze-dried and stored in a freezer (−20 • C) until required for testing.The cue diffusion rate was calculated prior to experimentation to ensure equal distribution of stimulants (see Supplementary Materials).This testing was carried out in October 2019.
Experimental Design
Two identical Y-shaped flow-through olfactometer tanks were set up with running natural seawater entering each branch of the tank through split flow tubes at a flow rate of 1 L per minute (Figure 1). Each tank was filled with seawater (20.16 °C ± 1.05 °C) to a depth of 12 cm; one tank was filled with water measured at pH 8.1 and the other at pH 7.6, modelling current and predicted ocean pH conditions for the year 2100. The pH was measured continuously throughout the study using an Orion 8103SC pH metre calibrated using the National Bureau of Standards (NBS) scale. Additionally, CO2 in the header tanks was continuously measured using an IRGA analyser (WMA-4; PP Systems, Amesbury, MA, USA) with data downloaded every 15 days. Salinity (CO310 conductivity metre; VWR, Radnor, PA, USA), pH (Orion 8103SC pH metre; Thermo Scientific, Waltham, MA, USA), temperature (Roth digital thermometer; Hanna Instruments, Woonsocket, RI, USA), and dissolved oxygen (Symphony SB90M5, VWR, Lutterworth, UK, accuracy ±0.2 mg/L; ±2%) were regularly monitored in the experimental aquaria. The tanks were positioned close together, so temperature and light intensity remained the same for both conditions. The tanks were lined with black liners to minimise external visual disturbances, such as shadows, that may distress the crabs or affect results [29,30]. The tanks were filled with 2.5 cm of sediment taken directly from the banks of the estuary and thoroughly rinsed with seawater to mimic natural environmental conditions. Silicone tea strainers were used to hold the cellulose gels in place at one end of the tank, enabling constant flow over the cues for controlled distribution (see Figure 1). The tanks were positioned outside and covered from above with mesh to create shade. Both sides of the tank were shaded evenly so no direct sunlight would enter past the mesh.
A preliminary test was carried out to assess the cue diffusion rate and how long the odour lasted until it was entirely diffused and could no longer be detected. The results showed that odours took approximately 5 min to diffuse to the other end of the Y-shaped tank at the chosen flow rate and lasted for approximately 2 h before needing to be replaced (Table S1). In the main study, crabs (n = 40 per condition, both sexes) were randomly selected from a large storage tank (pH 8.1). Control bioassays used a synthetic cue (Reduced Glutathione = GSH, or female sex pheromone = UDP/UTP) gel in one arm and a negative (seawater control) or a positive (mussel juice) gel in the other. This bioassay procedure was repeated for each experimental condition: GSH vs. seawater control, pheromone vs. seawater control, GSH vs. mussel juice, pheromone vs. mussel juice, and pheromone vs. GSH. Measurements of the crabs' carapace width were taken using callipers (in cm) and recorded before they were placed into the tank. The size (CW, carapace width) of the Carcinus ranged from 1.5 to 8 cm, with crabs over 2 cm CW described as sexually mature in the Ria Formosa estuary [31]. The crabs were placed into a plastic tube in the tank and left for 2 min to allow them to acclimatise in the wake of the odours released from the gels. The tube was then removed, allowing the crabs free movement. The time taken in seconds from the initial reaction of the crabs (time to initiate rapid antennule flicking) to them reaching the gel was recorded, and cue choice and behavioural observations (Table 1) in response to the cue were also recorded. The crabs were monitored for five minutes; if no movement was observed, the crabs were removed and recorded as having no visible reaction. Antennule flicking was used as a behaviour indicating the detection of a feeding stimulant, as this has been commonly observed and reported in multiple decapod crustaceans [29]. This method was carried out by two observers in parallel, with one pH per tank, ensuring all other environmental conditions were identical for each bioassay (such as time of day, light, and weather). Data were recorded visually for each bioassay and entered into Excel, where they were analysed using a combination of Excel and the statistical software RStudio 4.0.2. The crabs were released after testing, as only natural odours had been used.
Table 1. Definitions of the behaviours recorded during the bioassays.
Wafting: This behaviour can be defined by a rapid back-and-forth movement created by the Carcinus maenas' mouth pieces.
Grabbing: This behaviour was when the Carcinus maenas physically grabbed the tea strainer that the odour was inside of with either claw.
Buried: This behaviour was recorded if the Carcinus maenas buried into the sediment, either at the start of the experiment or near a cue.
Non-visible: This behaviour was recorded if the Carcinus maenas didn't show any visible behaviours.
Detection: This behaviour was defined by the onset of the Carcinus maenas rapidly flicking their antennules.
Locating: This behaviour was defined by whether the Carcinus maenas made a decision and reached the end of one of the Y-shaped olfactometer arms.
Cradle cue: This behaviour was recorded when the Carcinus maenas showed cradling behaviour towards the cue.
Ran at cue: This behaviour was recorded if the Carcinus maenas reached the cue and then continued to run in a confused manner around it.
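The observations defined in Table 1, together with the trial covariates described above, could be organised per trial as in the following sketch; the column names are illustrative only and are not taken from the authors' data files.

```python
# Hypothetical recording schema for one olfactometer trial per row; written
# as a CSV header so the table can be read back for the analyses described below.
import csv

FIELDS = [
    "crab_id",            # marked individual
    "carapace_width_cm",  # measured with callipers before the trial
    "pH_treatment",       # 8.1 (control) or 7.6 (reduced)
    "cue_pair",           # e.g. pheromone vs. seawater control
    "arm_chosen",         # pheromone, control, or none within 5 min
    "detection_time_s",   # time to onset of rapid antennule flicking
    "location_time_s",    # time to reach the end of the chosen arm
    "behaviours",         # codes as defined in Table 1 (wafting, grabbing, ...)
]

with open("bioassay_trials.csv", "w", newline="") as fh:
    csv.writer(fh).writerow(FIELDS)
```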
To analyse the differences between the time variables in each of the pH test conditions, we used the statistical software RStudio. The distribution of the data was checked with the Shapiro-Wilk test and inspected using histograms. As the data were not normally distributed in some groups, non-parametric equivalents were used: location and detection times were analysed using the unpaired two-sample Wilcoxon test (Wilcoxon rank-sum test). For the comparison of the different behaviours in the two pH conditions, a generalised linear model (GLM) with a Poisson distribution was run, because the low counts in some categories made tests such as chi-squared unsuitable and because the observations are not fully independent, having been run in tandem, so that no censorship was introduced into data that were compared together.
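The analysis itself was run in R; for orientation only, a rough Python equivalent of the named tests is sketched below using scipy and statsmodels, with a hypothetical trials table whose column names follow the recording sketch above rather than the authors' actual data.

```python
# Python sketch of the analysis steps named above (the authors used RStudio):
# Shapiro-Wilk normality check, Wilcoxon rank-sum test on times, and a
# Poisson GLM on behaviour counts. Column names are hypothetical.
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("bioassay_trials.csv")  # hypothetical per-trial table

# Normality check of location times within each pH treatment.
for ph, grp in df.groupby("pH_treatment"):
    stat, p = stats.shapiro(grp["location_time_s"].dropna())
    print(f"Shapiro-Wilk, pH {ph}: W = {stat:.3f}, p = {p:.3f}")

# Unpaired two-sample Wilcoxon rank-sum (Mann-Whitney U) test on location times.
low = df.loc[df["pH_treatment"] == 7.6, "location_time_s"].dropna()
ctrl = df.loc[df["pH_treatment"] == 8.1, "location_time_s"].dropna()
print(stats.mannwhitneyu(low, ctrl, alternative="two-sided"))

# Poisson GLM comparing counts of each behaviour between pH treatments.
counts = (df.assign(behaviour=df["behaviours"].str.split(";"))
            .explode("behaviour")
            .groupby(["pH_treatment", "behaviour"]).size()
            .reset_index(name="n"))
glm = smf.glm("n ~ C(pH_treatment) * C(behaviour)", data=counts,
              family=sm.families.Poisson()).fit()
print(glm.summary())
```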
Cue Location Time
There was no significant effect of seawater pH (Wilcoxon rank-sum test with continuity correction, W = 88.5, p-value = 0.08844) (Figure 2A) on the time taken to locate the pheromone gel by the Carcinus maenas, as measured by the time taken to reach the end of the olfactometer arm after the crabs made a choice of odour (seawater or pheromone).There was no significant effect of seawater pH (Wilcoxon rank-sum test with continuity correction, W = 14, p-value = 0.5892) on the time taken to locate the control by C. maenas, as measured by the time taken to reach the end of the olfactometer arm after the choice of odour (seawater or pheromone), showing that pH does not influence locomotion ability.
Detection Time
There was no significant effect of seawater pH (Wilcoxon rank-sum test with continuity correction, W = 164.5, p-value = 0.3615) (Figure 2B) on the time taken to initiate rapid antennule flicking by Carcinus maenas that selected pheromone for the olfactometer choice (seawater or pheromone). There was no significant effect of seawater pH (Wilcoxon rank-sum test with continuity correction, W = 18, p-value = 1) (Figure 2B) on the time taken to initiate rapid antennule flicking by C. maenas that selected the control for the olfactometer choice (seawater or pheromone), showing that pH has no impact upon the detection of the cues. The overall impact of pH on the induction of rapid flicking, independent of the cue used, follows the same pattern and is presented in the Supplementary Materials (Figure S1).
The responses of male crabs towards the pheromone are longer in normal pH and close to being significant (p-value = 0.08844, Figure 2A), and the variance in Figure 2A is greater at pH 8.2 than in reduced pH conditions. As experiments were undertaken towards the end of the reproductive season, when not all male crabs respond to sex pheromones and larger, dominant males are more likely to respond to the female sex pheromone, we also analysed the data for a correlation between male size and the impact of pH levels. Figure 3A,B shows marked size-dependent trends, both in the time taken to initiate antennal flicking and in the time to locate a cue. While larger individuals under normal pH conditions take longer to detect and to locate the cue, this trend is reversed under low pH conditions.
The mating behaviour of shore crabs involves a cascade of individual behaviour elements, from detecting a cue (antennal flicking) and locomotion towards a cue to grabbing and attempted mating or guarding (for definitions used here, see Table 1). A Poisson GLM was performed to compare the behaviours within the behavioural assays (Supplementary Materials, Figure S2). The overall model term for seawater pH across these individual components of sexual mating behaviour exhibited by Carcinus maenas in response to pheromone exposure was not significant (Estimate: 1.610 × 10⁻¹⁵, Std. Error: 0.388, z-value: 0.000, p-value: 1.000; Supplementary Materials, Figure S2, Figure 4A). There was, however, a significant effect of seawater pH (Estimate: 1.099, Std. Error: 0.471, z-value: 2.331, p-value: 0.020) on the number of crabs exhibiting the full sexual mating behaviour, specifically the grabbing response, which is a key part of the mating process (Supplementary Materials, Figure S2). There was also a significant increase in the number of crabs exhibiting no visible behaviours at all in lower pH conditions (Estimate: 1.642, Std. Error: 0.446, z-value: 3.682, p-value < 0.001) (Supplementary Materials, Figure S2). There was no significant effect of seawater pH (Estimate: 0.511, Std. Error: 0.516, z-value: 0.989, p-value: 0.323) on the number of crabs exhibiting the run behaviour, so crabs were still exploring the Y-shaped olfactometer (Supplementary Materials, Figure S2). There was no significant effect of seawater pH (Estimate: 0.406, Std. Error: 0.527, z-value: 0.769, p-value: 0.442) on the number of crabs exhibiting the wafting behaviour, which is associated with detecting a sexual cue by creating currents fanning a response signal towards a sender (Table 1; Supplementary Materials, Figure S2). Figure 4B shows that when looking at the decision-making of the male crabs to either stay at the starting area, run to the pheromone arm, or select the control arm of the olfactometer, there is a clear preference for the pheromone over the control at both pH levels, albeit reduced at pH 7.6. At the same time, there is an increase in the number of males making no decision to run towards either chemosensory cue.
When analysing the behavioural data shown in Figure 4A in relation to the size of the males, the effects of lowered pH become more pronounced (Figure 5) in the group of males that fall into a size class that has been described as sexually active.
Discussion
This study demonstrates that reduced ocean pH alters the chemosensory behaviour of the shore crab C. maenas [32].Altering the detection and response to a chemosensory signal could have a multitude of potential reasons.These include the inability to detect the odour through receptor-ligand interaction disruption, as described for peptide cues [12], changes to the conformation of chemoreceptors [33], or alterations to signal transduction pathways such as GABAA receptor alteration shown in a variety of fishes [22].
The hypothesis that the chemoreceptors for signalling compounds are affected by the reduced pH was proposed by [3]. They suggested that the increased hydrogen ions (H+) might alter the charge distribution on the odour receptor's docking site in an animal's sensory organs. Though this is difficult to test directly, it would reduce the ligand-receptor interactions, which affects signal detection on the same scale as changes to the signal molecule would [12,28]. Altered receptor-signal interactions through both structural and charge distribution changes of the cue and the receptor were hypothesised as being responsible for altered signal detection [13]. Modelling of binding energies utilising known chemoreceptors enabled Schirrmacher et al. [33] to demonstrate changes to the conformation of chemoreceptors for a predator cue in hermit crabs.
For the detection of a chemical signal, the cue must be available in a bioactive form above a detection threshold level, which is known to be impacted by the protonation status of the cues [13]. Lowered pH leads to a higher abundance of the protonated form of a signalling molecule, here UDP/UTP, which then potentially impacts receptor-ligand interactions depending on the pKa of the compound. There was no significant difference between pH treatments for the control and the sex pheromone, suggesting that males can still detect pheromones in low pH at the same speed if they are delivered at a concentration of bioactive molecules above the detection threshold (Figure 2A). In fact, male crabs took slightly longer to reach the cue in the olfactometer at pH 8.2, showing that the physical ability to run towards a cue is not decreased in lowered pH conditions. The fact that pheromone-stimulated male crabs were able to reach the end of the arm slightly faster in low pH (Figure 2A) shows that, if the concentration of pheromone is high enough to be above the reaction threshold, a response is initiated regardless of pH. The data also suggest that there are no signs of either a lack of neural stimulation, as described for fish and modification of GABAA channels [22], or of metabolic depression caused by short-term low pH exposure. Equally, there was no significant increase in the time taken to exhibit antennule flicking by crabs tested in reduced pH compared to normal pH conditions, suggesting that low pH did not impair the crabs' ability to detect the odour (Figure 2B). At a very low pH of 6.6, impairments to olfactory behaviour, including antennular flicking and prey detection, have been found in hermit crabs [34]. However, these results differ from a study conducted on the freshwater crayfish, Cambarus bartoni, in which individuals showed a reduced rate of antennule flicking and failed to locate a food odour under low pH conditions [35].
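To make the protonation argument concrete, the following minimal sketch applies the Henderson-Hasselbalch relationship to show how a drop from pH 8.1 to 7.6 shifts the protonated fraction of a cue with a single ionisable group; the pKa used here is a placeholder, not a measured value for UDP or UTP.

```python
# Worked example of the protonation argument: fraction of a cue carrying the
# proton at seawater pH, via the Henderson-Hasselbalch relationship.
# The pKa below is a placeholder, not a measured value for UDP/UTP.
def protonated_fraction(pH: float, pKa: float) -> float:
    """Fraction of molecules in the protonated (acid) form at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

pKa = 7.7  # placeholder value; substitute the relevant literature pKa
for pH in (8.1, 7.6):
    frac = protonated_fraction(pH, pKa)
    print(f"pH {pH}: {100.0 * frac:.1f}% protonated")
```

Whether such a shift matters for behaviour then depends on whether the resulting concentration of the bioactive form stays above the detection threshold discussed in the following paragraph.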
Results from behavioural assays upon pheromone exposure depend on a diverse list of environmental, physiological, and social factors, making it difficult to quantify and compare studies [36]. This includes the narrow window of cue concentration over which a cue may shift from eliciting a response to falling below the detection threshold. Even when the bioavailability of a cue is reduced by only a small percentage, dramatic effects can be recorded if the concentration of the bioactive cue falls below the detection threshold [36]. Conversely, large changes in the bioavailability of the bioactive form of a cue may not result in any change in behavioural responses when concentrations remain above the response threshold.
The detection of pheromones at lowered pH in Carcinus maenas is similar to what has been found for predator detection [20], which is also independent of seawater pH. With shore crabs inhabiting coastal, often estuarine areas where pH conditions fluctuate significantly on a daily and seasonal level, the stability of key behaviours, especially towards potentially lethal threats, is beneficial [21]. Nevertheless, the interpretation of male attraction in olfactometers is not always straightforward, as the reasons why an animal runs towards a cue can vary. When exposed to sexual stimuli, the observer usually assumes that the attraction is based upon the sexual cue, but the attraction could also be based upon social interactions, gregariousness, or even cannibalism. Running fast may also not indicate attraction but could be escape behaviour, and altered pH along with altered odour cues could initiate a reaction of confusion, leading to crabs running away or hiding [32].
Interestingly, Figure 3 shows that the time to respond to a cue via increased antennal flicking and the time to locate it are size-specific, with larger, typically more sexually active males [32] taking longer to run towards the female cue than small males. Size, social hierarchies, and sexual maturity are also factors that could influence the results [36]. Perceived risk is also going to differ between larger and smaller crabs in a novel situation such as the bioassays. Smaller crabs, although having reached sexual maturity [31], do not respond as fast as larger, dominant male crabs, which are known to win fights and respond more quickly to sex pheromones [36]. Figure 3 also implies that large males select the olfactometer arm with the female pheromone. This relationship is completely reversed at decreased pH, supporting the hypothesis that decreased pH alters male mating attraction. This supports the interpretation that in sexually active males, reduced detection of female pheromones at reduced pH (see Figures S1 and S2 for responses of those males that selected the pheromone-baited arm of the Y-shaped olfactometer) could initiate a stress-type hypersensitive response when the sensory system is impaired [32]. Ovelheiro et al. [31] showed that the size at which male crabs reach sexual maturity differs between populations in Portugal and even more so on a wider geographical scale. Defining a size at which shore crabs are more sensitive to seawater pH levels would therefore be too speculative and will require repeats for each population studied. Environmental adaptations like this are a major limiting factor in ecological research and have recently been highlighted as also affecting the interpretation of behavioural assays [36].
Altered behavioural responses to cues could also reflect physical damage to chemoreceptors and sensory organs, as it has been shown that calcified animals may experience dissolution of their exoskeletons under such conditions [37][38][39] and may experience physiological stress, even though some responses may improve under reduced pH conditions [23,40,41]. However, the antennules of hermit crabs did not reveal any visible damage under an electron microscope after a five-day exposure to reduced-pH seawater [42]. Similarly, Munday et al. [43] found no evidence of visible damage to the olfactory organs of their fish larvae using electron microscopy. In contrast, Velez et al. [44] observed organisms developing mucus on the epithelia after exposure to low pH, presumably as a mechanism to protect themselves from its effects.
Whilst attraction towards an odour cue, such as the female sex pheromone, is a major element of the reproductive behaviour of many invertebrates and can be controlled by odour trails, Figure 4 highlights that the key step of forming a mating pair, by grabbing the female or, in this case, the pheromone source, and attempting the mating stance, is impaired significantly by reduced pH, especially in large, sexually active Carcinus males (Figure 5). Pair formation, the mating stance, and post-mating guarding behaviour are critical to ensure successful reproduction in many brachyuran crustaceans, and altered or reduced success in low pH conditions is therefore a potentially significant impact of the pH drop associated with climate change. Such pH effects will be more pronounced in future oceans [45], particularly for species that live in fluctuating pH conditions near the shore, such as estuarine environments where pH decreases significantly during the night and is lower in autumn/winter. Our data may also help explain why some intertidal species, such as C. maenas, reproduce during the day in summer, when pH conditions are at their highest levels.
The fact that different elements of the complex reproductive behaviour of shore crabs are affected differently by pH conditions also fits the report by Richardson et al. [20], based on experiments undertaken in the same location as the current study, that predator detection in Carcinus maenas is not altered by pH while detection of and response to food odours are. Such complexity in the impacts of climate change is further increased when one considers the role of temperature and alkalinity in olfactory responses and their consequences for animal behaviour, as shown in a meta-analysis by Clements and Hunt [46]. Understanding olfactory disruption through climate change is therefore extremely difficult, as highlighted by the recent controversial discussion [22,23], and comparing studies on different species makes little sense ecologically [36]. To improve our understanding, there is a clear need to standardise our methods for this type of research and to develop predictive models based upon the use of identified, quantifiable signalling compounds [36].
Overall, the percentage of males responding to the female sex pheromone was lower in this study than reported in previous studies [2,12]. This could be due to factors such as the pH, the stability of the cues, the smaller size of the males used [32], and reproductive seasonality. The shore crab mating season ranges between the months of April and October [32], and male shore crabs in the UK show the highest responses to female sex pheromones in July [12]. We conducted our research in October, towards the end of the crabs' reproductive season, when responsiveness to sex pheromones decreases significantly [26,27]. The cues were released from carboxycellulose gels that were frozen and then freeze-dried; freeze-drying did not reduce the efficiency of pheromone cues in hermit crabs [47,48].
Peptide signalling cues are susceptible to protonation in low pH conditions, altering their overall charge [12]. The peptide forms present today differ significantly from the protonated signalling peptides predicted for future oceans, including changes in molecular structure and electrostatic properties, which are crucial for receptor binding. That study [12] also used shore crabs, and its results suggest impaired functionality of the signalling peptides at low pH. The change of charge, structure, and consequently function of signalling molecules presents one possible mechanism to explain altered behaviour under future oceanic pH conditions [12]. The sex pheromone cue is a combination of two structurally similar, related nucleotides, UDP and UTP, and the structure of these two molecules changes shape and protonation under reduced pH, albeit only by a small margin of less than 5% (Roggatz, pers. comm.). This is unlikely to render them significantly more difficult to detect for most shore crabs, but it slightly reduces the bioavailability of the pheromone in the correct, bioactive form. Combined with the potential impacts of pH upon the structure and charge distribution at the chemoreceptor [28,33], this may leave some individual males impaired in their response to the pheromones.
For the duration of the study, male crabs were kept together in a large communal tank, meaning fights would occur, and the shore crab is known to change its behaviour in relation to recent social interactions [23]. Initial reaction times varied widely among crabs, ranging from 0.01 s up to 60 s. These results could be explained in part by social status and hierarchy: crabs of higher social status, typically larger crabs, win most fights and therefore show a stronger response towards the female pheromone cue than smaller, 'submissive' crabs with lower social status. Jiménez-Morales et al. [49] found that crabs remember their status after a fight, with the dominant and submissive male recognising their hierarchy status. Losers and winners will occur in all our tests; this randomisation keeps the tests unbiased and may help explain the large variability in reaction times.
Kim et al. [34] found individual variation in the speed of antennule flicking and the speed of prey detection, with crabs exposed to low pH treatments displaying higher individual variation than in the control pH treatment, suggesting that phenotypic diversity could promote adaptation to future ocean acidification. We similarly recorded a wider range of initial reaction times across the individuals tested in low pH treatments, ranging from 1 to 70 s, supporting the suggestion that individual variation may relate to phenotypic diversity as a sign of adaptation to future ocean pH conditions. Individual variation in our study may also have been caused by additional factors such as social hierarchies (winner/loser effects, size), which can influence behavioural responses towards the cues [20,36].
Research related to cue alteration in the ocean, particularly animal responses to such changes, has shown that the cues themselves are directly influenced by pH [12,50]. Natural chemical cues are being modified by humans, and novel anthropogenic cues are being introduced into the ocean, both of which can directly and indirectly alter the persistence, composition, and transmission of natural cues. Natural cues can be either stable or unstable, whereas with synthetic cues we can choose whether they are stable or not. When considering synthetic cues and their future uses, it is therefore important to test their effectiveness across varying pH levels, including the expected low pH levels, so that the cue remains effective amidst pH changes.
Limited research has been completed using gels infused with pheromone as a species-specific way to manage invasive species [51]. Although food cues may elicit a positive response in crabs, such odours would not be specific to C. maenas, as many marine organisms are attracted to similar food odours, predominantly amino acids and peptides. C. maenas is one of the most prolific marine invasive species, impacting commercial bivalve cultures as well as threatening the stability of ecosystems [52]. Our research provides insight into the complexity of utilising sex pheromones as a potential integrated pest management strategy in a changing world and highlights the need for further research into the complex environmental and social behaviours of crustaceans that govern their interactions.
Conclusions
Low pH alters the responses to female sex pheromones in male shore crabs, with different elements of reproductive behaviour affected differently. Responses via antennular flicking and attraction to the source became more rapid in lowered pH, especially in large, sexually active males. This could be explained as hyperosmia due to olfactory stress in changed seawater chemistry, leading to heightened sensitivity to female odour. However, males attracted to the source of the cue showed substantially decreased pheromone-induced mating activity, highlighting a potentially significant disruption of mating success.
The data show the complexity of how olfaction is impacted by pH conditions. This is especially important at the end of the spawning season, when crabs' responses to sexual cues are already reduced, as in our study. An individual's size, physiology, moult stage, and potential social status all affect their responses. This highlights that accurately predicting the impacts of climate change-associated olfactory disruption will require a much deeper understanding of such variables. There is a need to standardise our methods for this type of research to develop predictive models based on the use of identified, quantifiable signalling compounds.
Figure 1. This graphic shows the setup of the Y-shaped olfactometer (length 1.1 m, base width 28 cm, arms 37 cm × 24 cm), with flow entering at the tips of the Y (yellow arrows) and flow leaving at the base of the olfactometer. Cue locations are indicated by green/red dots, and the crab movement options are indicated by the blue arrows. The crab is placed at the base before it is released. The crab in the figure is not to scale.
Figure 2. (A) Decision-making vs. responding at altered pH levels: time taken by male Carcinus maenas to locate the chosen cue by reaching the end of the Y-shaped olfactometer arm for either odour, control or pheromone. The boxplots depict the median with the first and third quartiles of the distribution; whiskers extend to 1.5 times the interquartile range, and data extending beyond this range are defined as outliers and plotted individually. (B) Time taken to initiate rapid antennae flicking by male C. maenas in response to chemical exposure (control and pheromone).
Figure 3. (A) Correlation between the time male crabs took to initiate antennal flicking as a measure of detection of a chemosensory cue, the female sex pheromone (UDP:UTP 4:1); (B) correlation between the time male crabs took to locate the cue, the female sex pheromone (UDP:UTP 4:1).
Figure 4. (A) Percentages of visual responses of different behaviours exhibited by male crabs exposed to female sex pheromone in a Y-shaped olfactometer. (B) The percentage of male crabs' initial response direction: not responding, selecting the pheromone-baited arm, or selecting the control-baited arm of the Y-shaped olfactometer at current and future ocean pH levels. N = 74.
Figure 5. Percentages of visual responses of different behaviours exhibited by large and small male crabs exposed to female sex pheromone in a Y-shaped olfactometer at current and future ocean pH levels. N = 74.
Table 1. Behaviours exhibited by C. maenas in the study.
Online musicking for humanity: the role of imagined listening and the moral economies of music sharing on social media
Abstract Music sharing on social media increasingly involves ‘imagined listening’, a form of sociality based on how we think that others listen to music (as well as on our own imagining of sounds) and typically mediated by the exchange of visual prompts, such as the thumbnail images associated with a particular streaming link or recording. Drawing on ethnographic research conducted online and offline with Spanish migrants in London, I show how practices of music sharing based on imagined listening articulate specific moral economies. In these economies, users imbue the sharing of music with positive value, as something that contributes to human flourishing and balances the negative aspects of social media and the world. I also consider how users reckon with the algorithmic manipulations of social media platforms and the fleeting forms of user engagement characteristic of an online world in which there is more music than could ever be heard.
Introduction
Music scholars have dedicated considerable attention to how music is created and redistributed in online platforms but have rarely addressed how and why ordinary users of social media exchange music online. This article, which is based on an ethnographic study of music sharing, puts forward the concept of 'imagined listening' as a new analytical tool to explain the social relationships that arise from people's interactions with online music and other musical media. Considering the interplay of users' understandings of imagined audiences with their ideas of imagined listening, I open new avenues for research on the social value of music and the moral economies of music sharing online.
Since the advent of peer-to-peer platforms at the turn of the century, sharing and exchanging music files has propelled the expansion of online networked culture and social media platforms (boyd and Ellison, 2007). Digital distribution technologies have further increased what Kassabian (2013) calls the 'ubiquity' of music, giving an increasingly important social role to the redistribution of sound over its recording (Jones, 2000) and diversifying the modes of distribution and consumption of music media via streaming platforms (Nowak, 2016). In parallel to this process, social media platforms have also played an increasing part in creating and maintaining sociality (Miller et al., 2016), wherein music has maintained its central role for daily communicative practices. Private listening and public performance have become intertwined with music activities on social media such as live streaming, which are directed to a fluctuating imagined audience (Litt and Hargittai, 2016). Before the internet's coming of age, popular music studies for a long time explored how music practices allow people to perform for others the dramatised ritual of placing themselves in a network of relationships through musical choices (Frith, 1996). Recent scholarship addressing internet-bound practices, such as curating the musical contents of personal profiles (Durham, 2018) or treating them as computer-mediated replacements of the living-room bookshelf (Wikström, 2013), has updated this interpretation, but without a clear focus on social relationships. Fandom studies that give more attention to social media (Duffett, 2013; Jenkins, 2006; Jenkins et al., 2013) contextualise studies of music communities online, but tend to neglect the musical practices of casual fans. Beyond the promotional uses of social media by artists (Mjøs, 2012; Suhr, 2012; Harper, 2019) or platform-centric studies (Burgess and Green, 2009; Bonini, 2017; Durham and Born, 2022), the culture-making dynamics of music circulation online remain an under-researched area. Studies of music creativity in online platforms (Lysloff, 2003; Cheng, 2012) rarely address the role of those outputs once they are publicly distributed, with notable exceptions in areas such as politics (Green, 2020). When music scholars have tackled music's so-called virality and its associated articulations of locality and gender (Howard, 2015; Stock, 2016; Waugh, 2017; Harper, 2020), they have focused on the music videos rather than the users, and without exploring notions of collective flourishing online.
This article addresses this gap in the existing scholarship by shedding light on why people post (Miller et al., 2016) music on social media, and in which ways music matters (Hesmondhalgh, 2013) in online sociality, contributing in particular an ethnographic perspective on music audiences and their ideas of collective flourishing and moral civility. It expands the scope of recent scholarship that argues against the 'rhetoric of digital dematerialisation' (Devine, 2015) and contributes to foregrounding the materiality of digital music experiences (Jones, 2018) by advancing an anthropological theory of music as an online (im)material object of exchange within a social moral economy. I use the adjective (im)material because on the one hand, online music is indeed storable and exchangeable, and therefore retains some sort of tangible materiality as files (Horst and Miller, 2012), particularly as objects stored in physical mass data centres that require manual retrieval via clicks in an interface. On the other hand, the circulation of digital music is precisely based on its online immateriality, and on the immaterial aspects of music listening, such as music cultures and the relationships they create. While a number of studies on filesharing (Durham and Born, 2022;Giesler, 2006;Lysloff, 2003) address the gift-like economies and values that emerge in platforms specifically designed to share music files, this article focuses on the advent of related moral economies on generic social media platforms, which are not particularly (or not only) conceived for music circulation and host a wider range of casual and nonexpert users. Here I apply the concept of moral economy as it has been operationalised by Fassin (2012), understood as 'the production, circulation, and use of values and sentiments in the social space around a given social issue' (Fassin, 2012, p. 441), in this case referring to the issue of online music distribution. Following Fassin's anthropological approach I consider both the political aspects of these norms and obligations, and the more specifically philosophical networks of values and affects that underlie human activities, particularly when moral economies arise within areas of ethical ambivalence, such as free music distribution in market economies and within privately owned, but cost-free social media platforms.
The first section of this article explains the relevance of imagined audiences in musical practices on social media. It foregrounds fieldwork evidence showing that in music sharing, imagined audiences are often imagined communities of listeners. In addition, the algorithmic technologies of social media and streaming platforms influence how users engage with music. Platforms create the impression of on-going activity (a kind of simulated liveness) and encourage short-span practices linked to musical 'discoveries', but my research participants worked with or around these affordances in the pursuit of their own social and musical goals. The second section outlines how users' awareness of these affordances of social media in the context of the musical abundance of online spaces foster 'cultures of circulation' (Lee and LiPuma, 2002) based on visual references to music and particular forms of 'imagined listening'. Imagined listening here includes thinking of and remembering a piece of music and imagining an audience for its re-distribution, as well as how that audience will listen to and ultimately benefit from it. Imagined listening is then a form of online sociality based on how we think that others listen to music and on our own imagining and re-evocation of those sounds, mediated by the engagement with, and exchange and management of, visual prompts (for instance YouTube and Spotify thumbnails or record iconography) in an online interface. In the last section I demonstrate how these practices of imagined listening ultimately shape and maintain the moral economies of music sharing online, and how they are linked to understandings of civility and musical citizenship. Participants consider the exchanges of music that they frame as solidary, educational, neighbourly and gift-like as capable of transmitting abstract concepts such as happiness or beauty, and therefore practices for the common good of humanity.
The findings discussed in this article are developed from the initial interpretations outlined in my doctoral research (Campos Valverde, 2019). The thesis discusses a much larger research project developed during 2016-2019 that includes an analysis of music sharing on social media from the perspectives of cultural and personal identity, transnational family relationships, assemblage theory, politics, safe spaces and ritual, in addition to the aspects addressed here. My contribution is inherently interdisciplinary, combining methods from digital and social media anthropology (Horst and Miller, 2012;Hargittai and Sandvig, 2015;Hine, 2015;Quinton and Reynolds, 2018) with traditional ethnographic engagement, as well as theoretical contributions from popular music, cultural studies and media studies. Research insights in this article stem from extensive online and offline fieldwork and participant observation among Spanish migrants in London, 1 and participants' insights from interviews about their musicking practices on social media. For the purposes of this paper, musicking is understood as the set of music-related practices that participants undertake online, such as posting, sharing, commenting, rating and thinking about music. Most of the ethnographic evidence deals with music files stored on YouTube and Spotify and subsequently shared to Facebook, Twitter, Instagram and to a lesser extent, WhatsApp and Telegram. Although initially I collected data merely from observing participants on social media, at a second stage of the research I used internet studies literature to think through concepts that summarised some of the dynamics that I was observing. However, this form of armchair anthropology also proved to be insufficient. I complemented this observation and interpretation work with the ethnographic face-to-face part of the study, with a view to understanding participants' musicking practices beyond my distanced theorisations. Conversations at this stage, and later, recorded interviews, yielded important contributions. Particularly useful was a specific face-to-face interview technique that I employed, printing screen-captures of participants' online activity and asking them to discuss their reason for sharing the music shown. Through the use of this technique, I attempted to encourage participants to reflect on the reasons for these postings in depth, and to think of their own posts from a third-person point of view, in contrast with interview techniques that favour participant-led tours of their overall social media activity (cf. Why We Post, 2016). To protect participants' anonymity, their contributions are cited here with pseudonyms, and the original text from their posts is not shown but summarised. The evidence presented in the following sections shows this mixed-methods approach, weaving fieldwork notes, social media screen-captures (including interface images) and interview quotes, with theoretical contributions. The research design for this project was also open-ended from the start, and I did not select a specific music genre or scene. Instead, I collected fieldwork data from diverse music cultures to infer wider social practices and understandings: the insights presented here on the social dynamics and the meaning of music circulation online are applicable in multiple contexts. 
That said, conducting immersive ethnography with Spanish migrants allowed particular insights into musically mediated relationships when these are inherently cosmopolitan, and into online sociality as practised citizenship. In addition, the mixed method employed to recruit participants also influenced the material collected. I promoted my study in person with other Spaniards and by handing out flyers, which I also put in strategic locations around London where Spaniards regularly gathered in large numbers, particularly nightlife venues. However, most participants recruited appeared to have found out about the study via the posts that I shared on Facebook. This strategy seemed to involuntarily recruit more women participants. 2 In this way, both the methodology I employed and the community investigated played a significant part in developing the broadly applicable theory of imagined listening outlined here. The online musicking practices of migrants therefore illustrate how music media is a cultural object of exchange for the wider social media user community.
1 This included participants born in Venezuela or Colombia, but who had lived in Spain for significant periods of their lives before moving to the UK.
2 Whether this was the effect of the algorithm showing my posts to similar profiles to myself or whether gender played a role in building trust for participants remains uncertain.
Imagined audiences and algorithmic mediation
Initially, the participant observation phase of my research appeared to demonstrate the relevance of cultural studies to understand online musicking. Looking at the participants' social media profiles, it seemed that interpreting musical activity on these platforms primarily as a social theatre, or as a performance to articulate cultural identity and accrue cultural capital, could explain why people share music online. However, this subcultural perspective soon proved to be limited in a context of migration, owing to the ways participants understood their audiences and managed them online. During our first conversation, participant Sue made clear that her Facebook posts about British metal were addressed to her family back home and had no local objective or audience. Although she admitted that her music sharing was a way to transmit her passion for metal to her children and keep them 'on the right musical path', she did not intend to articulate these relationships beyond what she perceived as a close albeit physically distant social circle. Even when she posted about attending a concert or music event, this was still directed to her family, and not to other attendees or fans. Other participants had similar understandings of the audience that aimed to generate the illusion of family togetherness, hardly fitting the cultural studies approach mentioned earlier. Sandra shared hard rock songs and concert videos mostly thinking about her sisters in Spain, explicitly stating that she had lost hope in making friends locally via shared music taste. Even for participants that were musicians, accumulating followers or attention did not come up as a crucial motivation for sharing. As a singer, participant Anabel was more concerned about showing her knowledge of soul, gospel and jazz among her contacts in Spain than to promoters or musicians in London. Even participants who were more focused on their UK social lives and local taste-making (Cynthia, Daniel or Javier) were primarily concerned with recreating a Spanish microcosmos in London loosely aggregated around music, rather than using this cultural capital to climb up any social ladder in the UK. Thus, in my case study, users appeared to address audiences or social groups strongly defined by previous personal ties and that did not fit ideas of local, class-bound subculture. The liminal character of migrant lives questions the fitness of typical cultural studies' approaches in contexts of economic deprivation, where support from and maintenance of firmly established and familiar relationships may become safer social investments than the often-unattainable expansion of local ties or capital through music taste. Indeed, fieldwork showed that for Spanish migrants, 'achievements of wealth and status' (and in this case study, displays of identity or music knowledge) 'are hollow unless they can display them before an audience living elsewhere, in the authentic heartland of their imagined collectivity' (Werbner, 2002, p. 10, my emphasis). But more than arguing in favour of the distinctiveness of migrants' online practices of sharing, these initial observations indicate that researching migrants' online musicking highlights that social media users often do not address their local relationships when they are online.
Although this specific group of participants emphasised these different understandings of intended audiences back home, 'imagined audiences' (Litt and Hargittai, 2016) are present in most communication in the online mediascape, not just between migrants. Since social media platforms only supply limited resources to users to manage their reach, these musicking activities are directed towards an imagined audience, as the real audience cannot be known. These imagined audiences can be targeted, for instance a specific person or group of people with whom the user has a previous relationship. They can also be abstract, understood as mental conceptualisations of users of a given platform or online audiences in general (Litt and Hargittai, 2016). In this second definition of the concept, imagined audiences in an online context are similar to the imagined communities described by Benedict Anderson (1991), as their existence is largely based on the mental imagining and collective sense of belonging of their members. Indeed, participants confirmed that in social media the audience can be redefined by active users at any one time, depending on who music is directed to, or received by. Jasmin, another participant, expressed well this idea that the audience is imagined by the person posting something, to the extent that the receiver may not have access to the content, or that the target audience can include multiple groups: that person does not have to [necessarily] be on my Facebook. For me it is important to post it because that's how I feel in that moment. (. . .) That's why I post many songs. (. . .) Also because I find the music interesting, so that my friends can listen to it. Jasmin October 2017. 3 As Jasmin's statement hints, imagined audiences on social media are then mental constructs and systems of social understandings, assembled through the online practices of music circulation themselves, more than established or identified networks of communication. In this sense, imagined music audiences are also imagined communities of listeners with whom users want to share music. Both Daniel and Cynthia evoked this idea; they reported sharing music to communicate with these imagined collectivities, which appeared loosely but not exclusively to be made up of their social media contacts: Sandra contributes to re-creating and expanding that audience herself. In other words, algorithms constantly re-create audiences, but so do people.
In addition, imagined audiences are always in flux because algorithms encourage short, asynchronous interactions around music media rather than long-term engagements with pieces or albums. These temporary or even ephemeral engagements with music are also directed to varied music genres, promoting classifications by mood and narratives of discovery (Morris and Powers, 2015), to the extent that only specific timeframes of engagement with music and audience types are algorithmically possible (Hills, 2018). However, as participants mentioned, once again people may further increase these momentary practices through their own activities to cope with algorithmic inadequacy and to create a greater experience of agency, ownership and flexibility in listening practices (cf. Hagen, 2015, 2016). For instance, Fernando highlighted how Spotify's algorithm can discover hardly any Western classical music that is new to him. Because he knows which pieces, interpretations and recordings are his favourites, Spotify's attempts to turn him into a temporary fan of a particular piece are unsuccessful. At the same time, Fernando admits that he has to limit his interaction with Spotify in order to avoid further feeding the inadequate algorithm. He only listens to particular albums and ignores platform playlists, thus becoming himself a momentary online listener. These self-imposed fleeting practices would also help participants avoid 'filter bubbles' (Pariser, 2012) that would further narrow their listening habits, although they did not articulate these biases in such terms. Javier, a participant with an eclectic music taste, expressed frustration with the limitations of recommendation algorithms on Spotify: Because the first songs that I listened to [on Spotify] were metal or rock, now everything that it recommends me is like that, and I don't always feel like listening to that genre. That's why I only use it every once in a while. Javier, December 2017 These conscious forms of self-limitation driven by algorithmic technologies are relevant to this discussion because they shape how participants think about the audience for their own posts and the music that they share. If one's own engagement with music online is sporadic or temporary, it is safe to assume that others will act the same. If I know which music I like, so do others. Thus, while both algorithmic and human practices may contribute to an idea of non-stop music liveness, where new audiences are incessantly re-created by people and machines, users are conscious that this is not the case, and that people engage with music online within certain limitations. Cynthia, who can post as many as 10 songs per day at times, and who is teased for sharing too much music by her friends, accepts these patterns of temporary engagement: It has happened to me, that from people that hadn't posted anything [on Facebook] or commented [on my posts] for a while, suddenly one day I log in and I see 20 likes to 20 different posts. Cynthia, January 2018 More importantly, the human and machine dynamics outlined in this section mean that momentary forms of musicking and imagined audiences are normalised aspects of musical engagement in this mediascape. In light of this I argue that music circulates on social media because users imagine that there is an audience, as this is one of the basic principles of liveness in diachronic online communication.
Specifically, an imagined audience that interacts with music online within the limitations of algorithmic and human liveness, discovery culture and fleeting engagement.
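As a purely illustrative aside, and not a description of Spotify's actual recommender, the toy sketch below shows how a naive content-based system that profiles a listener by the genre tags of their first plays keeps surfacing those same genres, the narrowing that Javier describes above. All track names and genre tags are hypothetical.

```python
# Toy content-based recommender: the user profile is a tally of genre tags
# from previously played tracks, and candidates are ranked by how many of
# their tags the profile already contains. Purely illustrative.
from collections import Counter

# Hypothetical catalogue of tracks and genre tags (not real data)
CATALOGUE = {
    "Track A": {"metal"},
    "Track B": {"metal", "rock"},
    "Track C": {"rock"},
    "Track D": {"flamenco"},
    "Track E": {"jazz"},
    "Track F": {"classical"},
    "Track G": {"metal"},
}

def recommend(history: list[str], k: int = 3) -> list[str]:
    """Rank unplayed tracks by overlap with the genres already played."""
    profile = Counter(tag for track in history for tag in CATALOGUE[track])
    candidates = [t for t in CATALOGUE if t not in history]
    return sorted(candidates,
                  key=lambda t: sum(profile[tag] for tag in CATALOGUE[t]),
                  reverse=True)[:k]

# A listener whose first plays were metal/rock keeps getting metal/rock back:
print(recommend(["Track A", "Track B"]))  # metal and rock tracks rank first
```

Real streaming recommenders are far more elaborate, but even this minimal feedback loop reproduces the 'everything it recommends me is like that' effect reported by participants.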
Ubiquitous and silent music media: imagined listening in the 2.0 mediascape
Worthy of consideration here is how momentary musicking practices, imagined audiences and algorithmic liveness are also both cause and consequence of the ubiquitous presence of music online (Kassabian, 2013; Fleischer, 2015; Johansson et al., 2018), and how they foster specific new forms of listening. Participants accessed music and other music-related media easily and recirculated it on their profiles, without any cost beyond an internet connection or smartphone contract, thanks to 'ubiquitous computing' (Kassabian, 2013, p. 1; Mazierska et al., 2019; Prior, 2015). In my case study, sharing music happened mostly from streaming platforms to social media profiles on smartphones, without requiring specific locations or listening habits. 5 Among my participants this ubiquity of online music gave rise to understandings of music as a continuous, ongoing stream like radio or domestic utilities (cf. Johansson et al., 2018; Negus, 2016). However, here I argue that these understandings of music as a ubiquitous utility foster contradictory musicking practices. On one hand, music is socially relevant in online communication because it is always available, easy to use, and textually rich. On the other, its ubiquitous presence is something taken for granted by users, so actual engagement with music is not a priority. In fact, the ubiquity and availability of music media favours its taken-for-grantedness. Among this particular group of participants, users do not think of music in terms of commodity precisely because it is almost always accessible. Sandra said: To listen to music, I don't download anything anymore. I listen to everything on streaming. (. . .) One day the internet will end, and I will kill myself (laughs), because I won't have any music anywhere (. . .) Sometimes there is a video on YouTube that you really like, and you think: 'I should download this.' But you don't. (. . .) You think: 'they won't take it down' but then they do! (laughs). (. . .) You take it for granted. Sandra, November 2017 As Sandra indicates above, participants considered storing music to have become a time-consuming and resource-heavy luxury in this mediascape of ubiquity, so streaming was for them a dynamic online archive, even if one not fully under their control. Sue, for instance, explained that because she lost all her records in her divorce, she cannot ensure that her children inherit a good material music library without considerable investment, and thus posting a song for them every evening somehow replaced that family archive.
More importantly, this understanding of music's ubiquity and taken-for-grantedness has the potential to dissociate listening from the actual moment when a song or music video is first encountered online. Participants admitted their own struggle to keep up with music releases, friends' and platforms' recommendations, and other musical paratexts, and they were aware that their own posts could get lost in the ever-changing algorithmic maze. Cynthia's friends often mentioned their inability to listen to all she shared owing to her excessive music posting, and Sandra's used similar language to describe her sharing of information about Pearl Jam. Additionally, in certain contexts the face-to-face recommendations of friends and family may carry an aura of authenticity and thus be more appreciated than online recommendations, as Johansson et al. (2018) conclude. Cynthia explained that her friends sometimes recommended songs to her that she had already posted but they had not seen, her posts being ignored. However, recommendations that Cynthia made to her friends in person were taken seriously. Consequently, participants were empathetic to how others managed their time and attention to deal with the sheer volume of music recommendations, and were aware of the difficulties experienced by imagined audiences in seeing and engaging with their music posts. Considering this evidence on ubiquity and taken-for-grantedness, I argue that music is so abundant in online interaction that, paradoxically, most of its circulation is silent. Playlists remain unheard and music links are not clicked, because users are unable to listen to all of the music that they are exposed to by friends, family, and algorithmic recommendations: In general people do not react much to songs (. . .). Maybe most of them don't even listen to them. But that's not only for me. I have seen that for other people that share music; they almost never have reactions. I think that on Facebook people are just scrolling down all the time and when they see something that requires stopping and listening, they don't even check it. (. . .) When I share something on Facebook I know that people are not going to listen to it. I give people the chance, but I know that they are not going to listen to it. Javier, December 2017 If someone wants to listen to it and likes it, fine. If not, so be it. (. . .) because everybody has their [Twitter] timelines full [of information], so the probability [of someone listening] is low. Sandra, November 2017 I try not to be annoying with music, (. . .) because just as I usually do not click on the links from other people, I understand that they don't do so with mine. (. . .) for instance, if I post five links, maybe they only click on one. The rest, they either remember them [the songs], or they don't care, or they are in a context in which they cannot listen to them. So, I don't share them with any intention, because I put myself in their place. Teresa, October 2017 From this evidence it follows that listening does not necessarily happen when someone sees a music post or tweet, and that listening is somehow a secondary motivation to share music with others as there is a mutual understanding of this ubiquity of music online. The concept of silent music thus makes perfect social sense for these participants in a technological context of abundance. 
It is not too much of a leap to say that thinking about sharing music, or testing whether it is worth sharing, as Javier and Teresa make explicit here, is in itself a key driver of this practice.
However, fieldwork confirmed that just seeing the name of a song or a thumbnail preview may be sufficient to understand the reference, message, or mood intended, and even to imagine the piece of music being circulated. To an extent, music circulates as a visual object that evokes sound. So, even unheard, these visual prompts help to articulate a message. Statements from participants thus demonstrated that the semiotic capacity of music media is indeed a result of its visual and sonic aspects, as Goodwin (1992) points out, but in social media this meaning making may happen through practices where music may not be listened to at all. Sandra's references to imagination and singing to oneself in one's mind reveal the crucial aspect of exchanging music on social media (in comparison for instance with peer-to-peer tools): the visual interface enables users to experience music as a product of memory or imagination, more than as a primarily acoustic feeling. Indeed, if archival practices and 'tissues of quotations' (Barthes, cited in Olson, 2008) are the foundational and ubiquitous forms of presenting and producing music online (for instance on Spotify and YouTube), the circulation of music as a visual object evinces its character as a reference to stored sound. This use of music media as a reference to known sounds goes beyond practices of selective inattention in which music may be used as background noise. In contrast with musicking practices of the 1990s that generated a sort of split attention or double musicking (such as bars that played music videos on screens while playing a different song over the PA system), on social media there may be only one sound evoked at a given time, a video thumbnail or a gif as in Sandra's example, but one unheard nonetheless. Ubiquitous music in offline environments generated a 'ubiquitous mode of listening' (Kassabian, 2013, p. 10) related to the attention economy of shopping malls (Sterne, 1997) or bars. Here I argue that ubiquitous music in visual online environments (characterised by imagined audiences, mediatised liveness, algorithmic mediation and momentary musicking) generates imagined listening (and not only lack of listening), which is related to the attention economy of social media spaces.
Imagined listening, then, is the emerging mode of listening characteristic of online musicking, and the tacit cultural norm that governs music sharing and circulation online. Imagined listening here is understood as a form of online musicking and sociality based on how we think that others listen to music, and on our own imagining of those sounds, mediated by the exchange of visual prompts in an online interface. Imagined listening practices are the mental processes of the user who posts a piece of music media helped by a mental evocation of it, and imagines how others will or could listen. They are also the mental practices of the audience as they remember and evoke known songs and sounds from visual cues in the social media interface. As Sandra said, the memory of certain tunes sparked from a visual prompt can be sufficient to engage in musicking, so a YouTube or Spotify thumbnail preview on a Facebook or Twitter feed is indeed a reference to stored sound, and enough to activate this mode of listening. This is also the case for other musical activities on social media, such as ranking or voting, which also spark memories or imaginings of songs. Figure 2 includes a representation of a jukebox featuring eight metal songs, along with encouragement to readers to vote (drop a coin) for their favourite. Also included is a comment from Sue, who votes for the track by Iron Maiden. Here I argue that activities like this one summon fleeting musical memories of the songs in question, even when they do not involve listening to the recordings themselves, simply because the interface provides a visual representation or evocation of listening, in this case at a bar or venue. However, for many contemporary music fans the equivalent iconography of on-demand music would not be a jukebox but a YouTube or Spotify icon. Therefore, every time a young(er) individual encounters a preview of a song in their social media feed, they experience a reactivation of this mental evocation of music. They also understand the visual prompt as equivalent to the social practice of playing a song in the jukebox: in a public social space where people are hanging out, some individuals explicitly show (through iconography in this case) what music they are (perhaps mentally) listening to.
As the statements above suggest, when practices of imagined listening do not develop through musical memory, they do through imagining that the audience is listening, or how it will listen to a posted song in an unspecific future. Sharing music media makes sense because users imagine specific or abstract groups of people who will not just see the musical image and mentally evoke the song, but will also listen to it (or even further engagement such as watching and listening in the case of music video) at the point of reception or later. 6 This is ultimately why participants share and circulate songs and music videos: Because I am optimistic! (laughs) . . ., and I think that at some point people will remember and say: 'let's listen to that song that . . .'. These practices of imagined listening also happen in more active and critical ways than the apparent abandonment of agency derived from accepting the lack of control over who listens to songs. The tacit rules of imagined listening become prominent when users acknowledge the social uses of platform affordances, such as private messaging and tagging, and the conceptualising of imagined and actual listening as two separate practices: I am not thinking about anyone in particular (. . .) otherwise I would tag them. Daniel, January 2018 I don't want to bore people, so sometimes I send them music directly and I don't share it with everybody. Elisa, October 2017 With these statements participants suggest that although some may tacitly be imagining others listening when they post a song, they also instantly recognise this mental construction, because when they really expect listening, they choose different forms of communication (such as tagging and instant messaging). In this sense, online musical life operates at different intensities ('scalability' per Miller et al., 2016: 3). Through this micro-management of the attention of social media contacts via different ways of music sharing and listening, users decide how to engage with others and thus shape their social relationships. If 'online listening constitutes a recognition of others' (Crawford, 2009, p. 533), listening and responding at length require deeper engagement expected from close ties, while rating only or not engaging at all may be used for acquaintances.
Similar kinds of imagined listening (imagining oneself or others listening in the past or in the future) take place when users share music via compiled playlists on streaming platforms. Daniel and Sue explained that preparing a playlist or a list of posts and sharing it with others before attending a concert or a musical not only entails learning the songs in preparation for the live shows (for which perhaps the playlist is a temporary tool), but is also a form of mental anticipation that involves imagining oneself and others listening to those songs live in a specific future. Sandra and her sisters shared 'guilty pleasures' playlists with each other on Spotify to evoke their past collective experience of growing up together, while knowing that they would not be listened to that much. This evidence suggests the use of online music media as 'dynamic memory' (Ernst, 2012) that enables processes of past and future nostalgia, where remembering is used as an audiovisual aesthetic that enables social interaction around sound files (or visual prompts to sound files) and their imagining. Imagined listening as an online mode of listening is then also related to the existence of 'unlistened' playlists in streaming platforms, as a form of musicking that entails a sort of expected engagement of oneself and others with music, as well as our own re-evocations of musical memories (including the future evocation of those memories in live shows). Similarly, these playlists and their associated forms of imagined listening equally represent desired expectations about our personal relationships, in the same way as with previous formats such as the mixtape (Rando, 2017). This is quite explicit in the case of migrants and their desire for family or friendship togetherness, but nonetheless applicable to internet users at large as they communicate with imagined audiences.
The paradox created by the ubiquity of music media, whereby it does not generate collective listening, could be read as a sign of the absence of social connection in online spaces. Participants' statements also suggest an acknowledgement of speaking into the void, or at least an ambivalence about the impact of their musical practices. Although I noticed that some participants' playlists did not have any followers, fieldwork did not show a clear pattern for reactions or feedback to music content. Sandra explicitly mentioned that her music postings also come from a place of loneliness as a migrant living in a small commuter town, without any close ties or other music fans to talk to. She also highlighted how posting during a concert can be a way to connect with other attendees online, in the absence of in-person interaction. In other words, once the emptiness of social life is accepted, participants find that online musicking is an imaginable, albeit imperfect, remedy.
Imagined listening thus responds to the mediascape of ubiquity by prioritising cultural aspects of music media other than their sounding playback. In a similar vein to what Madianou and Miller (2012;also, Miller et al., 2016) describe with the concept of 'polymedia', I argue that when the focus is no longer on accessibility, which is taken for granted, people's attention is drawn to choices about different avenues for sociality. As Frith points out, the meaning of a musical experience, including that of listening, appears as a social matter by defining imagined social processes (1996, p. 250 my emphasis), even if people might perceive meaning as a value intrinsically embedded in the music itself (Frith, 1996, p. 252). In other words, the different kinds of imagined listening shown above, based on musical memories and on imagining oneself and others listening in the past, present, or future, provide resources to work on diverse forms of social culture-making.
The moral economies of music circulation
I turn finally to the macrosocial aspects of this silent exchange of music. The moral economies of music sharing and the values that users accord to music further purvey arguments in favour of understanding imagined listening as the basis of online musicking cultures. The evidence presented in this section demonstrates how participants in these moral economies 'make choices based on ethical principles (. . .) reflecting but also going beyond the roles assigned to them (. . .) giv[ing] life to institutions through their ethical questioning and affective responses' (Fassin, 2012, p. 441). Fieldwork revealed that three principles govern the moral economies of these silent spheres of music circulation: solidary fandom; exchange and gift-giving rituals; and musical civility.
First, participants engaged in practices of what I call solidary fandom: musicking activities where the user undertakes the role of a grassroots promoter, oriented towards helping emerging artists. By recirculating music and other media and promoting the shows of emerging artists, users hope to help them expand their fan bases and achieve greater recognition, or in some cases, strengthen the presence of the band in their immediate social circle. Within these practices of solidary fandom, there are two further aspects of its moral economy to consider. On the one hand, participants are conscious of the relative impact of their musicking practices in terms of data traffic within their social media contacts. Participants understood activities oriented towards helping emerging bands or meeting less-known artists as a positive potential of social media communication, which would redress a perceived corporate control of music.
(. . .) even if I am not a mass medium with a huge audience, if I put something on the spotlight I will generate more audience at some level for that artist, and maybe more money. I am conscious of it and I try to generate interest for particular artists. (. . .). Anabel, November 2017 [I have shared it] [w]hen I have found an artist that was very good but was not very well-known, or almost completely unknown, on YouTube. (. . .) with the idea that the guy deserves to be heard, and that he should be more popular. (. . .). Javier, December 2017 On the other hand, practices of solidary fandom confirm once again the reflexive character of these online cultures. Participants' awareness of the mediascape leads them to focus on the positive impacts of these practices on their personal lives and on the lives of their immediate social circles.
For small bands these little things are useful. The more people post about it and the more you publish on your social networks, the more they become known, which is ultimately free advertising. If the band is worth it, it doesn't cost a thing to help them. Rose and other participants thought that meeting an emerging artist would more likely lead to a meaningful social interaction, thus orienting their musicking on social media to those possibly richer socialities. They were also sceptical of interacting with the professionally managed social media accounts of famous artists, and as Daniel and Sandra highlighted, while their music sharing might be an effective promotional tool for artists without a considerable following, they considered it unnecessary to further promote famous artists. If anything, they considered it unfair to give more media space to established acts, again understanding their actions as redressing the power imbalance between famous and emerging artists. Rose went so far as to say that any social media interaction with famous bands was pointless, a reaction that calls into question whether online fandom practices can be understood as the cultivation of 'the perception of accessibility and proximity' (Duffett, 2013, p. 238). As Sandra articulates above in terms of cost, participants give different moral value to these instances of free 'fan labour' (Baym and Barnett, 2009; Terranova, 2004): socialising and helping to create a fairer music market are privileged over the potential access and benefits to famous artists. These statements also confirm a continuity between the underground-oriented musical dynamics of MySpace in the 2000s and those of current social media platforms, driving people to invest in social relationships with emerging artists.
An emphasis on positively impacting lives is even more pervasive in instances where music exchange is a kind of gift giving, and here we arrive at the second moral principle of online music circulation. Sharing music constitutes a kind of gift economy: that is, a form of music exchange and relation of mutual obligation between online ties involving social-media-specific ways of giving, receiving and returning gifts. Malinovski's (2002 [1922]) foundational text on gift economies describes the Kula Ring, an exchange circle and ceremonial trading system where participants from different islands trade altruistic gifts from others and contract mutual obligations to reciprocate. Miller (2011) offers a recent discussion of this kind of gift economy on social media that he calls 'Kula 2.0', in which textual sociality and communication contribute to the trading circle between an internet-connected archipelago of users. My contention here is that musicking activities that circulate music between social media and streaming platforms are also culture-making exchange circles, developing as an aggregate of smaller gift-like exchanges between friends, families and acquaintances: (. . .) the same way you share information or opinions through Twitter, you share music with the same purpose: that the other person, that you think would like it or could be interested, receives it. (. . .) In fact, when I put something like 'for my girls' or 'for my friends', for me they are like gifts. (. . .) They are like small moments of happiness that you share with people. (. . .) Not thinking about someone in particular or a specific moment, you simply say 'I am going to share this', like a gift, 'I am going to send a gift to the world, so that someone sees it'. Teresa, October 2017 As shown in this statement from Teresa, and others cited above from Javier, Sandra and Cynthia, even if the lack of feedback casts doubt on the impact of these practices, these music exchanges still work as gift economies. The existing imagined audience facilitates the understanding of these exchanges as 'gifts of co-presence' (Miller, 2011, p. 212), based on previous understandings of music as gift and the mutually understood need to reciprocate. As Baym (2018) highlights, fans feel the moral obligation of sharing the music they like to connect with others, again suggesting that forms of music exchange that predate social media or stem from peer-to-peer practices (Born and Haworth, 2018; Giesler, 2006) still govern the exchange of music to an extent. However, while some gift economies focus on obligations between particular persons (or a community of committed fans), a key aspect of the musical exchange I am considering is the emphasis on moral value and general collective benefit. Similarly, while both Miller (2011) and Chambers (2013) posit that meaning-making activities of online sociality involve ritualising relationships through the exchange of cultural artefacts and publicly proclaiming friendship, here I argue that exchanges of music take place as a series of semi-public 'prestations' (Mauss, 2002 [1954]) aimed in a general way at a user's imagined listenership, where explicitly stating friendship is only secondary to an abstract idea of collective benefit.
Instead of highlighting a particular relationship, in my case study it was easier to notice this moral grounding of collective benefit in time-specific music exchanges, where people effectively salute an imagined audience or each other at a specific time of the day, through posting a piece of music or musical iconography. For instance, participant Diana often shared music on Facebook with a 'Good morning' caption (Figure 3), to wish others a good day with a feel-good song. But in this case her Facebook friends are almost a proxy for humanity at large. Sue also posted music every evening with the caption 'Good night', sharing songs that were important to her as a bedtime kiss to her children before they went to sleep. Many other participants also greeted friends and family at specific moments of the week with time-specific music, such as Friday evening playlists. In other words, when a particular relationship was highlighted, it was also a time-specific exchange ritual.
Often, these exchanges were articulated through collectively sacralised pieces of music. That is, to salute others, people use music already part of previously existing social conventions such as 'good vibes' or 'club music'. If music sharing helps in managing different levels of relationship closeness, sharing an iconic track that most will understand is an efficient way to salute many. Thus, music has an essential role in these exchange rituals both because participants consider it as a valued element in itself (a present), but also because it furnishes a familiar accompaniment to daily human routines, very much as happens offline. Music media are so culturally rich that they can confer online norms and etiquette, and can be used to maintain customs and expectations about how one should feel on a Monday morning, or what one should be doing on Friday evening, but also more generally about reciprocity and online sociality. However, these practices seem slightly more targeted to specific groups than the jukebox-like sharing explained above, and in consequence more personal verbal messages preceding music iconography give important moral weight to music sharing.
The ethnographic examples in this article thus show that on social media, musical cultures of circulation are ultimately oriented towards constructing morality through musicking. Social media practices can be 'a moral activity in and of itself' (Miller et al., 2016, p. 212) in the sense of being intrinsically social, and musicking relationships model 'ideal relationships as the participants in the performance imagine them to be' (Small, 1998, p. 13). Therefore, music sharing activities on social media are users' put-into-practice ideas about what music and society are, and ideally should be, governed by the principle of imagined listening. Indeed, fieldwork showed that a third (and last) overall aspect of these music cultures of circulation is their moral economies of civic duty, and the activating of civic discourses. In other words, in online social life a form of musical civility is put in motion when the circulation and exchange of music are used to articulate moral values and understandings of ideal forms of civil society (online and offline), and to contribute to collective forms of human flourishing.
These civic narratives make social sense because participants saw their musicking activities as a form of spreading not just musical gifts, but more abstract concepts such as happiness. Participants' statements revealed that imagined listening also implied imagining that the music posted generated happiness or good feelings, and therefore, posting music rendered a service not only to specific people but also to humanity at large: It's quite rhetorical. I can't say that it has a specific objective. The general feeling that I have when I share any kind of song (. . .) the purpose would be the same as for sharing a beautiful picture: to share beauty, good feelings. That is what is behind anything I share.
(. . .) It's like . . . 'this song is awesome, you are welcome'. I know that they are going to be thankful for it. Javier, December 2017 Indeed, as these statements and the example by Diana above (Figure 3) show, sharing songs understood as qualitatively good or capable of creating good feelings and transmitting beauty also mobilises a sort of moral exchange of music: circulating audiovisual objects encapsulating ideas of morality and values is a civic practice that shows human understandings of reciprocity and redistribution in social life, in the same way as Venkatraman (2017) indicates for memes. This belief in the power of music circulation to achieve these civic publics of collective flourishing expressed by participants, articulated around concepts such as happiness or benefits for humanity, appears as a middle ground between the spiritual understandings of karma that Venkatraman outlines, and a completely secularised idea of musical civics. On the one hand, these musicking activities develop in a ritualised, faith-based system of exchange with specific codes of public behaviour that is believed to improve social media communities and thus society at large. On the other, rather than spiritual beliefs, these music exchanges are based on the belief in a universal and positive value of music when it is sent into the exchange circle, whatever the style or artist. As Johansson et al. point out, 'if music is a daily companion and is as important as breathing, it may be seen as a common good' (2018, conclusion). In any case, these statements once again highlight that even if the song may not be listened to at all, practices of imagined listening are the crucial elements that sustain the emergence of new moral economies of exchange: imagining the positive effect of music on others and society governs music sharing online. All these principles evoked by participants (of solidary fandom, gift giving, courtesy, morality, abstract happiness and rules of online behaviour) contribute to this moral economic system of music circulation online. Reiterating the insights stated in the previous sections, migrants' practices in social media reveal the importance of abstract understandings of the audience, to the extent that music is thought to be shared with humanity at large, for the benefit of everyone.
However, here the apparent righteousness of this music sharing entails contradictory moral principles. On the one hand, users' altruistic attitude does not expect gifts in return: the statements from Cynthia and Javier show that imagining the potential benefit of that music on society or how others will be happy or thankful are the central elements of this culture of music circulation. Yet participants' statements also show ambivalence about the impact of their practices as rhetorical social devices. On the other hand, this music exchange could also be interpreted as a way to reproduce normativity, particularly in its more structured manifestations as time-specific salutation and moral meme. After all, music circulation in those formats seeks to establish quite conventional codes of behaviour in a loosely regulated social space: politeness, scheduling conventions for work and leisure, sweeping ideas of universal happiness or good music, and so on. Here I concur with Venkatraman (2017) and Costa (2016, p. 79) in that social media can be quite a conservative social space. With the exception of solidary fandom practices, the ostensibly utopian moral economies outlined in this section are rather Western-centric, and definitely not cyber-punk.
Despite these contradictions, ubiquity and imagined listening are thus far from creating an obstacle to music circulation or devaluing music online. On the contrary, they are part and parcel of the emergence of these new moral values and norms about musical civics that foreground the relevance of music online. The practices outlined here try to recover precisely sociality-centred forms of musicking with a focus on collective impact. They confirm that human agency is not at all lost in a highly commodified and mediated context such as social media. More importantly, they highlight that to answer the questions of why people share music on social media, and why music matters online, emergent forms of musical citizenship offer a compelling argument. Instead of conceptualising the contemporary mediascape as a place where music has lost relevance and been increasingly commodified (as if algorithmic and internet technologies could have deprived music of its aura), I argue that music cultures and their iconography are ingrained in social life to such an extent, and their cultural references are so widely shared and appreciated, that they might not even need to be listened to, and that they are thought of time and again as articulations of civic values.
Conclusion
In this article I have demonstrated how the musicking practices of migrants on social media illuminate the importance of online music as a cultural object of exchange. In a mediascape characterised by musical abundance, momentary engagement and algorithmic technology, practices focused on imagining an audience and how they listen to, and benefit from, music, highlight the social relevance of musicking in online culture-making and sociality. Through posting, sharing and circulating music online we not only visually evoke sound to communicate with others, but we also create our own imaginations of others as potential listeners, and put to work personal memories of sound as imagined listening experiences within ourselves. I have also presented evidence to demonstrate both that imagined listening is the cultural norm that drives music sharing and circulation on social media, and that it sustains emergent moral economies of music exchange that go beyond online spaces into our material lives. This not only opens new avenues for research on musicking and listening practices, but contributes to studies that bridge online sociality with the materiality of digital experiences. Through the management and redistribution of music online, we aspire to re-enact groups and togetherness, and influence and change our social circles for the better. Music is circulated on social media because users consider it a practice for the common good in reference to the specific environment of social media, but also in general societal terms. In this sense imagined listening forms the basis of the moral economies of music sharing online, but in turn the moral economies further reinforce practices and bring home the material applications of imagined listening. In online spaces, music exchanges influenced by understandings of mutuality, economic interests, moral aspirations and desires of social change collide. Here I have proposed an approach that considers the conscious agency of users in this mediascape and their unyielding tendency to make online experiences more human, particularly in contrast with techno-deterministic approaches and arguments about virality. Far from being cognitively damaging, excessive or virally inhuman, online music is shared as an act of musical civility and citizenship participation, confirming its sustained crucial role in society as a collective form of human flourishing.
|
v3-fos-license
|
2018-04-03T00:50:46.229Z
|
2017-10-31T00:00:00.000
|
25038102
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41598-017-14862-3.pdf",
"pdf_hash": "d6d0d62dfd88dba92f8e34a74215b09ccd7d3155",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44298",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "d6d0d62dfd88dba92f8e34a74215b09ccd7d3155",
"year": 2017
}
|
pes2o/s2orc
|
A network meta-analysis on the beneficial effect of medical expulsive therapy after extracorporeal shock wave lithotripsy
We applied a newly introduced method, network meta-analysis, to re-evaluate the expulsion effect of drugs including tamsulosin, doxazosin, nifedipine, terazosin and rowatinex after extracorporeal shock wave lithotripsy (ESWL) as described in the literature. A systematic search was performed in Medline, Embase and the Cochrane Library for articles published before March 2016. Twenty-six studies with 2775 patients were included. The primary outcome was the number of patients with successful stone expulsion. The data were subdivided into three groups according to duration of follow-up, and a standard network model was established in each subgroup. In the 15-day follow-up results, the surface under the cumulative ranking curve (SUCRA) showed that the ranking of effects was: doxazosin > tamsulosin > rowatinex > nifedipine > terazosin (88.6, 77.4, 58.6, 32.2 and 30.4, respectively). In the 45-day follow-up results, the SUCRA ranking was: tamsulosin > nifedipine > rowatinex (69.4, 67.2 and 62.6, respectively). In the 90-day follow-up results, the SUCRA ranking was: doxazosin > rowatinex > tamsulosin (84.1, 68.1 and 49.1, respectively). In conclusion, doxazosin and tamsulosin have the potential to be the first choice of pharmacological therapy to promote the expulsion of urinary stone fragments after ESWL: doxazosin can improve the stone-free rate (SFR) in the long term, while tamsulosin may be more effective in accelerating the expulsion process.
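As a quick illustration of how SUCRA values of the kind reported above are obtained, the following sketch computes SUCRA from a matrix of rank probabilities produced by a network meta-analysis. It is not the authors' code, and the rank probabilities shown are invented placeholders rather than estimates from this review.

```python
# Illustrative sketch: SUCRA from a rank-probability matrix (placeholder values).
import numpy as np

def sucra(rank_probs: np.ndarray) -> np.ndarray:
    """rank_probs[i, j] = probability that treatment i has rank j+1 (rank 1 = best).
    SUCRA_i is the sum of the cumulative ranking probabilities over the first
    a-1 ranks, divided by (a - 1), where a is the number of treatments."""
    a = rank_probs.shape[1]
    cum = np.cumsum(rank_probs, axis=1)        # P(treatment i is among the best j)
    return cum[:, :-1].sum(axis=1) / (a - 1)   # exclude the last rank, rescale to [0, 1]

# Hypothetical example with three treatments and three ranks (rows sum to 1).
probs = np.array([[0.6, 0.3, 0.1],
                  [0.3, 0.4, 0.3],
                  [0.1, 0.3, 0.6]])
print(np.round(sucra(probs) * 100, 1))         # SUCRA as a percentage, e.g. [75. 50. 25.]
```

Higher SUCRA percentages correspond to a higher probability of being among the best-ranked treatments, which is how the drug rankings in the abstract are ordered.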
Section/topic | # | Checklist item | Reported on page #
Rationale | 3 | Describe the rationale for the review in the context of what is already known. | 3
Objectives | 4 | Provide an explicit statement of questions being addressed with reference to participants, interventions, comparisons, outcomes, and study design (PICOS). | 3-4
METHODS
Protocol and registration | 5 | Indicate if a review protocol exists, if and where it can be accessed (e.g., Web address), and, if available, provide registration information including registration number. | NA
Eligibility criteria | 6 | Specify study characteristics (e.g., PICOS, length of follow-up) and report characteristics (e.g., years considered, language, publication status) used as criteria for eligibility, giving rationale. | 4
Information sources | 7 | Describe all information sources (e.g., databases with dates of coverage, contact with study authors to identify additional studies) in the search and date last searched. | 4
Search | 8 | Present full electronic search strategy for at least one database, including any limits used, such that it could be repeated. | 4
Study selection | 9 | State the process for selecting studies (i.e., screening, eligibility, included in systematic review, and, if applicable, included in the meta-analysis). | 4-5
Data collection process | 10 | Describe method of data extraction from reports (e.g., piloted forms, independently, in duplicate) and any processes for obtaining and confirming data from investigators. | 4
Data items | 11 | List and define all variables for which data were sought (e.g., PICOS, funding sources) and any assumptions and simplifications made. |
Risk of bias in individual studies | 12 | Describe methods used for assessing risk of bias of individual studies (including specification of whether this was done at the study or outcome level), and how this information is to be used in any data synthesis. | 5
Synthesis of results | 14 | Describe the methods of handling data and combining results of studies, if done, including measures of consistency (e.g., I2) for each meta-analysis. |
Risk of bias across studies | 15 | Specify any assessment of risk of bias that may affect the cumulative evidence (e.g., publication bias, selective reporting within studies). |
RESULTS
Study selection | 17 | Give numbers of studies screened, assessed for eligibility, and included in the review, with reasons for exclusions at each stage, ideally with a flow diagram. | 5-7
Study characteristics | 18 | For each study, present characteristics for which data were extracted (e.g., study size, PICOS, follow-up period) and provide the citations. | 6
Risk of bias within studies | 19 | Present data on risk of bias of each study and, if available, any outcome level assessment (see item 12). | 6
Results of individual studies | 20 | For all outcomes considered (benefits or harms), present, for each study: (a) simple summary data for each intervention group (b) effect estimates and confidence intervals, ideally with a forest plot. | 5, 6
Synthesis of results | 21 | Present results of each meta-analysis done, including confidence intervals and measures of consistency. | 5, 6
Risk of bias across studies | 22 | Present results of any assessment of risk of bias across studies (see Item 15). |
DISCUSSION
Summary of evidence | 24 | Summarize the main findings including the strength of evidence for each main outcome; consider their relevance to key groups (e.g., healthcare providers, users, and policy makers). | 7-10
Limitations | 25 | Discuss limitations at study and outcome level (e.g., risk of bias), and at review-level (e.g., incomplete retrieval of identified research, reporting bias). |
For more information, visit: www.prisma-statement.org.
|
v3-fos-license
|
2016-05-04T20:20:58.661Z
|
2011-01-13T00:00:00.000
|
12693395
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0016191&type=printable",
"pdf_hash": "1f5f6858850dc35da5bdcb3e65e66b3cdc149504",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44299",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"sha1": "1f5f6858850dc35da5bdcb3e65e66b3cdc149504",
"year": 2011
}
|
pes2o/s2orc
|
The N–Terminal Tail of hERG Contains an Amphipathic α–Helix That Regulates Channel Deactivation
The cytoplasmic N–terminal domain of the human ether–a–go–go related gene (hERG) K+ channel is critical for the slow deactivation kinetics of the channel. However, the mechanism(s) by which the N–terminal domain regulates deactivation remains to be determined. Here we show that the solution NMR structure of the N–terminal 135 residues of hERG contains a previously described Per–Arnt–Sim (PAS) domain (residues 26–135) as well as an amphipathic α–helix (residues 13–23) and an initial unstructured segment (residues 2–9). Deletion of residues 2–25, only the unstructured segment (residues 2–9) or replacement of the α–helix with a flexible linker all result in enhanced rates of deactivation. Thus, both the initial flexible segment and the α–helix are required but neither is sufficient to confer slow deactivation kinetics. Alanine scanning mutagenesis identified R5 and G6 in the initial flexible segment as critical for slow deactivation. Alanine mutants in the helical region had less dramatic phenotypes. We propose that the PAS domain is bound close to the central core of the channel and that the N–terminal α–helix ensures that the flexible tail is correctly orientated for interaction with the activation gating machinery to stabilize the open state of the channel.
Introduction
The human ether-a-go-go related gene (hERG) encodes Kv11.1, the pore-forming subunit of the rapidly activating delayed rectifier K+ channel (I_Kr) [1]. Reduction of hERG channel activity either by drugs [2] or genetically inherited mutations [3] results in prolongation of the QT interval on the surface electrocardiogram and a markedly increased risk of arrhythmias and sudden cardiac death [4]. hERG channels are tetrameric, with each subunit containing cytoplasmic N- and C-terminal domains and six transmembrane domains. The fifth and sixth transmembrane domains, along with an intervening pore helix from each of the four subunits, surround the ion-conducting pore [5]. In addition, a cyclic nucleotide binding domain (cNBD) immediately C-terminal to the pore domain is thought to contribute to the stabilization of the tetrameric structure [6]. Conversely, the cytoplasmic N-terminus of each subunit contains a Per-Arnt-Sim (PAS) domain (residues 26-135) [7,8] that is stable as a monomer and interacts with the remainder of the channel, thereby regulating the kinetics of channel opening and closing [7,9,10]. In hERG, N-terminal deletions that remove the PAS domain (Δ2-373 [11], Δ2-354 [12,13], and Δ2-138 [7]) significantly enhance the rate of deactivation of the channel. Further, the N-terminal domain (residues 1-136) is able to restore deactivation gating in N-terminally truncated hERG [7,9]. However, deletions within the short N-terminal tail that precedes the PAS domain (residues 1-25) also result in significantly faster rates of deactivation [7,12,13]. Moreover, application of a peptide corresponding to the N-terminal 16 residues can slow the deactivation kinetics of channels with most of the N-terminus deleted (Δ2-354, [12]).
To clarify the role of the N-terminal tail domain in hERG K+ channel deactivation, we determined the solution state structure of a construct encompassing both the PAS domain and the N-terminal tail. We show that the tail contains an amphipathic α-helical region from T13 to E23. Deletion of either the initial unstructured segment (Δ2-9) or replacement of the amphipathic α-helical region with a flexible linker resulted in faster deactivation, suggesting that both regions are necessary but neither is sufficient to permit normal deactivation. Alanine scanning of the N-terminal tail indicated that residues R5 and G6 are involved in critical interactions that stabilize the open state of the channel. Although the majority of alanine mutations in the amphipathic α-helical region did not have a significant effect on the rate of deactivation, both I19A and R20A showed an enhanced rate of deactivation. This suggests that the α-helix may act as a spacer, rather than being involved in critical specific interactions with other domains of the channel.
Protein expression
The PAS domain (residues 1 to 135) was expressed as an N-terminal cleavable glutathione-S-transferase (GST) fusion in E. coli C41 strain after overnight induction at 20 °C. A freeze/thaw method was used to lyse the cells with 20 mM Tris buffer containing 5 mM 2-mercaptoethanol, 0.1% v/v Tween 20, and 150 mM NaCl. The lysate was incubated with glutathione beads (GE Healthcare, Amersham, UK) for 3 h and the protein eluted by TEV protease digestion overnight at 4 °C. The protein was concentrated and passed through a Superdex 75 column (GE Healthcare), equilibrated in 10 mM HEPES pH 6.9, 150 mM NaCl, 5 mM N-octyl-D-glucoside (OG) (Anatrace Inc., Maumee, OH, USA) and 3 mM tris(2-carboxyethyl)phosphine (TCEP). The purified PAS domain eluted as a single peak at the expected molecular size for a monomer (as previously described [7]). The 13C/15N double-labelled PAS domain was produced by substituting the nitrogen and carbon sources in bacterial growth medium with 15N-enriched NH4Cl and 13C-enriched glucose, respectively.
Sample preparation and NMR spectroscopy
The NMR sample consisted of 0.21 mM 13C/15N PAS domain protein in solution containing 10 mM HEPES, 3 mM TCEP, 5 mM OG, and 7% D2O, at pH 6.9. All NMR experiments were performed on a Bruker Avance II 900 MHz NMR spectrometer at 298 K. 2D 1H-15N HSQC, 3D 1H-15N NOESY and 3D 1H-13C NOESY data were acquired using traditional methods while 3D HNCO, HNCA, HN(CO)CA, HNCACB, CBCA(CO)NH, C(CO)NH, H(CCO)NH and HCCH-TOCSY data were acquired using a non-uniform sampling method and maximum entropy reconstruction [14]. The sample was buffer-exchanged into D2O before acquiring 3D HCCH-TOCSY and 1H-13C NOESY NMR spectra.
NMR chemical shift assignment and structure calculations
All NMR spectra were analysed using XEASY3 [15]. Sequence-specific backbone assignments were made using 3D HNCO, HNCA, HN(CO)CA, HNCACB and CBCA(CO)NH data. Side-chain chemical shift assignments were made using 3D (H)CC(CO)NH-TOCSY, H(CC)(CO)NH-TOCSY and HCCH-TOCSY data. A total of 2634 distance constraints were derived from 3D 1H-15N and 1H-13C NOESY data, 24 hydrogen bond constraints were derived from the 1H-13C NOESY data (based on amide protons that were still observable after exchange of the sample into D2O buffer), and 178 dihedral angle constraints (φ, ψ) were derived from TALOS [16]. The error range used in the structure calculations was set to twice the standard deviation estimated by the program. Automated NOE assignment and structure calculations were performed using the program CYANA v2.1 [17]. An ensemble of the 20 structures with the lowest target function values was chosen to represent the solution structure of the protein. Energy minimization of these structures was performed using the program AMBER 10 [18]. The generalized Born (GB) solvent model was used for the final energy minimization using the distance constraints from the CYANA calculation. The energy-minimized structures were validated using the PSVS server [19] and deposited in the PDB [20] under the accession code 2L0W. Chemical shift assignments were also deposited in the BioMagResBank under accession code 17066. Secondary structure elements were predicted using Talos+ [21].
Electrophysiology
hERG cDNA (a gift from Dr Gail Robertson, University of Wisconsin) was subcloned into a pBluescript vector containing the 5′ untranslated region (UTR) and 3′ UTR of the Xenopus laevis β-globin gene (a gift from Dr Robert Vandenberg, University of Sydney). Mutagenesis was carried out using the Quickchange mutagenesis method (Agilent Technologies, CA, USA) and confirmed by DNA sequencing. Wild-type (WT) and mutant channel cDNAs were linearized with BamHI and cRNA transcribed with T7 RNA polymerase using the mMessage mMachine kit (Ambion, city, TX, USA).
Xenopus laevis oocytes were prepared as previously described [22]. Stage V and VI oocytes were isolated, stored in tissue culture dishes containing ND96 (in mM: KCl 2.0, NaCl 96.0, CaCl2 1.8, MgCl2 1.0 and HEPES 5.0) supplemented with 2.5 mM sodium pyruvate, 0.5 mM theophylline and 10 mg mL−1 gentamicin, adjusted to pH 7.5 with NaOH and incubated at 18 °C. All experiments were approved by the Garvan/St Vincent's Animal Ethics Committee (Approval ID 08/34).
Xenopus laevis oocytes were injected with 5-10 ng cRNA and incubated at 18 °C for 24-48 h prior to electrophysiological recordings. All experiments were undertaken at room temperature (21-22 °C). Two-electrode voltage-clamp experiments were performed using a Geneclamp 500B amplifier (Molecular Devices Corp, Sunnyvale, CA, USA). Glass microelectrodes had tip resistances of 0.3-1.0 MΩ when filled with 3 M KCl. Oocytes were perfused with ND96 solution (see above). In all protocols a step depolarization of +20 mV from the holding potential of −90 mV was applied at the start of each sweep to enable off-line leak-current subtraction. We assumed that the current leakage was linear in the voltage range −160 to +40 mV. Data acquisition and analysis were performed using pCLAMP software (Version 9.2, Molecular Devices Corp, Sunnyvale, CA, USA) and Excel software (Microsoft, Seattle, WA, USA). All parameter values were estimated as mean ± standard error of the mean (SEM) for n experiments, where n denotes the number of different oocytes studied for each construct. Isochronal activation curves were measured using standard tail current analysis [1]. Cells at a holding potential of −90 mV were subjected to 4-s depolarizing steps to voltages in the range −70 to +50 mV before stepping the voltage to −70 mV, where tail currents were recorded. Tail current data were normalized to the maximum current value (I_max) and fitted with a Boltzmann expression:
g/g_max = 1 / (1 + exp[(V_0.5 − V_t)/k]),   (1)
where g/g_max is the relative conductance, V_0.5 is the half-activation voltage, V_t is the test potential and k is the 'slope factor'. Alternatively, the data were fitted with the thermodynamic form of the Boltzmann expression:
g/g_max = 1 / (1 + exp[(ΔG_0 − z_g·E·F)/(RT)]),   (2)
where ΔG_0 is the work done at 0 mV, z_g is the effective number of gating charges moving across the membrane electric field E, F is Faraday's constant, R is the universal gas constant and T is the absolute temperature. Equations (1) and (2) are equivalent; however, from Equation (2) we can calculate the effect of mutations on changes in the chemical potential (ΔG_0) and electrostatic potential (−z_g·E·F) that drive activation and deactivation of the channel.
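The sketch below is not the authors' analysis code; it simply illustrates, under the forms of Equations (1) and (2) given above, how normalized tail-current data could be fitted with both Boltzmann expressions using a generic least-squares routine. The voltages, noise level and starting guesses are arbitrary placeholders.

```python
# Illustrative fit of normalized tail currents with Equations (1) and (2).
import numpy as np
from scipy.optimize import curve_fit

F = 96485.0   # Faraday constant, C mol^-1
R = 8.314     # gas constant, J mol^-1 K^-1
T = 295.0     # absolute temperature, K (approx. 21-22 degrees C)

def boltzmann_v(Vt, V05, k):
    """Equation (1): relative conductance as a function of test potential (volts)."""
    return 1.0 / (1.0 + np.exp((V05 - Vt) / k))

def boltzmann_thermo(Vt, dG0, zg):
    """Equation (2): thermodynamic form, dG0 in J mol^-1, Vt in volts."""
    return 1.0 / (1.0 + np.exp((dG0 - zg * F * Vt) / (R * T)))

# Hypothetical normalized tail-current data over -70 to +50 mV.
Vt = np.arange(-0.070, 0.051, 0.010)
g = boltzmann_v(Vt, -0.023, 0.0084) + np.random.normal(0.0, 0.01, Vt.size)

(V05, k), _ = curve_fit(boltzmann_v, Vt, g, p0=(-0.02, 0.008))
(dG0, zg), _ = curve_fit(boltzmann_thermo, Vt, g, p0=(-7000.0, 3.0))
# The two parameterisations are related by k = RT/(zg*F) and dG0 = zg*F*V05.
print(V05 * 1e3, k * 1e3, dG0 / 1e3, zg)   # mV, mV, kJ/mol, elementary charges
```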
To measure rates of deactivation, oocytes were depolarized from a holding potential of −90 mV to +40 mV for 1 s to fully activate the channels; they were then repolarized to potentials in the range −50 to −160 mV. A double exponential function was fitted to the decaying portion of tail currents. In order to compare rates of deactivation for different mutants at comparable driving forces, the voltages in the range −60 to −160 mV were converted to electrochemical driving forces (−z_g·E·F) as defined in Equation (2).
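As an illustration of this step (again a sketch rather than the original pCLAMP/Excel workflow), the decaying tail current can be fitted with a double exponential and a repolarization voltage converted to the total driving force −(ΔG_0 − z_g·E·F) used when comparing constructs; the trace, amplitudes and parameter values below are placeholders.

```python
# Illustrative double-exponential fit of a decaying tail current.
import numpy as np
from scipy.optimize import curve_fit

def double_exp(t, A_fast, tau_fast, A_slow, tau_slow, C):
    return A_fast * np.exp(-t / tau_fast) + A_slow * np.exp(-t / tau_slow) + C

# Hypothetical tail current (arbitrary units) sampled over 1 s after repolarization.
t = np.linspace(0.0, 1.0, 2000)
i_tail = double_exp(t, -8.0, 0.03, -2.0, 0.25, 0.0) + np.random.normal(0.0, 0.05, t.size)

popt, _ = curve_fit(double_exp, t, i_tail, p0=(-8.0, 0.02, -2.0, 0.2, 0.0))
tau_fast, tau_slow = popt[1], popt[3]

def driving_force(E_volts, dG0, zg, F=96485.0):
    """Total electrochemical driving force for deactivation, -(dG0 - zg*E*F), J mol^-1."""
    return -(dG0 - zg * E_volts * F)

# Example: WT-like parameters (dG0 ~ -6.8 kJ/mol, zg ~ 3) at a -120 mV repolarization.
print(tau_fast, tau_slow, driving_force(-0.120, -6800.0, 3.0) / 1e3, "kJ/mol")
```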
Results
The solution structure of the hERG 1-135 domain was well folded, with dispersed peaks in the 1H-15N HSQC (Fig. S1). More than 97% of the backbone amides were assigned, the exceptions being A40 and R77. 75% of the sidechain N-H resonances from arginine, asparagine and glutamine were assigned; the exceptions were R4, R5, R73, R76, R77 and Q84. All resonances of Hα and Hβ atoms were assigned except for Hβ of R4 and R5. All Cα, Cβ and carbonyl C resonances were assigned except for the carbonyls of R76 and V113. In addition, more than 90% of the remaining sidechain proton and carbon resonances were assigned. These assignments allowed a total of 2634 distance constraints (Table 1) to be unambiguously derived, and these were used in combination with the dihedral angle and hydrogen-bond constraints to calculate the solution structure of the N-terminal 135 residues of the hERG K+ channel (Fig. 1). The RMSDs of all backbone and heavy atoms excluding the N-terminal tail (S26 to K135) were 0.40 and 1.10 Å, respectively.
Figure 3. Both the unstructured tail and the α-helix of the N-terminal domain are required to slow deactivation kinetics of WT hERG channels. (A) Sequence of the N-terminal domain in WT hERG1 channels. Mutant constructs were designed to examine the effect of removing the unstructured tail (Δ2-9) or the entire N-terminal domain (Δ2-25), or the effect of disrupting the N-terminal α-helix (GGS) by replacing residues L15 and T17-E23 with P_GGSGGSG. (B) Typical rates of deactivation observed in tail currents recorded at −120 mV following a step to +40 mV; current traces were normalized to peak tail current to aid comparison. Constructs Δ2-9, Δ2-25 and GGS all produced channels with faster deactivation rates than WT hERG. (C and D) Mean ± SEM deactivation rates recorded over a range of voltages from −50 mV to −160 mV. Current decay associated with channel deactivation was best fitted to two exponentials, generating τ_fast (C) and τ_slow (D) for WT (n = 11), Δ2-9 (n = 14), Δ2-25 (n = 5) and GGS (n = 10) channels. All mutant channels enhanced the rates of both the fast and slow components of deactivation over the entire voltage range.
Statistics highlighting the extremely high precision and stereochemical quality of the ensemble of hERG PAS domain and N-terminal tail structures are shown in Table 1. The average MolProbity score of 1.55 places the ensemble in the 94th percentile relative to all other structures ranked by MolProbity [23]. The high stereochemical quality of the ensemble stems from a complete absence of bad close contacts, high Ramachandran plot quality (95% of residues in the most favored region), and a very low percentage of unfavorable sidechain rotamers. During the automated NOESY assignment/structure calculation process the CANDID module of CYANA assigned 86% of all NOESY crosspeaks to give an average of 19 NOE constraints per residue.
The NMR solution structure for the segment from residues S26 to K135 was almost identical to the crystal structure of the same region (Fig. 1B). The average RMSD of this NMR ensemble relative to the crystal structure (1BYW) was less than 1 Å. The only significant discrepancy was observed for the loop between the Hβ and Iβ strands, a region previously shown to be highly dynamic in a molecular dynamics study [24].
In addition to the structure of the PAS domain (residues S26 to K135), the NMR solution structure revealed an α-helical tail whose orientation relative to the PAS domain is randomly distributed across the ensemble (blue lines in Fig. 1A); in particular, residues T13 to E23 form a helix (Fig. 2A) with a backbone RMSD of 0.31 Å. Although a fixed conformation of the N-terminal tail could not be determined, the helix was consistent with the secondary structure prediction based on the NMR chemical shifts and inter-residue NOE constraints (Fig. S2).
Closer examination of the N-terminal helix (T13 to E23) revealed that it was amphipathic, with positively charged residues (R20 and K21), negatively charged residues (D16 and E23), and polar residues (T13 and T17) located on the same side of the helix (Fig. 2B), while non-polar residues (F14, L15, I18 and F22, but not I19) were located on the opposite face of the helix. Residues 1-9 are disordered with no clearly defined structure, while residues 10-12 adopted a turn conformation.
To study the functional significance of the α-helix in the N-terminal tail, we compared the effects of deleting only the initial portion of the N-terminus (Δ2-9) with deletion of the entire N-terminal tail (Δ2-25), or replacement of the α-helix with a flexible linker (denoted GGS mutant, see Fig. 3A). Typical examples of tail currents for Δ2-9, Δ2-25 and GGSmut channels (recorded at −120 mV and normalized to the peak inward current amplitude) are shown in Fig. 3B. All mutant channels showed significant enhancement of both the fast (Fig. 3C) and slow (Fig. 3D) components of deactivation over the entire voltage range studied. None of the mutants affected the relative amplitudes of the fast and slow components of deactivation at the most negative potentials. However, at less negative potentials, where the fast component became less dominant, the amplitude of the fast component relative to the slow component was greater in all three mutant channels compared to WT hERG (Fig. S3).
When comparing the rates of deactivation for WT and mutant channels at a single voltage, it is important to consider that changes to steady-state activation can affect the electrochemical potential for deactivation. Steady-state activation properties were determined by fitting, with a single Boltzmann expression (Equation 1), the I-V relationship of peak tail currents at −70 mV plotted against the preceding voltage step (Fig. 4). The resulting half-maximal voltage for activation (V_0.5) of Δ2-9 channels (−5.6 ± 0.9 mV, n = 14) was shifted in the depolarizing direction compared to WT hERG (−23.2 ± 0.8 mV, n = 11, ANOVA p < 0.05), without any change in slope. A small but statistically significant shift in activation V_0.5 was also observed for GGSmut (−18.5 ± 0.6 mV, n = 10) channels, while Δ2-25 channels were similar to WT hERG (Fig. 4A). The chemical (ΔG_0) and electrostatic (−z_g·E·F) potential that drives activation was calculated by fitting the activation data with a Boltzmann function in the form of Equation 2 (data summarized in Table S1). Since changes in ΔG_0 parallel changes in activation V_0.5, Δ2-9 channels had a significantly smaller chemical potential for activation (−1.9 ± 0.3 kJ mol−1) than WT hERG (−6.8 ± 0.2 kJ mol−1, ANOVA p < 0.05). To compensate for changes in electrochemical driving force, the rates of deactivation calculated at voltages between −50 mV and −160 mV were plotted against the electrochemical potential for deactivation (Fig. 4B). After correction, Δ2-9, Δ2-25 and GGSmut channels all had enhanced rates of deactivation compared with WT hERG (ANOVA p < 0.05). Thus, alterations to the electrochemical driving force for deactivation could not explain the enhanced deactivation rates seen with these mutant channels.
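A short consistency check, assuming the form of Equation (2) given above, relates the two fitted parameter sets: at the half-activation voltage the exponent vanishes, so ΔG_0 = z_g·F·V_0.5. Using the WT values quoted above gives an effective gating charge of roughly three elementary charges; the snippet below just carries out that arithmetic.

```python
# Illustrative check of dG0 = zg * F * V0.5 using the WT values quoted in the text.
F = 96485.0          # Faraday constant, C mol^-1
dG0 = -6.8e3         # WT chemical potential for activation, J mol^-1
V05 = -23.2e-3       # WT half-activation voltage, V
zg = dG0 / (F * V05) # effective number of gating charges
print(round(zg, 2))  # ~3.0
```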
To probe the role of individual residues within the N-terminal tail, native residues from P2-E23 were individually replaced with alanine, or with valine in the case of A9. Measured rates for fast (τ_fast) and slow (τ_slow) components of deactivation, in addition to the relative contributions of these components, are given in Table S2. At negative potentials, the fast component accounted for the majority of deactivation (>80%). This parameter was therefore used to compare WT and mutant channels. Several of the mutations introduced small (less than ±10 mV) but statistically significant shifts in the voltage dependence of channel activation when compared to WT hERG (Table S1). Accordingly, the effects of each mutation on deactivation rate were compared at an equivalent driving force of −30 kJ mol−1 (as indicated in Fig. 5). In Fig. 6, the effects of alanine mutants on deactivation rates are classified into those that were unchanged (grey bars), faster (red bars) and slower (blue bars) compared to WT.
Typical tail currents recorded at −120 mV, as well as mean τ_fast values plotted against electrochemical driving force, are shown in Fig. 5A for two mutations (R5A and G6A) located in the unstructured N-terminal tail. Both the R5A and G6A mutants had significantly faster deactivation rates than the WT channel; the τ_fast at −30 kJ mol−1 was 16.5 ± 1.0 ms (n = 8) for R5A and 12.7 ± 1.3 ms (n = 7) for G6A compared with 28.5 ± 2.8 ms (n = 11) for WT (ANOVA p < 0.05, Fig. 6). These data suggest that either, or both, of these residues could interact with other parts of the channel protein to slow deactivation rates in WT hERG.
Within the amphipathic N-terminal α-helix (residues T13 to E23), alanine mutations I19A and R20A gave channels with faster rates of deactivation when compared with WT hERG (see Fig. 5 and Table S2 for τ_fast values at −30 kJ mol−1). Surprisingly, two mutations in the α-helical region, i.e., T13A (τ_fast 38.2 ± 2.2 ms, n = 15) and D16A (τ_fast 54.2 ± 4.7 ms, n = 10), exhibited slower deactivation rates than WT (ANOVA, p < 0.05). Interestingly, the four residues with altered deactivation rates (T13, D16, I19, R20) lay on one face of the amphipathic helix (Fig. 6C). In addition, mutation of one residue (P10A) that lies between the amphipathic α-helix and the Δ2-9 region also significantly slowed deactivation (44.4 ± 2.4 ms, n = 14) compared with WT hERG (28.5 ± 2.8 ms, ANOVA p < 0.05).
Figure 5. Rates of deactivation are represented by the decay in tail currents recorded at −120 mV following a step to +40 mV; current traces were normalized to peak tail current to aid comparison. (B) Mean ± SEM rates of deactivation (τ_fast) plotted against the total electrochemical driving force −(ΔG_0 − z_g·E·F) for channel deactivation. When compared at an equivalent driving force of −30 kJ mol−1 (dotted line), four mutant channels exhibited altered deactivation rates compared with WT hERG (n = 11). Two mutations, P10A (n = 14) and D16A (n = 10), produced channels that were slower than WT (left panel), while R5A (n = 8) and G6A (n = 7) mutant channels deactivated faster than WT hERG (right panel).
Discussion
The solution structure of the hERG PAS domain determined in this study was found to be very similar to the crystal structure determined previously by Morais Cabral et al. [7] and the solution structure recently reported by Li and colleagues [25]. Our solution structure of the PAS domain (residues 26-135) superimposed very well with the crystal structure apart from the loop between the Hβ and Iβ strands that was previously shown to be highly dynamic [24]. The major difference between the X-ray and NMR structures is that in the latter the N-terminal tail contained an amphipathic α-helix from residues T13 to E23. The first nine residues in the NMR structures were unstructured, as were residues 24-26 that link the α-helix to the PAS domain. The remaining tail-domain residues (P10, Q11, N12) appeared to adopt a turn conformation but there were insufficient NOE constraints to be able to define it as an extension of the α-helix.
Our functional, electrophysiological data indicated that removal of either the initial unstructured segment (Δ2-9) or replacement of the α-helix with a flexible linker (GGS mutant) both produced similar phenotypes, i.e. markedly faster rates of deactivation. This is essentially the same phenotype produced by deletion of the entire N-terminal tail (Δ2-25, Fig. 3A, or Δ2-23 [7] or Δ2-26 [7]). These data indicate that the initial unstructured N-terminal segment and the α-helical region (residues 13-23) were both required, but neither alone was sufficient, for the normal slow deactivation kinetics of WT hERG channels.
Alanine scanning mutagenesis of the initial unstructured N-terminal segment (P2-N12) identified R5 and G6 as the most critical residues for deactivation. Both R5A and G6A mutants had enhanced rates of deactivation that could easily explain the faster deactivation observed with deletion of residues 2-9. Within the α-helical segment, two alanine mutants resulted in enhanced rates of deactivation (I19A and R20A). These mutants resulted in a smaller perturbation to deactivation than R5A and G6A; nevertheless, the combined effect of all four residues may explain why deletion of residues 2-25 caused a greater enhancement of the rate of deactivation than did deletion of residues 2-9.
Whilst the deletion mutants and the GGS mutant all had faster deactivation phenotypes, three alanine mutants resulted in slowed deactivation, i.e., T13A and D16A in the α-helix and P10A in the linker between the α-helix and the initial unstructured segment. It is possible that these three residues are important for ensuring that the tail does not bind too tightly to the open state of the channel. It is noted that T13 and D16 lie on the same side of the α-helix as I19 and R20, which are the only other residues in the α-helix that significantly perturbed deactivation. This suggests that this surface of the α-helix is involved in protein-protein interactions that affect the rate of deactivation. However, when we mapped the functional effect of alanine mutants onto the structure of the N-terminal α-helix, the residues with perturbed function did not lie along the entire length of the α-helix (Fig. 6C). We therefore suggest that, in addition to providing some specific interactions, the α-helix may also serve as a spacer to ensure that the flexible tail is held a predetermined distance from the PAS domain itself. Given that alanine mutants tend to stabilize α-helices, it is also possible that the P10A mutant may have stabilized a longer helical domain that results in slower deactivation.
It is important to recall that the structure we have solved is an isolated domain. It is possible that the N-terminal tail structure reported here is more flexible than it would be in the whole, intact channel protein. Conversely, it is clear that the N-terminal tail interacts with another part of the channel protein to regulate deactivation and we suggest that the flexibility of the distal N-tail (residues 1-9) is important for its function and/or regulation.
Model for the structural basis of deactivation gating
Deletion of the N-terminal tail (Δ2-25, Fig. 3A), the entire PAS domain (Δ2-138, [7]) or the majority of the N-terminus (Δ2-354 [13], Δ2-373 [11]) all result in a very similar phenotype, i.e. an approximately 5-fold faster rate of deactivation. A plausible hypothesis that explains these observations is that the PAS domain binds (with relatively high affinity) to another domain on the hERG channel and positions the flexible N-terminal tail region close to the central core of the hERG channel, where it binds and unbinds sufficiently rapidly to modulate the rate of deactivation. The region(s) of the channel where the PAS/N-terminal α-helix and the flexible N-terminal domains bind remain to be determined. Two obvious candidates are the S4-S5 linker [25], a part of the channel known to be critical for regulation of deactivation gating [12,13,26], and the C-linker plus cyclic-nucleotide binding domain, as mutations in this domain modulate the kinetics of deactivation [27,28,29]. Li and colleagues showed that the PAS domain can bind to the S4-S5 linker; however, these studies were performed with an isolated S4-S5 peptide fragment and need to be confirmed in studies involving either the entire channel protein or at least larger domains. Similarly, testing of the hypothesis that the PAS domain and/or N-terminal α-helix bind to the cyclic-nucleotide binding domain will require expression and purification of the cyclic-nucleotide binding domain.
Figure S2. Amino acid sequence of the hERG PAS domain and overview of NMR data. The secondary structure elements are labelled according to the hERG PAS domain PDB structure (2L0W). Hydrogen bond constraints used in the structure calculation are indicated as black circles. Chemical shift index (CSI) prediction of the secondary structure is shown immediately above the amino acid sequence. Thick and thin bars indicate strong and weak NOE cross-peak intensities for the sequential proton-proton NOE connectivities (dNN, dαN and dβN). The observed medium-range NOEs dNN(i, i+2), dαN(i, i+2), dαN(i, i+3), dαβ(i, i+3) and dαN(i, i+4) are indicated by lines connecting the two residues that are related by the NOE.
Figure S3. Relative amplitudes of τ_fast and τ_slow that comprise deactivation rates. Tail currents recorded over a range of potentials (Vm) following a test pulse to +40 mV are fit with a double exponential function (see methods). Relative amounts of the τ_fast and τ_slow components are then plotted against voltage. At negative potentials, where τ_fast dominates, there is little difference in relative amplitudes between WT and mutant channels (Δ2-9, Δ2-25, GGSmut). However, at less negative potentials (>−90 mV) there is a significant increase in the relative amount of τ_fast in mutant channels (Δ2-9, Δ2-25, GGSmut) compared to WT hERG.
|
v3-fos-license
|
2018-12-18T02:26:56.334Z
|
2016-06-13T00:00:00.000
|
58898603
|
{
"extfieldsofstudy": [
"Materials Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.fracturae.com/index.php/fis/article/download/IGF-ESIS.37.06/1736",
"pdf_hash": "291ba870238bc806fd6b4924dd42fc866ef3a0f9",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44300",
"s2fieldsofstudy": [
"Materials Science"
],
"sha1": "291ba870238bc806fd6b4924dd42fc866ef3a0f9",
"year": 2016
}
|
pes2o/s2orc
|
Focussed on Multiaxial Fatigue and Fracture
Analysis of initial crack path in fretting fatigue
The initial crack path is analysed in a fretting fatigue test with cylindrical contact, where there is a stress gradient and a multiaxial and non-proportional stress state. For this, a cylindrical pad is pressed, with a constant normal load, N, against a dog-bone type fatigue test specimen. Then, the test specimen is subjected to a cyclic axial stress, σ. Due to the cyclic axial stress, the assembly used and the friction between the contact pair, a tangential cyclic load Q is generated. In these tests, both components are made of Al 7075-T651 alloy. The crack initiation path along the fracture surface is optically measured using a focus variation technique. The contact stress/strain fields obtained analytically, in conjunction with the Fatemi-Socie (FS) and Smith-Watson-Topper (SWT) multiaxial fatigue parameters, allow us to determine the controlling parameters of the crack initiation process observed in the tests and to estimate the crack path during the early stage of crack growth.
INTRODUCTION
Fretting is a form of mechanical contact-related damage. This phenomenon is prone to arise in mechanical joints subjected to time-variable loads. As a consequence of these loads, and in the most common situations, small relative displacements between the contacting surfaces arise, leading to a contact strain/stress field which generates surface cracks [1]. In many practical situations, the mechanical joint is also subjected to a global fluctuating load, which by itself may be able to produce the failure of the joint but, in conjunction with the fretting-initiated cracks, makes failure more likely [2]. Today fretting fatigue is a well-documented phenomenon; its agents are identified [3], many palliatives are available [4-6], the influences of many parameters have been investigated [2,7-9], and some models have been developed in an attempt to predict the fatigue life [10-14]. Regarding the proposed fatigue models, although many of them incorporate the crack initiation phase in their predictions, it is modelled in a coarse manner [11,12,15]. Moreover, none of them makes an exhaustive analysis of the crack initiation path resulting from fretting/fretting fatigue. This work examines the shape of the initial crack path, both experimentally and analytically. For this, a series of experimental tests has been carried out, and the resulting fracture surfaces were analysed in order to characterize the crack when it is only a few microns long. Two frequently used multiaxial fatigue parameters were employed for the crack initiation analysis: Fatemi-Socie and Smith-Watson-Topper [16,17]. The former is a shear-strain-based parameter, although it also incorporates the effect of the opening stress; tentatively, this is the most suitable fatigue parameter for the analysis of fretting-initiated cracks. The latter is a purely opening-stress-based parameter, and therefore a less successful behaviour than the former is expected, although this parameter is used in the fretting literature with good results [12].
EXPERIMENTAL RESULTS
The fretting fatigue tests analysed are better described elsewhere [18]. The material is Al 7075-T651. The setup is shown in Fig. 1. The rectangular cross section of the specimens was 7 × 10 mm, where the contact is produced on the 7 mm side. The specimens were photographed after failure with the optical microscope, SEM and confocal microscope with the objective of finding the crack initiation points and studying the crack at its early stage. In a fretting fatigue test of this type, forces are applied on both sides of the specimen; therefore, there are two contact zones (one of them shown in Fig. 1). But final failure is due mainly to only one of them, because cracks usually do not initiate at exactly the same time on both sides. Nevertheless, since the stress concentration in fretting is high, initiation is fast, and after fracture small cracks are always found growing from the other contact zone. These cracks are the ones analysed here. This paper shows the results obtained on a specimen with the following loads: N = 5800 N, Q = 850 N and σ = 50 MPa. The number of cycles to failure was 676704. Fig. 2 shows the fretting scar and the fracture surface in this test on the opposite side of the main crack. Three semielliptical cracks can be clearly seen. The initiation points are marked with a red dot. The fretting pad surface roughness caused the appearance of the scar with vertical bands. The black area corresponds to the slip zones of the contact. Also, it can be seen that the cracks initiated at the contact trailing edge (x = a) or inside the slip zone. In order to measure the crack initiation angles and path, the fracture surface was analysed with a confocal microscope, obtaining the image shown in Fig. 3, where the colour code indicates the distance from the failure surface to a reference plane. The resolution of the data obtained is 0.65 μm in the plane of the crack (yz plane) and 2-3 μm in the x direction. The initiation point is marked with a white dot. This image shows that the initiation point appears at the top of a "hill", i.e. the crack grows around the initiation point with a certain angle towards the inside of the contact and then becomes perpendicular to the contact surface. This paper pays attention to the first zone, which appears red coloured in the image. The surface data were exported to a text file. Afterwards, several straight radial lines were drawn starting at the initiation point and at different angles (-45º, -35º, -25º, -15º, -5º, 5º, 15º, 25º, 35º, 45º), 0º being the line perpendicular to the surface. The crack surface along these lines is shown in Fig. 4 for each of the three cracks in the specimen. The crack profiles are normalised by the fictitious (from a linear elastic analysis) semi-width of the contact, a = 2.33 mm in Fig. 1. This dimension is calculated assuming that the real contact length is not 7 mm but 3.45 mm, as explained later. It can be seen how the crack grows almost perpendicular to the surface for the first 20 μm and then turns towards the inside of the contact with an angle between 20 and 28 degrees from the line perpendicular to the contact.
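The profile extraction described above lends itself to a short script. The sketch below is only a minimal illustration, not the authors' code: it assumes the confocal data have been exported as a regular height grid over the crack plane with a known pixel size, and it samples the depth along rays leaving a chosen initiation point at the angles used here. The array names, grid size, pixel spacing and interpolation scheme are all assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def radial_profiles(height_map, origin, pixel_size_um, angles_deg,
                    length_um=200.0, step_um=0.65):
    """Sample the surface depth along rays leaving `origin` (row, col)
    at the given angles; 0 deg is taken perpendicular to the contact surface."""
    r = np.arange(0.0, length_um, step_um)            # radial distance, um
    profiles = {}
    for ang in angles_deg:
        a = np.deg2rad(ang)
        # ray expressed in pixel coordinates of the height map
        rows = origin[0] + (r * np.cos(a)) / pixel_size_um
        cols = origin[1] + (r * np.sin(a)) / pixel_size_um
        # bilinear interpolation of the exported height data along the ray
        depth = map_coordinates(height_map, [rows, cols], order=1, mode="nearest")
        profiles[ang] = np.column_stack([r, depth])
    return profiles

# Example with synthetic data; a real run would load the exported text file.
z = np.random.rand(600, 600) * 5.0                    # heights in um (dummy)
paths = radial_profiles(z, origin=(300, 300), pixel_size_um=0.65,
                        angles_deg=range(-45, 46, 10))
```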
CRACK INITIATION ANALYSIS
In this section, an attempt to reproduce the experimentally observed crack initiation path is made. Three different crack initiation procedures are used in this analysis. The three of them are based on the well-known Fatemi-Socie (FS) [16] and Smith-Watson-Topper (SWT) [17] multiaxial fatigue parameters and will be described later. As mentioned earlier, the fretting surface scar, Fig. 2, is not ideal, and it is obvious that the contact zone is far from uniform due to excessive roughness, along the horizontal direction, in the specimen surface and mainly in the fretting pad. To consider this effect in the crack initiation procedures, it is assumed that both the normal and tangential loads per unit length are obtained by dividing the total load by the real contact length in the horizontal direction. By digital image processing it is obtained that the real contact length in the horizontal direction is 3.45 mm, leading to normal and tangential loads per unit length of 1682.2 and 246.4 N/mm, respectively. A 2D (plane strain) linear-elastic analysis using the analytical equations for the mechanical contact between a half-plane and a cylinder [19], and considering the above loads with an axial stress of 50 MPa, shows that the maximum von Mises stress produced is less than the yield stress for this material. If, in addition, it is considered that the crack initiation phase is influenced only by the very-near-surface strain/stress field, the use of the above linear-elastic model to analyse the crack initiation process is justified.
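As a quick sanity check of the numbers quoted above, the snippet below recomputes the per-unit-length loads from the measured contact length and the corresponding Hertzian peak pressure for the quoted semi-width, under the same 2D linear-elastic assumptions. It is a sketch, not the authors' analysis: small differences with the values in the text are due to rounding of the contact length, the friction coefficient is an assumed value (not reported here), and the Cattaneo-Mindlin stick-zone estimate ignores the eccentricity introduced by the bulk axial stress.

```python
import numpy as np

N, Q, L = 5800.0, 850.0, 3.45        # normal load (N), tangential load (N), real contact length (mm)
P = N / L                             # normal load per unit length, N/mm
q_l = Q / L                           # tangential load per unit length, N/mm
a = 2.33                              # contact semi-width, mm (value given in the text)

p0 = 2.0 * P / (np.pi * a)            # Hertzian peak pressure, MPa (N/mm^2)
x = np.linspace(-a, a, 201)
p = p0 * np.sqrt(1.0 - (x / a) ** 2)  # elliptical pressure distribution over the contact

mu = 0.7                              # assumed friction coefficient (illustrative only)
# 2D Cattaneo-Mindlin stick-zone size, without the bulk-stress offset
c_over_a = np.sqrt(max(0.0, 1.0 - q_l / (mu * P)))

print(f"P = {P:.1f} N/mm, q = {q_l:.1f} N/mm, p0 = {p0:.0f} MPa, c/a = {c_over_a:.2f}")
```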
First Crack Initiation Analysis Procedure
A scheme of this procedure is shown in Fig. 5. First, at the trailing edge (x = a, y = 0) the direction which gives the maximum value of the parameter (critical plane) is obtained. The search for this maximum is done only in the xy plane, using increments of 0.1 degrees. Once the critical plane at the trailing edge is found, it is assumed that the crack advances 1 µm in this direction (point 2 in Fig. 5). Then at point 2 the critical plane is again determined and a new 1 µm increment of crack length in this direction is assumed, leading to a new point 3. Repeating this procedure again and again, a crack path is obtained. This procedure is not orthodox, since this is not the way a crack propagates, but it is nevertheless interesting to see the results and compare them with the other alternatives. Fig. 6 shows, in addition to the averaged paths for the experimentally observed cracks, the paths obtained with this procedure. First, note that the FS parameter, although being the most suitable fatigue parameter for fretting (it combines both shear and normal stresses and would therefore be expected to predict the experimentally observed cracks better), produces the worst results, i.e., the predicted crack path grows outside the region beneath the contact zone. On the other hand, the SWT parameter predicts a crack path growing into the region beneath the contact zone, similar to the one obtained experimentally. It is important to recall that the FS parameter and, therefore, the critical plane orientation depend on the value considered for the k parameter. This parameter, k, measures the relative importance of the stress normal to the crack plane in the Fatemi-Socie criterion. Nevertheless, the results are very similar even when varying the value of k from 0.44 [12] (the value used for the path shown in Fig. 3) up to an exaggerated value of 100. It is worth pointing out that in the analysis using the FS parameter, and at the trailing edge, two possible critical planes appear, i.e., two planes producing the same maximum value of the parameter. One critical plane points towards the region beneath the contact zone (path 1) and the other outside it (path 2), Fig. 6b. The path considering the first critical plane (path 1) rapidly and abruptly rotates and follows a path nearly identical to that pointing towards the region outside the contact zone (path 2). According to this, it is virtually irrelevant which of the two initial critical planes is considered.
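A compact way to express this first procedure is the marching loop sketched below. It is a schematic reconstruction, not the authors' implementation: the fatigue-parameter evaluation is passed in as a user-supplied function (a dummy stand-in is used so the sketch runs), the angle and direction conventions are arbitrary choices, and a real analysis would evaluate, for example, the Fatemi-Socie value (Δγ/2)(1 + k·σ_n,max/σ_y) from the analytical contact stress field resolved onto each candidate plane.

```python
import numpy as np

def critical_plane(point, param_on_plane, d_theta_deg=0.1):
    """Scan plane orientations in the xy plane (0.1-degree steps, as in the
    text) and return the angle that maximises the fatigue parameter."""
    thetas = np.arange(0.0, 180.0, d_theta_deg)
    values = np.array([param_on_plane(point, t) for t in thetas])
    i = int(np.argmax(values))
    return thetas[i], values[i]

def march_crack_path(start_mm, param_on_plane, n_steps=100, step_um=1.0):
    """First procedure: find the critical plane at the current point,
    advance 1 um along it, and repeat."""
    path = [np.asarray(start_mm, dtype=float)]
    for _ in range(n_steps):
        theta, _ = critical_plane(path[-1], param_on_plane)
        direction = np.array([np.sin(np.deg2rad(theta)),
                              -np.cos(np.deg2rad(theta))])   # into the solid
        path.append(path[-1] + step_um * 1e-3 * direction)   # um -> mm
    return np.array(path)

# Dummy parameter so the sketch runs end to end; replace with the actual
# critical-plane evaluation built on the analytical contact solution.
dummy = lambda point, theta: np.cos(np.deg2rad(theta - 25.0)) ** 2
path = march_crack_path(start_mm=(2.33, 0.0), param_on_plane=dummy)
```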
Second Crack Initiation Analysis Procedure
This procedure only shows the zones with a higher level of damage and gives a hint of the crack initiation path. Fig. 7 shows, in addition to the paths of Fig. 6 and the experimental cracks, the contour plots obtained using both fatigue parameters. Regarding the FS parameter contour plot, it is clear that the most likely area for crack initiation is that beneath the contact, and the reasoning for this is clear: in terms of the parameter value, this area presents a lower gradient than the region that is not beneath the contact zone. Therefore, beneath the contact any possible crack initiation path presents higher mean values of the FS parameter than a crack path growing outside the region beneath the contact zone.
For the SWT parameter, although less markedly than for the FS parameter, the same behaviour is observed, but now only in the area very close to the surface. While this procedure may give a "visual approach" to the crack initiation path, it is important to note that only the maximum values of the parameter are considered here, and no specific information is given about the critical plane orientation (crack initiation angle). Under this circumstance, it is possible that nearby points have completely different critical plane orientations.
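In code, this second procedure amounts to evaluating, at each point of a grid below the surface, the maximum of the fatigue parameter over all candidate plane orientations and contouring the result. The short sketch below illustrates the idea with the same kind of stand-in parameter function as before; the grid extents and the parameter evaluation are placeholders, not values taken from the paper.

```python
import numpy as np

def damage_map(param_on_plane, x_mm, y_mm, d_theta_deg=1.0):
    """Maximum of the fatigue parameter over plane orientation at each grid
    point; note that the maximising orientation itself is discarded."""
    thetas = np.arange(0.0, 180.0, d_theta_deg)
    dmg = np.zeros((len(y_mm), len(x_mm)))
    for i, y in enumerate(y_mm):
        for j, x in enumerate(x_mm):
            dmg[i, j] = max(param_on_plane((x, y), t) for t in thetas)
    return dmg

x = np.linspace(2.0, 2.7, 71)   # window around the trailing edge x = a = 2.33 mm
y = np.linspace(0.0, 0.3, 31)   # depth below the surface, mm
dmg = damage_map(lambda p, t: np.hypot(*p) * np.cos(np.deg2rad(t)) ** 2, x, y)
# e.g. matplotlib.pyplot.contourf(x, y, dmg) to reproduce a damage contour plot
```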
Third Crack Initiation Analysis Procedure
This is a variation of the second procedure, introducing the information of the critical plane. However, this third procedure has the drawback that it only considers straight crack initiation paths. A scheme of this procedure is shown in Fig. 8. First, a straight line starting from the contact trailing edge and forming an angle θ with the vertical is considered. Then, the parameter value is calculated along this line, but with the characteristic that, for all these points, the material plane orientation considered for the parameter evaluation coincides with that defined by θ. This is repeated for different values of θ, obtaining the contour plot in Fig. 9. Now, because all points along a straight line have the same material plane for the fatigue parameter evaluation, it is reasonable to postulate that those lines having the higher mean values of the parameter along a certain length are more likely to initiate cracks. Fig. 9 also includes the paths of Fig. 6 and the experimental cracks. Now both the FS and SWT parameters clearly predict crack initiation paths that point toward the region beneath the contact. To obtain the preferred direction more precisely, the mean values obtained for each parameter over a distance of 50 µm (the average grain size for Al 7075-T651) are represented in Fig. 10 as a function of the angle θ. This graph shows that angles of 54º and 9º give the maximum mean values for the FS and SWT parameters, respectively. Lines having these angles are plotted in Fig. 9. First, note that, although with an angle far from the experimentally observed one, the FS parameter now produces a crack initiation path that qualitatively has the right direction. On the other hand, once again the SWT parameter offers the best prediction; note also that this path is quite similar to the previous one obtained with the first procedure.
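The third procedure can likewise be written as a small scan over candidate angles: along each straight line from the trailing edge the parameter is evaluated with the material plane fixed at the line's own angle, averaged over the first 50 µm, and the angle with the largest mean is retained. The sketch below follows that recipe with the same placeholder parameter function as before; the angle range, point count and stand-in function are assumptions, not values from the paper.

```python
import numpy as np

def best_initiation_angle(param_on_plane, origin_mm=(2.33, 0.0),
                          angles_deg=np.arange(-80.0, 80.5, 1.0),
                          length_um=50.0, n_pts=26):
    """Third procedure: mean parameter value along straight lines whose
    material plane coincides with the line angle; return the best angle."""
    r = np.linspace(0.0, length_um * 1e-3, n_pts)        # distances in mm
    means = []
    for theta in angles_deg:
        d = np.array([np.sin(np.deg2rad(theta)), -np.cos(np.deg2rad(theta))])
        pts = np.asarray(origin_mm) + np.outer(r, d)      # points along the line
        means.append(np.mean([param_on_plane(tuple(p), theta) for p in pts]))
    means = np.asarray(means)
    return angles_deg[int(np.argmax(means))], means

# Stand-in parameter; a real run would use the FS or SWT evaluation.
angle, means = best_initiation_angle(lambda p, t: np.cos(np.deg2rad(t - 25.0)) ** 2)
```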
Figure 1: Experimental setup in the fretting fatigue tests.
Figure 2: Fretting scar and the fracture surface.
Figure 5: Scheme for first crack initiation procedure.
Figure 6: Crack paths predicted with the first procedure.
Figure 7: Contour plots for the FS and SWT parameters.
Figure 8: Scheme for the third crack initiation procedure.
Figure 9: Polar contour plots for the FS and SWT parameters.
|
v3-fos-license
|
2018-04-03T00:05:53.766Z
|
2003-03-14T00:00:00.000
|
8581860
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://www.jbc.org/content/278/11/9290.full.pdf",
"pdf_hash": "749bc98cc63dbb9de7179d31e3ee5cc43a66cc57",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44301",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "49d4bba9b742f82daeec770752abd51b67ea1ee4",
"year": 2003
}
|
pes2o/s2orc
|
Platelet-derived Growth Factor Induces the β-γ-Secretase-mediated Cleavage of Alzheimer's Amyloid Precursor Protein through a Src-Rac-dependent Pathway
The β-amyloid peptide (Aβ) present in the senile plaques of Alzheimer's disease derives from the cleavage of a membrane protein, named APP, driven by two enzymes, known as β- and γ-secretases. The mechanisms regulating this cleavage are not understood. We have developed an experimental system to identify possible extracellular signals able to trigger the cleavage of an APP-Gal4 fusion protein, which is detected by measuring the expression of the CAT gene transcribed under the control of the Gal4 transcription factor, which is released from the membrane upon the cleavage of APP-Gal4. By using this assay, we purified a protein contained in the C6 cell-conditioned medium, which activates the cleavage of APP-Gal4 and which we demonstrated to be PDGF-BB. The APP-Gal4 processing induced by PDGF is dependent on the γ-secretase activity, being
strated to bind to APP and AID through their PTB domains (16 -19).
The evident complexity of this protein-protein interaction network suggests that APP could be a multifunctional molecule that anchors several different oligomeric complexes close to the membrane, possibly in specific subdomains such as caveolae (20), and/or that APP regulates the availability of these complexes in their final destination, upon APP cleavage and detachment of the APP cytodomain from the membrane.
A crucial point that should be addressed to further study the possible interplay of APP and transduction pathways involving the above mentioned proteins concerns the molecular mechanisms that induce APP processing. In this report we show that PDGF-BB is a potent activator of APP β-γ cleavage, giving rise to an increased generation of Aβ through a pathway involving the non-receptor tyrosine kinase Src and the small G-protein Rac1.
Restriction sites are underlined, and the ten glycine codons are in italic. This APP-Gal4 expressing vector, containing the neomycin resistance gene, has been transfected into HeLa cells by the calcium-phosphate method, and after a 14-day G418 selection (900 μg/ml final concentration), several G418-resistant clones have been isolated. Two pools of these clones have been used (HeLaAG).
The cleavage of APP-Gal4 fusion protein has been assayed by transiently transfecting HeLaAG cells (5 × 10⁵ cells/60-mm dishes) by the calcium-phosphate method with G5BCAT vector (3 μg), in which the transcription of chloramphenicol acetyltransferase (CAT) gene is under the control of a Gal4-dependent promoter (21). CAT expression was measured by using colorimetric CAT enzyme-linked immunosorbent assay (Roche Molecular Biochemicals). Other transfections of HeLaAG cells were carried out by using the calcium-phosphate method; all the plasmids were used at 3 μg each, and the total amount of DNA in co-transfections was always brought to 10 μg with RcCMV vector.
Cell Culture Conditions-Wild type HeLa and HeLaAG cells were grown at 37°C in the presence of 5% CO₂ in Dulbecco's modified Eagle's medium (DMEM, Invitrogen) supplemented with 10% fetal bovine serum, 100 units/ml penicillin, and 100 μg/ml streptomycin (all from HyClone). C6 and NIH3T3 cells and co-cultures were at 37°C in the presence of 5% CO₂ in RPMI medium (Invitrogen) supplemented with 10% fetal bovine serum and antibiotics.
Wild type HeLa or HeLaAG cells, 24 h after transfection with G5BCAT vector, were treated for the indicated times with 40 ng/ml recombinant human PDGF-BB (Sigma), 5 or 10 μg/ml protein fraction precipitated with 40% AS, or 200-μl fractions eluted from Sephadex G-75, diluted in DMEM without serum to a final volume of 2 ml. Inhibitors were added at the indicated times at the following concentrations: 10 μM PP2 (Calbiochem), 2 μM AG1296 (Calbiochem), 30 μM genistein (Calbiochem), 10 μM γ-secretase inhibitor compound X (Calbiochem), 100 μM PD098059 (Sigma), and 100 nM wortmannin (Sigma).
HEK293 and CHO cells were grown in the same conditions as HeLaAG. HEK293 were transfected with 0.5 μg of human APP695 expression vector and 0.5 μg of SrcYF vector by LipofectAMINE 2000 (Invitrogen) in 35-mm plates. Total Aβ peptide was measured by sandwich enzyme-linked immunosorbent assay with 6E10 and 4G8 antibodies.
Purification of the Activity That Induces APP-Gal4 Cleavage-C6 cells were grown as described above in 100 dishes of 150-mm diameter (Falcon) to confluence; cell sheets in each dish have been washed twice with phosphate-buffered saline (PBS), and then cells were cultured in RPMI medium without serum. After 3 days of incubation, 3 liters of conditioned medium was harvested, centrifuged at 1000 rpm for 20 min to remove debris, and concentrated to 480 ml by using a Centriplus YM-3000 (Amicon). Then, 104 g of ammonium sulfate was added to the concentrated conditioned medium to obtain a 40% ammonium sulfate solution, which was stirred overnight at 4°C and then centrifuged at 9000 rpm for 2 h. 70% AS saturation was reached by adding 87.4 g of ammonium sulfate to the 40% saturated solution, and 100% AS saturation was reached by adding 100.3 g of ammonium sulfate to the 70% saturated solution.
The precipitates were dissolved in 15 ml of PBS and dialyzed against 5 × 2-liter changes of PBS. 1 mg of this sample was separated by FPLC onto a Sephadex G-75 column (30 g of swollen resin, pre-equilibrated in PBS) and run in this solvent at 0.25 ml/min while collecting 400-μl fractions every 1.6 min.
Bands from SDS-PAGE were excised from the gel, triturated, and washed with water. Proteins were reduced in-gel, S-alkylated with iodoacetamide, and digested with trypsin as previously reported (22). Digested aliquots were subjected to a desalting/concentration step on ZipTipC18 (Millipore Corp., Bedford, MA) before MALDI-TOF mass spectrometry analysis. Peptide mixtures were loaded on the instrument target, using the dried droplet technique and α-cyano-4-hydroxycinnamic acid as matrix, and analyzed by using a Voyager-DE PRO mass spectrometer (Applied Biosystems, Framingham, MA). The PROWL software package was used to identify proteins unambiguously from an independent non-redundant sequence data base (23).
Immunodepletion of the 40% AS fraction was obtained by incubating 20 μg of the fraction diluted in 500 μl of PBS with 60 μg of anti-PDGF antibody (Sigma) or with 60 μg of mouse IgG (Sigma) for 2 h at 4°C. Then the mixtures were chromatographed on 20 μl of Protein AG-Sepharose (Santa Cruz Biotechnology) for 30 min at 4°C, and, after centrifugation, the supernatants were diluted in DMEM without serum to a final volume of 2 ml.
Preparation of Cell Extracts and Western Blotting Analyses-For CAT assay, transiently transfected HeLaAG cells were harvested in cold TEN (40 mM Tris-HCl, pH 7.5, 1 mM EDTA, 150 mM NaCl), frozen at −80°C for 30 min, and resuspended in lysis buffer (10 mM Hepes, pH 7.9, 0.1 mM EGTA, 0.5 mM dithiothreitol, 5% glycerol, 0.2 mM phenylmethylsulfonyl fluoride, 400 mM NaCl). Total extracts were clarified by centrifugation at 14,000 rpm at 4°C, and protein concentration was determined by Bio-Rad assay; for CAT concentration measurement, 150 μg of each protein extract was used.
For Western blotting analyses, HeLaAG cells were harvested in cold PBS, resuspended in lysis buffer (40 mM Tris-HCl, pH 7.2, 1% Triton X-100, 150 mM NaCl, 1 mM EDTA, 0.2 mM phenylmethylsulfonyl fluoride, 100 μg/ml aprotinin, 100 μg/ml leupeptin) and kept in ice for 15 min. Then total extracts were clarified by centrifugation at 14,000 rpm at 4°C. 20 μg of each extract or 20 μl of pooled fractions eluted from the G-75 column was electrophoresed on a 4-12% SDS-polyacrylamide gradient gel under reducing conditions and transferred to Immobilon-P membranes (Millipore). Filters were then blocked in 5% nonfat dry milk in T-PBS solution (PBS and 0.05% Tween) and incubated with appropriate dilutions of primary antibody, overnight at 4°C. The excess antibody was removed by sequential washing of the membranes in T-PBS, and then a 1:5000 dilution of the appropriate secondary antibody (horseradish peroxidase-conjugated) was added to filters for 30 min, at room temperature. The excess was removed by sequential washing of the membranes in T-PBS, and the signals were detected by chemiluminescence, using the ECL system (Amersham Biosciences). The antibodies used and their dilutions were: anti-PDGF (Sigma), 1:750; anti-Gal4DBD (Calbiochem), 1:1000; anti-APP 6E10 (Sigma), 1:1000; anti-APP CT695 (Zymed Laboratories Inc.), 1:250; anti-phosphoERK (Santa Cruz Biotechnology), 1:1000; and anti-phosphoAkt (Santa Cruz), 1:1000.
C6 Cell-conditioned Medium Induces Gal4-dependent CAT Gene Transcription in HeLa Cells Expressing APP-Gal4 Fusion Protein-We examined the possibility that extracellular signals could induce the β-γ-secretase-mediated cleavage of APP.
To address this point we developed an experimental system based on a recombinant protein in which the yeast Gal4 transcription factor is fused to the cytosolic C-terminal end of APP695. This system is based on the prediction that, in cells expressing APP-Gal4, upon the cleavage of this molecule by β-γ-secretase activities, AID-Gal4 is released from the membrane and should become available to activate the transcription of the chloramphenicol acetyltransferase (CAT) gene cloned under the control of five Gal4 cis-elements in the G5BCAT vector (21) (see Fig. 1A). Based on this experimental design, HeLa cells were transfected with a vector driving the expression of APP-Gal4 fusion protein and G418 resistance gene, and several clones stably expressing APP-Gal4 have been isolated. Fig. 1B shows a Western blot of the extracts from several HeLa clones challenged with either APP or Gal4 antibodies and demonstrates the expression of a protein recognized by both antibodies. The experiments reported below were conducted by using two pools of these clones, HeLaAG1-8 and HeLaAG9-14, thereafter indicated as HeLaAG.
To evaluate both cell-anchored and secreted factors that could activate APP-Gal4 proteolytic processing, the first experimental approach we used consisted of 1) a co-culture of HeLaAG cells, transiently transfected with G5BCAT plasmid, with various cell lines of different origin and 2) an assay of CAT accumulation in HeLaAG cultures pure or co-cultured with these cells. Fig. 1C shows that co-culturing of HeLaAG with C6 cells resulted in a significant increase of the CAT expressed by HeLaAG, whereas no change was observed in the co-cultures with other cell lines, such as NIH3T3 fibroblasts. C6 cells are derived from a rat glioma and are known to secrete several growth factors (24). Therefore, we examined whether the conditioned medium from C6 cells mimics the effect of CAT accumulation observed in the co-cultures. As shown in Fig. 1C, HeLaAG cells, grown in the presence of C6-conditioned medium, express higher levels of CAT compared with the cells grown in the conditioned medium from HeLa cells or from NIH3T3 cultures.
Purification of the APP-Gal4 Cleavage-inducing Activity-A large-scale preparation of C6-conditioned medium was used as a source for the purification of the one or more molecules that induce the CAT accumulation in HeLaAG cells transfected with G5BCAT vector. Fig. 2 shows the steps of this purification based on ammonium sulfate (AS) precipitation, size-exclusion chromatography, and SDS-PAGE. The activity is restricted to the 40% AS fraction ( Fig. 2A), which was applied on FPLC equipped with a Sephadex G-75. Eluted fractions from the chromatography were assayed for their ability to induce CAT accumulation in HeLaAG cells, and the results allowed us to identify two peaks of activity of about 70 and 30 kDa, respectively (Fig. 2B). SDS-PAGE of the proteins present in the relevant fractions, compared with fractions devoid of activity, suggested that one band of about 15 kDa could be a good candidate (see Fig. 2C). This band, separately excised from the lanes of the corresponding active fractions, was digested with trypsin and analyzed by MALDI-TOF mass spectrometry. Peptide mass fingerprint analysis and non-redundant sequence data base matching in both cases allowed its unambiguous identification as PDGF-B. To confirm the identification, the relevant fractions were electrophoresed and blotted with a PDGF antibody. This blot demonstrated that PDGF is present only in the fractions that activate the accumulation of CAT in HeLaAG cells (Fig. 2D).
PDGF Induces a γ-Secretase-dependent Cleavage of APP-Gal4 Protein-HeLaAG were exposed to 40 ng/ml of purified human PDGF-BB for 12 h, and this resulted in a dramatic induction of CAT expression (Fig. 3). This phenomenon depends on the activation of the PDGF receptor (PDGF-R), considering that the treatment of HeLaAG cells with a TK-nonspecific inhibitor such as genistein or with PDGF-R inhibitor AG1296 resulted in a significant decrease of CAT accumulation following the treatment with PDGF. Although, unexpectedly, PDGF is present also in the ~70-kDa fraction of the size exclusion chromatography (see Fig. 2, C and D), it cannot be excluded that other molecules with an activity similar to that of PDGF could be also present in the C6-conditioned medium. To address this point, HeLaAG cells were treated with the 40% ammonium sulfate fraction and with the PDGF-R inhibitor. Also in this case, the accumulation of CAT was prevented by the PDGF-R inhibitor (see Fig. 3), thus strongly supporting that PDGF is the only factor, present in C6-conditioned medium, that activates APP-Gal4 cleavage. Furthermore, immunodepletion of the 40% AS fraction with anti-PDGF antibody resulted in the abolishment of the CAT accumulation observed upon exposure of HeLaAG cells to pure 40% AS fraction, whereas the depletion with mouse IgG was completely ineffective (see Fig. 3). Therefore, the activity present in the 70-kDa fraction could be a multimer of the PDGF-B subunit.
FIG. 1, B and C (legend). Western blots are with APP 6E10 antibody, recognizing the extracellular domain of APP, and with Gal4 antibody, respectively. One asterisk indicates wild type APP bands, two asterisks indicate APP-Gal4 bands, and the arrowhead indicates wild type Gal4 bands. C, HeLaAG cells transfected with G5BCAT vector were co-cultured with either NIH3T3 or C6 cells or cultured in the presence of the conditioned medium of these cell lines. For co-cultures, cells were plated at the indicated cell numbers, harvested 72 h after plating, and their extracts were assayed for CAT concentration. Conditioned media from 72-h cultures of the indicated cells were added to 2.5 × 10⁵ HeLaAG cells. Extracts from cells harvested 48 h after the exposure to conditioned medium were assayed for CAT concentration. Standard deviations of triplicate experiments are reported.
To rule out the possibility that PDGF treatment induces CAT accumulation through a mechanism independent from the cleavage of APP-Gal4, we exposed wild type HeLa cells mock transfected or transfected with Gal4 to 40 ng/ml PDGF-BB. As shown in Fig. 4A, PDGF-BB treatment did not modify the accumulation of CAT in these experimental conditions. Therefore, the increase of CAT concentration observed in HeLaAG cells exposed to PDGF could be due to an activation of the CAT gene transcription by Gal4 released upon the cleavage of APP-Gal4. To address this point, extracts from HeLaAG cells exposed to PDGF or 40% AS fraction were analyzed by Western blot with anti-APP or anti-Gal4 antibodies. These experiments showed the presence, in extracts from HeLaAG cells exposed to PDGF or 40% AS fraction, of a band of a size very similar to that of Gal4, with both the Gal4 antibody and the CT695 antibody, recognizing the APP C-terminal domain (see Fig. 4B). A similar blot was challenged with the 6E10 antibody, which was directed against the N-terminal sequence of the β-amyloid peptide. This antibody recognizes the uncleaved APP-Gal4 but not the cleaved molecule. This indicates that the cleaved molecule contains the C-terminal domain of APP (AID-Gal4) and not the N-terminal sequence of Aβ.
To evaluate whether the observed cleavage of APP-Gal4 requires the γ-secretase activity, we treated HeLaAG cells exposed to PDGF or to the 40% AS fraction with the γ-secretase inhibitor compound X (25). As shown in Fig. 4C, the treatment of HeLaAG cells exposed to either PDGF or partially purified fraction with 10 μM of the γ-secretase inhibitor resulted in an almost complete abolishment of the effects on CAT accumulation. These results indicate that PDGF, through the activation of its receptor, induces a proteolytic cleavage of APP-Gal4, which requires the γ-secretase activity.
PDGF-induced APP-Gal4 Cleavage Functions through an Src-dependent Pathway-There are many pathways activated following PDGF-R interaction with its cognate growth factor. Tyrosine-phosphorylated PDGF-R activates the Ras-MAPK pathway through Grb2/SOS and Shc/Grb2/SOS. The possible involvement of this pathway in the APP-Gal4 processing was explored by treating HeLaAG cells exposed to PDGF with the inhibitor of ERKs, PD098059; this inhibitor does not modify the effects of both PDGF and 40% AS fraction (Fig. 5). Another pathway that mediates the effects of PDGF-R activation is that of PI3K-Akt. Also, Fig. 5 shows that the PI3K inhibitor wortmannin does not affect the CAT accumulation induced by both PDGF and 40% AS fraction.
Src and other members of the Src non-receptor TK family interact with and are activated by PDGF-R (26). To explore this pathway, HeLaAG cells have been treated with a specific inhibitor of Src TK, PP2, and with a related compound unable to inhibit Src (PP3). The treatment with PP2 of HeLaAG cells almost completely abolished the accumulation of CAT observed upon the exposure to either PDGF or 40% ammonium sulfate fraction, whereas the treatment with PP3 was completely ineffective (see Fig. 5).
To further explore this finding, HeLaAG cells were transiently transfected with SrcYF vector expressing a constitutively active Src mutant (27). As shown in Fig. 6A, the expression of active Src resulted in the accumulation of CAT in the absence of stimulation by either PDGF or purified fractions. Accordingly, HeLaAG cells transfected with a dominant negative mutant of Src (SrcY-FKM) (28) and exposed to PDGF or to 40% AS fraction showed a significantly decreased accumulation of CAT, compared with mock transfected cells exposed to PDGF.
The possible effectors downstream of Src are not completely understood. One of these downstream factors is the non-receptor TK Abl (29). The possible role of Abl TK in APP-Gal4 cleavage was explored by transfecting HeLaAG cells with a constitutively active mutant of Abl (Abl-PP) (30). Under these conditions no induction of APP-Gal4 cleavage was observed, thus indicating that this kinase is not involved in this phenomenon. On the contrary, another molecule that has been recently observed to be activated by PDGF and Src is Rac1, which belongs to the family of Rho G-proteins. The transfection of HeLaAG cells with a vector driving the expression of a constitutively active form of Rac (RacQL) (31) resulted in an increase of CAT comparable to that observed upon the transfection with SrcYF, and the co-transfection of SrcYF with a dominant negative mutant of Rac (RacN17) strongly decreased the amount of CAT compared with that accumulated in the cells transfected only with SrcYF (see Fig. 6A). Furthermore, a similar inhibition of CAT accumulation, following the exposure to either PDGF or 40% ammonium sulfate fraction, was observed in the cells transfected with RacN17.
To ascertain whether Src and Rac1, like PDGF, also activate APP-Gal4 processing through a γ-secretase-dependent pathway, HeLaAG cells were transfected with SrcYF or with RacQL, the constitutively active mutants of these two proteins, and treated with the γ-secretase inhibitor compound X. As shown in Fig. 6B, the γ-secretase inhibitor almost completely abolished the effects of SrcYF and RacQL transfections.
PDGF Induces the Generation of Aβ from Wild Type APP through an Src-dependent Pathway-The above reported results indicate a clear dependence of the PDGF-Src-induced cleavage of APP upon the γ-secretase activity, but they do not allow us to distinguish between α-secretase and BACE activities, whose actions are known to precede the γ-secretase-induced cleavage. To address this point, we examined the effects of the PDGF-Src pathway on the processing of APP, by measuring the accumulation of Aβ in cultured cells in which this pathway is activated or blocked. To do this, HEK293 cells were transfected with APP695 alone or with APP695 plus SrcYF. As shown in Fig. 7A, there is a significantly increased accumulation of Aβ in the medium of cells expressing the constitutively active form of Src. Furthermore, CHO cells stably expressing APP695, which generate high levels of Aβ, were treated with two concentrations of the inhibitor of Src TK, PP2. In these conditions, Aβ generation is significantly decreased, whereas the analogous molecule PP3, not affecting Src TK activity, was completely ineffective (see Fig. 7B).
DISCUSSION
The proteolytic processing of APP leading to the generation of the Aβ peptide is an extensively studied phenomenon, due to its implication in the pathogenesis of Alzheimer's disease (AD). The great effort to understand the machineries involved in the various types of cleavages of APP resulted in the identification and molecular characterization of two out of three of the secretases, i.e. the α- and β-secretases, and many preliminary results indicate that, despite its complexity, γ-secretase is also close to being understood. On the contrary, the mechanisms regulating this proteolytic processing are not completely understood. Here, we report experiments demonstrating that the β-γ processing of APP is under a positive control by PDGF through a pathway involving Src and Rac1.
Most of the available data on the regulation of APP processing concern sAPP secretion (for a review see Ref. 32). It is well demonstrated that the activation of the muscarinic receptor induces an increased secretion of sAPP (33), and a similar phenomenon has been reported also for the metabotropic glutamate receptor (34) and for serotonin receptors (35). These effects are regulated through a PKC-dependent pathway (33), and, accordingly, it is well known that activated PKC induces sAPP secretion and inhibits Aβ generation (36,37). On the contrary, very little is known of the possible effects of the activation of tyrosine kinase receptors on APP processing.
FIG. 3. PDGF-BB contained in C6 cell-conditioned medium induces CAT accumulation in HeLaAG cells. HeLaAG cells transfected with G5BCAT vector were exposed to 40 ng/ml recombinant PDGF-BB or to 10 μg/ml 40% AS fraction for 24 h before harvesting. In the same conditions the cells were also exposed, as indicated, to 30 μM genistein or to 2 μM AG1296, which are a general TK inhibitor and a PDGF-R TK inhibitor, respectively. To ascertain whether the 40% AS fraction also contains factors, other than PDGF-BB, activating CAT expression, the 40% AS fraction was immunodepleted either with anti-PDGF antibody (α-PDGF) or with mouse IgG (mIgG), and HeLaAG cells were exposed to these mixtures (ID 40%AS). Standard deviations of triplicate experiments are reported.
The relevance of the reported results for neuronal APP functions and for the pathogenesis of AD should be addressed through further work. However, PDGF-R, Src, and Rac, although widely expressed in many different cell types, are known to play significant roles in the nervous system. In fact, it was clearly documented that the PDGF α-receptor is expressed in neurons of various districts of mouse and rat CNS. This expression, detected as early as postnatal day 1, is observed during all the postnatal life, whereas the expression of the PDGF α-receptor in oligodendrocytes is abundant during development, but is restricted in the adult to few precursor cells (38). These results are in agreement with several observations indicating a protective role for PDGF in several neuronal cells (39-41). PDGF-A and PDGF-B are constitutively expressed by neurons in vivo (42), and this suggests further that these growth factors, which regulate proliferation and differentiation of oligodendrocytes (42), could also regulate the functions of the neurons themselves (38). Src, and the related non-receptor TK Fyn, are expressed in the neurons, are enriched in growth cones (43), and are involved in several neuronal functions, such as for example Ig CAM-mediated neurite growth and guidance (44). The three members of the Rho family of small GTPases, Rho, Rac1, and Cdc42, are ubiquitously involved in actin cytoskeleton regulation, affecting cell attachment and contraction, lamellipodia formation, and filopodia formation, respectively (45). Their involvement in the regulation of neuronal functions is well documented. In particular, Rac has been implicated in neurite outgrowth and axonal pathfinding (46). In addition to numerous in vitro results, this is demonstrated by the expression of a constitutively active form of Rac1 in Purkinje cells, which resulted in an ataxic phenotype of mice that is accompanied by alterations of dendritic spines (47). Accordingly, the phenotypes induced by combined mutations of the three Rac GTPases of Drosophila are characterized by defects of branching, guidance, and growth of axons (48).
FIG. 4. PDGF-induced CAT accumulation depends on APP-Gal4 cleavage by γ-secretase activity. A, wild type HeLa cells were transfected with G5BCAT vector alone (mock) or with both G5BCAT vector and Gal4 expression vector (Gal4) and treated with 40 ng/ml recombinant PDGF-BB or with 10 μg/ml of the 40% AS fraction for 24 h before harvesting. The amount of CAT was measured in triplicate experiments, and standard deviations are reported. B, cell extracts from HeLaAG cells exposed or not for 48 h to 40% AS fraction or to PDGF-BB were electrophoresed on SDS-PAGE and analyzed by Western blot with CT-695 or 6E10 APP antibodies or Gal4 antibody, as indicated. This demonstrated that the exposure to 40% AS fraction or PDGF-BB results in a change of the size of the APP-Gal4 bands, toward a major band of about 100 kDa, similar to that of wild type Gal4 (indicated by an arrowhead), and recognized by both Gal4 antibody and CT-695 antibody directed against the C-terminal domain of APP. On the contrary, 6E10 antibody failed to recognize the cleaved protein, thus demonstrating that it does not contain the N-terminal β-amyloid epitope. The asterisk indicates the APP-Gal4 bands. C, HeLaAG cells transfected with G5BCAT vector and exposed to PDGF-BB or 40% AS fraction, as in Fig. 3, were treated with 10 μM γ-secretase inhibitor compound X for 12 or 24 h, as indicated. Standard deviations of triplicate experiments are reported.
FIG. 5. Inhibition of Src TK activity prevents APP-Gal4 cleavage in HeLaAG cells exposed to either PDGF-BB or partially purified C6-conditioned medium. HeLaAG cells transiently transfected with G5BCAT vector were exposed to 40 ng/ml recombinant PDGF-BB or to 10 μg/ml of the 40% AS fraction, as reported in Fig. 4. These cells were also treated for 24 h before harvesting with either 100 μM ERK inhibitor PD098059, 100 nM PI3K inhibitor wortmannin, 10 μM Src inhibitor PP2, or with a PP2-like molecule, PP3, devoid of Src TK inhibiting activity. Standard deviations of triplicate experiments are reported.
The most known effectors of Rac1 are the PAK serine/threonine kinases, which are activated through the binding of Rac-GTP or Cdc42-GTP to their N-terminal autoinhibitory domain (for a review see Ref. 49). A second known effector downstream of Rac1 is the kinase Cdk5 (50), and the observation that APP β-γ processing is activated by Rac1 and the well demonstrated function of Rac1 in the activation of p35/Cdk5 suggest a possible crucial role for this small G-protein in the generation of the pathological signs of AD. In fact, the two histologic hallmarks of the disease are Aβ accumulation in senile plaques and the organization of hyperphosphorylated tau protein in fibrillary tangles. It is well demonstrated that one of the two kinases involved in anomalous tau phosphorylation is p35/Cdk5 (51), and therefore, activation of Rac1 could, at the same time, increase Aβ generation and cause tau hyperphosphorylation, leading to conditions that favor plaque and tangle formation.
The results reported here support the hypothesis that other extracellular signals, different from PDGF and known to induce Src and/or Rac1, could trigger the processing of APP. In fact, numerous signaling pathways converge on Src: (i) several other tyrosine kinase receptors, such as nerve growth factor receptor, epidermal growth factor receptor, and fibroblast growth factor receptor, are able to activate Src (52-54); (ii) Src is activated by engagement of integrins during cell interaction with extracellular matrix (55); (iii) G-protein-coupled receptors activate Src, as in the case of thrombin (56,57); and (iv) voltage-dependent and ligand-gated channels have been demonstrated to interact with Src such as, for example, the N-methyl-D-aspartic acid channel (58).
Among the possible targets of the pathway described above are secretases and APP. The structure of γ-secretase is not completely known, and little information is available on the regulation of BACE activity; on this basis, it is hard to hypothesize mechanisms through which these machineries could be activated. On the other hand, there are experimental results suggesting that phosphorylation of APP does not affect its processing. In fact, it was well documented that APP is phosphorylated on Ser and Thr in vitro and in vivo (59), but these post-translational modifications are not involved in the regulation of APP cleavage (60). APP is also phosphorylated at the level of Tyr-682 (APP695 isoform numbering) of its intracellular domain (8). One of the kinases that is able to phosphorylate APP on Tyr-682 is the non-receptor tyrosine kinase Abl. However, we showed here that the expression of a constitutively active form of Abl does not affect APP-Gal4 processing (see Fig. 6) and that, in cells expressing a mutant form of APP in which Tyr-682 is substituted with a Phe residue, SrcYF induces an increase of Aβ accumulation similar to that observed in cells expressing wild type APP (data not shown).
FIG. 7 (partial legend). [...] total Aβ peptide present in the culture medium was measured by enzyme-linked immunosorbent assay. In the case of the 36-h point of the cells transfected with APP alone, some measurements were below the detection limit (n.d., non-detectable). B, CHO cells stably expressing APP695 were exposed to 5 or 20 μM concentrations of the Src TK inhibitor PP2 or to 20 μM PP3. The bars indicate the amount of total Aβ present in the medium after 1, 3, and 12 h. The values are means of at least triplicate experiments, and standard deviations are reported.
There are several results indicating that the processing of APP by γ-secretase could have a role in signal transduction (61). In fact, we and others (10,62) demonstrated that Fe65, one of the ligands of the APP cytodomain, is a nuclear protein and that APP functions as an anchor that restricts Fe65 outside of the nucleus. Following APP processing by γ-secretase, the cytodomain of APP (AID) together with Fe65 is translocated into the nucleus (10,62-63). Fe65 and/or the AID·Fe65 complex, through the interaction with the transcription factor LSF (9) or with the histone acetyltransferase Tip60 (10), could regulate transcription. In support of this hypothesis, we found that Fe65 overexpression in the nucleus regulates the transcription of the thymidylate synthase gene driven by LSF (64). These findings suggest that PDGF, or other molecules activating the Src-Rac1 cascade, could be signals that trigger the cleavage of APP and, in turn, the nuclear translocation of Fe65 and/or Fe65-AID, which regulate gene expression.
Taken together, these data suggest the possibility that the activation of PDGF-R, Src, and Rac1 could be relevant for the generation of Aβ by neurons and that new possible targets for therapeutic interventions in Alzheimer's disease could be found in this pathway. Furthermore, the experimental system described in this report could be used to find molecules that inhibit the PDGF-Src-Rac-induced processing of APP and that, in turn, could be useful for the development of anti-AD drugs.
|
v3-fos-license
|
2018-04-03T04:52:01.234Z
|
2017-12-06T00:00:00.000
|
21581944
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0187018&type=printable",
"pdf_hash": "c9b261d12255c7056483667b81aa5ad130efe86c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44302",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "efd09473d1854a9b1144f72e31b17035a52c7701",
"year": 2017
}
|
pes2o/s2orc
|
The association between socioeconomic factors and breast cancer-specific survival varies by race
Although racial disparity is well described for oncologic outcomes, factors associated with survival within racial groups remains unexplored. The objective of this study is to determine whether breast cancer survival among White or Black patients is associated with differing patient factors. Women diagnosed with breast cancer from 1998 through 2012 were identified in the Surveillance, Epidemiology, and End Results (SEER) database. Cox proportional hazard logistic regression was used to estimate cause-specific survival in the combined cohort, and separate cohorts of Black or White patients only. Main outcomes included cause-specific survival in cohorts of Black only, White only, or all patients adjusted for demographic and oncologic factors. A total of 406,907 Black (10.8%) or White (89.2%) patients diagnosed with breast cancer from 1998 through 2012 were isolated. Cancer-specific survival analysis of the combined cohort showed significantly decreased hazard ratio (H.R.) in patients from the higher economic quartiles (Q1: 1.0 (ref), Q2: 0.95 (p<0.01), Q3: 0.94 (p<0.01), Q4: 0.87 (p<0.001)). Analysis of the White only cohort showed a similar relationship with income (Q1: 1.0 (ref), Q2: 0.95 (p<0.01), Q3: 0.95 (p<0.01), Q4: 0.86 (p<0.001)). However, analysis of the Black only cohort did not show a relationship with income (Q1: 1.0 (ref), Q2: 1.04 (p = 0.34), Q3: 0.97 (p = 0.53), Q4: 1.04 (p = 0.47)). A test of interaction confirmed that the association between income and cancer-specific survival is dependent on patient race, both with and without adjustment for demographic and oncologic characteristics (p<0.01). While median county income is positively associated with cancer-specific survival among White patients, this is not the case with Black patients. Similar findings were noted for education level. These findings suggest that the association between socioeconomic status and breast cancer survival commonly reported in the literature is specific to White patients. These findings provide insight into differences between White and Black patients in cancer-specific survival.
Introduction
Racial disparity in survival has been reported for multiple cancer types including breast, prostate, colorectal, pancreatic, and lung [1][2][3][4]. Consistently, adjusted analyses including both Black and White patients have demonstrated that Black patients have significantly worse survival than White patients after adjusting for demographic and oncologic variables [1,3,4]. Using the Surveillance, Epidemiology, and End Results (SEER)-Medicare linked database, Silber et al have previously shown that among patients older than 65 years old, Black patients have worse survival than White patients [2]. They attributed these findings primarily to differences in presentation; however even after matching on presentation characteristics (e.g. tumor stage, size, grade, hormone status), they noted differences in treatment which may account for additional disparity. For example, Black women have longer delays in treatment and reduced chemotherapy utilization [2].
Studies have additionally demonstrated that socioeconomic factors, such as lower income or education, are also associated with poor survival [5]; these factors may be associated with treatment characteristics. Iqbal et al used the SEER database to show that even after adjusting for income and hormone status, Black women are more likely to die from small tumors, suggesting that disparity affects outcomes even in the setting of more favorable tumors [6]. Other studies have suggested that differences in tumor biology may account for differences in survival, based on studies of the tumor microenvironment and epigenetics [7][8][9][10]. Although these studies have established racial disparity when comparing White and Black patients, an improved understanding of how patient factors associate with survival among patients of each race separately is required in order to guide intervention.
We hypothesized that patient factors associate with survival differently when analyzed in Black or White cohorts separately. We used the National Cancer Institute's (NCI) Surveillance, Epidemiology, and End Results (SEER) database to generate separate survival models for Black or White breast cancer patients, and compare these models to identify differences among factors associated with patient survival.
Compliance with ethical standards
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Informed consent was not required as this analysis was performed using a publicly-available, de-identified database of patients with breast cancer treatment.
Data source
Case-level de-identified data from 1998 to 2012 were extracted from the Surveillance, Epidemiology, and End Results (SEER) cancer database (November 2014 submission) with follow-up and survival cut-off until December 31 st , 2012. The SEER database is a national effort that collects patient-level data for all index malignant tumors in 18 cancer registries across the United States and captures roughly 28% of the nation's population. This database is regarded as nationally representative and contains detailed demographic, socioeconomic, oncologic, and treatment information. To ensure data accuracy, chart abstractors undergo extensive training. Malignant tumors are encoded by use of the ninth revision of the International Classification of Diseases for Oncology.
Inclusion/Exclusion criteria
Data were extracted from the SEER database for all Black or White female patients with a diagnosis of in situ or invasive ductal breast cancer (International Classification of Diseases for Oncology code 8500) who underwent surgical treatment (lumpectomy, unilateral mastectomy, or bilateral mastectomy). Patients with unknown stage or histology code other than 8500 were excluded.
Statistical analysis
Chi-square tests were performed to compare demographic and oncologic characteristics of Black patients and White patients. Demographic characteristics accounted for in this analysis included patient race, age (≤30, 31-45, 46-60, and >60 years), quartile of median family income by county of residence (1 = lowest, 4 = highest), and quartile of median education level by county of residence (1 = lowest, 4 = highest). Oncologic characteristics in this analysis included tumor size (≤2 cm, 2.1-5.0 cm, and >5 cm), lymph node involvement (0 nodes, 1-3 positive nodes, >3 positive nodes), receipt of radiation therapy (yes or no), surgery type (lumpectomy, unilateral mastectomy, or bilateral mastectomy), and receipt of reconstruction (yes, no, or not applicable (for lumpectomy cases)). Separate unadjusted and adjusted Cox proportional hazards regression models were used to evaluate the association of these variables with survival in Black or White patients or in the combined cohort. The median income of the county where the patient resides was categorized into quartiles. To test whether the effect of income on survival differs significantly between Black and White patients, Cox regression models with race, income, and a race-income interaction as predictors, with or without controlling for demographic and oncologic characteristics, were fitted to the combined cohort.
All statistical analyses were performed with SAS version 9.3 (SAS Institute Inc) and R version 2.15 (R Development Core Team for the R Foundation for Statistical Computing). Tests were deemed statistically significant at the α level of 0.05.
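For readers who want to reproduce the interaction model described above on a SEER extract, the snippet below shows one possible setup with the Python `lifelines` package. It is an illustrative sketch rather than the study's code (the original analysis was run in SAS and R): the file name and the column names (`surv_months`, `death_bc`, `race`, `county_median_income`, and the adjustment covariates) are hypothetical placeholders for however the SEER variables are recoded.

```python
import pandas as pd
from lifelines import CoxPHFitter

# One row per patient: follow-up time in months, a cause-specific event
# indicator (breast-cancer death = 1), and recoded covariates.
df = pd.read_csv("seer_breast_extract.csv")   # placeholder file name

# Median county family income quartiles (Q1 = lowest), treated as categorical.
df["income_q"] = pd.qcut(df["county_median_income"], 4,
                         labels=["Q1", "Q2", "Q3", "Q4"])

cph = CoxPHFitter()
cph.fit(
    df,
    duration_col="surv_months",
    event_col="death_bc",
    # race * income_q expands to the main effects plus race:income_q terms,
    # which carry the race-by-income interaction test reported in the paper.
    formula="race * income_q + age_grp + tumor_size_cat + nodes_cat + "
            "grade + er_status + pr_status + surgery + radiation",
)
cph.print_summary()
```

A formal test of the interaction can also be obtained by comparing this model with one omitting the `race:income_q` terms via a likelihood ratio test, which mirrors the with/without-adjustment comparison described above.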
Demographic characteristics of cohorts of Black and White patients
A total of 406,907 patients were included in this analysis, of which 362,797 were white and 44,110 were black. A higher proportion of Black patients were in the lowest income (p<0.001) and lowest education quartiles (p = 0.001) when compared with White patients ( Table 1).
Oncologic characteristics of cohorts of Black and White patients
A higher proportion of Black patients had tumors over 2 cm in size (p<0.05), and had estrogen receptor-negative (p = 0.001) or progesterone receptor-negative (p<0.05) tumors ( Table 1). Unadjusted analysis did not show a significant difference with respect to lymph node involvement, type of surgery, radiation therapy, or reconstruction ( Table 1).
Cause-specific survival in a single cohort including Black and White patients
Adjusted Cox regression analysis of the combined cohort showed that Black patients have significantly worse hazard of death when compared with White patients (HR 1.33 (1.28, 1.37) v. 1.00, p<0.001) ( Table 2). Patients with larger tumors, positive lymph nodes, ER-negative, or PR-negative tumors also had worse hazard of death, as expected ( Table 2). Furthermore, patients from counties in the lowest quartiles for mean household income or education level also had worse hazard of death ( Table 2). Lower median income was similarly associated with reduced survival when income was treated as a continuous variable (H.R. 0.96, p<0.0001).
Cause-specific survival in cohorts of Black or White patients
Mean survival time among Black patients was 61.5 months. Unadjusted and adjusted analyses of the cohort of Black patients showed that traditional oncologic variables, including higher tumor grade, tumor size, and lymph node involvement, were associated with worse cause-specific hazard of death (Table 3). Similarly, Black patients with receptor-negative tumors had worse cause-specific hazard of death. There was no statistically significant relationship between survival and median county income quartile.
Mean survival time among White patients was 68.9 months. Adjusted and unadjusted analyses of the cohort of White patients showed that traditional oncologic variables, including higher tumor grade, tumor size, and lymph node involvement, were associated with worse cause-specific hazard of death (Table 4). Similarly, White patients with receptor-negative tumors had worse cause-specific hazard of death. White patients living in counties in the lowest education quartile had significantly higher hazard of death when compared with White patients from the highest education quartile counties (HR 0.93 (0.89, 0.97) v. 1.00, p = 0.001) (Table 4). Furthermore, White patients living in counties with the highest median household income had significantly higher survival when compared with White patients from counties with the lowest median household income (HR 0.86 (0.83, 0.9) v. 1.00, p<0.0001). This was also confirmed when income was treated as a continuous variable (H.R. 0.96, p<0.0001).
A test of interaction confirmed that the association between income and cancer-specific survival is dependent on patient race, both with and without adjustment for demographic and oncologic characteristics (p<0.01).
Discussion
In this study, we perform separate cause-specific survival analyses for White and Black breast cancer patients to identify differences in the associations between oncologic and demographic factors and cancer-specific survival. As may be expected, oncologic variables including tumor size, lymph node status, and tumor grade were associated with patient survival in the combined cohort of patients, and among Black or White patients separately. Increased tumor size, lymph node involvement, and tumor grade have all been shown to be associated with worse patient survival consistently across multiple studies [6]. Interestingly, we found that median county family income and education, which have been shown to be associated with survival in patient cohorts combining White and Black patients, were not associated with survival among the cohort of Black patients despite inclusion of over 40,000 patients [11].
Myriad studies have shown that Black patients have poorer cancer survival when compared with White patients after controlling for socioeconomic factors such as education level and income [1][2][3][4] [6]. However, these analyses using combined cohorts do not allow interrogation of associations within specific sub-groups. Our sub-group analysis on the basis of race provides insight into patient factors which are most closely associated with survival. Our findings suggest that adjusted analysis of combined cohorts of White and Black patients is more representative of the White patient population, which is not surprising as over 90% of patients in our combined cohort were White. We also noted a relative lack of literature interrogating patient factors which are specifically associated with survival among Black or White patients separately. We were surprised to find that socioeconomic factors often cited for their close association with patient survival, do not appear to be associated with survival among Black patients. For example, after adjusting for demographic and oncologic factors, cancer-specific survival among Black patients remains relatively similar across median county family income quartiles and even education. To our knowledge, no other studies have compared the findings from subgroup analyses with findings from combined cohorts, as we have done here.
The underlying causes for racial disparity remain unresolved. Socioeconomic disparity may account for differences, although disparity persists despite adjusting for these factors as we have confirmed in the current analysis of the combined cohort. Using the SEER database, it is not possible to determine whether racial disparity exists even with access to similar medical facilities or resources. However in one study of patients treated in one of two hospitals in Memphis, Tennessee, Black patients had poorer survival when compared with White patients [12]; this was determined to be due in part to delays in diagnosis and triple-negative breast cancer. In a smaller study of underinsured patients from a single institution, Black patients had worse outcomes when compared with White patients; however adjusting for clinical and sociodemographic factors eliminated racial disparity [13]. Further population-level studies are required whereby patients are matched on factors including the specific treating hospital to obtain generalizable results. Increasingly, tumor biology is receiving attention as a contributor to cancer-specific survival disparities. Even after adjusting for hormone status (ER/PR status), Black women have worse survival suggesting that this is not the sole biologic factor of importance. Differences in the tumor microenvironment such as presence of different inflammatory components have been noted, as have differences in the genetic and epigenetic landscape of these tumors [7-10, 14, 15]. However, it is unclear whether these differences account for the observed differences in outcomes.
The implications of our findings are several-fold. First, they suggest that future studies need to move in the direction of performing race-specific sub-group analysis in order to better understand the needs of each race with respect to cancer survival. Secondly, although socioeconomic disparity may certainly remain a cause of survival disparity between Black and White patients, interventions tailored based on income or income-associated survival may not alleviate survival disparity among Black patients. As a result, these interventions may not be the most effective at improving survival among Black patients, and may disproportionately benefit White patients. While reducing survival differences between Black and White patients is at the center of reducing disparity, an appreciation for the needs of specific patient sub-populations is required for efficient and effective interventions.
|
v3-fos-license
|
2019-05-12T14:23:29.577Z
|
2019-05-22T00:00:00.000
|
150300961
|
{
"extfieldsofstudy": [
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1108/jwam-05-2018-0011",
"pdf_hash": "23c97a3a02ff14edba6a6961f9aca814d7a1047c",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44303",
"s2fieldsofstudy": [
"Education",
"Business",
"Engineering"
],
"sha1": "23c97a3a02ff14edba6a6961f9aca814d7a1047c",
"year": 2019
}
|
pes2o/s2orc
|
Leveraging experiential learning training through spaced learning
Purpose – Spaced learning (SL) and experiential learning (EL) have been identified as being more efficient to long-term knowledge retention than other forms of learning. The purpose of this paper is to confirm these benefits of SL and EL in a work-based learning environment. Design/methodology/approach – This case study research monitored changes in learning outcomes of a work-based EL training, the Model Warehouse, when adding SL. The Model Warehouse of the Karlsruher Institute for Technology, Germany intends to educate professionals in lean warehouse logistics. Following a pragmatic standpoint, two groups of students were considered and compared by using multiple-choice question based knowledge tests where one group participated in an additional SL session. The experiences and perceptions of students were assessed by conducting in-depth interviews. Findings – Findings revealed that adding SL to the EL training resulted either in students’ knowledge retention or knowledge improvement. Additionally, participants of the SL session did not perceive it as being required to strengthen understanding of lean warehouse management. Practical implications – This study recommends considering SL as an effective means to significantly enhance long-term knowledge retention of any work-based or EL training. Originality/value – This study confirms the benefits of SL and EL drawn from laboratory-based studies in a real business context. Adopting both learning theories in training programmes which converge with realities of the workplace results in a significant improvement of long-term knowledge retention.
Introduction
Since 1885, researchers have argued that distributing learning across time improves long-term knowledge retention, a phenomenon known as the spaced learning (SL) effect (Kang, 2016;Sobel et al., 2011). The SL effect facilitates the establishment of new neural representations in the brain which are needed to build long-term memory (Cepeda et al., 2006;Dempster, 1988;Spitzer, 2009). Experiential learning (EL) is described as being more effective than any other form of learning, as it enhances the motivation to learn and allows for better knowledge retention by supporting the learner to actively engage in the learning process (Austin and Rust, 2015;Egbert and Mertins, 2010).
The benefits of both EL and SL to knowledge retention have been analysed in-depth in previous studies (Kolb, 1984;Schenck and Cruickshank, 2015;Cepeda et al., 2006;Carpenter et al., 2012). What remains to be investigated is how learning retention develops when combining EL and SL. By utilising the Model Warehouse, this study investigates whether even better learning outcomes are possible when an SL session is added. The Model Warehouse is a work-based training offered by the Karlsruher Institute for Technology in cooperation with a globally based consulting company. It primarily aims to educate experienced professionals in lean warehouse logistics in an EL environment.
Based on the results of this research, recommendations are derived for any work-based EL training programme which seeks a means to help ensure knowledge retention thereafter. Moreover, this research contributes to academic research in the field of learning as it offers new insights about a specific phenomenon, learning in the Model Warehouse and a way to improve it.
Summing up, the purpose of this paper is to enhance the learning outcomes of work-based and EL training programmes with the help of SL, whilst deriving results from participants' experiences and points of view. Thereby, this research is guided by the following research question:
RQ1.
How and to what extent does spaced learning improve the learning outcomes of students in an experiential learning programme in Germany?
The case considered is the Model Warehouse training offered by the Karlsruher Institute for Technology.
Critical literature review
Learning theory
To date, learning is defined in different ways and as a consequence, no universal definition of the term exists (Ertmer and Newby, 2013). This is due to the fact that different practitioners focus on different criteria central to learning (Schunk, 2012). Still, many of those different approaches employ similar elements (Ertmer and Newby, 2013) which can be summarised as follows: Learning is an enduring change in behaviour, or in the capacity to behave in a given fashion, which results from practice or other forms of experience. (Schunk, 2012, p. 3) The roots of modern learning theories trace back to the works of Plato and Aristotle. At the end of the nineteenth century, Ebbinghaus (1885) and Wundt (1893) pioneered in taking higher mental processes into experimental laboratories and laid the foundation for the psychological study of learning. In the second half of the nineteenth century, behaviourism arose. It promotes that the most critical reasons for learning are the environmental conditions in which learning takes place. Therefore, the learner is not an active participant in the learning process but reactive to environmental conditions (Ertmer and Newby, 2013). As a shift away from the behavioural learning approach, cognitivism evolved, which considers the learner to be an active participant in the learning process, as learning requires the learner to actively code and to structure the newly learnt internally. Constructivism is the latest development in learning theory and is a branch of cognitivism. Yet, it distinguishes itself from the latter as it emphasises that the experience in which learning takes place needs to be considered when examining the learning process (Khalil and Elkhider, 2016).
Several learning theories exist, yet behaviourism, cognitivism and constructivism are seen as the main learning theories used in children, teenage and adult education to date (Taras, 2005). These three theories can be differentiated in terms of definition, the role of the learners and the best learning methods used (Khalil and Elkhider, 2016). Table I summarises the major arguments. Those different learning theories provide different frameworks of how to handle learners and the learning material: behavioural approaches recommend periodic, spaced repetitions to strengthen the recall of a response, whereas cognitivist and constructivist approaches argue that a meaningful presentation of learning materials allows participants to organise and recall it better in the future (Schunk, 2012).
The SL effect
The SL effect is widely recognised as being one of the oldest, most reliable and remarkable phenomena in the field of human learning (Carpenter et al., 2012;Dempster, 1988). The SL effect refers to the finding that long-term memory retention and recollection are higher when reviewing learnt materials spaced out over time compared to a massed, single study session as previous studies revealed (Kang, 2016;Kornell and Bjork, 2008;Sobel et al., 2011). Ebbinghaus (1885) was the first who studied the SL effect and argued that learning and recalling depend on how often someone was exposed to the material. Since then, numerous studies have confirmed the benefits of the SL effect (Cepeda et al., 2006;Dempster, 1988;Glenberg, 1976;Melton, 1970). The SL effect is applicable to various domains such as animal conditioning, verbal learning, motor learning, as well as learning of educational materials (Kornell and Bjork, 2008).
Nevertheless, research on learners' (college students as well as undergraduates) experience reveals that learners feel that massed learning is superior to and achieves better results than SL, although better test results were achieved after the SL sessions (Simon and Bjork, 2001;Zechmeister and Shaughnessy, 1980). SL can be assigned to the learning theory of behaviourism as it, amongst other assumptions, holds that intervallic, spaced repetitions strengthen the recall of a response (Schunk, 2012). Therefore, one can argue that although theoretically SL is a behavioural approach, in practice it has traits of the cognitive approach. Hintzman (1974), Dempster (1988) and Russo et al. (1998) propose three predominant reasons for the spacing effect to occur, the first being (1) encoding variability: taking information and recording it in memory.
Numerous researchers question whether there are particular schedules to follow and how long the spacing gap should be to achieve the most efficient improvement in long-term memory retention (Carpenter et al., 2012). Karpicke and Bauernschmidt's (2011) approach distinguishes between absolute and relative spacing. Absolute spacing refers to the total number of repetitions that take place between all tests undertaken; relative spacing refers to how the repeated tests are distributed relative to each other. According to Landauer and Bjork (1978), relative spacing can follow an expanding, an equal or a contracting schedule (see Table II). Karpicke and Bauernschmidt (2011) conclude that the highest improvement in long-term memory retention is achieved by increasing the absolute spacing of repetitions, although they could not find evidence that one relative spacing schedule achieves better results than another. Any form of spacing, whether expanding, equal or contracting, promotes learning and long-term retention (Carpenter et al., 2012). Still, it is proven that the longer the retention intervals and the more often the repetitions, the higher the likelihood of correct recall (Cepeda et al., 2006). However, having extensive gaps between repetitions may result in forgetting what was learnt previously, and the SL effect becomes offset (Carpenter et al., 2012).
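To make the distinction between the three relative spacing schedules concrete, the short sketch below generates expanding, equal and contracting review schedules for a fixed number of repetitions. The particular gap lengths are illustrative assumptions and are not the values used by Landauer and Bjork (1978) or reported in Table II.

```python
from typing import List

def spacing_schedule(kind: str, first_gap_days: int = 1, repetitions: int = 3) -> List[int]:
    """Return review days, counted from the initial study session, under a given
    relative-spacing schedule: gaps grow (expanding), stay constant (equal) or
    shrink (contracting). Gap sizes are illustrative only."""
    if kind == "expanding":
        gaps = [first_gap_days * 2 ** i for i in range(repetitions)]                      # 1, 2, 4
    elif kind == "equal":
        gaps = [first_gap_days * 2] * repetitions                                         # 2, 2, 2
    elif kind == "contracting":
        gaps = [first_gap_days * 2 ** (repetitions - 1 - i) for i in range(repetitions)]  # 4, 2, 1
    else:
        raise ValueError("kind must be 'expanding', 'equal' or 'contracting'")
    days, day = [], 0
    for gap in gaps:
        day += gap
        days.append(day)
    return days

for kind in ("expanding", "equal", "contracting"):
    print(kind, spacing_schedule(kind))
# expanding [1, 3, 7], equal [2, 4, 6], contracting [4, 6, 7]
```

Increasing the absolute spacing corresponds to making the sum of the gaps larger, whichever relative schedule is chosen.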
Nevertheless, it should be mentioned that within the majority of SL research, no feedback on occurred errors is given to restudy them for future tests. This is due to the fact that SL research mainly focuses on the direct effects of SL rather than on mediated effects (Karpicke and Bauernschmidt, 2011).
Still, researchers like Dempster (1988), Grote (1995) and Seabrook et al. (2005) claim that despite the widely proven evidence for the SL effect, it receives little attention in educational programmes, class-based learning and teacher training. Cognitive processes can be stimulated and problem-solving skills can be enhanced through incorporating SL techniques either prior to or after the study session. The most important concepts and activities of the study sessions can be previewed and re-presented. This allows participants to develop a more profound understanding of the new material (Dunlosky et al., 2013).
Interactive learning, meaning the combination of class-based learning and e-learning, should be considered for implementing SL into academic and industry training (Chang, 2016). Engaging in interactive learning, participants can take advantage of both the consolidation of their core skills and knowledge during the classroom-based training and individual rehearsing schedules which are adaptable to the participants' learning style, weaknesses, and learning progress during e-learning (Chang, 2016). Thereby, a 15 per cent improvement of the participants' satisfaction and learning performance is revealed (Chang and Wills, 2013). Karpicke and Roediger (2007) postulate to increase knowledge testing, arguing that when learners engage in tests, they must access information stored in their memories, transfer and apply it to new situations.
EL theory
The theory of EL has evolved in the last three decades. Well recognised is Kolb's work on EL which is seen as universal (Daraban and Byrd, 2011;Schenck and Cruickshank, 2015). Kolb (1984, p. 41) defined learning as: […]the process whereby knowledge is created through the transformation of experience. Knowledge results from the combination of grasping and transforming experience.
The EL theory traces back to the work of Dewey, Lewin and Piaget. Kolb and Kolb (2005, p. 194) put forward six propositions on which the EL theory is built on, namely: (1) Learning is best conceived as a process, not in terms of outcomes.
(3) Learning requires the resolution of conflicts between dialectically opposed modes of adaption to the world.
(4) Learning is a holistic process of adaption to the world.
(5) Learning results from synergetic transactions between the person and the environment.
(6) Learning is a process of creating knowledge.
Concluding, EL asks for the learner to actively engage in the learning process and therefore encourages reflective thinking, to understand the how of a process and to find the meaning of the same whilst doing it (Alkan, 2016;Kolb and Kolb, 2005). Yet, Illeris (2007) argues that experience is a subjective matter. Thus, if the learners do not experience a situation in which they are personally encouraged to learn, an external person cannot label it to be experiential (Illeris, 2007). Nevertheless it has to be acknowledged that it also is a constructivist approach to learning yet it has cognitive traits (Corbett, 2005).
Based on his definition, Kolb (1984) develops a dynamic, holistic learning cycle that comprises of two dialectical modes of grasping experience: concrete experience and abstract conceptualisation; and two dialectical modes of transforming experience: reflective observation and active experimentation (see Figure 1). Kolb's learning cycle can be entered at any stage; however, all stages follow a sequential order. Whilst students take part in the learning cycle several times, feedback is provided, which allows taking new actions to evaluate those actions taken (Akella, 2010).
Research claims that EL is more effective than any other form of learning as it enhances the motivation to learn, allows for better knowledge retention and enables transformational thinking in real-life context, meaning that people are prepared to actively apply what was learnt (Austin and Rust, 2015;Egbert and Mertins, 2010). Learners are taking part in an interactive experience in which they can experiment and have the freedom to fail in a risk-free environment (Whitmore, 2002). Furthermore, neuroscientific research supports the EL theory, arguing that memory pathways and connections are established (Schenck and Cruickshank, 2015). Engaging in EL also enhances an individual's lateral (what we know) and vertical (how we know) development, which helps individuals to handle emerging challenges and to create new "realities" (Spence and McDonald, 2015).
Learning in the model warehouse
Learning in the Model Warehouse is based on the previously described principles of Kolb's EL theory. It represents a form of work-based learning which is defined as: an educational process which drives learners to engage intellectually, socially, emotionally and physically in an unpredictable work-related environment where they will go through the experiential process of potential failure, taking measured risks, experiencing adventure through creativity and innovation, and […] achieving successful outcomes. (Chrisholm et al., 2009, p. 327) As discussed earlier, reflective observation is one of the four major parts of Kolb's learning cycle. Also, Siebert and Walsh (2013) and Chrisholm et al. (2009) point out that reflective thinking is the most important aspect of learning from any work-based learning environment. Additionally, Helyer (2011) argues that the combination of theory and work-based skills leads to active critical reflection, which in turn encourages changes and innovations in current workplace practices. Yet, when designing assessments, focus needs to be put on developing theoretical frameworks rather than one-off memories.
All in all, one can claim that any EL training in which learning takes place through work and for the means of improving both workplace and life skills of participants is also a form of work-based learning. Thus, the EL taking place in the Model Warehouse can also be defined as being work-based.
Context for the study
The Model Warehouse is a capability-building centre of the Karlsruhe Institute of Technology that imparts the latest knowledge on the approaches used in lean warehousing (Institute for Material Handling and Logistics, 2018). The training aims at optimising existing warehouse operations and thus, is mainly tendered for experienced professionals in lean warehouse logistics. Yet, the training is also offered to students enroled in supply chain degree programmes.
Research methodology and design
This research has been conducted as a single case study. Case studies are intended to investigate real-life phenomena thoroughly when boundaries between the phenomenon itself and its real-life context cannot be drawn (Yin, 2014). Thereby, this research aimed to draw conclusions to explore and improve the practice, understanding and the situation in which learning takes place in the Model Warehouse as a single typical case of an educational institution. It followed the philosophical approach of pragmatism, which is argued to emerge from actions, situations and consequences; accordingly, the research intervened in a work-based training programme by adding SL sessions and measuring learning outcomes. According to Creswell (2014), pragmatism allows combining multiple methods of data collection and analysis to best meet the need and purpose of the research to be conducted. Therefore, a mixed-method design, which allowed the researcher to use interpretations and adapt to the unanticipated (Robson and McCartan, 2016), has been used. In accordance with Yin (2014), for a single case study based on a typical case, multiple sources of evidence are required. Thus, this research applied quantitative and qualitative data collection and analysis to examine the learning outcomes and experiences of the involved students.
Assuming that learning is socially constructed as there are as many realities to learning as there are participants, the results drawn from this research were used to generate positive effects within a specific training programme, which enables capability building in lean warehousing. Summing up, this research sought to address the question as of how SL influences an EL training programme whereby an existing EL training has been modified to monitor changes in participants' learning outcomes. Thereby, an SL session in form of elearning was added for participants of group 2 five weeks after the EL training to revise the topics learnt during the initial training.
Knowledge tests to collect primary data
Multiple-choice question knowledge tests generated quantitative data which were collected in two student groups with seven participants each. Participants were recruited by self-selection sampling: members of the Lean Student Group of the Karlsruhe Institute for Material Handling and Logistics were invited to express their desire to participate in the study. Those individuals who signed up for the study were the study population and were divided into two groups by using systematic sampling. None of the students was known by the researcher and the research followed the guidelines of the Ethics Committee of Liverpool John Moores University.
The multiple-choice question knowledge tests contained 15 items covering all levels of Bloom's Taxonomy (remembering, understanding, applying, analysing, evaluating) in accordance to Dubins et al. (2016) on lean warehousing and aimed at getting quantitative evidence for the change of the participants' learning outcomes. Significant differences could be examined taking the multiple-choice knowledge tests as pre-and post-interventions. Multiple-choice questions are a popular tool to assess competencies and knowledge in professional curricula as they are reliable, easy to administer and analyse (Dubins et al., 2016). However, to overcome the major criticisms of this type of tests, the lack of familiarity with the multiple-choice question format, the danger of over-exaggerating pattern recognition for answering the multiple-choice questions and the perceived luck in participants' performance, the steps outlined in Figure 2 were taken.
The first multiple-choice question test was conducted prior to the very first Model Warehouse training. The second test took place on the same day after the training session ended. Five weeks later, group 2 took part in a repeating online study session and thereafter a four-week test delay occurred. All in all, both groups 1 and 2 completed the third multiple-choice question test nine weeks after the Model Warehouse training. As in previous SL research, the test questions remained the same (Arnold and McDermott, 2013).
All test results of the participants in both groups were tallied into a numerically ordered table. Conclusions on relative frequency and percentage distribution of the number of correct answers and their development over the three tests were drawn. Afterwards, the mean average of correct answers of both groups was calculated after each test. From this, the improvement of knowledge retention of the groups over the duration of the three tests was evaluated. Thus, the focus was put on the participants' individual overall results rather than question-specific results.
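As a concrete illustration of this scoring workflow, the sketch below tallies per-participant scores for the three tests and derives each group's mean number of correct answers and its development relative to the first test. The scores are invented for illustration and are not the study's data.

```python
import pandas as pd

# Invented scores (correct answers out of 15) for two groups of seven participants.
scores = pd.DataFrame({
    "participant": range(1, 15),
    "group": [1] * 7 + [2] * 7,
    "test1": [8, 9, 7, 10, 8, 9, 8,       7, 8, 9, 7, 8, 7, 8],
    "test2": [11, 12, 10, 12, 11, 12, 11, 10, 11, 12, 10, 11, 10, 11],
    "test3": [10, 10, 9, 11, 10, 10, 9,   11, 12, 12, 11, 11, 11, 12],
})

# Mean number of correct answers per group after each test.
group_means = scores.groupby("group")[["test1", "test2", "test3"]].mean()
print(group_means)

# Development of each group relative to the pre-training test, as a percentage.
improvement = (group_means.sub(group_means["test1"], axis=0)
               .div(group_means["test1"], axis=0) * 100).round(1)
print(improvement)
```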
Interviews to collect primary data
Qualitative data were collected through semi-structured in-depth interviews in group 2 only, as this research aimed at gathering detailed insights into their experiences and beliefs regarding the influence of the SL session on the EL training. All interviews were conducted by the researcher in the form of face-to-face and telephone interviews and lasted between 20 and 35 min. Open and closed questions, in the form of scale items, with regard to the participants' perceptions and experience during and after the training sessions were asked. To allow the interviewees to expand on their responses and to explore those that were substantial to the research, probes and prompts were used. Thematic coding analysis was used to analyse the interview data. The steps taken are listed in Figure 3.
In accordance with Miles et al. (2014), initial codes, such as magnitude, descriptive or values codes, were developed during first cycle coding. Within the second cycle coding, themes were identified.
Benefits of experiential and SL
Prior to the Model Warehouse training, it was queried whether participants had already taken part in a Model Warehouse before: none of the participants had participated in it before, yet six of the 14 participants already had a basic understanding of lean warehousing or lean management in general.
Consistently, participants named two aspects they appreciated about the training: interactivity and the learning environment (see Figure 4). Thereby, the latter was seen as more encouraging as it resembled a real-life situation, equipped with state of the art technology and the opportunity to try out and fail without causing damage. Yet, without the participants' active participation and their interaction within a process, as well as the interpersonal exchange with others, the learning environment of any EL training programme would not be successful. One could argue that the participants' interactivity led to a better understanding of the materials taught in the work-based training. The participants' increased enjoyment factor might have been due to the combination of interactivity and learning environment which also led to a stimulation of their cognitive processes.
Furthermore, an important level of curiosity on how things are being done was evoked. Therefore, it can be assumed that a high degree of attention of the students was given to the session. Yet, to ensure prominent levels of attention, focus has been put on the participants' personal needs and prior knowledge as the qualitative analysis revealed that participants with different background knowledge had different learning needs during the training programme. Students acknowledged e-learning to be their preferred SL medium.
Participants named the repetition, the time efficiency, the re-stimulation of cognitive processes, as well as the flexibility to bridge times without active operation, as the benefits of SL (see Figure 5), and quotes taken from the students' interviews support this. Yet, SL is only judged to be supportive and would not work without the initial classroom training and EL of lean logistical processes.
Application of SL to an EL training
The analysis of the mean average of correct answers in both groups 1 and 2 over the duration of the study revealed that: (1) both groups had a profound understanding of lean warehousing prior to the EL training; (2) in the initial test group 1 scored better than group 2; (3) the EL training led to an increase of correct answers in both groups; and (4) following the SL session group 2 furthermore increased their number of correct answers, whilst group 1 experienced a loss.
The mean averages of correct answers in both groups can be found in Table III. The exact development of correct answers achieved of both groups over the duration of the three tests can be found in Figure 6.
This can further be visualised by comparing both groups' cumulative average percentage score of mean average of correct answers (Figure 7).
Effect on the learners' self-perceptions and self-confidence
No participant had a negative feeling about the SL session. Yet, all participants unanimously agreed that the EL training was their preferred training as it was a learning environment which engenders and supports a high degree of interactivity. During the interviews, only one student acknowledged that the SL session allowed him to remember the material better than just the Model Warehouse training, whilst advocating that the SL session offered the possibility to revise forgotten parts. Additionally, the same student claimed to feel more self-confident with regards to the materials learnt after the SL session. The remaining six participants argued that the SL session did not improve their learning, did not entail any additional information, failed to stimulate cognitive processes beyond those from the Model Warehouse training and was unenjoyable.
Benefits of EL
The findings of this research suggest that from the participants' point of view the learning environment, the interactivity, and the resulting comparatively high stimulation of cognitive processes are the main benefits of the Model Warehouse training. Additionally, participants rate the enjoyment which results of the Model Warehouse training as a benefit.
Participants argue that EL work-based learning is very different from normal class-based training: they are more strongly committed to actively participating and engaging in the process, and the effective group dynamics and greater exchange of information lead to the development of new viewpoints and the stimulation of reflective thinking.
These findings affirm what is already revealed by previous research: EL differentiates itself from, thus is more beneficial than, teacher-centred learning seeing as learners are actively taking part in an interactive, risk-free experience in which they can experiment with new ideas and fail without causing damage (Austin and Rust, 2015;Egbert and Mertins, 2010).
Through building up emotive connections within an EL environment, cognitive processes are launched that create neural linkages which lead to lasting knowledge retention (Schenck and Cruickshank, 2015). These findings mean that by utilising the Model Warehouse training, a strong method is in use to provide initial training to learners in the field of lean warehousing as it is a stimulating atmosphere where curiosity is created and the learner feels not obliged to, but wants to learn. The learner effectively gains knowledge in the subject and can relate this information to real-world scenarios. Furthermore, during the interviews it became apparent that EL leads to reflective thinking which in real-life can stimulate change processes in current businesses. This appears important as it confirms the current practice of the Model Warehouse training. Finally, it is in line with the major assumptions of Kolb's EL theory model (Kolb, 1984).
Benefits of SL
The findings of this research advocate several benefits of the SL session. Especially, the spaced revisions itself, as well as its adaptability to the individual, are two of the beneficial aspects of the SL session. In addition, the participants emphasise the small-time requirement of the SL session and the continuation of the cognitive processes that are initiated by the EL training.
It appears that participants appreciate the repetitive nature of the SL session, which aims to capture the material taught in the EL training to strengthen overall knowledge building. Moreover, SL represents a means which can bridge times without active operations for participants not to forget what has been taught in the Model Warehouse training.
Also, participants highlight that the gap between the initial Model Warehouse training and the succeeding SL session is very helpful to be able to reflect on what they have learnt during the initial training. When engaging in the SL session, the cognitive processes initially started during the Model Warehouse training are re-stimulated, existing linkages are strengthened and obscurities that may have developed after processing the information of the Model Warehouse training are eliminated through the following SL session. This perception of participants is in line with what previous research has proven, namely, that the recollection of information is higher when reviewing that information spaced out over time compared to a massed, single study session, in this case the Model Warehouse training (Kang, 2016;Sobel et al., 2011).
Participants claim the SL session offers another additional benefit, namely, the little time investment compared to the previous Model Warehouse training. However, at the same time the participants note that although the SL session requires less time investment from their side, it does not impart as much learning output as the Model Warehouse training (see Figure 8).
A further observation is that participants appreciate economical learning, which means that more can be achieved in a short amount of time. In other words, participants prefer high learning outcomes with low time investment. Yet, they acknowledge that the SL session would not have had such a significant impact if the Model Warehouse training had not laid substantial cognitive foundations. Therefore, the little time investment should not be regarded as a general benefit on its own.
The participants' choice of the preferred SL medium, namely E-learning, is in line with their appreciation of economical training. They prefer a bespoke medium in which every single participant can autonomously decide when and how what to revise of the previous Model Warehouse training to achieve their individual optimum learning result. Choosing any form of e-learning as the SL medium to be added to the Model Warehouse training would resemble what Chang (2016) defined as interactive learning in which an in-person training is combined with an e-learning-based rehearsal session. As the findings of both the primary data and previous research are corresponding it can be concluded that they are valid for the case of this research. Moreover, they appear important in view of future training and how to design the spaced rehearsal sessions.
Application of SL to an EL training
The comparison of the knowledge tests of group 1 and group 2 suggests that the SL session (group 2 participated in) has a significant impact on the knowledge development of the students in this group (see Figure 9).
Participants of the SL session are either able to retain or improve upon their knowledge level. This indicates that a single SL session has a positive influence on the participants' knowledge retention. Assuming that the research on the SL effect is correct and three or more repetitions are even better with regards to increasing knowledge retention than one single revision session (Bahrick, 1979;Bahrick et al., 1993;Shebilske et al., 1999), one could assert that the participants' knowledge retention could still be improved much further.
Yet, participants regret that they did not receive any feedback on their test results to understand their areas of improvement which they could restudy for future correct application. However, in SL research this is not applied either as focus is put on the direct effects an SL session has rather than the mediated effects (Karpicke and Bauernschmidt, 2011).
Concluding, letting learners know the errors they made may lead to an adaption of the learning strategy of the individual learner. Thereby, information is said to be processed more intensely and better encoding strategies are developed (Bahrick and Hall, 2005). Hence, previous failures will be diminished and an improvement in knowledge retention is achieved as learners who engage in tests have to access information stored in their memories, transfer and apply it to new situations and hence, lead to an increase in learning and knowledge retention (Karpicke and Roediger, 2007). This underpins the third predominant reason for the SL effect to occur and it appears inevitable to give test feedback to the participants in the SL session.
Overall, the above-mentioned results lead to the suggestion that SL sessions should be incorporated into any existing EL or work-based training programmes, especially into the Model Warehouse training, to enhance knowledge retention.
Effect on the learners' self-perception and self-confidence
The findings of this research indicate that most participants who engaged in the SL session did not properly appreciate the impact the SL session had towards their knowledge retention. They argue that the SL session is not as effective as the real-life environment of the Model Warehouse training they participated in: the initial training has a higher input; more detailed knowledge and they were actively engaged and part of the overall process. Yet, the minority of participants indicate that the SL session helps to remember material better than solely the Model Warehouse training. Furthermore, they say that the Model Warehouse training is overloaded with information and time is needed to restructure and organise them; after a break they can revise the information again and structure everything according to their needs. This also highlights that training groups should be differentiated according to skill levels.
Still, most of the participants recognise the added value the SL session offered to deepen understanding and refresh materials learnt during the Model Warehouse training. They argue that the external push they got to sit down and revise again is useful to reorganise and freshen up memory in a customised way. However, the minority of participants does not judge the SL session to be an added value to the Model Warehouse training. They reason that the SL session would only be an added value for novices who do not have any prior understanding of lean warehousing. Nevertheless, when considering the majority who argue that the SL session is inferior to the Model Warehouse training, their test results show an improvement in knowledge retention following SL in comparison to those who did not undertake SL. Zechmeister and Shaughnessy (1980) trace that back to the participants' perception that a massed learning session, in this case the Model Warehouse training, gives learners the feeling of being more familiar with the material taught, and therefore participants assume they have a greater understanding of this material.
Figure 9. Cumulative percentage polygons of the groups' overall improvement.
Zechmeister and Shaughnessy's (1980) explanation of why participants perceive the way they do can also be considered with regard to why most participants say that they feel more self-confident after the Model Warehouse training than after the SL session. In this research, all participants claim that they enjoy the interactive part of the Model Warehouse training better and that, therefore, both interpersonal and intrapersonal engagement in the training is increased. Moreover, this aspect is missing during the SL session; thus, most participants perceive it to be less effective than the Model Warehouse training.
Regarding the minority of participants who rate the SL session higher in terms of self-confidence and claim to recall information more easily after the SL session, it should be noted that they still say that the Model Warehouse training is most important in building the foundation for the following SL session and its associated learning success. Still, by revising those aspects they had forgotten or had not fully grasped during the Model Warehouse training, their self-confidence in the overall topic increases, as does their ability to recall information. Thus, the SL session would neither have been needed nor have been as successful as it was without the preceding Model Warehouse training.
Conclusion
Implications of adding SL to an EL programme
It can be concluded that the benefits of the SL session are the spaced repetition itself, the adaptability it allows and the re-learning it imparts. However, compared to the EL training, the benefits of SL are perceived to be inferior, as SL will not lead to any success if the benefits of the EL training are not made use of to their fullest extent. Therefore, practitioners need to be aware of the benefits any EL training possesses, to make best use of them and then transfer them to the subsequent SL sessions.
Regarding the effect a single SL session has on the participants' knowledge development, it can be concluded that, compared to those participants who only took part in the EL training, 57 per cent of SL learners were able to retain the knowledge level they achieved on completion of the EL training, and 29 per cent of SL participants were even able to gain further knowledge. Moreover, it could be shown that when participants took part in an SL session five weeks after the EL training and in a knowledge test four weeks later, no loss in knowledge occurred, whereas the participants who only took part in the EL training experienced an average loss in knowledge of 11.42 per cent over the nine-week test-delay period compared to what they knew after the work-based EL training.
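The retention and improvement figures quoted above follow from simple bookkeeping on the post-training and delayed test scores. The sketch below shows that bookkeeping on invented scores for seven participants per group; it will not reproduce the study's exact values (for example the 11.42 per cent loss), which come from the real data.

```python
import pandas as pd

# Invented correct-answer counts: after the EL training (test2) and at the delayed test (test3).
df = pd.DataFrame({
    "group": ["SL"] * 7 + ["EL only"] * 7,
    "test2": [11, 12, 10, 12, 11, 12, 11,   11, 12, 10, 12, 11, 12, 11],
    "test3": [11, 13, 10, 12, 11, 13, 10,   10, 11, 9, 10, 10, 11, 10],
})

def classify(row) -> str:
    """Label each SL participant by how the delayed score compares with the post-training score."""
    if row["test3"] > row["test2"]:
        return "improved"
    if row["test3"] == row["test2"]:
        return "retained"
    return "declined"

sl = df[df["group"] == "SL"].copy()
sl["outcome"] = sl.apply(classify, axis=1)
print(sl["outcome"].value_counts(normalize=True).mul(100).round(0))  # share retained / improved / declined

el = df[df["group"] == "EL only"]
mean_loss = (el["test2"].mean() - el["test3"].mean()) / el["test2"].mean() * 100
print(f"EL-only mean knowledge loss over the delay: {mean_loss:.2f} %")
```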
In terms of self-perception of most participants that participated in the SL session it should be concluded that the extra session was not viewed as being a help to improve knowledge outcomes or self-confidence with regards to the subjects studied. Yet, the test results verified that the SL session had a positive impact on the participants' test results. Therefore, participants' awareness on the influence and impact SL has on their knowledge retention should be established for them to build a more accurate self-perception.
Recommendations for future research
Analysing more than one unit of the Model Warehouse training will increase the validity of the findings of this research. Thus, further analysis of work-based EL training programmes should be considered whilst replicating the research at hand.
In doing so, it needs to be figured out how many SL sessions are needed to achieve best results in knowledge retention for the case of the Model Warehouse training. In addition, it should be worked out which spacing intervals are best to achieve long-term success and which e-learning format should be used to best create anticipation to sustain the learners' enjoyment over the course of the SL sessions. Furthermore, it is recommended to measure the mediated effects feedback on participants' learning results has on their overall knowledge improvement.
Introduction
It is good to see you again.
Thank you for being willing to take part in a follow-up interview to the MWH learning sessions you participated in.
Please be assured that all data and information are treated confidentially.
Would I be allowed to tape our conversation?
May I ask you to explain that in more detail?
Main body
Looking back, which type of learning did you prefer: the MWH session or the repetition? Why would you say so?
From your perspective, did the experiential learning session or the spaced learning session help better to remember the material taught?
Would you say that the spaced learning session added value to the MWH training session?
Yes: to what extent? No: why not? What could have been better for you to achieve added value?
Comparing both your perceptions after the experiential learning programme and the spaced learning session, have you felt more self-confident with regards to the material learned after either?
Please evaluate your satisfaction with your learning results after each session by answering the following, using a scale of 1-5:
Overall, I am very satisfied with the learning outcomes of the experiential learning session.
Overall, I am very satisfied with the learning outcomes of the spaced learning session.
Overall, neither of the learning sessions was satisfactory.
Did you encounter any difficulties in any learning session?
Yes: Please explain the reasons in more detail.
In your opinion, did you pay more attention to material in one type of practice? Why/why not?
Please evaluate the spaced learning experience by answering the following on a scale of 1-5:
The SLE facilitated the understanding of the input given in the MWH session.
The SLE was important in getting a greater understanding of the input given in the MWH session.
The SLE would not have been needed to fully acquire
How did you come to this opinion? May I ask you to explain that in more detail?
Please feel free to give examples on how you experienced the increase in self-confidence.
1 strongly disagree, 2 disagree, 3 undecided, 4 agree, 5 strongly agree. Please explain your evaluation and add further comments, if you wish.
Did you encounter this in previous learnings? What kind of learnings were these? Please comment on why you answered this way.
1 strongly disagree, 2 disagree, 3 undecided, 4 agree, 5 strongly agree. Please explain your evaluation and add further comments, if you wish.
1 strongly disagree, 2 disagree, 3 undecided, 4 agree, 5 strongly agree. Please explain your rating and add further comments, if you wish.
On a scale of 1-5, would you agree that the spaced learning session was more efficient than the experiential learning session in terms of learning outcome?
On a scale of 1-5, would you agree that the experiential learning session was more efficient than the SL session in terms of learning outcome?
On a scale of 1-5, would you agree that the experiential learning session increased in efficiency by combining it with the SL session in terms of learning outcome?
Cool-off
Are there further aspects you would like to mention/evaluate that have not been covered in this interview?
Closure
Thank you very much for your time today. Your insights are highly valuable for the outcome of the research project and my studies.
Table AI.
For instructions on how to order reprints of this article, please visit our website: www.emeraldgrouppublishing.com/licensing/reprints.htm Or contact us for further details: permissions@emeraldinsight.com
|
v3-fos-license
|
2020-07-29T13:06:22.828Z
|
2020-07-28T00:00:00.000
|
220842180
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7605090",
"pdf_hash": "4ca5d9f8d53bc18f232404aad81d9fcd7ecdd1e9",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44304",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "49be55b96de17914822fba02423d56943ab4b8a2",
"year": 2020
}
|
pes2o/s2orc
|
Posthepatectomy liver failure
Liver surgery is one of the most complex surgical interventions with high risk and potential for complications. Posthepatectomy liver failure (PHLF) is a serious complication of liver surgery that occurs in about 10% of patients undergoing major liver surgery. It is the main source of morbidity and mortality. Appropriate surgical techniques and intensive care management are important in preventing PHLF. Early start of the liver support systems is very important for the PHLF patient to recover, survive, or be ready for a liver transplant. Nonbiological and biological liver support systems should be used in PHLF to prepare for treatment or organ transplantation. The definition of the state, underlying pathophysiology and treatment strategies will be reviewed here.
generation of reactive oxygen species, and endothelial cell damage. In the reperfusion period, a cycle of cell adhesion molecule upregulation, cytokine release, and T cell and polymorphonuclear cell recruitment and activation is initiated. Finally, microvascular injury, Kupffer cell-mediated inflammation, and hepatocyte death occur [12,13].
Risk and prevention
Risk factors for PHLF are summarized in Table 1. PHLF is divided into three subgroups according to the classification made by ISGLS (Table 2) [1]. Patients in group A, with temporary liver dysfunction that does not require invasive treatment, should be monitored. Group B and C patients with multiple organ failure or severe liver failure should be monitored under intensive care conditions. Patients should be closely monitored for signs of systemic inflammatory response syndrome (SIRS). Serum bilirubin, aminotransferase, albumin, international normalized ratio (INR), ammonia, lactate, and C-reactive protein (CRP) levels should be closely monitored with serial measurements. Also, it is recommended that patients whose antithrombin-3 activity measures below 61.5% on the first postoperative day be carefully monitored for failure [14]. Whether there is a problem with the arterial or portal venous blood supply or the venous outflow (hepatic veins) of the liver in patients who develop PHLF should be evaluated by Doppler ultrasonography, computerized tomography (CT), or angiography. In the presence of arterial stenosis or congestion, tissue plasminogen activator (t-PA) infusion or balloon angioplasty can be applied to the relevant area; the factors that reduce or stop the flow of the artery of interest should be eliminated with relaparotomy, and, if necessary, reanastomosis should be performed [15]. In the presence of portal vein stenosis or thrombosis, systemic heparinization should be initiated with caution. Additionally, in case of stenosis or bending, t-PA infusion can be tried via percutaneous entry into the portal vein, and the stenosis or bending can be corrected with an endovascular stent in the early period. Obstructive jaundice that may occur in the postoperative period should also be investigated and, if present, its treatment should be carefully managed (percutaneous drainage or relaparotomy is planned according to the patient's condition) [16]. Strategies for prevention of PHLF are summarized in Table 3 [17-19].
Medical support therapy
The approach to patients with PHLF starts with medical support therapy. When SIRS is observed, hypotension and relative hypovolemia, which occur due to decreased systemic vascular resistance, should be followed with invasive monitoring. Colloid-weighted fluids should be used in fluid replacement, and albumin support should be provided. Vasoactive agents may be required in cases that do not respond despite adequate volume support. Extracellular fluid accumulation should be avoided [20]. Hydrocortisone support is recommended for the control of persistent lactic acidosis caused by hypoperfusion and vasopressor agent use. N-acetyl cysteine should also be administered in the treatment of liver failure [21,22]. Proton pump inhibitor therapy should be applied to prevent the development of stress ulcers. Early intubation and mechanical ventilator therapy may be needed, since patients with liver failure may develop acute lung injury (PaO2/FiO2 ratio < 300 mmHg) or acute respiratory distress syndrome (PaO2/FiO2 ratio < 200 mmHg). Tidal volume should be 6 mL/kg in adult ventilator therapy and PaO2 should be kept above 80 mmHg. Also, positive end-expiratory pressure (PEEP) should not be applied at high levels, as this causes hepatic congestion, portal hypertension, ascites development, and decreased liver regeneration. A hyperventilation protocol (PCO2 25-30 mmHg) should be applied to decrease intracranial pressure in patients who need mechanical ventilator treatment. The most important underlying causes of encephalopathy in liver failure are ammonia accumulation and cerebral edema due to hyponatremia. Since brainstem herniation and hypoxic brain injury are complications that may develop due to brain edema and cause rapid deterioration of the patient, treatment preventing the formation of brain edema should be started (mannitol therapy, hyperventilation, sodium thiopentone, hypertonic fluid therapy, etc.) [20,23]. Treatment using oral rifaximin, a laxative (lactulose), and enemas limits the formation of ammonia. In patients with grade 3-4 encephalopathy, monitoring of intracranial pressure, close blood sugar monitoring, and controlled hypothermia are recommended [3]. The development of resistant hypoglycemia (disruption of hepatic gluconeogenesis and hyperinsulinemia) is a poor prognostic marker. Enteral nutrition should be applied first, and parenteral nutrition should be given to patients with limited oral intake. The daily calorie need of patients should be calculated as 25-35 kcal/kg and daily protein support as 1-1.2 g/kg. Branched-chain amino acid solutions (leucine, isoleucine, or valine) should be preferred to meet protein needs. Most of the calorie needs should be met with carbohydrate and fat solutions. Acute tubular necrosis due to SIRS or development of hepato-renal syndrome (HRS) due to underlying liver disease should be monitored, and complications such as hypokalemia (resistant to diuretic therapy), hypophosphatemia, oliguria, hyponatremia, and water retention should be treated immediately [20,21]. Massive ascites is particularly observed in patients with preoperative portal hypertension. Furosemide and spironolactone should be administered at a ratio of 2:5 (20 mg/50 mg) in diuretic treatment. The diuretic response may be limited by acute renal injury due to surgery, SIRS, or HRS. Also, diuretic use deepens the existing hyponatremia.
When sodium levels fall below 120 mEq/L, diuretic therapy should be discontinued and patients with intravascular volume deficits should be given albumin support. Intermittent paracentesis should be performed in case of impaired patient comfort, restricted breathing, impaired oral intake, or leakage of ascites from the surgical area (in patients with liver failure). In the case of paracentesis of more than 5 L, 8 g of albumin should be replaced for each liter removed to prevent renal failure, hyponatremia, and hypotension. TIPS or a peritoneovenous shunt may be required in the presence of prolonged ascites (persisting 4-6 weeks or more postoperatively in liver failure patients). Bacterial infections are found in the majority (80%) of patients with liver failure. Although prophylactic antibiotic therapy is not recommended, it is recommended to start broad-spectrum antibiotic therapy without waiting for culture results in the presence of the smallest suspicion [20,21]. It is also recommended to add antifungal drugs to treatment. Dysfunction of the vitamin K-dependent factors II, VII, IX, and X occurs in liver failure because their hepatic synthesis and carboxylation are impaired. Also, thrombocytopenia and disorders of thrombocyte function are observed due to renal dysfunction and uremia. Fresh frozen plasma (FFP) is used to control oncotic pressure and prevent INR rise. However, large amounts of FFP transfusions should be used with caution as they can lead to the development of brain edema and acute lung injury. The risk of bleeding should be taken into account when applying deep vein thrombosis prophylaxis.
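For illustration only, the supportive-care arithmetic quoted above (25-35 kcal/kg/day of calories, 1-1.2 g/kg/day of protein, and 8 g of albumin per liter removed in a large-volume paracentesis) can be written out as in the sketch below. This is merely a restatement of the numbers in the text, not clinical guidance, and the example body weight and drainage volume are assumptions.

```python
def daily_nutrition_targets(weight_kg: float) -> dict:
    """Calorie and protein ranges quoted in the text: 25-35 kcal/kg and 1-1.2 g/kg per day."""
    return {
        "kcal_min": 25 * weight_kg,
        "kcal_max": 35 * weight_kg,
        "protein_g_min": 1.0 * weight_kg,
        "protein_g_max": 1.2 * weight_kg,
    }

def albumin_after_paracentesis(drained_liters: float) -> float:
    """8 g of albumin per liter removed when more than 5 L are drained, as stated in the text.
    (Whether all liters or only those beyond 5 L are counted varies between protocols;
    here every liter of a large-volume paracentesis is counted.)"""
    if drained_liters <= 5:
        return 0.0
    return 8.0 * drained_liters

if __name__ == "__main__":
    print(daily_nutrition_targets(70))           # 1750-2450 kcal and 70-84 g protein for 70 kg
    print(albumin_after_paracentesis(7.0), "g")  # 56 g of albumin for a 7 L paracentesis
```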
Liver support systems in treatment
These systems are developed to support patients with liver failure until the patient's condition improves or a transplant is performed. The complex physiological, biochemical, and metabolic functions of the liver make a truly complete replacement therapy impossible. Also, the complexity of the pathophysiology of liver failure, especially the inability to reveal the underlying mechanisms affecting prognosis, such as cerebral edema and encephalopathy, is an important barrier to supportive therapies. Approaches to liver support are divided into two groups: nonbiological and biological systems. Nonbiological systems are based on nonspecific detoxification using a membrane of limited permeability. Biological support systems try to create a detoxification environment close to natural liver tissue by utilizing various cell (hepatocyte) cultures [20,21,24]. Nonbiological support units are used in most European countries and in our center because of the high cost of biological systems and the technical difficulties of supplying hepatocytes and maintaining their viability for a long time.
Nonbiological liver support
Nonbiological support units are applied using extracorporeal pump machines with different features. Generally, the pump machines used offer options that allow different support units to be applied. Applications are made through two-way wide-lumen catheters placed in the subclavian, internal jugular, or femoral vein. The main purpose of these applications is to remove from the circulation molecules bound to carrier proteins, other than essential hormones, growth factors, immunoglobulins, coagulation factors, and complement system proteins (molecular weight > 50-60 kDa). In this way, water-soluble toxins (ammonia, urea, lactate, creatinine, etc.) and oil-soluble toxins (bile acids, bilirubin, aromatic amino acids, short- and medium-chain fatty acids, etc.) can be effectively removed. Also, tumor necrosis factor (TNF)-α (17.5 kDa), interleukin (IL)-1β (17 kDa), IL-6 (21 kDa), IL-8 (8 kDa) and IL-10 (18.7 kDa) are among the main cytokines that play an active role in the etiopathogenesis of liver failure, and removing them from circulation also aims to improve the patients' clinical picture [25,26].
Nonbiological liver support systems are divided into 4 main groups [27]
Continuous renal replacement therapies (CRRT)
Although it is usually performed through a large-lumen central catheter, it can also be performed using an arteriovenous (AV) fistula. Venous blood from the patient enters the peristaltic pump through a venovenous circuit. Compared with intermittent hemodialysis, the aim is to reduce the hemodynamic burden on the patient by removing fluid continuously in limited volumes [26]. During the cycle, coagulation is prevented using citrate or heparin. CRRT is mainly used to remove excess fluid from the extracellular space. It is also used effectively in removing toxins that are not bound to albumin [27,28]. The membranes used in the units are made of biocompatible material (polyacrylonitrile, polymethylmethacrylate, etc.) to limit the activation of complement and other humoral systems. The tendency towards coagulation is minimal due to the high ultrafiltration constant. Dialysate and replacement fluid are used during the procedure, according to the selected CRRT technique. Dialysate is the fluid into which toxins and waste products removed from the blood are collected. The replacement fluid is a balanced electrolyte solution added to the venous blood returning to the patient, either before or after the filter through which the blood passes, to maintain body homeostasis. Its composition is designed to maintain normal electrolyte and acid-base status. The sodium concentration in the fluids used is 150 mmol/L. If necessary, KCl, calcium, and magnesium can be added.
The pH can be buffered using bicarbonate or lactate. Although unfractionated heparin is often preferred for anticoagulation of the system, low-molecular-weight heparin, citrate, prostacyclin, or nafamostat mesylate can also be used [26,29]. After the procedure, the blood is returned to the patient with or without replacement fluid. Five different CRRT modalities can be applied, using diffusion, convection, or a combination of both. The diffusion method is based on the removal of toxins dissolved in the blood: toxins pass from one side of the semipermeable (low-permeability) membrane to the other along the electrochemical (concentration) gradient, moving from the high-concentration to the low-concentration compartment. Low-molecular-weight (5-15 kDa) toxins such as uric acid, potassium, and uremic solutes are removed with this method, but molecules of up to 30 kDa can be removed from the circulation with the use of synthetic polymeric membranes (polyacrylonitrile, polymethylmethacrylate, etc.). The convection method also ensures the excretion of toxins dissolved in the blood and works with a mechanism similar to the normal function of the human kidney: solutes move with the solvent from the high-pressure to the low-pressure compartment through a high-permeability membrane. In this mechanism, the transmembrane pressure gradient is important. Convection depends on the filtration rate, membrane permeability, and solute concentration. Medium-sized molecules (<60 kDa) are removed more effectively than with the diffusion method [23,24,26].
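To make the two transport mechanisms above a little more concrete, the sketch below restates them with the standard textbook relations: diffusive flux driven by the concentration gradient (Fick's law) and convective clearance proportional to the ultrafiltration rate and the membrane sieving coefficient. These formulas and all parameter values are generic illustrations, not taken from this article or from any specific device.

```python
# Illustrative-only sketch of the two solute-transport mechanisms described above.
# Parameter values are arbitrary examples, not device specifications.

def diffusive_flux(permeability: float, conc_blood: float, conc_dialysate: float) -> float:
    """Diffusion: flux across the membrane follows the concentration gradient."""
    return permeability * (conc_blood - conc_dialysate)

def convective_clearance(ultrafiltration_rate_ml_min: float, sieving_coefficient: float) -> float:
    """Convection: solute is dragged with the filtered fluid; clearance scales with
    the ultrafiltration rate and the membrane sieving coefficient of the solute."""
    return ultrafiltration_rate_ml_min * sieving_coefficient

print(diffusive_flux(0.5, 30.0, 0.0))      # small solute, steep gradient
print(convective_clearance(35.0, 0.9))     # middle-sized molecule, open membrane
```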
Plasmapheresis, plasma exchange, and continuous plasma filtration adsorption
In plasma exchange, plasma separated from the patient's blood through a high-permeability membrane is removed and replaced with fresh frozen plasma. In the plasmapheresis method, plasma separated from the patient's blood by centrifugation is not replaced [27]. In the continuous plasma filtration adsorption (CPFA) method, the patient's plasma is filtered with a high-permeability plasma filter and passed through a bed of adsorbent material (carbon or resins) (Figure 2). Each of these treatments, like the nonbiological liver support treatments mentioned earlier, is applied through a central venous catheter, using different modules of the same machines. The aim is to remove circulating antibodies and reduce the cytokine load [29-31]. Large molecules (>60 kDa) are removed using this method. Since these include molecules such as growth hormones, immunoglobulins (150-900 kDa), albumin (66.3 kDa), transferrin (76 kDa), and fibrinogen (341 kDa), plasmapheresis is used especially in many autoimmune diseases and in ABO-incompatible or cross-match-positive kidney transplantation. The plasma exchange method is used to remove bilirubin effectively from the circulation, especially in cases of hyperbilirubinemia [32,33]. In the treatment of liver failure, plasma exchange or plasmapheresis together with CRRT is recommended [34-36]. In this way, the aim is to ensure that growth factors and hormones that remain useful for the patient stay in the circulation. In a study published in Japan, CVVHDF and plasma exchange were used together in the treatment of acute liver failure; the patients' consciousness improved with this treatment, and brain edema and HRS did not develop during treatment [37]. In this study, the average number of sessions was 21; 20% of the patients had liver failure due to acute hepatitis B infection and 57% due to an unknown cause. In plasma exchange, the sessions take 4 h and the plasma removed from the patient (40-50 mL/kg/session) is replaced with fresh frozen plasma (8-10 units/session), human albumin (5% albumin, 2500 mL/session), or saline (3000 mL/session) [29,30].
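The per-session figures quoted above lend themselves to a small worked example. The sketch below simply scales the 40-50 mL/kg/session range to a patient's body weight; the 70 kg weight and the use of the midpoint of the range are our own illustrative assumptions.

```python
def plasma_exchange_volume_ml(weight_kg: float, ml_per_kg: float = 45.0) -> float:
    """Plasma volume removed per session, using the 40-50 mL/kg/session range
    quoted in the text (45 mL/kg is taken here as an illustrative midpoint)."""
    return weight_kg * ml_per_kg

# For a hypothetical 70 kg patient: about 3150 mL of plasma removed per 4 h session,
# to be replaced with FFP, 5% albumin or saline as described above.
print(plasma_exchange_volume_ml(70.0))
```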
Hemoperfusion
Hemoperfusion is the process of passing the patient's blood at a high flow rate (300 mL/min) through an adsorbent surface, especially to remove water-soluble toxins (ammonia, urea, lactate, creatinine, etc.) from the blood, and then returning it to the patient. The chemical sorbents used in hemoperfusion are resins, activated carbon, or charcoal. Charcoal hemoperfusion is the most studied nonbiological liver support treatment. Although it initially appeared more effective than hemodialysis in terms of survival and improvement of the neurological picture in patients with liver failure, this difference could not be demonstrated in controlled studies. However, activated charcoal is still used in the most effective nonbiological liver support systems (MARS, PROMETHEUS) [38]. Resins (neutral, anionic, and cationic) remove from plasma substances that are protein-bound and cannot be removed by dialysis, such as bilirubin, bile acids, barbiturates, and nephrotoxic drugs. However, resin hemoperfusion can cause hypotension, thrombocytopenia, leukopenia, and bleeding, since it also adsorbs clotting factors and other molecules [39]. The hemoperfusion method is applied in 4-5 h sessions, and the pump speed is adjusted to 160 mL/h during the procedure.
Albumin dialysis (MARS and PROMETHEUS)
MARS (Molecular Adsorbent Recirculating System, Gambro AB, Stockholm, Sweden) and PROMETHEUS (Fractionated Plasma Separation, Adsorption, and Dialysis system, Fresenius Medical Care AG & Co. KGaA, Homburg, Germany) can be applied via the vascular access used for continuous renal replacement therapy. MARS consists of three main units. The continuous albumin dialysis circuit allows removal of protein-bound toxins through a polysulfone membrane that is permeable to molecules smaller than 60 kDa but not to albumin. The column that captures the albumin-bound toxins regenerates the albumin and returns it to the circuit, thus avoiding the need for large-volume albumin support. A continuous renal replacement circuit allows classic hemofiltration or hemodialysis. The MARS cartridge needs to be replaced every 8 h [40-43]. In the PROMETHEUS system, the albumin-containing plasma of the patient is separated through a membrane with a molecular cut-off of 250 kDa and passed through two columns with different adsorbents. The water-soluble substances are cleared with a high-exchange dialyzer. With both methods, the removal of water-soluble metabolites such as ammonia, urea, and creatinine, and of albumin-bound substances such as bile acids and bilirubin, is effective.
In nonbiological support units, heparinization is generally systemic, but it is occasionally applied regionally (heparin infusion is started before the filter, and 10-20 mg/h protamine is given to the circulation after the filter). Heparin is given at a dose of 5-10 U/kg; the ACT is kept at 200-250 and the PTT in the range of 1.5-2 times the normal value. Anticoagulation is not applied in patients with thrombocytopenia (<80,000/µL) or in plasmapheresis using isotonic NaCl solution, as a clot is unlikely to form in the filter. With the more recently adopted use of citrate as anticoagulant, heparin-related complications have also been eliminated. The citrate anticoagulant is included in the set and is neutralized with calcium. Problems that may be encountered in applications with nonbiological support units are summarized in Table 4.
Treatment algorithm
Nonbiological support units are activated when problems arise or become more prominent during medical support treatment of liver failure [44]. Although there is no consensus on which of the nonbiological support units should be started first in patients who meet the clinical parameters indicated in Figure 3, our preferred algorithm is summarized there. When the conditions that dominate the clinical course of liver failure (HRS, encephalopathy, hyperbilirubinemia, hepatopulmonary syndrome, and multiorgan failure, MOF) are considered, treatment is shaped by emphasizing the different features of the support units. CVVH or CVVHDF is preferred as the first option when the clinical picture of liver failure is dominated by HRS; plasma exchange with CRRT, or albumin dialysis with CRRT, should be used together if there is no response. CVVH or CVVHDF is likewise preferred when liver failure is dominated by hepatopulmonary syndrome, with plasma exchange and CRRT or albumin dialysis and CRRT used together in unresponsive cases. In liver failure where mild hepatic encephalopathy is dominant, CVVH or CVVHDF is preferred as the first approach; in the presence of severe encephalopathy, plasmapheresis and controlled hypothermia are applied in addition to these treatments, and in cases where there is no response, plasma exchange and CRRT or albumin dialysis and CRRT should be used together. Where hyperbilirubinemia alone is dominant, plasmapheresis treatment is started; plasma exchange and CRRT or albumin dialysis and CRRT should be used together if no response is obtained. Plasmapheresis, hemoperfusion, or albumin dialysis can be used as the first option in cases where hepatic encephalopathy is accompanied by hyperbilirubinemia and bleeding parameters are normal; if there is no response, plasma exchange and CRRT or albumin dialysis and CRRT should be used together. Plasma exchange should be used as the first choice in cases where hyperbilirubinemia is accompanied by hepatic encephalopathy and bleeding parameters are impaired; again, if there is no response, plasma exchange and CRRT or albumin dialysis and CRRT should be used together. Unlike the other clinical pictures, in MOF plasma exchange and CRRT or albumin dialysis and CRRT should be used together from the outset.
When the per-session costs of the nonbiological support units are examined, MARS or PROMETHEUS applications require a fixed expense of around 5000 € per session (the prices of the extracorporeal pump machines are not included). In CRRT, the fixed expenditure per session varies between 100 and 1000 € [24]. In plasma exchange applications, the fixed expense is around 1000 € per session. Considering these costs, CRRT and plasma exchange stand out as the more economical options when constructing the treatment algorithm.
Biological liver support systems
In a study conducted in 1956, urea was produced for the first time from ammonium chloride using a homogenate obtained from cow liver [45]. This study was followed by studies using livers from many different animal species [38]. The complexity of the preparation process and the rapid loss of effectiveness of the prepared homogenate made it difficult to adapt this approach to clinical use. The livers of different animal species were also used for perfusion (xenogenic extracorporeal liver perfusion), and improvements in biochemical parameters and neurological signs were noted in a limited number of clinical studies [46,47]. Advances in hepatocyte isolation techniques paved the way for the use of hepatocytes in different configurations in liver support systems. The use of hepatocytes in liver failure can be summarized under two headings: implantation (hepatocyte transplantation) and extracorporeal systems. The beneficial effects of human hepatocyte transplantation in the treatment of liver failure have been demonstrated in a limited number of case reports [48]. However, there are no data on the use of xenogenic hepatocyte transplantation in the treatment of liver failure in humans. The most important obstacle to hepatocyte transplantation is that the toxic or viral factors leading to liver failure prevent the transplanted hepatocytes from becoming organized [38]. Extracorporeal systems, or bioartificial liver support systems, have been developed to perform detoxification by being connected intermittently to the human circulation, just like nonbiological systems. These systems consist of two main parts: the artificial unit consists of a bioreactor and its components, while the other, biological unit consists of hepatocytes [38]. In 1987, Matsumura et al., in a treatment applied to a 45-year-old patient with hepatic insufficiency due to an inoperable biliary tract tumor, for the first time placed isolated rabbit hepatocytes in a unit of the device separated from the patient's circulation by a cellulose membrane [49]. Two years after this case report, Margulis et al. used a support unit with pig hepatocytes in a series of 126 patients, obtaining a significant survival advantage, especially for pre-comatose patients [50]. Today, there are many bioartificial liver support systems developed by different study groups and used in clinical studies (Table 5). Among these systems, the human hepatocyte cell line C3A is used only in the ELAD (Extracorporeal Liver Assist Device) system. These cells were cloned from a human hepatoblastoma cell line; their tumor-forming activity has been reduced and their production of albumin and alpha-fetoprotein increased [51]. In the other systems, pig hepatocytes are used [52-56]. The working principles and treatment processes of these bioartificial liver support systems are briefly summarized in Table 5; their treatment cost is around 50,000-60,000 €, and they are therefore not the primary choice in our country or in other European countries [24].
Conclusion
PHLF continues to be a serious complication of liver surgery, occurring in approximately 10% of patients undergoing major liver surgery. PHLF ranges from mild hepatic impairment, characterized by transient hyperbilirubinemia, to hepatic impairment that causes multiple-system insufficiency requiring invasive treatment in the intensive care unit. Obesity, diabetes, neoadjuvant chemotherapy, underlying cirrhosis, increased age, male sex, the need for extended liver resection, and long operations with high intraoperative estimated blood loss (EBL) increase the risk of PHLF. Early initiation of liver support systems is very important for the PHLF patient to recover, survive, or be ready for a liver transplant. The nonbiological and biological liver support systems described above should be used in PHLF for treatment or as a bridge to organ transplantation, and they should be applied through the joint approach of the Organ Transplant Clinic and the Intensive Care Unit. The most effective treatment of liver failure is a liver transplant. However, since the organ pool is far from meeting expectations, both biological and nonbiological liver support systems can be expected to be used more widely and more effectively in the treatment of PHLF in the future.
|
v3-fos-license
|
2018-04-03T00:15:57.834Z
|
2010-09-01T00:00:00.000
|
206202479
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://mic.microbiologyresearch.org/deliver/fulltext/micro/156/9/2587.pdf?isFastTrackArticle=&itemId=/content/journal/micro/10.1099/mic.0.042689-0&mimeType=pdf",
"pdf_hash": "c2bccc6bd847cfe7f5cbfe5318b882110e0da3a3",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44305",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "c2bccc6bd847cfe7f5cbfe5318b882110e0da3a3",
"year": 2010
}
|
pes2o/s2orc
|
Assembly of outer-membrane proteins in bacteria and mitochondria
The cell envelope of Gram-negative bacteria consists of two membranes separated by the periplasm. In contrast with most integral membrane proteins, which span the membrane in the form of hydrophobic a-helices, integral outer-membrane proteins (OMPs) form b-barrels. Similar b-barrel proteins are found in the outer membranes of mitochondria and chloroplasts, probably reflecting the endosymbiont origin of these eukaryotic cell organelles. How these b-barrel proteins are assembled into the outer membrane has remained enigmatic for a long time. In recent years, much progress has been reached in this field by the identification of the components of the OMP assembly machinery. The central component of this machinery, called Omp85 or BamA, is an essential and highly conserved bacterial protein that recognizes a signature sequence at the C terminus of its substrate OMPs. A homologue of this protein is also found in mitochondria, where it is required for the assembly of b-barrel proteins into the outer membrane as well. Although accessory components of the machineries are different between bacteria and mitochondria, a mitochondrial b-barrel OMP can be assembled into the bacterial outer membrane and, vice versa, bacterial OMPs expressed in yeast are assembled into the mitochondrial outer membrane. These observations indicate that the basic mechanism of OMP assembly is evolutionarily highly conserved.
Introduction
The cell envelope of Gram-negative bacteria is composed of two membranes, the inner membrane and the outer membrane, which are separated by the periplasm containing the peptidoglycan layer. While the inner membrane is a phospholipid bilayer constituted of glycerophospholipids, the outer membrane is highly asymmetrical, containing glycerophospholipids in the inner leaflet and lipopolysaccharides (LPSs) exposed to the cell surface (Fig. 1). The outer membrane functions as a permeability barrier protecting the bacteria against harmful compounds, such as antibiotics and bile salts, from the environment. Most nutrients pass this barrier via a family of integral outer-membrane proteins (OMPs), collectively called porins (Fig. 1). These trimeric proteins form open, water-filled channels in the outer membrane, which allow for the passage of small hydrophilic solutes, such as amino acids and monosaccharides, via passive diffusion (Nikaido, 2003). Other OMPs have more specialized transport functions, such as the secretion of proteins and the extrusion of drugs, or function as enzymes or structural components of the outer membrane (Koebnik et al., 2000). Besides integral OMPs, the membrane also contains lipoproteins, which are attached to the membrane via an N-terminal lipid moiety.
All constituents of the outer membrane are synthesized in the cytoplasm or at the inner leaflet of the inner membrane. An area of intense research is how these components are transported and assembled into the outer membrane. An obvious model organism to study such fundamental questions is Escherichia coli, but Neisseria meningitidis has also proven to be a very suitable organism to address these questions. N. meningitidis normally resides as a commensal in the nasopharynx but occasionally causes sepsis and meningitis. Besides generally useful features, such as a relatively small genome size (~2200 genes) and natural competence and recombination proficiency, which facilitate the construction of mutants, the organism has several properties particularly useful for the study of outer membrane biogenesis. Firstly, in contrast with E. coli, N. meningitidis is viable without LPS (Steeghs et al., 1998). Such mutants defective in LPS biosynthesis still produce an outer membrane into which OMPs are assembled (Steeghs et al., 2001). Since N. meningitidis is viable without LPS, the genes encoding the components of the LPS transport route can be knocked out and the properties of such mutants can be studied (Bos et al., 2004;Tefsen et al., 2005). Secondly, studies on OMP assembly in E. coli are thwarted by a stress response that is activated when unfolded OMPs accumulate in the periplasm. Activation of this stress response, which is dependent on the alternative s factor s E , results in the increased production of periplasmic chaperones that aid in OMP assembly and of the protease DegP that degrades these unfolded OMPs . In addition, small regulatory RNAs are produced that inhibit the translation of the mRNAs for OMPs by stimulating their decay (Johansen et al., 2006;Papenfort et al., 2006). Thus, OMP synthesis is inhibited under these conditions until unfolded OMPs are cleared from the periplasm. Consequently, mutations resulting in OMP assembly defects do not normally result in the extensive accumulation of unfolded OMPs in the periplasm, but in decreased OMP levels (Chen & Henning, 1996;Sklar et al., 2007b). Since other signals such as altered LPS structure (Tam & Missiakas, 2005), and even cytoplasmic signals (Costanzo & Ades, 2006) can also trigger the s Edependent stress response, decreased OMP levels do not necessarily reflect an OMP assembly defect. Since this s Edependent stress response is absent in N. meningitidis (Bos et al., 2007a), unfolded OMPs normally accumulate in the periplasm of assembly-defective N. meningitidis mutants, which facilitates these studies. This paper focuses on the current knowledge of OMP biogenesis in bacteria and on the evolutionary conservation of the OMP assembly machinery.
Structure of bacterial OMPs
Whereas most integral membrane proteins, including bacterial inner-membrane proteins, span the membrane in the form of a-helices entirely composed of hydrophobic amino acids, bacterial OMPs present an entirely different structure (Fig. 1). These proteins form b-barrels composed of antiparallel amphipathic b-strands (Koebnik et al., 2000).
The hydrophobic residues in these b-strands are exposed to the lipid environment of the membrane, whereas the hydrophilic residues point towards the interior of the protein, which is the aqueous channel in the case of porins. These b-barrel structures are very stable, usually withstanding incubation in 2 % SDS (i.e. as present in standard sample buffer for SDS-PAGE) at ambient temperature. This property explains the heat-modifiable behaviour of many OMPs in SDS-PAGE analysis: the native form of these proteins migrates differently in the gel compared with the heat-denatured form (Dekker et al., 1995;Nakamura & Mizushima, 1976). Also, natively folded OMPs are usually highly resistant to proteases. Heat modifiability and protease resistance are facile parameters to probe the folding of OMPs into their native configuration.
Transport of OMPs across the bacterial inner membrane
The unusual structure of bacterial OMPs is probably imposed by their biogenesis pathway. OMPs are synthesized in the cytoplasm as precursors with an N-terminal signal sequence, which marks them for transport across the inner membrane via the Sec system (Fig. 2). The protein-conducting channel of the Sec system, which is composed of the integral membrane proteins SecY, SecE and SecG (Driessen & Nouwen, 2008), releases OMPs and periplasmic proteins at the periplasmic side of the membrane. The SecYEG translocon is also implicated in the assembly of integral inner-membrane proteins. When large hydrophobic protein segments are inserted into the translocon, the channel opens laterally to allow for the insertion of these proteins into the inner membrane (Driessen & Nouwen, 2008). Thus, the presence of similar hydrophobic segments in OMPs would prevent them from reaching their final destination, while the amphipathic b-strands that constitute the transmembrane segments of OMPs are compatible with transport via the SecYEG translocon to the periplasm. Indeed, the insertion of hydrophobic segments into the outer membrane porin PhoE of E. coli was shown to affect the biogenesis of the protein (Agterberg et al., 1990).
[Figure legend fragment displaced into the text: "... (Rutten et al., 2009), and of a typical a-helical inner-membrane protein, i.e. the SecYE translocon of Thermus thermophilus (PDB file 2ZQP) (Tsukazaki et al., 2008), are shown on the left and the right, respectively."]
Transport of OMPs through the periplasm
In E. coli, three chaperones have been reported to guide nascent OMPs during their intermediate periplasmic stage (Fig. 2): Skp, SurA and the protease DegP, which also has chaperone qualities (Spiess et al., 1999). Recent structural analysis showed that DegP in its activated state can form large oligomeric cage-like structures of 12 or 24 subunits that could harbour a folded OMP in its central cavity without degrading it (Krojer et al., 2008). None of these chaperones is essential in E. coli, but double mutants show synthetic, often lethal, phenotypes, suggesting redundancy in chaperone activities. Detailed analyses of single and double mutants suggested the existence of two parallel pathways of chaperone activity in the periplasm, a major SurA-dependent route and an alternative Skp-and DegPdependent route that deals with substrates that fall off the SurA pathway (Rizzitello et al., 2001;Sklar et al., 2007b). However, skp and degP mutations have also been reported to show a synthetic phenotype (Schäfer et al., 1999), which is inconsistent with the idea that these chaperones operate within the same pathway. Furthermore, a recent proteomic analysis indicated that SurA has only a few substrates, including the OMP LptD, which is involved in LPS biogenesis, and that the reduced levels of many other OMPs in surA mutants may be solely a consequence of activation of the s E -dependent stress response (Vertommen et al., 2009). The study of Vertommen and colleagues argues against the hypothesis that the SurA pathway is the major periplasmic chaperone pathway for OMPs in the periplasm.
An alternative explanation for the synthetic phenotypes of double chaperone mutants is that these proteins have different, but complementary functions (Bos et al., 2007a; Walther et al., 2009b). Skp selectively binds unfolded OMPs (Chen & Henning, 1996; de Cock et al., 1999), presumably while they are still engaged with the Sec translocon (Harms et al., 2001). The crystal structure of this trimeric protein has been solved (Korndörfer et al., 2004; Walton & Sousa, 2004); it resembles a jellyfish that can hold nascent OMPs between its tentacles, thereby preventing their aggregation in the aqueous environment of the periplasm (Walton et al., 2009). SurA appears to play a role in the folding of OMPs into their native configuration (Lazar & Kolter, 1996; Rouvière & Gross, 1996). SurA is a peptidyl-prolyl cis/trans isomerase (PPIase) with two PPIase domains, which, however, appear to be dispensable for the chaperone qualities of the protein (Behrens et al., 2001). In this model, Skp is a 'holding chaperone' that prevents folding and aggregation of OMPs in the periplasm, whereas SurA acts as a 'folding chaperone' that assists in the folding of OMPs once they arrive at the assembly machinery in the outer membrane. The synthetic lethality of a skp surA double mutant is explained by an increased requirement for a holding chaperone when the folding of the OMPs is compromised by the absence of SurA, and, vice versa, efficient folding is increasingly important when the holding chaperone Skp is absent. The main role of DegP in this model is to prevent toxic accumulation of misfolded OMPs in the periplasm, either by degrading them (Fig. 2) or by sequestering them within the multimeric cage, thereby preventing them from engaging with the assembly machinery in the outer membrane (Bos et al., 2007a; Walther et al., 2009b). Obviously, this role of DegP becomes more important when the activity of Skp or SurA is compromised.
[Fig. 2 legend fragment displaced into the text: Porins and other OMPs are synthesized in the cytoplasm as precursors with a signal sequence, which is cleaved off during or immediately after their transport to the periplasm via the Sec translocon. While still engaged with the Sec translocon, the nascent OMPs are bound by the chaperone Skp, which prevents their aggregation in the periplasm. Folding is initiated when they arrive at the Bam complex in the outer membrane and is, at least for some OMPs, aided by the chaperone SurA. The Bam complex mediates their assembly into the outer membrane. How exactly the nascent OMPs pass the peptidoglycan layer is unknown, but the Bam complex components extend into the periplasm (Fig. 3a, b) and some of them might modulate the peptidoglycan to facilitate the passage of the OMPs. The main function of DegP is probably the degradation of misfolded OMPs. The Sec complex also processes nascent inner-membrane proteins (IMPs) and opens laterally to insert them into the inner membrane. OM, PP, PG and IM are defined in the legend to Fig. 1.]
The role of the periplasmic chaperones has also been studied in N. meningitidis, where the s E -dependent stress response is absent (E. Volokhina, M.P. Bos & J. Tommassen, unpublished results). An important role for Skp in OMP biogenesis in this organism has been confirmed. However, inactivation of the surA gene had no notable effect on OMP assembly; this is consistent with the aforementioned proteomics study in E. coli (Vertommen et al., 2009), which suggested that SurA has only a very restricted number of substrates. Furthermore, inactivation of surA in an skp mutant of N. meningitidis did not aggravate the OMP assembly defect of the skp single mutant. A homologue of DegP is non-existent in N. meningitidis, but there is a homologue of the closely related protease DegQ (Bos et al., 2007a). Inactivation of this degQ gene caused no OMP assembly defect and again no synthetic phenotype was observed when the mutation was combined with an skp or surA mutation (E. Volokhina, M.P. Bos & J. Tommassen, unpublished results). Thus, at least in N. meningitidis, Skp appears to be the major periplasmic chaperone involved in OMP biogenesis.
The bacterial OMP assembly machinery
After travelling through the periplasm and reaching the outer membrane, OMPs have to fold and insert into this membrane. The first component of the OMP assembly machinery identified was a protein known as Omp85 in N. meningitidis. Homologues of Omp85 were identified in all available Gram-negative bacterial genome sequences (Voulhoux et al., 2003;Voulhoux & Tommassen, 2004), and previous attempts to inactivate the gene in Haemophilus ducreyi and Synechocystis sp. were reported to be unsuccessful (Reumann et al., 1999;Thomas et al., 2001), suggesting an important function for the protein. Furthermore, the omp85 gene was found to be located in many genome sequences immediately upstream of the skp gene encoding the periplasmic OMP chaperone, suggesting that Omp85 might be involved in OMP biogenesis as well. To assess the function of Omp85, the gene was cloned under an IPTGinducible promoter (Voulhoux et al., 2003). In the absence of IPTG, the resulting mutants stopped growing and all OMPs examined were found to accumulate as unfolded proteins as shown (amongst other characteristics) by their protease sensitivity and their lack of heat modifiability.
These results demonstrated an essential role of Omp85 in OMP assembly.
Non-denaturing SDS-PAGE (Voulhoux et al., 2003) and cross-linking experiments (Manning et al., 1998) indicated that Omp85 is part of a multi-subunit complex in N. meningitidis. These results were confirmed in E. coli, where the Omp85 homologue is now called BamA (Bam stands for b-barrel assembly machinery). BamA forms a complex with four lipoproteins, BamB-E (Fig. 3a) (Wu et al., 2005;Sklar et al., 2007a). Whereas Omp85/BamA homologues are present in all Gram-negative bacteria, the accessory lipoproteins are less well conserved. For example, in the N. meningitidis Bam complex, the BamB component is lacking and this complex contains an additional component, RmpM, an OMP with a peptidoglycan-binding motif (Fig. 3b) (Volokhina et al., 2009). In the case of Caulobacter crescentus, the BamC component is absent and a different protein with a peptidoglycan-binding motif, the lipoprotein Pal, is present as an additional component (Anwari et al., 2010). In some alphaproteobacteria, both BamB and BamC appear to be absent (Gatsos et al., 2008). Also, the function of the accessory lipoproteins is less vital. In E. coli, BamD is the only essential lipoprotein component of the complex, whereas mutational loss of the other lipoproteins causes only mild OMP assembly defects (Malinverni et al., 2006;Sklar et al., 2007a). However, even in the closely related bacterium Salmonella enterica, BamD appears to be dispensable (Fardini et al., 2009). Also, in Neisseria gonorrhoeae, a viable knockout mutant in the bamD homologue, designated comL, has been described (Fussenegger et al., 1996) but the gene appears essential for viability and OMP assembly in N. meningitidis (Volokhina et al., 2009). Thus, the Bam complex in bacteria consists of one essential central component, Omp85/BamA, and a variable number of accessory components, the importance of which is variable and depends on the specific component and the bacterium being studied.
Interaction of substrate OMPs with BamA/Omp85
Electrophysiological experiments demonstrated that purified BamA reconstituted into planar lipid bilayers forms narrow ion-conductive channels (Robert et al., 2006;Stegmeier & Andersen, 2006). The physiological significance of these channels is still unclear, but this property could be used to study the interaction of the protein with its substrate OMPs. Addition of denatured OMPs to BamA-containing planar lipid bilayers increased the conductivity of the pores, demonstrating a direct interaction between BamA and its substrates (Robert et al., 2006). Since addition of periplasmic proteins to the bilayers had no such effect, this interaction between BamA and its substrates was specific.
The specificity of the interaction between BamA and its substrates indicated the presence of a recognition signal within these substrates. Previously, a signature sequence had been recognized at the C terminus of the vast majority of bacterial OMPs (Struyvé et al., 1991). This signature consists of a phenylalanine (or occasionally tryptophan) at the C-terminal position, a tyrosine or a hydrophobic residue at position -3 relative to the C terminus, and also hydrophobic residues at positions -5, -7 and -9 from the C terminus. Furthermore, the importance of the C-terminal Phe in vivo was demonstrated by its deletion or substitution in porin PhoE (Struyvé et al., 1991). Such mutations severely affected the assembly of the protein into the outer membrane. Of note, however, is that the Phe was not absolutely essential: while a mutant protein deleted for the C-terminal Phe accumulated in periplasmic inclusion bodies when it was highly expressed (Struyvé et al., 1991), it was still assembled into the outer membrane when expression levels were reduced (de Cock et al., 1997). This observation could be explained if the mutation decreases but does not abrogate the recognition of the mutant protein by the assembly machinery, resulting in its periplasmic aggregation. So, reduced expression will decrease the aggregation kinetics, thereby increasing the time span available for the assembly machinery to deal with the suboptimal mutant protein.
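Since the signature described above is essentially a short positional pattern at the C terminus, it can be expressed as a simple sequence check. The sketch below is only an illustration of that pattern as stated in the text, not a validated OMP predictor: the set of residues treated as "hydrophobic" and the example sequences are our own assumptions, not taken from the article.

```python
# Minimal sketch of the C-terminal signature check described above.
# Positions are counted from the C terminus, with the terminal residue as -1.
# The residue set treated as "hydrophobic" is an assumption made for illustration.

HYDROPHOBIC = set("AVLIMFWC")

def has_cterminal_signature(seq: str) -> bool:
    """Phe (occasionally Trp) at the C terminus, Tyr or a hydrophobic residue
    at position -3, and hydrophobic residues at positions -5, -7 and -9."""
    if len(seq) < 9:
        return False
    seq = seq.upper()
    return (seq[-1] in "FW"
            and (seq[-3] == "Y" or seq[-3] in HYDROPHOBIC)
            and all(seq[i] in HYDROPHOBIC for i in (-5, -7, -9)))

# Hypothetical C-terminal stretches (made up for the example, not real PhoE sequences)
print(has_cterminal_signature("KLGVEVAVQYQF"))   # True
print(has_cterminal_signature("KLGVEVAVQYQ"))    # False: no terminal Phe/Trp
```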
The hypothesis that the C-terminal Phe is part of the recognition signal for BamA was confirmed in planar lipid bilayer experiments with reconstituted BamA (Robert et al., 2006). In contrast with wild-type PhoE, the mutant protein lacking the C-terminal Phe did not stimulate the conductivity of the BamA channels. However, at higher concentrations, it blocked the BamA channels, indicating that it can still interact with BamA but differently from the wild-type protein. The latter result indicates that either the recognition signal is not completely disrupted by the deletion or the PhoE protein contains additional signals. This is consistent with the observation that a mutant protein lacking the C-terminal Phe can still be assembled in vivo if the expression level is low (de Cock et al., 1997). The existence of a C-terminal recognition signal in PhoE was further confirmed by using synthetic peptides (Robert et al., 2006). Like the full-length PhoE, a synthetic peptide comprising its last 12 aa stimulated the conductivity of the BamA channels, while control peptides did not.
Omp85/BamA was predicted to consist of two domains, an N-terminal periplasmic domain and a C-terminal domain embedded as a b-barrel into the outer membrane (Fig. 3a and b) (Voulhoux et al., 2003). The periplasmic part was predicted to consist of five repeated domains, named polypeptide transport-associated (POTRA) domains (Sánchez-Pulido et al., 2003). Considering their periplasmic location, it seems likely that these POTRA domains interact with the substrate OMPs. The structures of BamA fragments containing several POTRA domains have been solved by X-ray crystallography (Kim et al., 2007; Gatzeva-Topalova et al., 2008) and NMR spectroscopy (Knowles et al., 2008). Although these domains show only very limited sequence identity, they have a common structure consisting of a three-stranded b-sheet overlaid with two a-helices. It was suggested that these POTRA domains interact with the substrate OMPs and/or with the accessory lipoproteins of the Bam complex by b-augmentation (Kim et al., 2007). NMR experiments indeed revealed that several peptides derived from porin PhoE could weakly bind to either side of the b-sheets in the POTRA domains (Knowles et al., 2008). Unfortunately, a C-terminal fragment of PhoE could not be tested in those experiments because of solubility problems.
OMP biogenesis in mitochondria
Other than in the outer membranes of Gram-negative bacteria, integral b-barrel membrane proteins are also found in the outer membranes of mitochondria and chloroplasts, probably reflecting the endosymbiont origin of these eukaryotic cell organelles. It should be noted that these organelles also contain a-helical OMPs (Walther & Rapaport, 2009), which will not be discussed further here. Soon after the discovery that Omp85/BamA is an essential component of the bacterial OMP assembly machinery (Voulhoux et al., 2003), several research groups identified a homologue in mitochondria and showed that it is involved in the assembly of b-barrel proteins into the mitochondrial outer membrane (Gentle et al., 2004;Kozjak et al., 2003;Paschen et al., 2003). This protein was named either Omp85, Sam50 or Tob55, and will be referred to from here on as Tob55. Tob55 was shown to be part of a complex (called the TOB or SAM complex) with at least two other proteins, which are known under various names, i.e. Tob38/Sam35 and Mas37/Tom37/Sam37 (Fig. 3c) Ishikawa et al., 2004;Milenkovic et al., 2004;Waizenegger et al., 2004). These accessory components are exposed to the cytosolic side of the outer membrane and show no homology to the lipoprotein components of the bacterial Bam complex.
Much of the genome of the endosymbiont that evolved into mitochondria has been transferred to the nucleus. Consequently, most mitochondrial proteins are synthesized in the cytoplasm of the eukaryotic cell from where they are transported into the mitochondria via the TOM complex in the outer membrane and the TIM complexes in the inner membrane (Chacinska et al., 2009). Also the precursors of bbarrel OMPs are synthesized in the cytoplasm from where they have direct access to the mitochondrial outer membrane. Nevertheless, these proteins are first imported via the TOM complex into the intermembrane space of the mitochondria (i.e. the equivalent of the bacterial periplasm) (Rapaport & Neupert, 1999;Krimmer et al., 2001;Model et al., 2001) to approach the assembly machinery from the same side as occurs in bacteria (Fig. 3c). This extension of the biogenesis route is consistent with an evolutionarily conserved assembly mechanism.
Mitochondrial b-barrel OMPs must carry a signal that is recognized by the assembly machinery in the outer membrane. This signal, termed the b-signal, was recently identified by Kutik et al. (2008). Like the C-terminal signature sequence in bacterial OMPs described above, this b-signal is located near the C terminus of the OMPs. However, it is never located at the very end and is always followed by another 1-28 residues. As shown in Table 1, the bacterial and mitochondrial signals, although not identical, appear to be rather similar and are probably evolutionarily related. Curiously, whereas the bacterial OMP signature sequence is recognized by the conserved central component BamA/Omp85 of the assembly machinery (Robert et al., 2006), the b-signal in the mitochondrial OMPs appears to be recognized by the accessory component Tob38 (Kutik et al., 2008). It should be noted, however, that the N-terminal POTRA domain of Tob55 has also been reported to interact with substrate proteins (Habib et al., 2007).
Comparison of b-barrel OMP assembly in bacteria and mitochondria
Comparison of b-barrel OMP assembly in bacteria and mitochondria reveals several similarities but also considerable differences. Firstly, the substrates in both cases are b-barrel proteins. However, while all bacterial OMPs appear to contain an even number of b-strands (Koebnik et al., 2000), the only mitochondrial b-barrel OMP of which the structure has been solved, i.e. the voltage-dependent anion channel VDAC or mitochondrial porin, is a 19-stranded b-barrel (Bayrhuber et al., 2008; Hiller et al., 2008; Ujwal et al., 2008). It is interesting to note that a mutant form of porin PhoE lacking the first N-terminal b-strand has been reported to be functionally assembled, albeit inefficiently, into the E. coli outer membrane (Bosch et al., 1988), demonstrating that the bacterial Bam complex can deal with b-barrels with an odd number of strands. Secondly, the OMP assembly machineries contain a conserved central component, Omp85/BamA in bacteria and Tob55 in mitochondria. However, Tob55 is considerably smaller than its bacterial homologues. It contains only a single POTRA domain at its N terminus (Fig. 3c), while the bacterial proteins contain five of these domains (Fig. 3a and b). A deletion analysis in N. meningitidis, however, revealed that a mutant expressing an Omp85 variant with only a single POTRA domain was viable and assembled OMPs into the outer membrane with only slightly decreased efficiency in the case of larger OMPs (Bos et al., 2007b). Thirdly, the bacterial and mitochondrial machineries contain several accessory components, which, however, show no mutual homology. Fourthly, signals for recognition by the assembly machineries have been identified near the C termini of both bacterial and mitochondrial b-barrel OMPs. These signals are similar but not completely identical. Moreover, they are recognized by different components of the assembly machineries, i.e. by Omp85/BamA in the bacterial system and by Tob38 in the mitochondrial system.
Table 1. Comparison of the b-signal of mitochondrial OMPs and the C-terminal signature sequence of bacterial OMPs, which are recognized by their respective OMP assembly machineries. The b-signal of the mitochondrial porin VDAC from Neurospora crassa and the signature sequence of the bacterial porin PhoE from E. coli are included in the comparison as examples. The one-letter code for amino acids is used. X, any amino acid; w, hydrophobic residue; p, polar residue; n, 1-28 residues. The mitochondrial b-signal is given in bold type. [Only the column header "Sequence" survived extraction; the sequence entries of the table are not reproduced here.]
A mitochondrial b-barrel OMP can be assembled into the bacterial outer membrane
The similarities between the bacterial and mitochondrial b-barrel OMPs and their assembly machineries suggest a common evolutionary origin. However, as described above, there are also considerable differences between the systems. Therefore, it was of interest to determine whether a mitochondrial OMP can be assembled into the bacterial outer membrane. To test this possibility, VDAC of Neurospora crassa was genetically fused to a signal sequence to mediate transport across the bacterial inner membrane via the Sec system, and the construct was expressed in E. coli (Walther et al., 2010). Cell fractionations, protease-sensitivity assays and immunofluorescence microscopy showed that VDAC was assembled into the bacterial outer membrane where it formed functional pores. Furthermore, assembly into the outer membrane was dependent on the C-terminal b-signal in VDAC and on the expression of a functional E. coli BamA protein (Walther et al., 2010). These results demonstrated that the bacterial OMP assembly machinery can still deal with the b-barrel OMPs that evolved in mitochondria.
Bacterial OMPs can be assembled into the mitochondrial outer membrane
It was also of interest to determine whether the b-barrel OMP assembly machinery that evolved in mitochondria is still able to handle bacterial OMPs. This question was more complicated to address, since b-barrel OMPs in mitochondria first have to be taken up via the TOM complex before reaching the TOB complex from the right side of the membrane (Fig. 3c). The mitochondrial b-barrel OMPs do not contain a cleavable signal for their targeting to mitochondria but rather an uncleavable internal signal. The nature of this signal has not been characterized and may be dispersed over the entire polypeptide rather than being confined to a discrete segment (Walther & Rapaport, 2009). Such a signal would be difficult to fuse genetically to a bacterial OMP. However, it was also proposed that bbarrel-specific structural elements are recognized by the mitochondrial import machinery (Walther & Rapaport, 2009), in which case, bacterial OMPs might also be recognized. To test this possibility, porin PhoE of E. coli was expressed in Saccharomyces cerevisiae without its signal sequence, which would presumably lead the protein to the endoplasmic reticulum (Walther et al., 2009a). The protein was found to accumulate in the mitochondria of the yeast in a TOM-dependent manner. Similar results were obtained for a diverse set of other bacterial OMPs. Thus, apparently, the bacterial OMPs contain the appropriate signals to be taken up into mitochondria via the TOM complex. These results indicate that no eukaryote-specific import signals were required to evolve in mitochondrial bbarrel OMPs to ensure their import into mitochondria when, during endosymbiont evolution, their structural genes were transferred to the nucleus.
The accumulation of PhoE in the mitochondria was also dependent on a functional TOB complex. The protein was inserted into the mitochondrial outer membrane in its native trimeric state and it was detectable at the surface of intact mitochondria with PhoE-specific monoclonal antibodies that recognize conformational epitopes (Walther et al., 2009a). The efficiency of the assembly into the mitochondrial outer membrane was dependent on the expression level; at low expression levels, all PhoE detected was correctly assembled into the trimeric configuration, whereas at high expression levels considerable amounts of the protein also accumulated as aggregates, presumably in the mitochondrial intermembrane space (Walther et al., 2009a). Thus, apparently, the capacity of the TOB complex to deal with the heterologous substrate protein is limited. Assembly of PhoE into the mitochondrial outer membrane was also dependent on its C-terminal signature sequence; when the mutant PhoE protein lacking the C-terminal Phe was expressed in S. cerevisiae, it was taken up into the mitochondria but it was not assembled into the outer membrane in its native trimeric state (Walther et al., 2009a). Thus, collectively, bacterial OMPs can be assembled into the mitochondrial outer membrane and this assembly depends on their C-terminal signature sequence and on the mitochondrial TOM and TOB complexes.
Conclusions
In recent years, much progress has been made in studies on the biogenesis of bacterial OMPs. This progress is mostly related to the identification of the components of the machinery that assemble these proteins into the outer membrane and also on the resolution of the structures of the periplasmic chaperones involved, some in complex with their substrate OMPs. Progress was also stimulated by the discovery of a similar machinery for the insertion of bbarrel OMPs into the mitochondrial outer membrane. The basic mechanism of OMP assembly is conserved to such an extent that a mitochondrial OMP can be assembled in vivo into the bacterial outer membrane, and vice versa, bacterial OMPs can be assembled into the mitochondrial outer membrane. It is likely that a similar mechanism operates in chloroplasts (Hsu & Inoue, 2009). Thus, results in these fields will be mutually profitable. Mechanistic insight into the assembly process and the function of the individual components of either of these systems is still very limited.
Much progress is to be expected in the near future from the resolution of the structures of the components or, perhaps, of the entire machineries and from the development of reconstituted systems with purified components to study the assembly process in vitro.
|
v3-fos-license
|
2021-05-07T00:02:55.144Z
|
2020-12-29T00:00:00.000
|
233793608
|
{
"extfieldsofstudy": [
"Geology"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://www.mdpi.com/2076-3263/11/3/124/pdf",
"pdf_hash": "6767335f82d9d3f36d483c531dcd20ceafe0e3fe",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44311",
"s2fieldsofstudy": [
"Geology"
],
"sha1": "9cf560c1e92c78a2ee200b23f70a2113f14b21a9",
"year": 2020
}
|
pes2o/s2orc
|
The Gavorrano Monzogranite (Northern Apennines): An Updated Review of Host Rock Protoliths, Thermal Metamorphism and Tectonic Setting
We review and refine the geological setting of an area located near the Tyrrhenian sea coast, in the inner zone of the Northern Apennines (southern Tuscany), where a Neogene monzogranite body (estimated at about 3 km long, 1.5 km wide, and 0.7 km thick) was emplaced during the early Pliocene. This magmatic intrusion, known as the Gavorrano pluton, is partially exposed in a ridge bounded by regional faults delimiting broad structural depressions. A widespread circulation of geothermal fluids accompanied the cooling of the magmatic body and gave rise to an extensive Fe-ore deposit (mainly pyrite) exploited during the past century. The tectonic setting which favoured the emplacement and exhumation of the Gavorrano pluton is strongly debated, with implications for the comprehension of the Neogene evolution of this sector of the inner Northern Apennines. Data from new fieldwork, combined with information from the mining activity, have been used to refine the geological setting of the whole crustal sector where the Gavorrano monzogranite was emplaced and exhumed. Our review, supported by new palynological, petrological and structural data, points out that: (i) the age of the Palaeozoic phyllite (host rocks) is middle-late Permian, thus younger than previously described (i.e., pre-Carboniferous); (ii) the conditions at which the metamorphic aureole developed are estimated at a temperature of c. 660 °C and at a depth of less than c. 6 km; (iii) the tectonic evolution which determined the emplacement and exhumation of the monzogranite is constrained within a transfer zone, in the frame of the extensional tectonics affecting the area continuously since the Miocene.
Introduction
The inner Northern Apennines (i.e., northern Tyrrhenian Sea and southern Tuscany), after having experienced HP/LT metamorphism during the late Oligocene-early Miocene [1-3], have been affected by extension since the Burdigalian [4]. The clearest evidence of this process is the opening of the Tyrrhenian Basin [5] and the present crustal and lithospheric thicknesses of 20-26 and 30-50 km, respectively [6-8]. Extension favoured partial melting in the lower crust and in the mantle, thus generating crustal and hybrid magmas (Tuscan Magmatic Province). The Gavorrano pluton is an example of this process [25]. This pluton is a laccolith of about 3 km³ [26], dated at 4.9 Ma [27] and partially exposed a few kilometres to the east of the Tyrrhenian sea coast (Figure 1). It consists of cordierite-bearing monzogranite ([28] and references therein) with K-feldspar phenocrysts (up to 10 cm long), intruded by tourmaline-rich microgranite and by porphyritic and aplitic dykes [25]. This magmatic intrusion and its contact aureole were mined from the last decades of the 19th century up to 1981 to exploit a sulphide (mainly pyrite) ore deposit, mostly occurring at the boundary between the igneous and host rocks and in fault zones [29]. Although numerous studies have been dedicated to this pluton, with the aim of reconstructing the genesis and setting of the ore deposits (e.g., [26,29-31]), contrasting interpretations still remain with regard to: (i) the nature and age of the quartzitic-phyllite host rocks, contrastingly referred to the Permian [32] or the pre-Carboniferous [33], with different fallouts on the palaeogeography and context in which the overlying Triassic succession took place; (ii) the thermal conditions across the contact aureole and the related P-T peak conditions in the contact aureole, pointing to significantly different emplacement depths (cf. [25,26,34]); (iii) the tectonic evolution of the Gavorrano area, which has been explained in an extensional (e.g., [25,29]), transtensional [35] or compressional framework [34,36]. The compressional setting was also invoked by [37,38] to explain the emplacement of the Gavorrano pluton, assumed to be contemporaneous with Pliocene regional thrusts and associated roof-anticlines. In this scenario, these authors considered the Gavorrano pluton a key example of pluton emplacement in a compressional regime, basically active since the Cretaceous in the inner Northern Apennines and the northern Tyrrhenian Sea.
In this paper, the state of the art on these themes and the contrasting interpretations and hypotheses are discussed in the light of new datasets. As main conclusions: (i) we document the Permian age of the quartzitic-phyllite host rocks; (ii) we point to a peak temperature of c. 660 °C at a maximum pressure of 150 MPa for the metamorphic conditions in the contact aureole; and (iii) we reconstruct the deformation within that sector of a Neogene regional transfer zone which controlled the emplacement and exhumation of the Gavorrano pluton in the extensional framework characterizing the inner Northern Apennines.
Geological Outline
The Gavorrano pluton intruded the lower part of the Tuscan Unit, following the main foliations and lithological boundaries (Figure 2) in the Palaeozoic-Triassic quartzite and phyllite, the Triassic metacarbonate and the late Triassic evaporite successions [25,29,39]. These rocks therefore experienced LP metamorphism, which makes the age attribution of the quartzitic-phyllite host rocks particularly problematic; they have been contrastingly referred to the Permian [32] or the pre-Carboniferous [33], with different implications for the palaeogeography and the context in which the overlying Triassic succession was deposited.
The initial studies on the intrusive rocks were carried out by [40-44]. Marocchi [44] was the first to describe the Gavorrano granite as a magmatic complex formed by a porphyritic granite, a tourmaline-bearing microgranite and a mica-bearing microgranite. Martelli [45] presented a geochemical and crystallographic study of both the magmatic rocks and the pyrite, describing for the first time the habitus and morphology of the pyrite and K-feldspar. However, the most complete paper dealing with the Gavorrano intrusion was published by [25], who defined the porphyritic granite as a quartz-monzonite crossed by tourmaline-rich microgranite and aplitic dykes. Barberi et al. [46] expanded the study of the pluton, in the meantime dated at 4.9 Ma by K/Ar radiometric data [27].
The laccolithic shape of the magmatic intrusion was constrained by data from the underground mining activity [29] and finally defined by [47] as a body with a maximum length of 3 km, a width of 1.7 km and a thickness of 0.7 km. The emplacement depth of the Gavorrano granite was estimated by [26] at a maximum of 2-2.5 km, corresponding to a lithostatic pressure lower than 100 MPa. In contrast, [47] indicate a maximum depth ranging between 4 and 5 km, corresponding to a lithostatic pressure lower than 200 MPa. Magma cooling was accompanied by a significant hydrothermal process that led to the pyrite ore deposits. Mining activity was carried out near the partially exposed monzogranite (Figure 3a,b).
Today, the intrusive rocks are partially exposed at the surface or accessible at shallow depth through tunnels dug during the mining activity (Figure 3a,b). Their exhumation was controlled by normal faults, well constrained in terms of geometry and displacement by means of surface and mining data [26,29,35].
The main faults are the Gavorrano (NNW-SSE striking) and Monticello (N-S striking) faults, delimiting the western and eastern margins of the pluton, respectively (Figure 3a,b). The Gavorrano Fault is described as a west-dipping, high-angle (60-70°) normal fault with an arcuate geometry [29]; its total offset exceeds 600 m. The Monticello Fault is a middle-angle (35-50°) normal fault dipping to the east and characterized by a total offset of about 1000 m [29]. Mining data highlight that the Gavorrano and Monticello Faults intersect each other near the Ravi village (Figure 3a). Both faults are mineralized, although with different hydrothermal parageneses: the Gavorrano Fault hosts pyrite ore bodies associated with minor galena, chalcopyrite and blende [29], whereas the Monticello Fault was mineralized by a hydrothermal paragenesis made up of quartz, barite, celestine, pyrite/marcasite, stibnite, fluorite and orpiment/realgar (Figure 5).
The northern margin of the Gavorrano magmatic body is delimited by a SW-NE trending fault system, interpreted by some authors as the continuation of the Gavorrano Fault (e.g., [26,29]). Despite its significant role, this SW-NE trending fault is not mentioned by [47,34,36,38], although its occurrence is well documented by the mining data from the Rigoloccio mine (Figures 3 and 5) and described in several previously published geological maps and structural sketches [26,29,35].
Another N-S striking fault, named the Palaie Fault (Figure 3a), has been considered associated with the Gavorrano Fault, being almost parallel to it (cf. [29] with references therein). This structure delimits the western slope of Monte Calvo [29] and was not reached by mining exploration. Nevertheless, this fault and the fault system delimiting the monzogranite to the east were investigated by [35], who presented a structural and kinematic dataset documenting dominant strike-slip to oblique-slip kinematics. On the other hand, [47] account for a normal component of the Palaie, Gavorrano and Monticello Faults, whereas [34] hypothesize reverse/transpressive kinematics at least for the Palaie Fault. This view was later expanded by [36], who reported two additional NW-SE trending faults, up to 2 km long (named the Monte Calvo and Rigoloccio Faults: Figure 2 in [36]), interpreted as cartographic-scale reverse faults.
Age of Hosting Rocks
Protoliths of the LP-metamorphic rocks forming the contact aureole, consisting of metacarbonate and metapelite, are referred to the Tuscan Unit [57]. Dallegno et al. [26], Lotti [49], De Launay and Gites [52] and Lotti [43] interpreted the dominantly metacarbonate succession exposed on the NW side of the magmatic intrusion, and exploited at depth, as part of the Late Triassic succession (i.e., the Burano and Calcare a Rhaetavicula contorta formations; "black limestone" in [77]). Part of this succession, tunneled in the Gavorrano mine, was considered by [25] as the transition from the late Triassic carbonate/evaporite succession to the Triassic metasiliciclastic succession of the Verrucano Group; it was later defined as the Tocchi Fm [78,79], never documented before in the Gavorrano area. Marinelli [25], Lotti [49], De Wukerslooth [57] and Lotti [43] referred the andalusite-bearing metapelite exposed north and south of the monzogranite (Figure 2) to the Palaeozoic succession underlying the late Triassic carbonate one. Marinelli [25] and Arisi Rota and Vighi [29] considered this succession as part of the Filladi di Boccheggiano Fm, attributed to the Permian or to the pre-Sudetian by [32] and [33], respectively. Dallegno et al. [26] agreed with the previously mentioned authors about the interpretation of the outcrops exposed south of the monzogranite, near the Ravi village (Figure 3a); however, they proposed an alternative hypothesis for the northern exposure (north of the Gavorrano village, Figure 3a), where the exposed pelitic hornfels and metaquartzite (mainly consisting of metasandstone and quartz-metaconglomerate) were related to the Triassic Verrucano Group [33,80], on the basis of their textural and compositional features, as well as the occurrence of tourmalinolite and red porphyry clasts. In order to better constrain the age of this debated metapelite succession, we analysed key samples from: (i) the exposures along the main road near the Ravi village, and (ii) the mining tunnel named the Il Santo gallery, not far from the previous exposure (Figure 3a). Since LP metamorphism reasonably obliterated the fossil content, making any conventional age determination impossible, we studied the palynological content, a useful approach because the wall of sporomorphs consists of sporopollenin, a biopolymer of complex and incompletely known structure that is highly resistant to elevated temperatures (e.g., [81][82][83][84][85]) and provides good chronological resolution (e.g., [86]). We collected key samples of spotted black metapelite and phyllitic quartzite with high organic matter content. In particular, two samples were collected from the exposures north of the Ravi village (Rav 1 and Rav 2) and three samples (GSA 1-3) were collected in the Il Santo gallery of the Ravi mine (Figure 3a). Samples were treated with HCl (37%) and HF (50%) to destroy the carbonate and siliciclastic components. Boiling HCl (30%) was then used to remove the insoluble fluorosilicates. The organic residue was sieved with a 20 μm mesh. The residue was treated repeatedly with Schulze's solution, because the strongly elevated degree of thermal alteration prevented the identification of the black (graphitized?) palynomorphs. Light microscope observations were made on palynological slides using a Leica DM1000 microscope with the differential interference contrast technique in transmitted light.
Images were captured using a digital camera connected to the microscope and corrected for brightness, contrast and colour using the open-source GIMP software. Palynological slides are stored at the Sedimentary Organic Matter Laboratory of the Department of Physics and Geology, University of Perugia, Italy. Samples GSA 1-3 proved almost barren in terms of palynomorph content. The yield of these samples mainly consists of large opaque phytoclasts, such as inertinite (completely oxidised ligneous fragments), and some indeterminate black organic microfossils. On the contrary, in samples Rav 1 and Rav 2, although the low preservation grade prevents the recognition of most microfloristic elements, some sporomorphs were identified (Figure 6).
The Contact Aureole
The emplacement of the Gavorrano pluton produced LP metamorphism in the host rocks, resulting in a narrow contact aureole with a thickness of 200-300 m [25,26]. LP metamorphism was superimposed on the regional metamorphism that affected the pre-evaporitic metamorphic "basement", mainly represented by the Palaeozoic-Triassic phyllitic-quartzite units (i.e., dominantly pelitic successions), and on the late Triassic carbonate rocks, producing hornfelses with different mineral assemblages, as first described by [25]. Concerning the pelitic rocks, [25,26] document a mineralogical assemblage made up of quartz + muscovite + K-feldspar + andalusite, and of chlorite + biotite + cordierite in Mg- and Fe-bearing phyllite. In contrast, [47] describe quartz + plagioclase + K-feldspar + andalusite, together with blasts replaced by fine-grained white mica that they interpret as relicts of cordierite. [25] also describes corundum and green spinel, replaced by biotite and plagioclase, found within xenoliths collected in the Gavorrano mine. In the carbonate rocks, instead, calc-silicate hornfels, partially replaced by skarn, shows a mineral assemblage mainly formed by garnet + epidote + spinel + wollastonite + diopside + forsterite + scapolite + quartz + calcite + vesuvianite [25,26]. At depth, the contact between granite and hornfels was described at −50 m, −200 m and −250 m [26,46], where wollastonite + calcite + quartz and diopside + forsterite + calcite mineral assemblages, with local levels enriched in garnet + vesuvianite + scapolite, have been found [46]. In the deepest levels of the Gavorrano mine (−200 m b.s.l.), [26] document dolomitic marble characterised by centimetric calcite and dolomite crystals, intimately associated with calc-silicate hornfels. Similarly, at the contact with the monzogranite, the same authors describe 1-2 m thick mineral assemblages consisting of: (i) diopside + garnet + dolomite + calcite approaching the hornfels, and (ii) epidote + tremolite + diopside + scapolite + calcite + garnet approaching the monzogranite. Diopside + tremolite veins, classified as replacement skarn [103], have also been documented cutting the hornfels; similarly, narrow bands of phlogopite + tremolite (± actinolite) composition have been described at the boundary between hornfels and skarn. No data are available on the mineralogical assemblage of the pelitic rocks at the depths where these observations were carried out. LP metamorphism was followed by a subsequent hydrothermal event which produced, in addition to the Fe-ore deposits [26,29,65], the alteration of forsterite and diopside into serpentine, tremolite, talc and chlorite, and the formation of veins filled by quartz + adularia + epidote + sulphides ± calcite ± albite ± tremolite, indicating temperatures of about 250-300 °C [26]. A maximum temperature of about 175 °C was tentatively proposed for the last hydrothermal circulation by [104], who analysed goethite and clay minerals at the Rigoloccio mine (Figure 3a) derived from the hydrothermal alteration of the monzogranite and of the pyrite body.
We have integrated the existing dataset by analysing key samples of pelitic and carbonate hornfels from outcrops near the Ravi mine (Figure 3a) and from underground. The latter samples were collected (i) at the −50 m b.s.l. level of the Gavorrano mine and (ii) in the mining dump, possibly deriving from the −200 m level of the Gavorrano mine. On the whole, our data agree with those reported by previous authors and provide additional information on the pelitic hornfels, particularly from the deep part of the Gavorrano mine.
The analysed pelitic and semipelitic rocks grade from spotted schist to hornfels (Figure 7). A compositional layering is generally recognisable, highlighted by an alternation of quartz- and mica-rich levels. In several cases, an intense deformation is observed in the form of serrated microfolds and winged δ-type porphyroclasts (Figure 7a). In the spotted schist, the mineral assemblage is typically made up of quartz + biotite + muscovite + andalusite + tourmaline. Tiny elliptical cloudy spots are observed, probably derived from original cordierite (Figure 7b,c). In the hornfels from the deep level of the Gavorrano mine, muscovite-out conditions were reached, as testified by the presence of K-feldspar and, locally, of corundum. Quartz crystals display variable grain size and are commonly characterised by polygonal shapes. In some cases, quartz shows lobate grain boundaries suggesting that dynamic recrystallization took place. Biotite flakes increase in abundance from spotted schist to hornfels, where they show an orange-brown colour when oriented parallel to the lower polarizer. Andalusite porphyroblasts commonly show euhedral habit, with elongated and square diamond shapes (Figure 7d-f). The latter usually contain the cross-shaped dark inclusion pattern typical of chiastolite (Figure 7d,e), as also described by [25]. Corundum is abundant and well recognisable at the microscope scale in the form of spots made up of isolated crystals or aggregates within biotite-rich levels devoid of quartz (Figure 7g). It shows a polygonal shape and a corona made up of K-feldspar and rare muscovite ± rutile (Figure 7h,j). It often displays a pale blue colour typical of the sapphire variety. Tourmaline is zoned, with brown to cyan colours, is of dravite type and is mostly found within biotite-rich levels (Figure 7k). Among the accessory phases, zircon and opaque minerals are always present, whereas rutile is found in corundum-bearing hornfels.
The analysed carbonate rocks collected in the Gavorrano mine (level −50 m b.s.l.) consist of marbles with variable grain size. In most cases, they contain olivine (Figure 8a,b) without diopside, suggesting that they derive from a silica-poor carbonate protolith. Locally, in the fine-grained type, a polygonal fabric of calcite can be recognised, indicating static recrystallization (Figure 8a). In some cases, olivine-rich levels show diffuse serpentinization, with few olivine relicts still present (Figure 8c,d), explained by [25,26] as the effect of a later hydrothermal fluid flowing through the thermal aureole. Some considerations can be made on the peak P-T conditions reached in the thermal aureole. In the pelitic hornfels, which record the maximum temperature in the contact aureole, muscovite-out conditions were reached through the reaction:

Ms + Qtz = And + Kfs + H2O (1)

Alternatively, in silica-poor domains, the genesis of corundum could be promoted by the reaction:

Ms = Crn + Kfs + H2O (2)

After muscovite disappearance, corundum could also be produced through the reaction proposed by Pattison, which also produces cordierite. In the analysed samples, however, there is no evidence for the simultaneous blastesis of corundum and cordierite. Thus, reaction (2) is preferred for the genesis of corundum.
In order to constrain the P-T conditions of the contact metamorphism, it is useful to consider a simple P-T grid. The diagram in Figure 9 shows, in addition to reaction curves (1) and (2), the wet solidus curve for granite and the andalusite-sillimanite equilibrium line. The absence in the hornfels of sillimanite and of microstructures indicative of partial melting indicates that the andalusite-sillimanite equilibrium line and the granite solidus curve were not crossed during the heating phase. On the other hand, the presence of corundum allows the metamorphic peak to be constrained beyond reaction (2), within the grey area. A maximum limit for the pressure, provided by the intersection of reaction (2) with the andalusite-sillimanite equilibrium, is c. 170 MPa, corresponding to a temperature of c. 640 °C. At lower pressures, higher temperatures for the thermal peak are possible.
Quantitative estimates of the temperature were attempted using the Ti-in-biotite thermometer of Wu et al. [108]. This thermometer was calibrated for pelitic rocks containing a Ti-rich phase, such as ilmenite or rutile, at pressures higher than 100 MPa, and is thus appropriate for the present case. On the basis of 7 biotite analyses from a corundum-bearing hornfels sample (Table A1), a mean value of c. 660 °C was obtained at a pressure of 170 MPa, and of c. 650 °C at a pressure of 100 MPa. A check on the compatibility of these numerical results with the P-T extent of the grey area in the diagram of Figure 9 suggests a pressure lower than c. 150 MPa, corresponding to a depth of less than c. 6 km, assuming an average density of 2650 kg/m3 for the upper crust. However, considering the error of the thermometer, this latter P limit should be verified through more refined petrological methods and/or geological constraints.

Figure 9. P-T grid from [109], adopted here to constrain the conditions of the peak of contact metamorphism. Muscovite breakdown curves at PH2O = Ptotal are from [110], the granite wet solidus curve is from [111] and the andalusite-sillimanite equilibrium line from [112][113][114]. The grey area indicates the peak P-T region compatible with the presence of andalusite + K-feldspar and, in silica-poor domains, of corundum + K-feldspar. Point (a) indicates the maximum estimate for pressure on the basis of the corundum + K-feldspar presence, resulting in a value of 170 MPa. The dotted line connects the points related to temperature estimates by the Ti-in-biotite thermometer at 170 and 100 MPa, respectively. Point (b) indicates the maximum estimate for pressure on the basis of the Ti-in-biotite thermometer, resulting in a value of c. 150 MPa.
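To make the pressure-depth conversion used above explicit, the following is a minimal sketch (not the authors' calculation) assuming simple lithostatic loading, P = ρgh, with the average upper-crustal density of 2650 kg/m3 quoted in the text and g = 9.81 m/s2; the function names and example pressures are illustrative only.

```python
# Minimal sketch: lithostatic pressure <-> depth, assuming P = rho * g * h
# with the average upper-crustal density of 2650 kg/m^3 quoted in the text.

RHO = 2650.0   # kg/m^3, assumed average upper-crust density (from the text)
G = 9.81       # m/s^2

def pressure_to_depth_km(p_mpa: float, rho: float = RHO, g: float = G) -> float:
    """Depth (km) corresponding to a lithostatic pressure given in MPa."""
    return (p_mpa * 1e6) / (rho * g) / 1000.0

def depth_to_pressure_mpa(depth_km: float, rho: float = RHO, g: float = G) -> float:
    """Lithostatic pressure (MPa) at a given depth in km."""
    return rho * g * (depth_km * 1000.0) / 1e6

if __name__ == "__main__":
    for p in (100, 150, 170, 200):
        print(f"{p} MPa  ->  {pressure_to_depth_km(p):.1f} km")
    # 150 MPa -> ~5.8 km, consistent with the 'lower than c. 6 km' estimate above.
```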
Structural and Kinematic Data
The geological setting has already been reconstructed thanks to the large amount of mining data reported in several papers (e.g., [25,26,29,65]). Nevertheless, contrasting hypotheses are still proposed by different authors on the tectonic evolution that accompanied the pluton emplacement and its exhumation (cf. [26,35,36,115]).
In order to contribute to this issue, existing mining documents and a new dataset of structural and kinematic data have been integrated. Figure 3a shows the location of the stations where the structural analysis was carried out. The results are shown in the stereographic diagrams reported in Annex 1.
Both cartographic and outcrop-scale evidence highlights superposed faulting events that can be categorized into: (i) low- to middle-angle (<50° dip) normal faults, affecting both the granite and the carbonate succession; (ii) high-angle (>50° dip) strike-slip faults coexisting with the low- to middle-angle normal faults; and (iii) high-angle normal faults displacing the previously formed structures (Figure 3a,b).
As regards the low- to middle-angle faults, the best example is the Monticello Fault (Figures 3a and 5), which decouples the monzogranite from the overlying sedimentary cover through an almost ten-metre-thick mineralized cataclastic zone, as well documented by the mining data (Figure 5a). The consideration of the mining data therefore changes the view of the Monticello Fault, previously interpreted as a high-angle normal fault parallel to the Gavorrano Fault, although dipping in the opposite direction, and delimiting the monzogranite to the east [26,29,35,47].
In light of this data integration, the Monticello Fault assumes the role of a pre-existing fault that decoupled the magmatic intrusion from the hosting rocks and contributed to the exhumation of the monzogranite. This structure was later affected by high-angle faults, to which the Gavorrano Fault belongs (Figure 3a,b). It is worth noting that, on the basis of the mining data from the Ravi mine (located in the southern part of the Monticello Fault), Marinelli [25] accounts for a shear zone separating the magmatic intrusion from the hosting rocks, similar to what is observed along the Monticello Fault.
Low-angle faults affecting the carbonate succession (Figures 3a and 10) also occur in the hanging wall of the Monticello Fault (Figure 3a). These are well exposed in the abandoned quarries on the northern slope of the Monte Calvo area and are arranged in subparallel and anastomosed segments that define decametre-thick sheared and delaminated volumes, with conjugate fault segments forming lozenge-shaped geometries and metre- to decametre-scale extensional horses (Figures 11 and 12). Fault segments are characterized by kinematic indicators consisting of calcite fibres and steps, indicating a normal, mostly top-to-the-E/NE sense of shear (Figure 11, Figure A1). All these data contrast with the kinematic interpretation proposed by [36], although conducted on the same outcrops (cf. Figure 7a,b in [36]). These authors, in fact, support a top-to-the-west reverse kinematics of these faults, notwithstanding the fact that the kinematic indicators clearly indicate a normal movement (Figure 11b,f). Furthermore, it is worth underlining that this kinematics is in agreement with the data collected in the whole Gavorrano area (Figure A1) and with the geometrical setting of the low-angle faults, as visible in the quarry exposures (Figure 12). Low-angle faults affecting the monzogranite (Figure 13a,b) have also been recognized. These show striated slip surfaces (Figure 13c) bounded by a centimetre-thick core zone with ultra-comminuted grains (Figure 13d) and by centimetre-thick levels of foliated monzogranite showing S-C structures with a top-to-the-west sense of shear (Figure 13e,f). Although fault exposures in the granite are limited, their setting defines a lozenge-shaped geometry (Figure 13b), thus explaining the occurrence of both top-to-the-E/ENE (dominant) and top-to-the-W/WSW senses of shear on their slip planes.
Concerning the high-angle faults, N-S and SW-NE striking strike-slip faults occur in the whole area (Figure 3a). The best exposures (especially for the N-S striking faults) were recognized in the quarries north of Monte Calvo (Figure 10) and in the western part of the study area (i.e., the Palaie Fault, Figure 3a). In the abandoned quarries, these faults define decametre-thick vertical brittle shear zones (Figure 14) formed by sub-parallel and conjugate fault planes (Figure 14a-c), surrounded by well-developed damage zones. Left-lateral strike-slip to oblique-slip kinematics is suggested by indicators locally preserved on the slip surfaces, consisting of calcite slicken-fibres and steps (Figure 14d,e). In some cases, syn-kinematic cm- to dm-thick banded calcite veins formed along the fault planes or in extensional jogs (Figure 14f). This attests to the role of such faults in controlling the hydrothermal fluid paths from the late magmatic events onwards, at least. It is in fact documented by the several S-N and SW-NE oriented microgranite dykes intruding both the monzogranite and the hosting rocks along fault zones, as observed in the outcrops (Figure 15) and in the underground mining data (Figure 4). Thus, a local strike-slip regime is supposed to have controlled the deformation in the Gavorrano area, and probably the pluton emplacement. Although the interplay between the low-angle normal faults and the S-N to SW-NE striking strike-slip faults has not been directly documented in the field, it is reasonable to assume that the transcurrent faults were active contemporaneously with the low-angle normal faults, since both fault systems are affected by syn-tectonic hydrothermal circulation. Their contemporaneity is also supported by the inversion of the kinematic data collected on the fault-slip surfaces of both the low-angle normal faults and the strike-slip faults: it highlights a strong kinematic compatibility, as shown by the orientation of the main kinematic axes (Figure 16a-c). We can therefore assume that these faults were active under a common stress field: in this view, the low-angle normal faults developed as a consequence of the crustal thinning, which triggered magmatism and favoured the development of the SW-NE striking, km-thick, sub-vertical brittle shear zone (i.e., transfer zone: [116][117][118]) of which the Gavorrano area is a part. In the context of the deformation induced by a transfer zone, the N-S striking left-lateral and the SW-NE striking right-lateral strike-slip faults are framed in the same setting, as indicated by their kinematic compatibility (Figure 16c,d). Consequently, these are interpreted as minor faults linking the SW-NE striking main structures within a common left-lateral strike-slip shear zone.
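As an illustration of how kinematic compatibility between fault sets can be assessed from fault-slip data, the sketch below computes the classical P (shortening) and T (extension) kinematic axes for individual strike/dip/rake measurements. This is a simplified, textbook-style construction, not the inversion procedure actually used for Figure 16; the input conventions, function names and example fault data are assumptions for illustration only.

```python
# Minimal sketch: P/T kinematic axes from fault-slip data (strike, dip, rake).
# Assumed conventions: right-hand-rule strike/dip; rake measured from the strike
# direction (0 = left-lateral, -90 = normal, +90 = reverse); all angles in degrees.
import numpy as np

def _vectors(strike, dip, rake):
    """Return fault normal n and slip vector u in (North, East, Up) coordinates."""
    s, d, r = np.radians([strike, dip, rake])
    n = np.array([-np.sin(s) * np.sin(d), np.cos(s) * np.sin(d), np.cos(d)])
    u = np.array([np.cos(r) * np.cos(s) + np.sin(r) * np.sin(s) * np.cos(d),
                  np.cos(r) * np.sin(s) - np.sin(r) * np.cos(s) * np.cos(d),
                  np.sin(r) * np.sin(d)])
    return n, u

def _trend_plunge(v):
    """Convert a (North, East, Up) vector to trend/plunge in degrees (plunge positive down)."""
    v = v / np.linalg.norm(v)
    if v[2] > 0:                      # force the axis to point downward
        v = -v
    trend = np.degrees(np.arctan2(v[1], v[0])) % 360.0
    plunge = np.degrees(np.arcsin(np.clip(-v[2], -1.0, 1.0)))
    return trend, plunge

def pt_axes(strike, dip, rake):
    """P (shortening) and T (extension) axes of a single fault-slip datum."""
    n, u = _vectors(strike, dip, rake)
    p = (n - u) / np.sqrt(2.0)
    t = (n + u) / np.sqrt(2.0)
    return _trend_plunge(p), _trend_plunge(t)

if __name__ == "__main__":
    # Hypothetical examples: an E-dipping low-angle normal fault and a N-S
    # left-lateral strike-slip fault; broadly similar, sub-horizontal E- to
    # NE-trending T axes are the kind of compatibility discussed in the text.
    for strike, dip, rake in [(0, 35, -90), (0, 85, 0)]:
        (pt, pp), (tt, tp) = pt_axes(strike, dip, rake)
        print(f"strike {strike:3d} dip {dip:2d} rake {rake:4d} -> "
              f"P {pt:5.1f}/{pp:4.1f}  T {tt:5.1f}/{tp:4.1f}")
```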
In conclusion, we can depict a tectonic evolution in which low-angle normal faults and strike-slip faults (N-S striking left-lateral faults and SW-NE striking left- and right-lateral faults) coexisted during the emplacement and exhumation of the monzogranite, as sketched in the conceptual model of Figure 17a,b.
The Palaie Fault (Figure 3a) has been described by previous authors either as a strike-slip fault [35] or as a transpressive fault, on the basis of the kinematics reconstructed in a single outcrop [36]. Our data (Figure 18) highlight that what is today recognizable along the western slope of Monte Calvo is the result of at least two superposed faulting events: strike-slip fault segments are in fact preserved within lithons delimited by sub-parallel, west-dipping normal faults (Figure 3a). In other words, the western slope of Monte Calvo is delimited by a normal fault system that partly reactivates and dissects older N-S striking left-lateral strike-slip faults, thus producing lithons whose original attitude has reasonably been modified. This can explain the singularity of the Palaie Fault, the only structure with visible kinematic indicators contrasting with the general framework.

Figure 17. Conceptual model illustrating the relationships between faulting and magma intrusion/exhumation. A SW-NE striking left-lateral regional transfer zone nucleated in a wide area including the one where the Gavorrano monzogranite is exposed today. The transfer zone was active contemporaneously with top-to-the-ENE and -WSW low- to middle-angle normal faults, during the extensional evolution of the inner Northern Apennines. (a) The transfer zone gave rise to SW-NE striking left-lateral strike-slip faults linked by N-S striking faults in releasing step-over zones. Minor faults (NNW-striking left-lateral and WNW-striking right-lateral strike-slip faults) are associated with the N-S striking first-order faults. (b) The shearing evolution within the transfer zone formed vertical, highly permeable volumes centred on the N-S striking faults. Magma was channelled within the permeable volume and intruded at the base of the late Triassic evaporite level, within the Permo-Triassic succession, in a depth interval between 5.2 and 6.3 km. (c) Normal faults followed the magmatic emplacement and were active in the same regional stress field that was active at the time of pluton emplacement. These normal faults contributed to the exhumation of the monzogranite and to the present configuration of the whole Gavorrano area.

High-angle normal faults, NNW-SSE and N-S striking, are the youngest structures. They dissect the previously formed low-angle faults (Figure 19a,b) and are characterized by oblique-slip to normal movements (Figure 19c). Fault zones show metres-thick damage zones (Figure 19a), where well-organized minor fractures affect both their hanging wall and footwall (Figure 19c). Kinematic indicators mainly consist of grooves and mechanical striations developed on the fault surfaces.
Inversion of the kinematic data collected on the normal faults (Figure 16e) shows a kinematic compatibility with the low-angle normal faults (Figure 16a), thus supporting a stable E-NE trending extensional regime from the emplacement of the monzogranite until its exhumation (Figure 17c).
Conclusive Remarks
On the basis of the new dataset, integrated with the pre-existing data, we can state the following points:
(i) The laccolithic monzogranite emplaced within the upper part of the Tuscan metamorphic succession, at the base of the Late Triassic carbonate succession.
(ii) The contact aureole exposed north of the Ravi village is referred to the phyllitic-quartzite succession, similar to part of the one exposed north of the Gavorrano village, underlying the metasandstone and quartz-metaconglomerate of the Triassic Verrucano Group. The succession exposed in the Gavorrano village and its neighbourhood is referred to a transitional succession (i.e., the Tocchi Fm) interposed between the Verrucano and the late Triassic evaporite.
(iii) The thermo-metamorphic paragenesis and the Ti-in-biotite geothermometer point to a peak temperature of c. 660 °C at a depth probably lower than 6 km.
(iv) Dynamic recrystallisation of the LP paragenesis suggests a syn-kinematic evolution of the contact aureole, in agreement with the active tectonic setting that assisted magma emplacement, cooling and exhumation.
(v) We do not confirm the occurrence of regional and/or cartographic-scale reverse faults, or of thrust-related roof-anticlines triggering the magma emplacement and hosting the magmatic intrusion, since the previously proposed interpretations contrast with the field evidence.
(vi) The pluton emplacement was coeval with coexisting strike-slip and extensional tectonics that continued after magma cooling and produced the exhumation of the magmatic system and of its contact aureole. The tectonic setting did not change through time: strike-slip and normal faults coexisted at least since the early Pliocene (the age of the monzogranite emplacement).
(vii) The Gavorrano pluton emplaced within a SW-NE trending, sub-vertical, strike-slip brittle shear zone (i.e., transfer zone) that accompanied the development of low- to middle-angle normal faults formed in an E-NE trending extensional setting. SW-NE striking strike-slip faults were mainly linked by N-S striking strike-slip faults in releasing step-over zones, favouring the development of sub-vertical dilatational volumes with enough permeability to channel the magma from deeper to upper crustal levels.

Table A1. Biotite analyses of the corundum-bearing hornfels used for the application of the Ti-in-biotite geothermometer by [109]. Mineral formulae calculated according to the method of [119].
|
v3-fos-license
|
2020-07-30T02:02:38.656Z
|
2020-07-01T00:00:00.000
|
220855684
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://www.nature.com/articles/s41586-020-2119-x.pdf",
"pdf_hash": "b75db0e01f40a3bb91aa022a23c1f4d648e47a66",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44312",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "c339378e83dc21c789ad315c6261f8a8985bc6b8",
"year": 2020
}
|
pes2o/s2orc
|
Spatiotemporal DNA methylome dynamics of the developing mouse fetus
Cytosine DNA methylation is essential for mammalian development but understanding of its spatiotemporal distribution in the developing embryo remains limited1,2. Here, as part of the mouse Encyclopedia of DNA Elements (ENCODE) project, we profiled 168 methylomes from 12 mouse tissues or organs at 9 developmental stages from embryogenesis to adulthood. We identified 1,808,810 genomic regions that showed variations in CG methylation by comparing the methylomes of different tissues or organs from different developmental stages. These DNA elements predominantly lose CG methylation during fetal development, whereas the trend is reversed after birth. During late stages of fetal development, non-CG methylation accumulated within the bodies of key developmental transcription factor genes, coinciding with their transcriptional repression. Integration of genome-wide DNA methylation, histone modification and chromatin accessibility data enabled us to predict 461,141 putative developmental tissue-specific enhancers, the human orthologues of which were enriched for disease-associated genetic variants. These spatiotemporal epigenome maps provide a resource for studies of gene regulation during tissue or organ progression, and a starting point for investigating regulatory elements that are involved in human developmental disorders.
Mammalian embryonic development involves exquisite spatiotemporal regulation of genes 1,3,4 . This process is mediated by the sophisticated orchestration of transcription factors (TFs) that bind to regulatory DNA elements (primarily enhancers and promoters) and epigenetic modifications that influence these events. Specifically, the ability of TFs to access regulatory DNA is closely related to the covalent modification of histones and DNA 5,6 .
Cytosine DNA methylation is an epigenetic modification that is crucial for gene regulation 2 . This base modification occurs predominantly at cytosines followed by guanine (mCG) in mammalian genomes and is dynamic at regulatory elements in different tissues and cell types 7-11 . mCG can directly affect the DNA-binding affinity of a variety of TFs 6,12 and targeted addition or removal of mCG at promoters correlates with increases or decreases, respectively, in gene transcription 13 . Non-CG methylation (mCH; in which H denotes A, C or T) is also present at appreciable levels in embryonic stem cells, oocytes, heart and skeletal muscle, and is abundant in the mammalian brain 7-9,11,14-17 . In fact, the level of mCH in human neurons exceeds that of mCG 9 . Although its precise function(s) are unknown, mCH directly affects DNA binding by MeCP2, the methyl-binding protein in which mutations are responsible for Rett syndrome 18 .
Cytosine DNA methylation is actively regulated during mammalian development 19 . However, compared to pre-implantation embryogenesis 19-21 , epigenomic data are lacking for later stages, during which anatomical features of the major organ systems emerge and human birth defects become manifest 22 . To fill this knowledge gap, as part of the mouse ENCODE project, we used the mouse embryo to generate epigenomic and transcriptomic maps for twelve tissue types at nine developmental stages from embryonic day 10.5 (E10.5) to birth (postnatal day 0, P0) and, for some tissues, to adulthood. We performed whole-genome bisulfite sequencing (WGBS) to generate base-resolution methylome maps. In other papers published as part of ENCODE 23,24 , the same tissue samples were profiled using chromatin immunoprecipitation with sequencing (ChIP-seq), assay for transposase-accessible chromatin using sequencing (ATAC-seq) 23,25 and RNA sequencing (RNA-seq) 24 to identify histone modification, chromatin accessibility and gene expression landscapes, respectively.
These data sets allow the dynamics of gene regulation in developing fetal tissues to be studied, expanding the scope of the previous phase of mouse ENCODE 26 , which focused on gene regulation in adult tissues. These comprehensive data sets are publicly accessible at http://encodeproject.org and http://neomorph.salk.edu/ENCODE_mouse_fetal_development.html. Highlights of this paper include:
• Identification of 1,808,810 genomic regions showing developmental and tissue-specific mCG variation in fetal tissues, covering 22.5% of the mouse genome.
• Most (91.5%) of the mCG variant regions have no overlap with promoters, CpG islands or CpG island shores.
• The dominant methylation patterns observed were a continuous loss of CG methylation prenatally during fetal progression, and CG remethylation postnatally, primarily at distal regulatory elements.
• During fetal development, non-CG methylation accumulated at the bodies of genes that encode developmental TFs, and this was associated with the future repression of these genes.
• We used integrative analyses of DNA methylation, histone modifications and chromatin accessibility data from mouse ENCODE to predict 461,141 putative enhancers across all fetal tissues.
• The putative fetal enhancers accurately recapitulate experimentally validated enhancers in matched tissue types from matched developmental stages.
• Predicted regulatory elements showed spatiotemporal enhancer-like active chromatin, which correlates with the dynamic expression patterns of genes that are essential for tissue development.
• The human orthologues of the fetal putative enhancers are enriched for genetic variants that are risk factors for a variety of human diseases.
Developing fetal tissue methylomes
To assess the cytosine DNA methylation landscape in the developing mouse embryo, we generated 168 methylomes covering most of the major organ systems and tissue types derived from the 3 primordial germ layers (Fig. 1a). All methylomes exceeded ENCODE standards, with deep sequencing depth (median 31.8×), biological replication, high conversion rates (over 99.5%) and high reproducibility; the Pearson correlation coefficient of mCG quantification between biological replicates is more than 0.8 (Supplementary Table 1, Methods). The reproducibility of the liver methylomes is slightly lower because the liver shows genome-wide hypomethylation, which causes higher sampling variation (Pearson correlation coefficient >0.73). To better understand the epigenomic landscape during fetal development, we also incorporated into our analyses histone modification (ChIP-seq), chromatin accessibility (ATAC-seq) 23 and gene expression (RNA-seq) data 24 from the same tissue and organ samples (Supplementary Table 2). The genomes of all fetal tissues were heavily CG methylated, with global mCG levels of 70-82% (with the notable exception of liver, 60-74%; Fig. 1b). Mouse fetal liver showed a signature of partially methylated domains (PMDs) 7 . Notably, the formation and dissolution of PMDs precisely coincided with fetal liver haematopoiesis (Supplementary Note 1, Extended Data Fig. 1).
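As an illustration of the quality metrics quoted above (global mCG level and between-replicate Pearson correlation), the following minimal sketch shows one way to compute them from per-CpG methylated and total basecall counts; the input layout, coverage filter and function names are assumptions rather than the ENCODE pipeline itself.

```python
# Minimal sketch (assumed input format): global mCG level and replicate
# reproducibility from per-CpG-site methylated (mc) and total (cov) basecall counts.
import numpy as np

def global_mcg_level(mc: np.ndarray, cov: np.ndarray) -> float:
    """Global mCG level = total methylated basecalls / total basecalls at CG sites."""
    return mc.sum() / cov.sum()

def replicate_correlation(mc1, cov1, mc2, cov2, min_cov: int = 5) -> float:
    """Pearson r of per-site mCG fractions, restricted to sites covered in both replicates."""
    keep = (cov1 >= min_cov) & (cov2 >= min_cov)
    f1 = mc1[keep] / cov1[keep]
    f2 = mc2[keep] / cov2[keep]
    return float(np.corrcoef(f1, f2)[0, 1])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cov = rng.poisson(30, 10_000) + 1            # ~30x coverage, as in the text
    true_m = rng.beta(8, 2, 10_000)              # mostly highly methylated CG sites
    mc_a = rng.binomial(cov, true_m)             # replicate A
    mc_b = rng.binomial(cov, true_m)             # replicate B of the same tissue
    print("global mCG:", round(global_mcg_level(mc_a, cov), 3))
    print("replicate r:", round(replicate_correlation(mc_a, cov, mc_b, cov), 3))
```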
Although levels of global mCG were similar in fetal tissues at different stages, we identified 1,808,810 CG differentially methylated regions (CG-DMRs; genomic regions in which methylation differs between tissue types and developmental stages), which are, on average, 339 bp long and cover 22.5% (614 Mb) of the mouse genome (Extended Data Fig. 2a, Methods). This comprehensive fetal tissue CG-DMR annotation captured around 96% (n = 272,858) of all previously reported adult mouse tissue CG-DMRs 11 , and identified more than 1.5 million new regions (Fig. 1c).
Notably, 76% of the CG-DMRs are more than 10 kb away from neighbouring transcription start sites (TSSs) (Extended Data Fig. 2b). Only 8.5% (n = 153,019) of CG-DMRs overlapped with promoters, CpG islands (CGIs) or CGI shores (Fig. 1d, Extended Data Fig. 2c-e). About 91.5% (1,655,791) of CG-DMRs were distally located and showed a high degree of evolutionary conservation, suggesting that they are functional (Fig. 1d, Extended Data Fig. 2f, g). By integrating these epigenomic data sets, we computationally delineated 468,141 CG-DMRs that are likely to be fetal enhancers (fetal enhancer-linked CG-DMRs or feDMRs) (see the later section 'Enhancer annotation based on multi-omic data'; Supplementary Data). We further categorized the remaining CG-DMRs into four other types according to the degree of mCG difference and their genomic context.

Fig. 1 | a, Blue cells indicate published data; grey cells indicate tissues and stages that were not sampled because either the organ is not yet formed, it was not possible to obtain sufficient material for the experiment, or the tissue was too heterogeneous to obtain informative data. *Additional data were generated in duplicate for adult tissues. b, Global mCG level of each tissue across their developmental trajectories. The adult forebrain was approximated using postnatal six-week-old frontal cortex 9 . c, Fetal CG-DMRs identified in this study encompass the majority of the adult CG-DMRs from a previous study 11 .

These results provided a comprehensive annotation of mCG variation throughout the mouse genome. The CG-DMRs show various degrees of difference in mCG level (effect size). The effect size of 71% of CG-DMRs is larger than 0.2, indicating that these CG-DMRs are present in at least 20% of cells in at least one tissue, while CG-DMRs in different categories showed distinct effect sizes (Extended Data Fig. 4a, b). On average, one CG-DMR contains 9 differentially methylated CG sites (DMSs), and in 62% of CG-DMRs, more than 80% of CG sites are DMSs (Extended Data Fig. 4c, d). CG-DMRs with more DMSs showed stronger predicted regulatory activity (Extended Data Fig. 4e). Similarly, as CG-DMRs with larger effect size are more likely to reflect bona fide mCG variation, they indeed showed stronger anti-correlation with active histone modifications and the transcription of nearby genes (Extended Data Fig. 4f, Supplementary Note 3).
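For concreteness, the sketch below illustrates the two per-DMR summary statistics discussed above, treating the effect size as the largest difference in mCG level across tissues and the DMS fraction as the share of CG sites called differentially methylated; this representation is an illustrative assumption, not the published DMR-calling code.

```python
# Minimal sketch (assumed representation): per-CG-DMR effect size and DMS fraction.
import numpy as np

def dmr_effect_size(mcg_by_tissue: np.ndarray) -> float:
    """mcg_by_tissue: 1-D array of the region's mCG level in each tissue/stage."""
    return float(np.nanmax(mcg_by_tissue) - np.nanmin(mcg_by_tissue))

def dms_fraction(is_dms: np.ndarray) -> float:
    """is_dms: boolean array, one entry per CG site in the CG-DMR."""
    return float(np.mean(is_dms))

if __name__ == "__main__":
    levels = np.array([0.85, 0.80, 0.25, 0.78, 0.82])   # hypothetical mCG per tissue
    sites = np.array([True, True, False, True, True, True, True, True, True])
    print("effect size:", dmr_effect_size(levels))       # 0.60 -> present in >=60% of cells
    print("DMS fraction:", round(dms_fraction(sites), 2))  # 8 of 9 CG sites are DMSs
```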
We found some extensive changes in methylation near genes that are essential for fetal tissue development. For example, Fabp7 is essential for establishing radial glial fibres in the developing brain 27 . In the forebrain, Fabp7 underwent marked and continuous demethylation as the forebrain matured, associated with increased forebrain-specific acetylation at the 27th lysine residue of histone H3 (H3K27ac) and increased Fabp7 gene expression (Fig. 1e). In a different region, an experimentally validated enhancer (from the VISTA enhancer browser 28 ) of E11.5 heart, limb, nose and several other tissues is hypomethylated in the matched E11.5 tissues (Fig. 1f).
Distinct pre-and postnatal mCG dynamics
The dominant methylation pattern that emerged during fetal progression was a continuous loss of mCG at tissue-specific CG-DMRs, which overlap strongly with predicted enhancers (Fig. 2a, Extended Data Fig. 5a). This widespread demethylation is consistent with results from a previous study of whole mouse embryos 29 . By contrast, these CG-DMRs mainly gained mCG after birth (Fig. 2a). To quantify these changes for each developmental period, we counted loss-of-mCG and gain-of-mCG events (decreases or increases in mCG level of at least 0.1 in one CG-DMR) (Fig. 2b-d, Methods). From E10.5 to P0, 77-95% of the mCG changes were loss-of-mCG, more than 70% of which occurred between E10.5 and E13.5 in all tissues except heart (46%) (Extended Data Fig. 5b). The mCG level of 44-84% of tissue-specific CG-DMRs dropped to below 0.5 at E14.5, compared to 16-31% at E10.5. As allele-specific methylation is relatively rare 8 , the observed methylation dynamics suggest that, at E14.5, most of the tissue-specific CG-DMRs are unmethylated in more than half of the cells in a tissue. Compared to the loss of mCG, 57-86% of the gain-of-mCG events happened after birth (Extended Data Fig. 5c). As a result, 27-56% of the tissue-specific fetal CG-DMRs became highly methylated (mCG level >0.6) in adult tissues (at least 4 weeks old), compared to 0.3-15% at P0, which is likely to reflect the silencing of fetal regulatory elements (Extended Data Fig. 5d). In the forebrain, 70% of forebrain-specific CG-DMRs underwent both prenatal loss-of-mCG and postnatal gain-of-mCG, coinciding with the marked methylomic reconfiguration during postnatal forebrain development 9 (Extended Data Fig. 5e). However, only 33% of heart-specific CG-DMRs showed a similar trajectory, which might be associated with its relatively earlier maturation (Extended Data Fig. 5e). The percentage (8-15%) was even lower for CG-DMRs specific to kidney, lung, stomach and intestine, suggesting that major demethylation events are likely to occur during earlier developmental stages.

Fig. 2 | a, The adult forebrain was approximated using postnatal six-week-old frontal cortex 9 . Each row of the heatmaps represents an individual CG-DMR. b, The numbers of loss-of-mCG (blue) and gain-of-mCG (red) events in tissue-specific CG-DMRs for each developmental period (tissues aligned with a). c, d, Percentage of tissue-specific CG-DMRs that undergo loss of mCG (c) or gain of mCG (d) at each developmental period. Grey lines show the data for each non-liver tissue, and the blue or red line shows the mean. e, mCG and H3K27ac dynamics of forebrain-specific CG-DMRs. f, Relationship between mCG and H3K27ac in tissue-specific CG-DMRs. For each tissue type, tissue-specific CG-DMRs were grouped by their mCG level into low (L, mCG level ≤ 0.2), medium (M, 0.2 < mCG level ≤ 0.6) or high (H, mCG level > 0.6). Then, we quantified the fraction of tissue-specific CG-DMRs in each category that showed different levels of H3K27ac enrichment (Methods).
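The event counting described above (a loss- or gain-of-mCG event being a change of at least 0.1 in a CG-DMR's mCG level between consecutive stages) can be sketched as follows; the data layout and toy trajectories are assumptions for illustration.

```python
# Minimal sketch (assumed data layout): count loss-of-mCG and gain-of-mCG events,
# defined as a decrease or increase of at least 0.1 in a CG-DMR's mCG level
# between consecutive developmental stages.
import numpy as np

def count_mcg_events(mcg: np.ndarray, threshold: float = 0.1):
    """
    mcg: array of shape (n_dmrs, n_stages) with mCG levels ordered by stage
         (e.g. E10.5 ... P0 ... adult) for one tissue.
    Returns (loss_events, gain_events) summed over all DMRs and stage transitions.
    """
    delta = np.diff(mcg, axis=1)                 # change per developmental period
    loss = int((delta <= -threshold).sum())
    gain = int((delta >= threshold).sum())
    return loss, gain

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # toy trajectories: prenatal demethylation followed by postnatal remethylation
    stages = np.array([0.8, 0.6, 0.4, 0.3, 0.3, 0.5, 0.7])
    mcg = np.clip(stages + rng.normal(0, 0.03, (1000, stages.size)), 0, 1)
    print(count_mcg_events(mcg))
```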
This widespread demethylation cannot be explained by the expression dynamics of the cytosine methyltransferases Dnmt1 and Dnmt3a, the co-factor Uhrf1 30 , or the Tet methylcytosine dioxygenases, although a previous study 29 reported the involvement of active DNA demethylation (Extended Data Fig. 5f). The absence of gain-of-mCG events until the postnatal period may involve translational and/or post-translational regulation of these enzymes. Notably, WGBS does not distinguish between 5-methylcytosine and 5-hydroxymethylcytosine 31 , although earlier studies 9,32 suggested that 5-hydroxymethylcytosine is relatively rare. Further studies that directly measure the full complement of cytosine modifications are needed to understand their dynamics during fetal tissue development.
Linking dynamic mCG and chromatin states
To further pinpoint the timing of CG-DMR remethylation and its relationship with enhancer activity, we clustered forebrain-specific CG-DMRs on the basis of their mCG and H3K27ac dynamics across both fetal and adult stages (Fig. 2e, Extended Data Fig. 5g, Methods). In all clusters, mCG increased markedly between the first and second postnatal weeks and increased even further during tissue maturation in adult mice (Extended Data Fig. 5h).
We then investigated the association between mCG dynamics and predicted enhancer activity (approximated by H3K27ac abundance). Although depletion of mCG was not necessarily related to H3K27ac enrichment (for example, clusters 3, 5 and 6), high mCG was indicative of low H3K27ac (Fig. 2e, f). Only 2-9% of highly methylated CG-DMRs (mCG level >0.6) showed high H3K27ac enrichment (>6), whereas 25-28% of CG-DMRs with low methylation levels (mCG level <0.2) were enriched for H3K27ac (Fig. 2f). These observations suggest that decreases in cytosine methylation during fetal progression may precede and promote enhancer activity by increasing TF binding and/or altering histone modifications.
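The binning underlying these percentages (low/medium/high mCG classes and an H3K27ac enrichment cut-off of 6, as in the Fig. 2f legend) can be sketched as follows; the toy data and function name are illustrative assumptions.

```python
# Minimal sketch (thresholds follow the Fig. 2f legend): group tissue-specific
# CG-DMRs by mCG level and report the fraction in each group with H3K27ac
# enrichment above a fixed cut-off.
import numpy as np

def h3k27ac_by_mcg_bin(mcg, h3k27ac, high_enrichment=6.0):
    bins = {"low (<=0.2)": mcg <= 0.2,
            "medium (0.2-0.6]": (mcg > 0.2) & (mcg <= 0.6),
            "high (>0.6)": mcg > 0.6}
    return {name: float(np.mean(h3k27ac[mask] > high_enrichment))
            for name, mask in bins.items() if mask.any()}

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    mcg = rng.uniform(0, 1, 5000)
    # toy anti-correlation: lower mCG tends to carry more H3K27ac signal
    h3k27ac = rng.gamma(2.0, 2.0, 5000) * (1.5 - mcg)
    print(h3k27ac_by_mcg_bin(mcg, h3k27ac))
```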
Large-scale mCG features
In mouse neurons and a variety of human tissues, some CG-DMRs were found clustered together to form kilobase-scale hypomethylated domains, termed large hypo CG-DMRs 8,33 . We identified 273-1,302 such large hypo CG-DMRs in fetal tissues by merging adjacent CG-DMRs (Supplementary Table 3, Methods). For example, we found two limb-specific large hypo CG-DMRs upstream of Lmx1b, which is crucial for limb development 34 (Extended Data Fig. 6a). The mCG levels of CG-DMRs within the same large hypo CG-DMR were well correlated (average Pearson correlation coefficient 0.76-0.86) (Extended Data Fig. 6b).
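A minimal sketch of the merging step described above is given below; the maximum gap and minimum merged length are illustrative assumptions, as the exact rule used in the paper is specified in its Methods.

```python
# Minimal sketch: merge adjacent hypomethylated CG-DMRs into kilobase-scale
# "large hypo CG-DMRs". Gap and length thresholds are assumptions for illustration.
def merge_adjacent_dmrs(dmrs, max_gap=1000, min_merged_length=2000):
    """
    dmrs: iterable of (chrom, start, end) tuples for tissue-specific hypo CG-DMRs.
    Returns merged intervals at least `min_merged_length` bp long.
    """
    merged, current = [], None
    for chrom, start, end in sorted(dmrs):
        if current and chrom == current[0] and start - current[2] <= max_gap:
            current = (chrom, current[1], max(current[2], end))
        else:
            if current:
                merged.append(current)
            current = (chrom, start, end)
    if current:
        merged.append(current)
    return [iv for iv in merged if iv[2] - iv[1] >= min_merged_length]

if __name__ == "__main__":
    toy = [("chr11", 1000, 1400), ("chr11", 1900, 2600), ("chr11", 3100, 3600),
           ("chr11", 100000, 100350)]
    print(merge_adjacent_dmrs(toy))   # -> [('chr11', 1000, 3600)]
```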
We also found a different multi-kilobase DNA methylation feature called a DNA methylation valley or DMV 37,38 (Supplementary Table 5, Methods). DMVs are ubiquitously unmethylated in all tissues across their developmental trajectory, whereas large hypo CG-DMRs display spatiotemporal hypomethylation patterns (Extended Data Fig. 7a, b). In fact, less than 4% of large hypo CG-DMRs overlapped with DMVs. Also, 53-58% of the DMV genes encode TFs, compared to 8-17% of genes in large hypo CG-DMRs (Extended Data Fig. 7c). The absence of repressive DNA methylation in DMVs implies that the expression of TF genes may be regulated by alternative mechanisms. Indeed, 510 out of 706 DMV genes (72.2%) are targets of the Polycomb repression complex 23 (fold-enrichment 2.3, P < 0.001, hypergeometric test).
mCH domains predict gene silencing
A less well-understood form of cytosine DNA methylation found in mammalian genomes is mCH 15 . mCH accumulates at detectable levels in nearly all tissues and organs during fetal progression (Fig. 3a).
Notably, in brain tissues, the timing of mCH accumulation correlates with developmental maturation (downregulation of neural progenitor markers 39,40 and upregulation of neuronal markers 41 ) in sequential order of hindbrain, midbrain and forebrain (Fig. 3a, Extended Data Fig. 8a, b). Previous studies have shown that mCH is preferentially deposited at the 5′-CAG-3′ context in embryonic stem cells by DNMT3B and at 5′-CAC-3′ in adult tissues by DNMT3A 15 . In all fetal tissues, mCH is enriched at CAC sites and this specificity increases further as the tissues mature, implying a similar DNMT3A-dependent mCH pathway in both fetal and adult tissues (Extended Data Fig. 8c).
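As an illustration of how context-specific mCH levels (for example, the CAC versus CAG preference noted above) can be quantified from per-site calls, a minimal sketch follows; the input format and toy values are assumptions.

```python
# Minimal sketch (assumed input format): mCH level by trinucleotide context,
# aggregating methylated and total basecalls over all CH sites in each context.
from collections import defaultdict

def mch_by_context(sites):
    """
    sites: iterable of (context, mc, cov) where context is the 3-mer on the
    cytosine strand (e.g. 'CAC'), mc is methylated basecalls, cov is coverage.
    Returns {context: mCH level}, considering CH (non-CG) sites only.
    """
    mc_sum, cov_sum = defaultdict(int), defaultdict(int)
    for context, mc, cov in sites:
        if len(context) == 3 and context[0] == "C" and context[1] != "G":  # CH sites only
            mc_sum[context] += mc
            cov_sum[context] += cov
    return {ctx: mc_sum[ctx] / cov_sum[ctx] for ctx in cov_sum if cov_sum[ctx] > 0}

if __name__ == "__main__":
    toy = [("CAC", 3, 60), ("CAC", 2, 55), ("CAG", 1, 70), ("CAT", 0, 50), ("CGA", 30, 40)]
    print(mch_by_context(toy))   # the CG site is excluded; CAC shows the highest mCH
```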
mCH accumulates preferentially at large genomic regions that we call 'mCH domains', which show higher mCH levels than their flanking sequences (Fig. 3b). We identified 384 mCH domains, which averaged 255 kb in length (Methods). Notably, 92% of them and 61% of their bases are intragenic (fold-enrichment 1.20 and 1.43, respectively; P < 0.001, Monte Carlo testing). Twenty-two per cent (128 out of 582) of the mCH domain genes (for example, Pax3) encode TFs, many of which are related to tissue development or organogenesis (fold-enrichment 3.23, P < 0.001, Monte Carlo testing).
To further explore the dynamics of mCH accumulation, we grouped mCH domains into five clusters, C1-C5 (Fig. 3b, c, Extended Data Fig. 8d, Methods). mCH domains in C1, C4 and C5 acquire mCH in all tissues (Fig. 3c). Notably, C1 is enriched for genes related to neuron differentiation, whereas C4 and C5 overlap with genes associated with embryo development (Fig. 3d).
C3 is brain-specific and overlaps with genes related to axon guidance (Fig. 3d, e).
As mCH accumulates in mCH domains during fetal progression, the mCH domain genes tend to be repressed compared to genes outside these domains, especially by P0 (Extended Data Fig. 8e, f). Because mCH domain genes are related to tissue, organ or embryo development, our data suggest that mCH is associated with silencing of the pathways of early fetal development. Notably, 382 of the 582 mCH domain genes are targeted by the Polycomb repressive complex pathway 23 (foldenrichment 2.0, P < 0.001, hypergeometric test). Consistent with our findings across fetal tissues, one study 42 on postnatal brain reported that mCH acquired in gene bodies during postnatal brain development also repressed transcription. Further experiments, especially in the developing embryo, are necessary to delineate the mechanism of mCH regulation and its potential role in transcriptional regulation.
Enhancer annotation based on multi-omic data
To further investigate dynamic transcriptional regulation in developing fetal tissues, we predicted fetal CG-DMRs that are likely to be associated with enhancer activity using the REPTILE 43 algorithm, through the integration of mCG, histone modification and chromatin accessibility data (Fig. 4b).

Fig. 4 | … Numbers related to feDMRs are underlined. c, True positive rate of putative enhancers on 100 down-sampled VISTA data sets in each E11.5 tissue for (from left to right): top 1-2,500 and 2,501-5,000 feDMRs; *top 1-2,500 and 2,501-5,000 feDMRs that do not overlap with the putative enhancers from ref. 23; top 1-2,500 putative enhancers from ref. 23 (blue); and random region (grey). The sample size is 1,000 for random region and 100 for all others. Random region indicates ten sets of randomly selected genomic regions with GC density and evolutionary conservation matching the top 5,000 feDMRs. Blue dashed line shows the fraction of elements that are experimentally validated enhancers (positives) in the data set that is down-sampled to match the estimated abundance of enhancers (see Supplementary Note 4 for details). Black dashed line indicates the random positive rate. Middle line, median; box, upper and lower quartiles; whiskers, 1.5 × (Q3 − Q1) above Q3 and below Q1; points, outliers.

Fig. 5 | Association between mCG, gene expression and disease-associated SNPs. a, Expression profiles for 2,500 of the most variable genes. b, Thirty-three CEMs identified by WGCNA and their eigengene expression. CEMs shown in bold are related to c. c, The most enriched biological process terms of genes in four representative CEMs using EnrichR 49. P values based on one-tailed Fisher's exact test with sample sizes 6,766, 602, 126 and 2,968 for CEM3, CEM12, CEM29 and CEM32, respectively, adjusted for multiple testing correction using the Benjamini-Hochberg method. d, Correlation of the tissue-specific eigengene expression (orange) for each developmental stage with the mCG level or enhancer score (blue or red, respectively) z-scores of feDMRs linked to the genes in CEM32. Pearson correlation coefficients were calculated (n = 7, 11 and 8 for E11.5, E14.5 and P0, respectively). e, f, Pearson correlation coefficients of mCG level or enhancer score (blue or red, respectively) of feDMRs linked to the genes in each CEM with tissue-specific eigengene expression across all 33 CEMs at all stages (e), and temporal eigengene expression across all CEMs in all tissue types (f), excluding liver. P values based on two-tailed Mann-Whitney test (n = 231 (e), n = 363 (f)). Middle line, median; box, upper and lower quartiles; whiskers, 1.5 × (Q3 − Q1) above Q3 and below Q1; points, outliers. g, feDMRs are enriched for human GWAS SNPs associated with tissue- or organ-specific functions and tissue-related disease states. P values calculated using LD score regression 47, adjusted for multiple testing correction using the Benjamini-Hochberg approach.

To evaluate the likelihood that these putative fetal enhancers are functional, we intersected feDMRs with VISTA enhancer browser DNA elements 28 , which were tested for enhancer activity by in vivo transgenic reporter assay in E11.5 mouse embryos.
To evaluate the likelihood that these putative fetal enhancers are functional, we intersected feDMRs with VISTA enhancer browser DNA elements 28 , which were tested for enhancer activity by in vivo transgenic reporter assay in E11.5 mouse embryos. Even after carefully controlling for biases in the data set, 37-55% of the 2,500 (top 3-7%) most confident feDMRs that overlapped VISTA elements showed in vivo enhancer activity in matched tissues (Fig. 4c, Extended Data Fig. 9; Supplementary Note 4). Also, in any given tissue, feDMRs cover 73-88% of chromatinstate-based putative enhancers, and capture experimentally validated enhancers missing from the chromatin-state-based putative enhancers without compromising accuracy (Fig. 4c, Extended Data Fig. 9d). These results are consistent with previous findings that incorporating DNA methylation data improves enhancer prediction 43 . The validity of feD-MRs is further supported by their evolutionary conservation, enrichment of TF binding motifs related to specific tissue function(s) and the enrichment of neighbouring genes in specific tissue-related pathways (Extended Data Fig. 2e
Linking mCG, enhancers and gene expression
Finally, we investigated the association of mCG dynamics with the expression of genes in different biological processes or pathways. Using weighted correlation network analysis (WGCNA) 45 , we identified 33 clusters of co-expressed genes (co-expression modules, CEMs) and calculated 'eigengenes' to summarize the expression profile of genes within modules (Fig. 5a, b, Extended Data Fig. 10a, Methods). Genes that share similar expression profiles are more likely to be regulated by a common mechanism and/or to be involved in the same pathway (Extended Data Fig. 10b, Supplementary Table 9). For example, genes in CEM12, which are related to cell cycle, are highly expressed in early developmental stages but are downregulated as tissues mature, matching our knowledge that cells become post-mitotic in mature tissues ( Fig. 5c, Extended Data Fig. 10c).
To understand how mCG and the enhancer activity of feDMRs are associated with the expression of genes in CEMs, we linked feDMRs to their neighbouring genes. Then, we correlated the eigengene expression of each CEM with the average mCG levels (or enhancer score) of feDMRs linked to the genes in that CEM (Methods). To tease out tissue-specific and temporal associations, we calculated the correlation across tissues and across developmental stages separately. Across all tissue samples from a given developmental stage, mCG of feDMRs was negatively correlated with eigengene expression, whereas enhancer score was positively correlated with eigengene expression (Fig. 5d, e). We then calculated the correlation across samples of a given tissue type from different developmental stages. Whereas mCG levels generally decreased at feDMRs over development (Fig. 2a), the enhancer score remained positively correlated with temporal expression (Fig. 5f, Extended Data Fig. 10d). These results imply that feDMRs are likely to drive both tissue-specific and temporal gene expression.
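The tissue-specific association described above (eigengene expression versus the mean mCG of feDMRs linked to a module's genes) can be sketched as follows; the data layout and toy values are assumptions, and WGCNA itself is not re-implemented here.

```python
# Minimal sketch (assumed data layout): correlate a CEM's eigengene across tissues
# at one stage with the mean mCG level of feDMRs linked to the module's genes.
import numpy as np

def module_mcg_correlation(eigengene, fedmr_mcg, gene_to_fedmrs, module_genes):
    """
    eigengene: {tissue: eigengene expression} for one CEM at one developmental stage.
    fedmr_mcg: {fedmr_id: {tissue: mCG level}}.
    gene_to_fedmrs: {gene: [fedmr_id, ...]} linking feDMRs to neighbouring genes.
    module_genes: genes belonging to the CEM.
    Returns Pearson r between eigengene expression and mean linked-feDMR mCG.
    """
    tissues = sorted(eigengene)
    linked = {f for g in module_genes for f in gene_to_fedmrs.get(g, [])}
    mean_mcg = [np.mean([fedmr_mcg[f][t] for f in linked]) for t in tissues]
    expr = [eigengene[t] for t in tissues]
    return float(np.corrcoef(expr, mean_mcg)[0, 1])

if __name__ == "__main__":
    eig = {"forebrain": 2.1, "heart": -0.5, "liver": -1.0, "limb": 0.3}
    mcg = {"d1": {"forebrain": 0.2, "heart": 0.7, "liver": 0.8, "limb": 0.6},
           "d2": {"forebrain": 0.3, "heart": 0.8, "liver": 0.9, "limb": 0.5}}
    links = {"geneA": ["d1"], "geneB": ["d2"]}
    print(module_mcg_correlation(eig, mcg, links, {"geneA", "geneB"}))  # strongly negative
```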
Genetic risk factors enriched in feDMRs
The vast majority of genetic variants associated with human diseases that have been identified in genome-wide association studies (GWAS) are located in non-coding regions. These non-coding variants, as well as the heritability of human diseases, are enriched in the distal regulatory elements of related tissues and cell types 46,47 . The spatiotemporal mouse enhancer activity annotation (feDMRs) and the degree of evolutionary conservation of regulatory elements between human and mouse 26 make it possible to analyse disease- or trait-associated loci, and to pinpoint the related tissue(s) and developmental time point(s) in the mouse ENCODE data. To do this, we applied stratified linkage disequilibrium (LD) score regression 47 to partition the heritability of 27 traits in the human orthologous regions of the mouse feDMRs (Methods). We found that the heritability of human disease- and trait-associated single-nucleotide polymorphisms (SNPs) was significantly enriched in the orthologous regions of mouse feDMRs for each corresponding tissue (Fig. 5g, Supplementary Table 10; LD score regression 47 (Methods)). For example, the heritability of schizophrenia and of 'years of education' is enriched in forebrain- and midbrain-specific feDMRs, whereas craniofacial- and limb-specific feDMRs are enriched for the heritability of height (Fig. 5g). Some associations between traits or diseases and tissue-specific feDMRs were found only at certain developmental stages (Fig. 5g). For example, schizophrenia loci are associated with forebrain feDMRs only at E12.5-P0. Similar results were also found at human orthologues of regions that showed spatiotemporal differences in open chromatin 23 . Given current challenges in obtaining human fetal tissue, our results suggest that it might be possible to integrate human genetic data with fetal spatiotemporal epigenomic data from model organisms to predict the relevant tissue or organ type(s) for a variety of human developmental diseases.
Discussion
We have described the generation and analysis of a comprehensive collection of base-resolution, genome-wide maps of cytosine DNA methylation for twelve tissues and organs from eight distinct developmental stages of mouse embryogenesis and the adult stage. By integrating DNA methylation with histone modification, chromatin accessibility and RNA-seq data from the same tissue samples from companion papers 23,24 , we have annotated 1,808,810 methylation-variable genomic elements, encompassing nearly a quarter (613 Mb) of the mouse genome and generating predictions for 468,141 fetal enhancer elements. The counterparts of these fetal enhancers in the human genome are tissue-specifically enriched for genetic risk loci associated with a variety of developmental disorders or diseases. Such enrichments suggest that it might be possible to generate new mouse models of human disease by introducing the candidate disease-associated alleles into feDMRs using genome-editing techniques 48 .
The temporal nature of these data sets enabled us to uncover simple mCG dynamics at predicted DNA regulatory regions. During early stages of fetal development, methylation decreases at predicted fetal regulatory elements in all tissues until birth, after which time it rises markedly. As the tissues that we have investigated comprise a variety of cell types, a fraction of the observed dynamics might result from changes in DNA methylation during the differentiation of individual cell types and/or the changing cell type composition during development. In spite of the tissue heterogeneity, such dynamics suggest a plausible regulatory principle in which metastable repressive mCG is removed to enable more rapid, flexible modes of gene regulation (for example, histone modification or changes in chromatin accessibility).
In addition, our findings extend current knowledge of non-CG methylation, an understudied context of cytosine modification. During fetal development, there is preferential accumulation of mCH in specific tissues at genomic locations, each hundreds of kilobases in size. We call these genomic features 'mCH domains'. Genes that lie in mCH domains are downregulated in their expression as mCH further accumulates during the later stages of fetal development. Although its function remains debatable, in vivo and in vitro studies indicate that mCH directly increases the binding affinity of MeCP2 18 , which is highly expressed in the brain and mutation of which leads to Rett syndrome. Gene-rich mCH domains in non-brain tissues are likely to be enriched for undiscovered mCH binding proteins, which, as with MeCP2, may be involved in recruiting transcriptional repressor complexes and thereby promoting gene repression.
Despite the broad scope of this study, it is important to note its limitations. First, several tissues, such as skeleton, gonads and pancreas, were not included in the data set. Also, sex-related differences were not studied. In addition, the tissues examined in this study are heterogeneous, and thus future efforts to examine the epigenomes of individual cells will be critical for a deeper understanding of the gene regulatory programs.
Overall, we present, to our knowledge, the most comprehensive set of temporal fetal tissue epigenome mapping data available in terms of the number of developmental stages and tissue types investigated, expanding upon the previous phase of the mouse ENCODE project 26 , which focused exclusively on adult mouse tissues. Our results highlight the power of this data set for analysing regulatory element dynamics in fetal tissues during in utero development. These spatiotemporal epigenomic data sets provide a valuable resource for answering fundamental questions about gene regulation during mammalian tissue and organ development as well as the possible origins of human developmental diseases.
Online content
Any methods, additional references, Nature Research reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/s41586-020-2119-x.
Tissue collection
All animal work was reviewed and approved by the Lawrence Berkeley National Laboratory Animal Welfare and Research Committee or the University of California, Davis Institutional Animal Care and Use Committee.
Mouse fetal tissues were dissected from embryos of different developmental stages from female C57Bl/6N Mus musculus. Mice used for obtaining tissue samples at E14.5 and P0 were purchased from Charles River Laboratories (C57BL/6NCrl strain) and Taconic Biosciences (C57BL/6NTac strain). Mice used for obtaining tissue samples at remaining developmental stages were purchased from Charles River Laboratories (C57BL/6NCrl strain). The number of embryos or P0 pups collected was determined by whether the materials were sufficient for genomic assay, and was not based on statistical considerations. Between 15 and 120 embryos or pups were collected for each replicate of each tissue at each stage.
Tissue excision and fixation
See Supplementary Files 1, 2 for details.
MethylC-seq library construction and sequencing
MethylC-seq libraries were constructed as previously described 8 and a detailed protocol is available 50 . An Illumina HiSeq 2500 system was used for all WGBS using either 100- or 130-base single-ended reads.
Mouse reference genome construction
For all analyses in this study, we used mm10 as the reference genome, which includes 19 autosomes and two sex chromosomes (corresponding to the 'mm10-minimal' reference in the ENCODE portal, https://www.encodeproject.org/). The fasta files of mm10 were downloaded from the UCSC genome browser (9 June 2013) 51 .
WGBS data processing
All WGBS data were mapped to the mm10 mouse reference genome as previously described 52 . WGBS processing includes mapping of the bisulfite-treated phage lambda genome spike-in as control to estimate the sodium bisulfite non-conversion rate. This pipeline (called methylpy) is available on github (https://github.com/yupenghe/methylpy). In brief, cytosines within WGBS reads were first computationally converted to thymines. The converted reads were then aligned by bowtie (1.0.0) onto the forward strand of the C-T converted reference genome and the reversed strand of the G-A converted reference genome, separately. We filtered out reads that were not uniquely mapped or were mapped to both computationally converted genomes. Next, PCR duplicate reads were removed. Last, methylpy counted the methylated basecalls (cytosines) and unmethylated basecalls (thymines) for each cytosine position in the corresponding reference genome sequence (mm10 or lambda).
Calculation of methylation level
Methylation level was computed to measure the intensity and degree of DNA methylation of single cytosines or larger genomic regions. The methylation level is defined as the ratio of the sum of methylated basecall counts over the sum of both methylated and unmethylated basecall counts at one cytosine or across sites in a given region 53 , subtracting the sodium bisulfite non-conversion rate. The sodium bisulfite non-conversion rate is defined as the methylation level of the bisulfite-treated lambda genome.
We calculated this metric for cytosines in both CG context and CH contexts (H = A, C or T). The former is called the CG methylation (mCG) level or mCG level and the latter is called the CH methylation (mCH) level or mCH level.
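As a concrete illustration of this definition, the following minimal Python sketch computes the methylation level of a cytosine or region from per-site basecall counts. The function and variable names are illustrative, and clipping negative values at zero is an assumption not stated above.

```python
import numpy as np

def methylation_level(methylated_calls, total_calls, non_conversion_rate):
    """Methylation level: summed methylated basecalls divided by all
    basecalls, minus the sodium bisulfite non-conversion rate."""
    level = np.sum(methylated_calls) / np.sum(total_calls)
    # Clipping at zero (for regions whose raw level falls below the
    # non-conversion rate) is an assumption, not specified in the text.
    return max(level - non_conversion_rate, 0.0)
```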
Quality control of WGBS data
We calculated several quality control metrics for all the WGBS data and the results are presented in Supplementary Table 1. For each tissue sample, we calculated cytosine coverage, sodium bisulfite conversion rate, and reproducibility between biological replicates. Cytosine coverage is the average number of reads that cover cytosine. In the calculation, we combined the data of both strands. Sodium bisulfite conversion rate measures the sodium bisulfite conversion efficiency and is calculated as one minus the methylation level of unmethylated lambda genome. The reproducibility of biological replicates is defined as the Pearson correlation coefficient of mCG quantification between biological replicates for sites covered by at least ten reads.
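A hedged sketch of the reproducibility metric described above is shown below; requiring the ten-read coverage in both replicates is an assumption, and the array names are illustrative.

```python
import numpy as np
from scipy.stats import pearsonr

def replicate_reproducibility(mcg_rep1, mcg_rep2, cov_rep1, cov_rep2, min_cov=10):
    """Pearson correlation of per-site mCG levels between two biological
    replicates, restricted to CG sites covered by at least `min_cov` reads."""
    keep = (np.asarray(cov_rep1) >= min_cov) & (np.asarray(cov_rep2) >= min_cov)
    r, _ = pearsonr(np.asarray(mcg_rep1)[keep], np.asarray(mcg_rep2)[keep])
    return r
```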
All of the WGBS data passed ENCODE standards (https://www. encodeproject.org/data-standards/wgbs/) and are accepted by the ENCODE consortium. Almost all of the biological replicates of tissue samples have at least 30× cytosine coverage. All biological replicates have at least 99.5% sodium bisulfite conversion rate. All non-liver tissue samples have reproducibility greater than 0.8. The reproducibility of liver samples is slightly lower but is still greater than 0.7. The reduced reproducibility is due to the increase in sampling variation, which is a result of genome-wide hypomethylation in the liver genome.
ChIP-seq data processing
ChIP-seq data were processed using the ENCODE uniform processing pipeline for ChIP-seq. In brief, Illumina reads were first mapped to the mm10 reference using bwa 54 (version 0.7.10) with parameters '-q 5 -l 32 -k 2'. Next, the Picard tool (http://broadinstitute.github.io/picard/, version 1.92) was used to remove PCR duplicates using the following parameters: 'REMOVE_DUPLICATES=true'.
We represented each histone modification mark as continuous enrichment values of 100-bp bins across the genome. The enrichment was defined as the RPKM after subtracting ChIP input. The enrichment across the genome was calculated using bamCompare in Deeptools2 55 (2.3.1) using options '--binSize 100 --normalizeUsingRPKM --extendReads 300 --ratio subtract'. For the ChIP-seq data of the transcriptional co-activator EP300 (E1A-associated protein p300), we used MACS 56 (1.4.2) to call peaks using default parameters.
RNA-seq data
Processed RNA-seq data for all fetal tissues from all stages were downloaded from the ENCODE portal (https://www.encodeproject.org/; Supplementary Table 2).
To further validate our findings regarding transcriptomes generated across the Wold and Ecker laboratories, we generated an additional two replicates of RNA-seq data for fetal forebrain, midbrain, hindbrain and liver tissues. We first extracted total RNA using the RNeasy Lipid tissue mini kit from Qiagen (cat no. 74804). Then, we used the Truseq Stranded mRNA LT kit (Illumina, RS-122-2101 and RS-122-2102) to construct stranded RNA-seq libraries on 4 μg of the extracted total RNA. An Illumina HiSeq 2500 was used to sequence the libraries and generate 130-base single-ended reads.
RNA-seq data processing and gene expression quantification
RNA-seq data were processed using the ENCODE RNA-seq uniform processing pipeline. In brief, RNA-seq reads were mapped to the mm10 mouse reference using STAR 57 aligner (version 2.4.0k) with GENCODE M4 annotation 58 . We quantified gene expression levels using RSEM (version 1.2.23) 59 , expressed as TPM. For all downstream analyses, we filtered out non-expressed genes and only retained genes that showed non-zero TPM in at least 10% of samples.
ATAC-seq data
ATAC-seq data for all fetal tissues from all stages were downloaded from the ENCODE portal (https://www.encodeproject.org/; Supplementary Table 2). ATAC-seq reads were mapped to the mm10 genome using bowtie (1.1.2) with flags '-X 2000 --no-mixed --no-discordant'. Then, we removed PCR duplicates and mitochondrial reads using samtools 60 . Next, we converted read ends to account for Tn5 insertion position by moving the read end position by 4 bp towards the centre of the fragment. We converted paired-end read ends to single-ended read ends. Last, we used MACS2 (2.1.1.20160309) with flags '--nomodel --shift 37 --extsize 73 --pvalue 1e-2 -B --SPMR --call-summits' to generate signal track files in bigwig format. MACS2 calculated ATAC-seq read fold enrichment over the background MACS2 moving window model. This fold enrichment is used as the intensity/signal of chromatin accessibility.
Genomic features of mouse reference genome
We used GENCODE M4 58 gene annotation in this study. CGI annotation was downloaded from UCSC genome browser (5 September 2016) 51 . CGI shores are defined as the upstream 2 kb and downstream 2 kb regions along CGIs. Promoters are defined as regions from −2.5 kb to +2.5 kb around TSSs. CGI promoters are defined as those that overlap with CGIs while the remaining promoters are called non-CGI promoters.
We also obtained a list of mappable transposable elements (TEs) using the following procedure. RepeatMasker annotation of the mm10 mouse genome was downloaded from UCSC genome browser (12 September 2016) 51 . The annotation included 5,138,231 repeats. We acquired the transposon annotation by selecting only repeats that belonged to one of the following repeat classes (repClass): 'DNA', 'SINE', 'LTR' or 'LINE'. Then, we excluded any repeat elements with a question mark in their name (repName), class (repClass) or family (repFamily). For the remaining 3,643,962 transposons, we further filtered out elements that contained fewer than two CG sites or cases within which less than 60% of CG sites were covered by at least ten reads across all samples when the data from two replicates were combined. Finally, we used the remaining set of 1,688,189 mappable transposons for analyses in this study.
CG-DMRs
We identified CG-DMRs using methylpy (https://github.com/yupenghe/methylpy) as previously described 52 . In brief, we first called differentially methylated sites (DMSs) and then merged them into blocks if they showed similar sample-specific methylation patterns and were within 250 bp. Last, we filtered out blocks containing fewer than three DMSs. In this procedure, we combined the data from the two biological replicates for all tissues, excluding liver samples owing to global hypomethylation of the genome.
We overlapped the resulting fetal tissue CG-DMRs with CG-DMRs previously identified 11 using 'intersectBed' from bedtools 61 (v2.27.1). The mm9 coordinates of the CG-DMRs from ref. 11 were first mapped to mm10 using liftOver 51 with default parameters. Overlap of CG-DMRs is defined as a CG-DMR with at least one base overlap with another CG-DMR when comparing genomic coordinates between lists.
Identification of tissue-specific CG-DMRs
For each fetal tissue type, we defined tissue-specific CG-DMRs as those that showed hypomethylation in a tissue sample from any fetal stage (E10.5 to P0). Hypomethylation is meaningful only relative to a baseline, thus we used an outlier detection algorithm 62 to define the baseline mCG level b_i of each CG-DMR i across the N tissue samples as the mean of the 'bulk', the group of samples whose mCG levels span the narrowest range while including half of all samples. Specifically, the bulk size is defined as the smallest integer that is greater than or equal to N/2. Last, we defined hypomethylated samples as samples in which the mCG level at CG-DMR i is at least 0.3 smaller than the baseline b_i, that is, m_i,s ≤ b_i − 0.3, where m_i,s denotes the mCG level of CG-DMR i in sample s. CG-DMR i is then considered specific to the tissues of those hypomethylated samples. Liver data were not included in this analysis and we excluded CG-DMRs that had zero coverage in any of the non-liver samples. In total, only 402 CG-DMRs (about 0.02%) were filtered out.
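The baseline-and-threshold rule above can be sketched as follows; the exact bulk selection and any tie-breaking are assumptions, and the names are illustrative.

```python
import numpy as np

def tissue_specific_hypomethylation(mcg_levels, delta=0.3):
    """Flag hypomethylated samples for one CG-DMR.

    mcg_levels: mCG levels of the CG-DMR across the N tissue samples.
    The baseline is the mean of the 'bulk': the ceil(N/2) samples whose
    mCG levels span the narrowest range. A sample is hypomethylated if
    its mCG level is at least `delta` below that baseline.
    """
    mcg = np.asarray(mcg_levels, dtype=float)
    x = np.sort(mcg)
    n = len(x)
    k = int(np.ceil(n / 2))                     # bulk size: smallest integer >= N/2
    ranges = x[k - 1:] - x[: n - k + 1]         # range of each window of k sorted values
    start = int(np.argmin(ranges))              # narrowest window
    baseline = x[start:start + k].mean()
    return mcg <= baseline - delta
```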
Linking CG-DMRs with genes
We linked CG-DMRs to their putative target genes on the basis of genomic distance. First, we only considered expressed genes that showed non-zero TPM in at least 10% of all fetal tissue samples. Next, we obtained coordinates for TSSs of the expressed genes and paired each CG-DMR with the closest TSS using 'closestBed' from bedtools 61 .
In this way, we inferred a target gene for each CG-DMR; these gene-TSS associations were used in all subsequent analyses in this study.
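A minimal sketch of this distance-based linking step is shown below, using pybedtools as a convenience wrapper around bedtools closest (the original analysis called closestBed directly); the file paths are illustrative.

```python
import pybedtools

# Illustrative inputs: CG-DMRs and TSSs of expressed genes in BED format
dmrs = pybedtools.BedTool("cg_dmrs.bed").sort()
tss = pybedtools.BedTool("expressed_gene_tss.bed").sort()

# Pair each CG-DMR with its closest TSS and report the distance (-d)
linked = dmrs.closest(tss, d=True)
linked.saveas("cg_dmr_gene_links.bed")
```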
Predicting feDMRs
The REPTILE 43 algorithm was used to identify the CG-DMRs that showed enhancer-like chromatin signatures. We called these feDMRs. REPTILE uses a random forest classifier to learn and then distinguish the epigenomic signatures of enhancers and genomic background. One unique feature of REPTILE is that by incorporating the data of additional samples (as outgroup/reference), it can use epigenomic variation information to improve enhancer prediction. In this study, REPTILE was run using input data from CG methylation (mCG), chromatin accessibility (ATAC-seq) and six histone marks (H3K4me1, H3K4me2, H3K4me3, H3K27ac, H3K27me3 and H3K9ac). A REPTILE enhancer model was trained in a similar way as previously described 43 . In brief, CG-DMRs were called across the methylomes of mouse embryonic stem cells (mES cells) and all eight E11.5 mouse tissues. CG-DMRs were required to contain at least two DMSs and they were extended 150 bp in each direction (5′ and 3′). The REPTILE model was trained on the mES cell data using E11.5 mouse tissues as an outgroup. Data from mCG and six histone modifications are available for these samples. The training data set consists of 5,000 positive instances (putative known enhancers) and 35,000 negative instances. Positives were 2-kb regions centred at the summits of the top 5,000 EP300 peaks in mES cells. Negatives include 5,000 randomly chosen promoters and 30,000 randomly chosen 2-kb genomic bins. The bins have no overlap with any positives or promoters. REPTILE learned the chromatin signatures that distinguish positive instances from negative instances.
Next, using this enhancer model, we applied REPTILE to delineate feDMRs from the 1,808,810 CG-DMRs identified across all non-liver tissues. feDMRs were predicted for each sample based on data from mCG and six core histone marks, while the remaining non-liver samples were used as an outgroup. In REPTILE, the random forest classifier for CG-DMR assigns a confidence score ranging from 0.0 to 1.0 to each CG-DMR in each sample. This score corresponds to the fraction of decision trees in the random forest model that vote in favour of the CG-DMR being an enhancer. Previous benchmarks showed that the higher the score, the more likely it was that a CG-DMR shows enhancer activity 43 . We named this confidence score the enhancer score. For each tissue sample, feDMRs are defined as CG-DMRs with an enhancer score greater than 0.3. feDMRs were also defined for each tissue type as the CG-DMRs that were identified as an feDMR in at least one tissue sample of that tissue type. For example, if a CG-DMR was predicted as an feDMR only in E14.5 forebrain, it was classified as a forebrain-specific feDMR.
We overlapped the feDMRs with putative adult enhancers from ref. 26 . We used a set of coordinates to identify the centre base position of putative enhancers for each of the tissues and cell types from http://mouseencode.org/publications/mcp00/. Next, we defined putative enhancers as ±1-kb regions around the centres. Putative enhancers from different tissues and cell types were combined and merged if they overlapped. The merged putative enhancers (mm9) were then mapped to the mm10 reference using liftOver 51 . Finally, 'intersectBed' from bedtools 61 was used to overlap feDMRs with these putative enhancers.
Evaluating feDMRs with experimentally validated enhancers
We used enhancer data from the VISTA enhancer browser 28 to estimate the fraction of feDMRs that display enhancer activity in vivo. Specifically, we calculated the fraction of feDMR-overlapping VISTA elements that have been experimentally validated as enhancers, which we termed the true positive rate. We evaluated the true positive rate of feDMRs for six E11.5 tissues (forebrain, midbrain, hindbrain, heart, limb and neural tube), where at least 30 VISTA elements had been experimentally validated as enhancers (positives).
However, the selection of the VISTA elements was biased. Compared to randomly selected sequences, they are more enriched for enhancers, which will lead to an overestimate of the true positive rate. To reduce the effect of selection bias, we needed to first estimate the fraction of VISTA elements that are positives (positive rate) in a given tissue if there is minimal selection bias. We termed this fraction the genuine positive rate. Details can be found in Supplementary Note 4. Then, we can sample the current VISTA data set to construct data sets with a positive rate that matches the genuine positive rate. As the positive rate is not inflated in the constructed data sets, it will allow a fair evaluation of our enhancer prediction approach (also see Supplementary Note 4 for details).
Using the bias-controlled data sets, we calculated the true positive rate of feDMRs for each E11.5 tissue. First, we ranked feDMRs by their enhancer scores (from highest to lowest). We then overlapped the top 2,500 (or top 2,501-5,000) feDMRs of a given E11.5 tissue with VISTA elements, requiring that at least one feDMR is fully contained for a VISTA element to be counted as overlapped. Last, we calculated the fraction of feDMR-overlapping VISTA elements that are experimentally validated enhancers in the given tissue (that is, the true positive rate).
To better interpret the true positive rate of feDMRs, we also evaluated 5,000 randomly selected genomic bins with GC content and degree of evolutionary conservation (PhyloP score) matching the top 5,000 feDMRs. We used this method as a baseline. For each E11.5 tissue, we repeated this random selection process ten times and generated ten sets of random regions. Next, we calculated the true positive rate of each set of random regions in the bias-controlled data sets. As an additional baseline method, we also calculated the positive rate of VISTA elements that did not overlap with any feDMRs or H3K27ac peaks.
Comparing feDMRs with putative enhancers based on chromatin state
Chromatin state-based putative enhancers are genomic regions labelled as enhancer states (states 5, 6 and 7) by ChromHMM 63 in non-liver tissue samples (ref. 23 ). To fairly compare their validation rate with that of feDMRs, we needed to select the top 2,500 putative enhancers. ChromHMM does not assign a score and therefore we instead ranked these elements using the H3K27ac signal. Then, we calculated the fraction of the top 2,500 putative enhancers that were overlapping with feDMRs.
To test whether feDMRs can capture more enhancers than chromatin states, we computed the validation rate of the non-overlapping feDMRs. Also, we calculated the validation rate of ChromHMM enhancers by overlapping them with VISTA elements. This is used as an additional baseline for evaluating feDMRs.
Next, we performed a hypergeometric test to identify significant motif enrichment. For each tissue type, we calculated the motif enrichment for feDMRs in that tissue (foreground) against a list of feDMRs identified for other tissues not overlapping with the foreground tissue list. For this analysis, we extended the average size of both foreground and background feDMRs to 400 bp to avoid bias due to size differences. For a given tissue t, the total number of foreground and background feDMRs is N_f,t and N_b,t, respectively, and N_t = N_f,t + N_b,t is the total number of feDMRs. For a given TF binding motif m, motif occurrences overlap n_f,t,m foreground and n_b,t,m background feDMRs, while n_t,m = n_f,t,m + n_b,t,m is the total number of overlapping feDMRs. The probability of observing n_f,t,m or more overlapping foreground feDMRs (P) is the upper tail of the hypergeometric distribution: P = Σ_{k = n_f,t,m}^{min(N_f,t, n_t,m)} C(N_f,t, k) · C(N_b,t, n_t,m − k) / C(N_t, n_t,m), where C(a, b) denotes the binomial coefficient. For each tissue type, we performed this test for all motifs (n = 532). Then, the P values for each tissue were adjusted using the Benjamini-Hochberg method and the motifs were called as significant if they passed a 1% false discovery rate (FDR) cutoff. Last, we excluded any TF-binding motifs whose TF expression level was less than 10 TPM. The results are listed in Supplementary Table 7.
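The tail probability above corresponds to the survival function of the hypergeometric distribution, so a SciPy-based sketch (variable names illustrative, not the authors' code) is:

```python
from scipy.stats import hypergeom
from statsmodels.stats.multitest import multipletests

def motif_enrichment_pvalue(n_fg_hit, n_fg_total, n_bg_hit, n_bg_total):
    """P(X >= n_fg_hit) for the number of motif-overlapping foreground
    feDMRs, drawing all overlapping feDMRs from the combined feDMR set."""
    n_total = n_fg_total + n_bg_total          # N_t
    n_hits = n_fg_hit + n_bg_hit               # n_t,m
    # survival function at n_fg_hit - 1 gives P(X >= n_fg_hit)
    return hypergeom.sf(n_fg_hit - 1, n_total, n_hits, n_fg_total)

# Benjamini-Hochberg adjustment across all motifs of one tissue at 1% FDR:
# reject, qvals, _, _ = multipletests(pvals, alpha=0.01, method="fdr_bh")
```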
Enriched pathways and biological processes of feDMR neighbouring genes
For each tissue stage, we used GREAT 65 to find enriched pathways and biological processes of genes near feDMRs identified in that tissue. For each tissue stage, GREAT was run under the 'Single nearest gene' association strategy on 10,000 feDMRs with the highest enhancer scores. The GREAT analysis results are listed in Supplementary Table 8.
Enrichment of heritability in feDMRs for human diseases and traits
We applied stratified LD score regression 47 to test for the heritability enrichment of different traits in feDMRs. To obtain the human orthologous regions of the CG-DMRs, we used liftOver to map mouse CG-DMRs (mm10) to hg19, requiring that at least 50% of the bases in CG-DMR could be assigned to hg19 (using option -minMatch = 0.5). In total, 1,034,801 out of 1,880,810 of mouse DMR regions (55%) could be aligned to the human genome.
Then, for each tissue sample, we overlapped the human orthologous regions of its feDMRs with 1000 Genomes SNPs and calculated the LD score using 1000 Genomes data. However, only the LD scores of SNPs in the pretrained baseline model were reported and used for later analysis. LD score was calculated using option '--ld-wind-cm 1'.
Last, we performed LD score regression for each trait and the feDMRs of each tissue sample with option '--overlap-annot'. The regression model used in the test included feDMRs and the annotations in the pretrained baseline model as before 47 . The latter was used to control for non-tissue-specific enrichment in generic regulatory elements, such as all promoters 47 . In total, we performed 1,953 tests (27 traits × 59 tissue samples). P values were calculated from the reported coefficient z-score (Coefficient_z-score) using the R function pnorm with parameter 'lower.tail=F'. The coefficient z-score was based on 200 repeats of block jackknife resampling and thus the sample size of this statistical test is 200. To correct for P value inflation resulting from multiple comparisons, we applied the Benjamini-Hochberg approach separately on the P values from tests on the feDMRs of each tissue sample. A P value cutoff corresponding to 5% FDR was used to call significant enrichment.
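A sketch of the P value and FDR step only (the regression itself is run with the ldsc software, as the quoted options suggest): norm.sf mirrors R's pnorm(..., lower.tail=FALSE), and passing one tissue sample's vector of tests at a time follows the per-tissue BH correction described above.

```python
import numpy as np
from scipy.stats import norm
from statsmodels.stats.multitest import multipletests

def enrichment_calls(coefficient_z_scores, fdr=0.05):
    """One-sided P values from LDSC coefficient z-scores, BH-adjusted."""
    pvals = norm.sf(np.asarray(coefficient_z_scores, dtype=float))
    reject, qvals, _, _ = multipletests(pvals, alpha=fdr, method="fdr_bh")
    return pvals, qvals, reject
```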
Categorizing CG-DMRs
To better understand the potential functions of CG-DMRs, we grouped them into various categories on the basis of genomic location and chromatin signatures. First, we overlapped CG-DMRs with promoters, CGIs and CGI shores and defined the CG-DMRs that overlapped with these locations as proximal CG-DMRs. Out of the 153,019 proximal CG-DMRs, 46,692, 90,831, 1,710 and 13,786 overlapped with CGI promoters, non-CGI promoters, CGIs and CGI shores, respectively. We avoided assigning proximal CG-DMRs into multiple categories by prioritizing the four genomic features as CGI promoter, non-CGI promoter, CGI and CGI shores (ordered in decreasing priority). Each CG-DMR was assigned to the category with the highest priority.
We further classified the remaining 1,655,791 distal CG-DMRs as follows: (1) 397,320 of them were predicted as distal feDMRs (CG-DMRs that show enhancer-like chromatin signatures 44,68 ) as described above.
(2) Next, we defined flanking distal feDMRs as the CG-DMRs that were within 1 kb of distal feDMRs but were not predicted as enhancers (feDMRs). In total we found 212,620 such CG-DMRs. (3) Then, among the remaining, unclassified CG-DMRs, 159,347 CG-DMRs were identified as tissue-specific CG-DMRs in at least one of the tissues because they displayed strong tissue-specific hypomethylation patterns (mCG difference ≥ 0.3). By checking the enrichment of histone marks in their hypomethylated tissues, we found that they were enriched for H3K4me1 but not other histone marks, and these chromatin signatures resembled those of primed enhancers 69 . Therefore, we defined these CG-DMRs as primed distal feDMRs. (4) Last, we defined the remaining CG-DMRs as unexplained CG-DMRs (unxDMRs) because their functional roles could not yet be assigned. We found that unxDMRs have strong overlap with transposons and we further divided them into two classes: te-unxDMRs (n = 449,623) and nte-unxDMRs (n = 436,881). te-unxDMRs are unxDMRs that overlap with transposons, and the remainder were nte-unxDMRs.
To find the fraction of CG-DMRs that are evolutionarily conserved, we overlapped CG-DMRs from different categories with conserved DNA elements in the mouse genome. The list of conserved elements was downloaded from UCSC genome browser 51 (phastConsElements60Way in mm10 mouse reference).
CG-DMR effect size
We defined the effect size of a CG-DMR as the absolute difference in mCG level between the most hypomethylated tissue sample and the average of samples in the bulk. The average mCG level of a CG-DMR across the bulk samples estimates the baseline mCG level of that genomic region. The bulk samples are selected as 50% of all samples such that the range of their mCG level is narrowest (see 'Identification of tissue-specific CG-DMRs' for details). In this definition, the effect size indicates the degree of hypomethylation of CG-DMRs. The effect size of DMSs is defined in the same way.
Finding TF-binding motifs enriched in flanking distal feDMRs
To identify TF-binding motifs that were enriched in flanking distal feDMRs relative to feDMRs, we performed motif analysis using the former as foreground and the latter as background. Specifically, for each tissue, the tissue-specific feDMRs were used as background, while flanking distal feDMRs that were within 1 kb of these tissue-specific feDMRs were used as foreground. To avoid potential bias resulting from differences in size distribution, both foreground and background regions were extended from both sides (5′ and 3′) such that both had a mean size of 400 bp. Next, a hypergeometric test was performed to find TF-binding motifs that were significantly enriched in the foreground. This test was the same as that used for the identification of TF-binding motifs in feDMRs.
TF-binding motif enrichment analysis for primed distal feDMRs
We also performed motif analysis to identify TF-binding motifs that were enriched in primed distal feDMRs. The procedure was similar to the motif enrichment analysis on feDMRs. For each tissue, the primed distal feDMRs that were hypomethylated in that tissue were considered as foreground while the remaining primed distal feDMRs were considered as background. Then, a hypergeometric test was performed to identify significant motif enrichment.
Next, for each tissue type, we compared the TF-binding motifs that were enriched in primed distal feDMRs and the tissue-specific feDMRs. The hypergeometric test was used to test the significance of overlap, that is, the chance of obtaining the observed overlap if the two lists were based on random sampling (without replacement) from the TF-binding motifs with TF expression level greater than 10 TPM.
Monte Carlo test of the overlap between unxDMRs and transposons
To estimate the significance of overlap between unxDMRs and transposable elements (TEs), we shuffled the location of unxDMRs using the 'shuffleBed' tool from bedtools 61 with default settings and recalculated the overlaps. After repeating this step 1,000 times, we obtained an empirical estimate of the overlap if unxDMRs were randomly distributed in the genome. Let the observed number of TE-overlapping unxDMRs be x_obs and the number of TE-overlapping shuffled unxDMRs in permutation i be x_i^permut. We then calculated the P value as the fraction of permutations in which the shuffled overlap was at least as large as the observed overlap, P = Σ_{i=1}^{1000} I(x_i^permut ≥ x_obs) / 1,000, where I(·) is the indicator function.
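A sketch of the counting step is shown below (the shuffling itself was done with shuffleBed, so only the P value computation is illustrated); adding a pseudocount to avoid zero P values would be a further choice not stated in the text.

```python
import numpy as np

def monte_carlo_pvalue(x_obs, x_permut):
    """Fraction of permutations whose TE-overlap count is at least as
    large as the observed count x_obs."""
    x_permut = np.asarray(x_permut)
    return float(np.sum(x_permut >= x_obs)) / len(x_permut)
```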
Identification of large hypo CG-DMRs
Large hypo CG-DMRs were called using the same procedure as previously described 33 . For each tissue type, tissue-specific CG-DMRs were merged if they were within 1 kb of each other. Then, we filtered out merged CG-DMRs less than 2 kb in length. We overlapped genes with large hypo CG-DMRs and then filtered out any genes with names starting with 'Rik' or 'Gm[0-9]', in which [0-9] represents a single digit, because the ontology of these genes was ill-defined.
Super-enhancer calling
Super-enhancers were identified using the ROSE 36,71 pipeline. First, H3K27ac peaks were called using MACS2 56 callpeak module with options '--extsize 300 -q 0.05 --nomodel -g mm'. Control data were used in the peak-calling step. Next, ROSE was run with options '-s 12500 -t 2500', and H3K27ac peaks, mapped H3K27ac ChIP-seq reads and mapped control reads as input. The super-enhancer calls were generated for each tissue sample. Then, we obtained the super-enhancers for one tissue type by merging the super-enhancers called at each stage of fetal development (E10.5 to P0). Last, we generated a list of merged super-enhancers by merging super-enhancer calls for all tissue types except liver.
Quantification of mCG dynamics in tissue-specific CG-DMRs
To quantify mCG dynamics, we defined and counted loss-of-mCG and gain-of-mCG events. A loss-of-mCG or gain-of-mCG event is a decrease or increase, respectively, in mCG level by at least 0.1 in one CG-DMR in one stage interval. For example, if the mCG levels of one CG-DMR at E11.5 and E12.5 are 0.8 and 0.7, respectively, in heart, it is considered a loss-of-mCG event at stage interval E11.5-E12.5. A stage interval is defined as the transition between two sampled adjacent stages (for example, E15.5 and E16.5).
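A minimal sketch of this event counting, assuming the mCG levels of one CG-DMR in one tissue are ordered by developmental stage; names are illustrative.

```python
import numpy as np

def count_mcg_events(mcg_by_stage, delta=0.1):
    """Count loss-of-mCG and gain-of-mCG events across consecutive stage
    intervals for one CG-DMR in one tissue."""
    diffs = np.diff(np.asarray(mcg_by_stage, dtype=float))
    loss_events = int(np.sum(diffs <= -delta))
    gain_events = int(np.sum(diffs >= delta))
    return loss_events, gain_events
```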
Clustering forebrain-specific CG-DMRs based on mCG and H3K27ac dynamics
We used k-means clustering to identify subgroups of forebrain-specific CG-DMRs on the basis of mCG and H3K27ac dynamics. First, for each forebrain-specific CG-DMR, we calculated the mCG level and H3K27ac enrichment in forebrain samples from E10.5 to adult stages. Here, we used published methylome data for postnatal 1-, 2- and 6-week frontal cortex 9 to approximate the DNA methylation landscape of the adult forebrain. We also incorporated H3K27ac data for postnatal 1-, 3- and 7-week forebrain samples. Next, to make the range of H3K27ac enrichment values comparable to that of mCG levels, for each forebrain-specific CG-DMR, the negative H3K27ac enrichment values were thresholded as zero and then each value was divided by the maximum. If the maximum was zero for some forebrain-specific CG-DMRs, we set all values to be zero. k-means clustering with additional numbers of subgroups was also carried out, but no new patterns were observed. Last, we used GREAT 65 employing the 'Single nearest gene' association strategy to identify the enriched gene ontology terms of genes near CG-DMRs for each subgroup.
Association between mCG level and H3K27ac enrichment
To investigate the association between mCG and H3K27ac, for each tissue and each developmental stage, we first divided the tissue-specific CG-DMRs into three categories on the basis of mCG methylation levels: H (high CG methylation; mCG level > 0.6), M (moderate CG methylation; 0.2 < mCG level ≤ 0.6) and L (low CG methylation; mCG level ≤ 0.2). Then, we examined the distribution of H3K27ac enrichment in different groups of CG-DMRs by counting the number of CG-DMRs for each of four levels of H3K27ac: [0,2], (2, 4], (4, 6] and (6, ∞).
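The grouping can be reproduced with pandas.cut; the code below is a sketch with illustrative names, and the treatment of negative H3K27ac enrichment values (clipped to zero so that they fall in the lowest bin) is an assumption.

```python
import numpy as np
import pandas as pd

def mcg_by_k27ac_table(mcg_levels, k27ac_enrichment):
    """Cross-tabulate tissue-specific CG-DMRs by mCG category (L/M/H) and
    H3K27ac enrichment bin."""
    mcg_cat = pd.cut(mcg_levels, bins=[-np.inf, 0.2, 0.6, np.inf],
                     labels=["L (<=0.2)", "M (0.2-0.6]", "H (>0.6)"])
    k27_bin = pd.cut(np.clip(k27ac_enrichment, 0, None),
                     bins=[0, 2, 4, 6, np.inf],
                     labels=["[0,2]", "(2,4]", "(4,6]", "(6,inf)"],
                     include_lowest=True)
    return pd.crosstab(mcg_cat, k27_bin)
```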
DMV identification
We identified DMVs as previously described 37 . First, the genome was divided into 1-kb non-overlapping bins. Then, for each tissue sample (replicate), consecutive bins with an mCG level of less than 0.15 were merged into blocks; bins with no data (no CG sites or no reads) were skipped. Next, any blocks merged from at least five with-data bins were called as DMVs. For each tissue sample, we filtered for DMVs that were reproducible in two replicates by first selecting the DMVs identified in one replicate that overlapped any DMVs called in the other replicate, and then merging overlapping DMVs. Using this strategy, we obtained DMV calls for each tissue from each developmental stage. Last, we generated a list of merged DMVs for all tissue samples by merging all DMVs identified in any tissues from any developmental stages.
We overlapped genes with DMVs and then filtered out any genes with names starting with 'Rik' or 'Gm[0-9]', where [0-9] represents a single digit, because the ontology of these genes was ill-defined.
Next, these regions were divided into 10-kb non-overlapping bins and we calculated the percentiles of the methylation levels at the CG sites within each bin. CG sites that were within CGIs, DMVs 37 or any of four Hox loci (see below) were excluded, as these regions are typically hypomethylated, which may result in incorrect PMD calling. Additionally, sites covered by fewer than five reads were also excluded. We trained the random forest classifier using data from E14.5 liver (combining the two replicates) and we then predicted whether a 10-kb bin was a PMD or non-PMD in all liver samples (considering replicates separately). We chose a large bin size (10 kb) to reduce the effect of smaller-scale variations in methylation (such as DMRs) as PMDs were first discovered as large (mean length 153 kb) regions with intermediate methylation level (<70%) 7 . Furthermore, the features (the distribution of methylation level of CG sites, which measured the fraction of CG sites that showed methylation levels in various methylation level ranges) used in the classifier required enough CG sites within each bin to robustly estimate the distribution, which necessitated a relatively large bin. Also, we excluded any 10-kb bins containing fewer than ten CG sites for the same reason. These percentiles were used as features for the random forest. The random forest implementation was from the scikit-learn (version 0.17.1) 72 Python module and the following arguments were supplied to the scikit-learn RandomForestClassifier: n_estimators = 10000, max_features = None, oob_score = True, compute_importances = True.
Last, we merged consecutive 10-kb bins that were predicted as PMDs into blocks and filtered out blocks smaller than 100 kb. We further excluded blocks that overlapped with gaps in the mm10 genome (downloaded from UCSC genome browser, 21 September 2013). To obtain a set of PMDs that was reproducible in both replicates, we considered only genomic regions that were larger than 100 kb and were covered by PMD calls in both replicates. These regions were the final set of PMDs used for later analyses. Because there was only one replicate for adult liver, we called the PMDs at this stage using the single replicate.
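A hedged sketch of the bin classification step follows; the exact percentile grid, the placeholder training arrays and the omission of the compute_importances argument quoted above (not accepted by current scikit-learn releases) are all assumptions rather than a faithful reproduction of the original pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def bin_features(site_methylation_levels, quantiles=np.arange(0.05, 1.0, 0.05)):
    """Feature vector for one 10-kb bin: percentiles of the methylation
    levels of its covered CG sites (the percentile grid is assumed)."""
    return np.quantile(np.asarray(site_methylation_levels, dtype=float), quantiles)

# X_train / y_train: features and PMD labels for E14.5 liver bins (placeholders)
clf = RandomForestClassifier(n_estimators=10000, max_features=None, oob_score=True)
# clf.fit(X_train, y_train)
# pmd_prediction = clf.predict(X_bins_other_liver_sample)
```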
Overlap between PMDs and lamina-associated domains (LADs)
To examine the relationship between PMDs and LADs in normal mouse liver cells (AML12 hepatocytes), we used LAD data from supplementary table 2 of ref. 73 . The mm9 coordinates of LADs were converted to mm10 using liftOver with default settings. We then used Monte Carlo testing to examine the significance of the overlap between PMDs and LADs. Similar to the procedure for checking the overlap between TEs and unxDMRs, we permuted (1,000 times) the genomic locations of PMDs and recorded the number of overlapping bases (x_i^shuf for permutation i); the empirical P value was then calculated as in the unxDMR analysis, as the fraction of permutations in which x_i^shuf was at least as large as the observed number of overlapping bases.
Replication timing data
Replication timing data (build mm10) for three mouse cell types were obtained from ReplicationDomain 74 . The cell types used for these analyses were mES cells (id: 1967902&4177902_TT2ESMockCGHRT), neural progenitor cells (id: 4180202&4181802_TT2NSMockCGHRT) and mouse embryonic fibroblasts (id: 304067-1 Tc1A).
Gene expression in PMDs
We obtained information about PMD-overlapping protein-coding genes using 'intersectBed'. A similar approach was used to identify protein-coding genes that overlapped with PMD flanking regions (100 kb upstream and downstream of PMDs); genes that overlapped with PMDs were removed from this list. Last, we compared the expression of PMD-overlapping genes (n = 5,748) and genes (n = 2,555) that overlapped flanking regions.
Sequence context preference of mCH
To interrogate the sequence preference of mCH, as previously described 8 , we first identified CH sites that showed a significantly higher methylation level than the low-level noise (which was around 0.005 in terms of methylation level) caused by incomplete sodium bisulfite conversion. For each CH site, we counted the number of reads that supported methylation and the number of reads that did not. Next, we performed a binomial test with the success probability equal to the sodium bisulfite non-conversion rate. The FDR (1%) was controlled using the Benjamini-Hochberg approach 75 . This analysis was independently performed for each three-nucleotide context (for example, a P value cutoff was calculated for CAG cytosines). Last, we counted sequence motif occurrences of ±5 bp around the trinucleotide context of methylated mCH sites and visualized the sequence preferences using seqLogo 76 .
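A sketch of the per-site test described above, using scipy.stats.binomtest (SciPy ≥ 1.7); treating the test as one-sided ('greater') is an assumption consistent with looking for methylation above the non-conversion noise, and the arrays are illustrative.

```python
import numpy as np
from scipy.stats import binomtest
from statsmodels.stats.multitest import multipletests

def significant_mch_sites(methylated_reads, total_reads, non_conversion_rate, fdr=0.01):
    """Binomial test of each CH site against the non-conversion rate, with
    Benjamini-Hochberg correction at 1% FDR (run per trinucleotide context)."""
    pvals = np.array([
        binomtest(int(m), int(n), p=non_conversion_rate, alternative="greater").pvalue
        for m, n in zip(methylated_reads, total_reads)
    ])
    reject, _, _, _ = multipletests(pvals, alpha=fdr, method="fdr_bh")
    return reject
```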
Calling mCH domains
We used an iterative process to call mCH domains, which are genomic regions that are enriched for mCH compared to flanking regions. First, we selected a set of samples that showed no evidence of mCH. Data from these samples were used in the following steps to filter out genomic regions that are prone to misalignment and showed suspicious mCH abundance. Analysis of the global mCH level and mCH motifs revealed that E10.5 and E11.5 tissues (excluding heart samples) have extremely low mCH and the significantly methylated non-CG sites showed little CA preference. Therefore, we assumed that these samples contain no mCH domains and that any mCH domains called in these control samples by the algorithm were likely to be artefacts. By filtering out the domains called in the control samples, we were able to exclude the genomic regions that were prone to mapping error and avoid other potential drawbacks in the processing pipeline.
To identify genomic regions in which sharp changes in mCH levels occurred, we applied a change point detection algorithm with the mCH levels of all 5-kb non-overlapping bins across the genome as input. We included only bins that contained at least 500 CH sites and in which at least 50% of CH sites were covered by 10 or more reads. The identified regions defined the boundaries that separate mCH domains from genomic regions that show background mCH levels. We implemented this step using the function cpt.mean in R package 'changepoint', with options 'method="PELT", pen.value = 0.05, penalty = "Asymptotic" and minseglen = 2'. To match the range of chosen penalty, we scaled up mCH levels by a factor of 1,000.
The iterative procedure was carried out as follows:
1) An empty list of excluded regions was created.
2) For each control sample, the change point detection algorithm was applied to the scaled mCH levels of 5-kb non-overlapping bins. Bins that overlapped excluded regions were ignored.
3) The genome was segmented into chunks based on identified change points.
4) The mCH level of each chunk was calculated as the mean mCH level of the overlapping 5-kb bins that did not overlap excluded regions.
5) mCH domains were identified as chunks whose mCH level was at least 50% greater than the mCH level of both upstream and downstream chunks. A pseudo-mCH level of 0.001 was used to avoid dividing by zero.
6) mCH domains were added to the list of excluded regions.
7) Steps 2 to 6 were repeated until the list of excluded regions stopped expanding.
8) Steps 2 to 5 were then applied to all samples.
9) For each tissue or organ, only regions that were identified as (part of) an mCH domain in both replicates were retained, and regions less than 15 kb in length were filtered out; mCH domains must span at least three bins. The above criteria were used to define mCH domains for each tissue or organ.
10) Individual mCH domains from each tissue and organ were merged to obtain a single combined list of 384 mCH domains.
Clustering of mCH domains
We applied k-means clustering to group the 384 identified mCH domains into 5 clusters on the basis of the normalized mCH accumulation profile of each mCH domain and corresponding flanking regions (100 kb upstream and 100 kb downstream). Specifically:
1) In each tissue sample, the mCH accumulation profile of one mCH domain was represented as a vector of length 50: the mCH levels of 20 5-kb bins upstream of the mCH domain, 10 bins that equally divided the mCH domain and 20 5-kb bins downstream.
2) We then normalized all values by the average mCH level of the flanking-region bins (the 20 5-kb bins upstream and 20 5-kb bins downstream of the mCH domain).
3) We next computed the profile in samples of the six tissue types (midbrain, hindbrain, heart, intestine, stomach and kidney) that showed the most evident mCH accumulation in fetal development.
4) Using the profiles of these tissue samples, k-means (R v3.3.1) was used to cluster mCH domains with k = 5. We also tried higher cluster numbers (for example, 6) but did not identify any new patterns. Even with the current setting (k = 5), the mCH domains in clusters 1 (C1) and 3 (C3) shared a similar mCH accumulation pattern.
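A sketch of the profile normalization and clustering steps above; concatenating the normalized profiles of the selected tissue samples into one feature matrix is an assumption about how the per-sample profiles were combined, and the names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def normalize_profile(profile, flank_bins=20):
    """profile: (n_domains, 50) mCH levels -- 20 upstream 5-kb bins, 10 body
    bins, 20 downstream 5-kb bins -- normalized by the mean of the flanks."""
    p = np.asarray(profile, dtype=float)
    flanks = np.concatenate([p[:, :flank_bins], p[:, -flank_bins:]], axis=1)
    return p / np.maximum(flanks.mean(axis=1, keepdims=True), 1e-6)

# per_sample_profiles: list of (n_domains, 50) arrays for the selected samples
# features = np.hstack([normalize_profile(p) for p in per_sample_profiles])
# cluster_labels = KMeans(n_clusters=5, random_state=0, n_init=10).fit_predict(features)
```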
Genes in mCH domains
We obtained the overlapping gene information for each of the mCH domains by overlapping gene bodies with mCH domains using 'intersectBed' in bedtools 61 . Only protein-coding genes were considered. We further filtered out any genes with names starting with 'Rik' or 'Gm[0-9]', where [0-9] represents a single digit, because the ontology of these genes was ill-defined. For the overlapping genes of each mCH domain cluster, we used Enrichr 49,77 to find the enriched gene ontology terms ('GO_Biological_Process_2015').
Next we asked whether the identified overlapping genes were enriched for TF-encoding genes. For this purpose, a list of mouse TFs from AnimalTFDB 78 (27 February 2017) was used. We then performed a Monte Carlo test to estimate the significance of the findings. Specifically, x_obs is the number of TF-encoding genes among all overlapping genes. We randomly selected (1,000 times) the same number of genes and, in the ith draw, x_i^permut of the randomly selected genes encoded TFs. Last, the P value was calculated as the fraction of draws in which x_i^permut was at least as large as x_obs, P = Σ_{i=1}^{1000} I(x_i^permut ≥ x_obs) / 1,000.
mCH accumulation indicates gene repression
To evaluate the association between mCH abundance and gene expression, we traced the expression dynamics of genes inside mCH domains. For mCH domains in each cluster, we first calculated the TPM z-score for each of the overlapping genes. Specifically, for each tissue type and each overlapping gene, we normalized TPM values in the samples of that tissue type to z-scores. The z-scores showed the trajectory of dynamic expression, in which the amplitude information of expression was removed. If the gene was not expressed, we did not perform the normalization. Next, we calculated the z-scores for all genes that had no overlap with any mCH domain. Last, we subtracted the z-scores of genes outside mCH domains from the z-scores of the overlapping genes. The resulting values indicated the level of expression of genes in mCH domains relative to genes not in mCH domains.
Weighted correlation network analysis
We used WGCNA 79 , an unsupervised method, to detect sets of genes with similar expression profiles across samples (R package, 'WGCNA' version 1.51). In brief, TPM values were first log 2 transformed (with pseudo count 1 × 10 −5 ). Then, the TPM value of every gene across all samples was compared against the expression profile of all other genes and a correlation matrix was obtained. To obtain connection strengths between any two genes, we transformed this matrix to an adjacency matrix using a power adjacency function. To choose the parameter (soft threshold) of the power adjacency function, we used the scale-free topology (SFT) criterion, where the constructed network is required to at least approximate scale-free topology. The SFT criterion recommends use of the first threshold parameter value at which model-fit saturation is reached as long as it is above 0.8. In this study, the threshold was reached for a power of 5. Next, the adjacency matrix is further transformed to a topological overlap matrix (TOM) that finds 'neighborhoods' of every gene iteratively, based on the connection strengths. The TOM was calculated on the basis of the adjacency matrix derived using the signed hybrid network type, biweight midcorrelation and signed TOMType parameters of the TOMsimilarityFromExpr module in WGCNA. Hierarchical clustering of the TOM was done using the flashClust module with the average method. Next, we used the cutreeDynamic module with the hybrid method, deepSplit = 3 and minClusterSize = 30 parameters to identify modules that have at least 30 genes. A summarized module-specific expression profile was created using the expression of genes within the given module, represented by the eigengene. The eigengene is defined as the first principal component of the log 2 transformed TPM values of all genes in a module. In other words, this is a virtual gene that represents the expression profile of all genes in a given module. Next, very similar modules were merged after a hierarchical clustering of the eigengenes of all modules with a distance threshold of 0.15. Finally, the eigengenes were recalculated for all modules after merging.
Gene ontology analysis of genes in CEMs
To better understand the biological processes of genes in each CEM, we used Enrichr 49,77 (http://amp.pharm.mssm.edu/Enrichr/) to identify the enriched gene ontology terms in the GO_Biological_Process_2015 category.
Correlating eigengene expression with mCG and enhancer scores of feDMRs
We investigated the association between gene expression and epigenomic signatures of regulatory elements in CEMs. First, for each CEM, we used the eigengene expression to summarize the transcription patterns of all genes in the module. Then, we calculated the normalized average enhancer score and normalized average mCG level of all feDMRs that were linked to the genes in the CEM. Specifically, to reduce the potential batch effect, for each tissue and each stage, we normalized the enhancer score of each feDMR by the mean enhancer score of all feDMRs. mCG levels of feDMRs were normalized in a similar way except that the data of all DMRs were used to calculate the mean mCG level for each tissue and each stage. Next, for each CEM, the expression of its eigengene, the normalized average enhancer score and the mCG level of linked feDMRs were converted to z-scores across all fetal stages for each tissue type (for analysis of temporal expression) or across tissue types for each developmental stage (for analysis of tissue-specific expression). Last, for each CEM, we calculated the Pearson correlation coefficient (R 3.3.1) between the z-score of eigengene expression and the z-score of normalized enhancer score (or mCG level) for each module. The correlation coefficients were calculated for two different settings: 1) for each tissue type, the correlation was computed using the z-score of normalized eigengene expression values and enhancer scores (or mCG levels) across different development stages; or 2) for each developmental stage, the correlation was computed across different tissue types. The coefficients from the former analysis indicate how well temporal gene expression is correlated with enhancer score or mCG level of regulatory elements, while the latter measures the association with tissue-specific gene expression.
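The per-module calculation reduces to a Pearson correlation of z-scored series; the sketch below shows the temporal setting for one tissue type (z-scoring does not change the Pearson coefficient, but it mirrors the description above). Names are illustrative.

```python
from scipy.stats import pearsonr, zscore

def temporal_correlation(eigengene_by_stage, feature_by_stage):
    """Correlation between z-scored eigengene expression and the z-scored
    normalized enhancer score (or mCG level) of linked feDMRs across stages."""
    r, _ = pearsonr(zscore(eigengene_by_stage), zscore(feature_by_stage))
    return r
```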
We then tested whether the correlation that we observed was significant by comparing it with the correlation based on shuffled data. In the analysis of tissue-specific expression in a given tissue type, we mapped the eigengene expression of one CEM to the enhancer score (or mCG level) of feDMRs linked to genes in a randomly chosen CEM. For example, in the shuffle setting, when the given tissue type was heart, we calculated the correlation between the eigengene expression of CEM14 and the enhancer score of the feDMRs linked to genes in CEM6. In the analysis of temporal expression, given a specific developmental stage, we performed a similar permutation. Next, we calculated the Pearson correlation coefficients for this permutation setting. Last, using a two-tailed Mann-Whitney test, we compared the median of observed correlation coefficients and the median of those based on shuffled data.
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this paper.
Data availability
All WGBS data from mouse embryonic tissues are available at the ENCODE portal (https://www.encodeproject.org/) and/or have been deposited in the NCBI Gene Expression Omnibus (GEO; Supplementary Table 1). The additional RNA-seq data for forebrain, midbrain, hindbrain and liver are available at the GEO under accession GSE100685. All other data used in this study, including ChIP-seq, ATAC-seq, RNA-seq and additional WGBS data, are available at the ENCODE portal and/or GEO (Supplementary Table 2).
Code availability
methylpy (1.0.2) and REPTILE (1.0) are available at https://github.com/yupenghe/methylpy and https://github.com/yupenghe/REPTILE, respectively. Custom code used for this study is available at https://github.com/yupenghe/encode_dna_dynamics. This work used computation resources from the Extreme Science and Engineering Discovery Environment (XSEDE) 80 .
Extended Data
Fig. 1 | Global hypomethylation in fetal liver. a, Average mCG level of PMDs and flanking regions (±100 kb) in liver samples from different developmental stages. b, Normalized average mCG level of PMDs and flanking regions in liver samples. The mCG level was normalized (scaled) such that the average mCG level of ±20-kb regions around each PMD is 1.0. c, The total bases that PMDs encompass in liver at different developmental stages. d, Percentage of bases in PMDs identified in each of the liver samples (E12.5 liver, E13.5 liver and so on) that are also within PMDs identified in the E15.5 liver sample. e, Histone modification profiles for H3K9me3 (top), H3K27me3 (middle) and H3K27ac (bottom) within PMDs and flanking regions (±100 kb) in liver samples from different developmental stages. f, Replication timing profiling of PMDs and flanking regions (±100 kb). The values indicate the tendency to be replicated at an earlier stage in the cell cycle. g, Expression of genes that overlap PMDs and flanking regions (±100 kb) (left) compared with those with no PMD overlap (right). The two plots below show data from a validation data set, containing RNA-seq data generated using a different protocol on matched tissues. Middle line, median; box, upper and lower quartiles; whiskers, 1.5 × (Q3 − Q1) above Q3 and below Q1; points, outliers.
Fig. 8 | Non-CG methylation accumulation in fetal tissues. a, Expression of the neural progenitor marker genes Nes 39 and Sox2 40 . b, Expression of several neuronal markers from ref. 41 . c, Sequence context preference for non-CG methylation (mCH). d, Grouping of mCH domains into five clusters according to the dynamics of methylation accumulation. The heatmap shows normalized methylation levels of mCH domains and flanking genomic regions (±100 kb). mCH in the adult (AD) forebrain was approximated using data from the frontal cortex of six-week-old mice. e, Expression dynamics of genes within mCH domains relative to the other genes. Z-scores were calculated for each gene across development and each line shows the mean value of mCH-domain-overlapping genes for each cluster. f, The expression of genes in mCH domains at P0 relative to the expression dynamics of genes outside mCH domains. Each circle corresponds to the value for one mCH domain cluster and one tissue. The red line indicates the median, which was tested against 0 using a one-sided Wilcoxon signed-rank test (n = 50).
Fig. 9
Data
The data that support these findings are publicly accessible at https://www.encodeproject.org/ and http://neomorph.salk.edu/ENCODE_mouse_fetal_development.html. Additional RNA-seq datasets for forebrain, midbrain, hindbrain and liver are available at the NCBI Gene Expression Omnibus (GEO) (accession GSE100685). ATAC-seq data for mouse embryonic stem cells are available at GEO (accession GSE113592). Further details describing the data used in this study can be found in Supplemental Tables 1 and 2.
Stereoselective synthesis, X-ray analysis, computational studies and biological evaluation of new thiazole derivatives as potential anticancer agents
Background The synthesis of new thiazole derivatives is very important because of their diverse biological activities. In addition, many drugs containing a thiazole ring in their skeletons are available on the market, such as Abafungin, Acotiamide, Alagebrium, Amiphenazole, Brecanavir, Carumonam, Cefepime, and Cefmatilen. Results Ethyl cyanoacetate reacted with phenylisothiocyanate and chloroacetone, in two different basic media, to afford the thiazole derivative 6, which reacted with dimethylformamide dimethyl acetal in the presence of DMF to afford the unexpected thiazole derivative 11. The structures of thiazoles 6 and 11 were optimized using the B3LYP/6-31G(d,p) method. The experimental and theoretical geometric parameters agreed very well. The natural charges at the different atomic sites were also predicted, and the HOMO and LUMO energy levels were discussed. The anticancer activity of the prepared compounds was evaluated and found to be moderate. Conclusions Novel thiazole derivatives were synthesized. Their structures were established using X-ray and spectral analysis. Optimized molecular structures at the B3LYP/6-31G(d,p) level were investigated. Thiazole derivative 11 has a more electropositive S-atom than thiazole 6, and its HOMO–LUMO energy gap is lower. The synthesized compounds showed moderate anticancer activity. Electronic supplementary material The online version of this article (10.1186/s13065-018-0420-7) contains supplementary material, which is available to authorized users.
Introduction
Currently marketed anticancer medications suffer increasingly from various toxic side effects and from the development of resistance to their action. There is therefore an urgent clinical need for the synthesis of novel anticancer agents that are potentially more effective and have a higher safety profile. The synthesis of different thiazole derivatives has attracted great attention due to their diverse biological activities, which include anticonvulsant [1,2], antimicrobial [3,4], anti-inflammatory [5,6], anticancer [7], antidiabetic [8], anti-HIV [9], anti-Alzheimer [10], antihypertensive [11], and antioxidant activities [12]. The reaction of active methylene compounds with phenylisothiocyanate and α-haloketones in DMF in the presence of potassium hydroxide is a simple and convenient method for the synthesis of many thiazole derivatives [13][14][15]. In continuation of our interest in the synthesis of new biologically active heterocyclic rings [16][17][18][19][20][21][22], and motivated by this information, it was thought worthwhile to synthesize some novel thiazole derivatives and to test their antitumor activity in order to discover new, potentially biologically active drugs of synthetic origin.
Chemistry
The thiazole derivative 6 was previously obtained by the reaction of ethyl cyanoacetate with phenylisothiocyanate and propargyl bromide in DMF-NaH [23]. The presence of many functional groups attached to this bioactive thiazole ring motivated us to prepare it again and to use it as a precursor for new heterocycles bearing the bioactive thiazole ring. In this work we used other reagents, such as chloroacetone, instead of propargyl bromide, and we studied the configuration of the isolated products.
Next, fusion of thiazole 6 with DMF-DMA in the presence of DMF afforded the unexpected thiazole derivative 11 (Scheme 2). The structure of the isolated product was elucidated on the basis of its elemental and spectral analyses (IR, NMR, MS and X-ray) (see "Experimental section") (Figs. 3, 4).
In many reports, dimethylformamide has been used as a formylating agent for indole [25], thiophene [26], and substituted benzenes [27]. Based on this information, we suggest that the reaction starts with formylation of thiazole derivative 6 by DMF to afford the formyl derivative 7, which undergoes a reversible opening of the thiazole ring to give intermediate 8. Subsequent cyclization of 8 affords 9, which undergoes dehydration to give the methyl ketone 10. Reaction of intermediate 10 with dimethylformamide dimethyl acetal finally affords thiazole 11. For more details see Additional file 1: Tables S1-S6 (these files are available in the ESI section).
Geometry optimization
The optimized molecular geometries of the thiazole derivatives 6 and 11 are shown in Fig. 5, and the calculated bond distances and angles are given in Additional file 1: Table S7. Good correlations were obtained between the calculated and experimental bond distances, with correlation coefficients ranging from 0.991 to 0.996 (Fig. 6). The maximum differences between calculation and experiment do not exceed 0.03 Å for either compound, indicating that the molecular geometries are well predicted.
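As an illustration of how such agreement metrics can be obtained, the short sketch below computes the correlation coefficient and the maximum absolute deviation between calculated and experimental bond distances; the numbers used are placeholders rather than the actual values from Additional file 1: Table S7.

```python
import numpy as np

# Hypothetical bond distances (in angstroms); replace with the values
# reported in Additional file 1: Table S7 for thiazole 6 or 11.
experimental = np.array([1.732, 1.368, 1.425, 1.221, 1.339, 1.462])
calculated   = np.array([1.745, 1.372, 1.431, 1.218, 1.345, 1.470])

# Pearson correlation coefficient between experiment and theory.
r = np.corrcoef(experimental, calculated)[0, 1]

# Largest absolute disagreement for any single bond.
max_dev = np.max(np.abs(experimental - calculated))

print(f"correlation coefficient r = {r:.3f}")
print(f"maximum deviation = {max_dev:.3f} angstrom")
```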
Charge population analysis
Natural population analysis was performed to predict the natural charges (NC) at the different atomic sites (Additional file 1: Table S8). The ring sulphur atom has a natural charge of 0.5079e in thiazole 6 and 0.5499e in thiazole 11. In both cases the S-atom is electropositive, with the higher positive charge found in thiazole 11, probably because a carbonyl group (an electron-withdrawing group) is directly attached to the ring, whereas in thiazole 6 a methyl group (electron releasing via the inductive effect) is attached to the ring. The negative sites are the nitrogen and oxygen atoms, as further confirmed by the molecular electrostatic potential (MEP) maps shown in Fig. 7.
Frontier molecular orbitals
The HOMO and LUMO levels of the thiazole derivatives are mainly located over the π-system of the studied compounds, so the HOMO–LUMO intramolecular charge transfer is mainly a π–π* transition.
Cytotoxic activity
The anticancer activity of the thiazole derivatives 6 and 11 was determined against the human colon carcinoma (HCT-116) cell line, in comparison with the anticancer drug vinblastine, using the MTT assay [28,29]. The cytotoxic activity was expressed as the mean IC50 (the concentration of the test compound required to kill half of the cell population) of three independent experiments (Table 1). The results revealed that thiazole 11 has moderate anticancer activity against colon carcinoma (HCT-116), while thiazole 6 is less active.
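For readers wishing to reproduce this type of analysis, the sketch below fits a standard four-parameter logistic (Hill) dose-response curve to viability data and reads off the IC50; the concentrations, viability values, and parameter guesses are illustrative assumptions, not the data behind Table 1.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical dose-response data: drug concentration (uM) vs. % viability.
conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100], dtype=float)
viability = np.array([98, 95, 88, 70, 45, 22, 10], dtype=float)

# Initial guesses: 0-100 % viability range, IC50 near the mid concentration.
p0 = [0.0, 100.0, 10.0, 1.0]
params, _ = curve_fit(logistic4, conc, viability, p0=p0, maxfev=10000)

print(f"estimated IC50 = {params[2]:.2f} uM (Hill slope = {params[3]:.2f})")
```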
Chemistry General
All melting points were measured on a Gallenkamp apparatus in open glass capillaries and are uncorrected. The IR spectra were recorded on a Nicolet 6700 FT-IR spectrometer.
Method A
To a stirred solution of ethyl cyanoacetate (1.13 g, 1.07 mL, 10 mmol) in dimethylformamide (10 mL) was added potassium carbonate (1.38 g, 10 mmol). Stirring was continued at room temperature for 30 min, then phenylisothiocyanate (1.35 g, 1.2 mL, 10 mmol) was added dropwise to this mixture and stirring was continued for another 1 h. Chloroacetone (0.92 g, 0.8 mL, 10 mmol) was then added and the mixture was stirred for an additional 3 h at room temperature. Finally, the reaction mixture was poured onto cold water (50 mL), and the solid that precipitated was collected by filtration and recrystallized from DMF to give thiazole 6.
Method B
A mixture of ethyl cyanoacetate (1.13 g, 1.07 mL, 10 mmol) and sodium ethoxide (prepared from 0.23 g of sodium in 10 mL of absolute ethanol) was stirred for 10 min. Phenyl isothiocyanate (1.35 g, 10 mmol) was then added dropwise and the mixture was stirred for another 1 h. Chloroacetone (0.92 g, 0.8 mL, 10 mmol) was added and stirring was continued for 3 h. Finally, the mixture was poured onto cold water and the solid precipitate that formed was filtered and recrystallized from DMF to afford the same product as obtained from Method A; yield 65%.
X-Ray analysis
Thiazoles 6 and 11 were obtained as single crystals by slow evaporation of DMF solutions of the pure compounds at room temperature. Data were collected on a Bruker APEX-II D8 Venture area diffractometer equipped with graphite-monochromated Mo Kα radiation (λ = 0.71073 Å) at 100(2) K. Cell refinement and data reduction were carried out with Bruker SAINT. SHELXT [30,31] was used to solve the structures. The final refinement was carried out by full-matrix least-squares techniques with anisotropic thermal parameters for non-hydrogen atoms on F. CCDC 1504892 and 1505279 contain the supplementary crystallographic data for these compounds; the data can be obtained free of charge from the Cambridge Crystallographic Data Centre via www.ccdc.cam.ac.uk/data_request/cif.
Computational details
The X-ray structure coordinates of the studied thiazoles were used for geometry optimization followed by frequency calculations. For this task we used the Gaussian 03 software [32] and the B3LYP/6-31G(d,p) method. All obtained frequencies are positive, and no imaginary modes were detected. The GaussView 4.1 [33] and Chemcraft [34] programs were used to extract the calculation results and to visualize the optimized structures.
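To make the workflow concrete, the snippet below writes a minimal Gaussian input for a B3LYP/6-31G(d,p) optimization followed by a frequency calculation; the atoms, coordinates, and file name are dummies, and the route line may need to be adapted to the local Gaussian installation.

```python
# Minimal sketch: build a Gaussian opt+freq input at B3LYP/6-31G(d,p).
# The atoms/coordinates below are dummies, not the X-ray geometry of 6 or 11.
atoms = [
    ("S", 0.000, 0.000, 0.000),
    ("C", 1.700, 0.000, 0.000),
    ("N", 2.400, 1.150, 0.000),
]

route = "#P B3LYP/6-31G(d,p) Opt Freq"
charge, multiplicity = 0, 1

lines = [route, "", "thiazole derivative - optimization and frequencies", "",
         f"{charge} {multiplicity}"]
lines += [f"{sym:2s} {x:12.6f} {y:12.6f} {z:12.6f}" for sym, x, y, z in atoms]
lines.append("")  # Gaussian inputs must end with a blank line.

with open("thiazole_opt_freq.gjf", "w") as fh:
    fh.write("\n".join(lines) + "\n")
```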
Cytotoxic activity
The cytotoxic activity of the synthesized compounds was determined against Human Colon Carcinoma (HCT-116) by the standard MTT assay [28,29].
pH-Dependant Antifungal Activity of Valproic Acid against the Human Fungal Pathogen Candida albicans
Current antifungal drugs suffer from limitations including toxicity, the emergence of resistance, and decreased efficacy at the low pH that is typical of human vaginal surfaces. Here, we show that the antipsychotic drug valproic acid (VPA) exhibits strong antifungal activity against both sensitive and resistant Candida albicans under pH conditions similar to those encountered in the vagina. VPA exerted a strong anti-biofilm activity and attenuated the damage to vaginal epithelial cells caused by C. albicans. We also showed that VPA synergizes with the allylamine antifungal terbinafine. We undertook a chemogenetic screen to delineate the biological processes that underlie VPA sensitivity in C. albicans and found that vacuole-related genes are required to tolerate VPA. Confocal fluorescence live-cell imaging revealed that VPA alters vacuole integrity, supporting a model in which alteration of the vacuole contributes to the antifungal activity. Taken together, this study suggests that VPA could be used as an effective antifungal against vulvovaginal candidiasis.
INTRODUCTION
Candida albicans is a major human fungal pathogen and also a component of the normal human flora, colonizing primarily mucosal surfaces, the gastrointestinal and genitourinary tracts, and skin (Berman and Sudbery, 2002). Although many infections involve unpleasant but non-life-threatening colonization of various mucosal membrane surfaces, immunosuppressed patients can fall prey to serious mucosal infections, such as oropharyngeal candidiasis in HIV patients and newborns, and to lethal systemic infections (Odds, 1987). C. albicans, followed by C. glabrata, is a natural component of the vaginal fungal microbiota and, opportunistically, the leading causative agent of vulvovaginal candidiasis (VVC). VVC affects 70-75% of childbearing women at least once, and 40-50% of them will experience recurrence (Sobel, 2007).
Topical azole-based antifungal formulations (e.g., fluconazole, clotrimazole, miconazole, or butoconazole) such as vaginal suppositories, tablets, and creams are widely used to treat VVC. However, their efficiency is questionable, especially for C. glabrata, which is intrinsically resistant to azoles. Furthermore, VVC is often caused by azole-resistant C. albicans strains (Sobel, 2007; Marchaim et al., 2012). Importantly, antifungals used for VVC treatment have to remain effective at acidic pH (4-4.5), the normal pH of human vaginal surfaces. Recent studies have shown that acidic pH increases the minimal inhibitory concentrations (MICs) of several antifungals, including azoles, amphotericin B, ciclopirox olamine, flucytosine, and caspofungin, for C. albicans (Danby et al., 2012). Pai and Jones reported a similar finding in C. glabrata, where MICs of triazoles were increased at pH 6 as compared to pH 7.4 (Pai and Jones, 2004). Taken together, these data demonstrate that, in addition to the complications related to acquired or intrinsic resistance to conventional antifungals, reduction of antifungal potency at acidic pH can further complicate the treatment of VVC. Because the antifungal discovery pipelines of pharmaceutical companies are almost dry, there is an urgent need to identify novel low-pH-effective antifungal molecules for VVC therapeutic intervention.
Valproic acid (VPA) is a branched short-chain fatty acid well known as a class I/II histone deacetylase inhibitor (HDACi) (Gottlicher et al., 2001; Phiel et al., 2001). VPA is widely prescribed as an antipsychotic to treat epilepsy, bipolar disorder, and uncontrolled seizures (Privitera et al., 2006). The antifungal properties of VPA have been previously reported against different opportunistic fungi causing infections of the central nervous system (Galgoczy et al., 2012; Homa et al., 2015). Despite the growing interest in VPA as an antifungal, its precise mechanism of action remains unclear. Recent investigations in the budding yeast Saccharomyces cerevisiae have shown that VPA induces apoptosis and inhibits both the cell cycle at the G1-S transition and the activation of the cell wall integrity pathway, the Stl2 MAP kinase (Mitsui et al., 2005; Desfosses-Baron et al., 2016). VPA was also shown to cause inositol depletion, which in turn led to vacuolar ATPase perturbation (Ju and Greenberg, 2003; Deranieh et al., 2015). In Schizosaccharomyces pombe, VPA acts as an HDACi and disturbs different cellular processes including calcium homeostasis, cell wall integrity, and membrane trafficking (Miyatake et al., 2007; Zhang et al., 2013).
We have recently shown that low pH strongly potentiates the antimicrobial activity of VPA against the model yeast S. cerevisiae (Desfosses-Baron et al., 2016). Here, we investigated the in vitro susceptibility to VPA of both planktonic and sessile cells of different sensitive and resistant clinical isolates of the opportunistic yeast C. albicans, using conditions mimicking the vaginal environment. The effect of VPA on the ability of C. albicans to damage vaginal epithelial cells was investigated. Drug synergy between VPA and 11 standard antifungal agents was also explored. In an attempt to gain insight into the mechanism of action associated with the antifungal activity of VPA, a genetic screen was undertaken to uncover mutations conferring hypersensitivity to VPA.
Fungal Strains, Media, and Chemicals
The fungal clinical and laboratory strains used in this study are listed in Tables S1 and S2, respectively. C. albicans and other yeast strains were routinely maintained at 30 °C on YPD (1% yeast extract, 2% peptone, 2% dextrose, with 50 mg/ml uridine), synthetic complete medium (SC; 0.67% yeast nitrogen base with ammonium sulfate, 2.0% glucose, and 0.079% complete supplement mixture), or RPMI (RPMI-1640 with 0.3 g/L L-glutamine). The acidic pHs used for VPA susceptibility assays were obtained using hydrochloric acid.
VPA Susceptibility and Time-Kill Assays
The pH-dependant effect of VPA on C. albicans was evaluated as follows: the reference clinical strain SC5314 was grown overnight in YPD medium at 30 °C in a shaking incubator. Cells were then resuspended in fresh SC at an optical density at 595 nm (OD 595 nm) of 0.05. The pHs of the SC media were adjusted using sodium hydroxide or hydrochloric acid for alkaline and acidic pHs, respectively. A total volume of 99 µl of C. albicans cell suspension was added to each well of a flat-bottom 96-well plate, in addition to 1 µl of the corresponding stock solution of VPA. Plates were incubated in a Sunrise-Tecan plate reader at 30 °C with agitation, and OD 595 nm readings were taken every 10 min over 24 h. Experiments were performed in triplicate, and average values were used for analysis. The effect of VPA on other fungal species at acidic pH was evaluated in a similar fashion.
The minimal inhibitory concentration (MIC) was determined following the CLSI recommendations (CLSI, 2008). Briefly, 50 µl of VPA or of a standard antifungal at two-fold the final concentration, prepared in RPMI, was serially diluted in flat-bottom 96-well plates (Costar-Corning) and combined with 50 µl of an overnight culture of C. albicans or other yeasts at 10^4 cells/ml. Plates were incubated at 30 °C with shaking, and OD 595 nm readings were taken after 24 h using the Sunrise-Tecan plate reader. The MIC was determined as the first well with a growth reduction of >10% based on OD 595 nm values in the presence of VPA or conventional antifungals as compared to untreated control cells. Time-kill assays were performed as described by Sanglard et al. (2003). Briefly, C. albicans SC5314 cultures were grown in RPMI pH 4.5 at 30 °C under shaking in the presence of different concentrations of VPA for defined time periods (6, 24, and 48 h). Fractions of the cultures were removed at each exposure time and colony-forming unit (CFU) counts were determined by serial dilution on YPD-agar.
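As a small illustration of this read-out, the sketch below scans OD 595 nm values from a dilution series and reports the lowest concentration giving more than 10% growth reduction relative to the untreated control; the concentrations and OD values are invented placeholders, not measurements from this study.

```python
def mic_from_od(concentrations, od_values, od_control, threshold=0.10):
    """Lowest concentration whose growth reduction vs. control exceeds threshold."""
    for conc, od in sorted(zip(concentrations, od_values)):
        reduction = 1.0 - od / od_control
        if reduction > threshold:
            return conc
    return None  # no inhibition observed in the tested range

# Hypothetical two-fold dilution series of VPA (ug/ml) and 24 h OD595 readings.
concs = [0.98, 1.95, 3.9, 7.8, 15.6, 31.2]
ods = [0.82, 0.80, 0.76, 0.58, 0.31, 0.12]
print(mic_from_od(concs, ods, od_control=0.84))  # -> 7.8 with these numbers
```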
Synergism Assay
Evaluation of synergistic interactions between VPA and standard antifungals was performed using RPMI-1640 medium buffered at pH 4.5. Synergism was assessed by calculating the fractional inhibitory concentration (FIC) index as described by Epp et al. (2010). The FIC index was calculated as follows: (MIC of VPA in combination/MIC of VPA alone) plus (MIC of a standard antifungal in combination/MIC of a standard antifungal alone).
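The FIC calculation described above reduces to a one-line formula; the sketch below implements it, using the commonly applied cutoff of FIC ≤ 0.5 to call synergy and placeholder MIC values rather than the ones measured here.

```python
def fic_index(mic_a_alone, mic_a_combo, mic_b_alone, mic_b_combo):
    """Fractional inhibitory concentration index for a two-drug checkerboard."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

# Placeholder MICs (ug/ml): drug A = VPA, drug B = a standard antifungal.
fic = fic_index(mic_a_alone=7.8, mic_a_combo=1.95, mic_b_alone=16.0, mic_b_combo=2.0)
print(f"FIC index = {fic:.2f} -> {'synergy' if fic <= 0.5 else 'no synergy'}")
```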
Biofilm Formation and XTT Reduction Assay
Biofilm formation and XTT (2,3-bis(2-methoxy-4-nitro-5-sulfophenyl)-2H-tetrazolium-5-carboxanilide) assays were carried out as previously described by Askew et al. (2011). Overnight YPD cultures were washed three times with PBS and resuspended in fresh RPMI supplemented with L-glutamine (0.3 g/l) to an OD 595 nm of 1. C. albicans yeast cells were allowed to adhere to the surface of a 96-well polystyrene plate for 3 h at 37 °C in a rocking incubator. Non-attached cells were washed from each well three times with PBS, and fresh RPMI supplemented with VPA was added for 24 h at 37 °C for biofilm formation. The plates were then washed, and fresh RPMI supplemented with 100 µl of XTT-menadione (0.5 mg/ml XTT in PBS and 1 mM menadione in acetone) was added. After 3 h of incubation in the dark at 37 °C, 80 µl of the resulting colored supernatants was used for colorimetric reading (OD 490 nm) to assess the metabolic activity of the biofilms. At least four replicates were performed.
Vaginal Epithelial Cell Damage Assay
Damage to vaginal epithelial cells was assessed using the lactate dehydrogenase (LDH) cytotoxicity detection kit (Sigma), based on the release of LDH into the surrounding medium, following the manufacturer's protocol. The VK2/E6E7 (ATCC-CRL-2616) vaginal epithelial cell line was grown in keratinocyte serum-free medium (supplemented with 0.1 ng/ml recombinant epidermal growth factor and 50 µg/ml bovine pituitary extract) as a monolayer to 95% confluency in a 96-well culture plate and incubated at 37 °C with 5% CO2. VK2/E6E7 cells were infected with 2 × 10^4 C. albicans SC5314 blastospores for 24 h. A total of 100 µl of supernatant was removed from each experiment, and LDH activity in this supernatant was determined by measuring the absorbance at 490 nm (OD 490 nm). LDH activity was calculated as the mean of at least three independent biological replicates.
Genetic Screen for VPA-Sensitive Mutants
A total of 2371 mutants from the transcription factor (Homann et al., 2009) (365 strains), transcriptional regulator (Vandeputte et al., 2012) (509 strains), kinase (Blankenship et al., 2010) (165 strains), and generalist (Noble et al., 2010) (1,332 strains) collections were screened for VPA sensitivity. These mutant libraries were obtained from the Fungal Genetics Stock Center (FGSC). With the exception of the kinase collection, where genes were disrupted by transposon insertions, mutants of the other collections were created through deletion of the complete ORF. In most cases, and for each gene, at least two independent transformants were screened. Mutant strains were grown overnight in SC at pH 4.5 in flat-bottom 96-well plates and were plated on SC-agar pH 4.5 medium with or without VPA (50 µg/ml) using a 96-well blot replicator. Mutants exhibiting more than a two-fold growth reduction based on colony diameter were compiled together in a 96-well plate and their sensitivity was confirmed against different concentrations of VPA (10, 50, and 100 µg/ml) following the same procedure. Mutant strains with established VPA sensitivity were individually reconfirmed by serial dilution spot assay. A complete listing of VPA-sensitive mutants is shown in Table S3. The overrepresentation of specific GO terms associated with the function of genes required for VPA tolerance was determined with GO Term Finder using a hypergeometric distribution with multiple-hypothesis correction (http://www.candidagenome.org/cgi-bin/GO/goTermFinder) (Inglis et al., 2012). Descriptions of gene function in Table S3 were extracted from the CGD (Candida Genome Database) database (Inglis et al., 2012).
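For context, this kind of GO-term enrichment reduces to a hypergeometric test: given the number of mutants screened, the number annotated to a term, and the number of VPA-sensitive hits carrying the annotation, the P-value is the probability of observing at least that overlap by chance. The sketch below illustrates the calculation with made-up counts (the real analysis in GO Term Finder also applies a multiple-hypothesis correction).

```python
from scipy.stats import hypergeom

# Hypothetical counts -- not the actual values from the VPA screen.
N_screened = 947   # unique mutants screened
K_annotated = 40   # mutants annotated to the GO term (e.g., vacuole transport)
n_hits = 55        # VPA-hypersensitive mutants
k_overlap = 11     # hits that carry the annotation

# P(X >= k) for X ~ Hypergeom(M=N_screened, n=K_annotated, N=n_hits).
p_value = hypergeom.sf(k_overlap - 1, N_screened, K_annotated, n_hits)
print(f"enrichment P-value = {p_value:.2e}")
```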
Confocal Microscopy and Vacuole Integrity
C. albicans vacuole integrity was assessed using the lipophilic vacuole membrane dye MDY-64 (Molecular Probes, Fisher Scientific) following the manufacturer's recommended procedure. Briefly, cells were grown overnight in liquid RPMI medium at pH 4.5 at 30 °C. Cells were pelleted, washed twice with fresh RPMI pH 4.5, and resuspended in the same medium at an OD 595 of 0.1. VPA was added at different concentrations (10, 50, and 100 µg/ml). Cells were incubated for 2 h at 30 °C under agitation. Aliquots were taken from VPA-treated and non-treated cultures, and MDY-64 was added at a final concentration of 10 µM. Cells were incubated at room temperature for 3 min prior to confocal microscopy visualization. Images were acquired with a 1.3-numerical-aperture (NA) 63x objective on a Leica DMI6000B inverted microscope connected to a Hamamatsu C9100-13 camera.
Pan1-green fluorescent protein (GFP), End3-GFP and LIFEACT-GFP (Epp et al., 2013) were visualized using confocal microscopy as follows: an overnight culture was diluted in SC supplemented with 10 or 50 µg/ml VPA to an OD 595 nm of 0.05 and grown for four generations at 30 °C under agitation. Cells were imaged as described for the vacuole staining experiments.
Antifungal Activity of VPA Is pH-Dependant
Antifungal activity of VPA against C. albicans was evaluated by monitoring the OD 595 nm of cultures exposed for 24 h to increasing concentrations of VPA in SC media at different pHs. VPA exerted an inhibitory effect that was exaggerated at acidic pH (Figure 1A). Antifungal activity of VPA was also assessed in other clinically relevant Candida species, including C. glabrata, C. tropicalis, C. parapsilosis, and C. krusei, in addition to the yeast S. cerevisiae. The obtained data demonstrate that VPA inhibited the growth of all tested fungal species, with C. albicans exhibiting the highest sensitivity (Figure 1B).
FIGURE 1 | In vitro antifungal activity of valproic acid is pH-dependant. (A) Effect of different pHs on the antifungal activity of VPA. The C. albicans SC5314 strain was grown in SC medium at different pHs (4.5-8) supplemented with different concentrations of VPA at 30 °C, and OD 595 nm readings were taken after 24 h of incubation. The OD measurement for each VPA concentration is the mean of triplicates. (B) VPA inhibits the growth of non-albicans Candida species. C. glabrata, C. parapsilosis, C. tropicalis, and C. krusei, in addition to S. cerevisiae, were grown in SC medium at pH 4.5 with different concentrations of VPA. OD 595 nm readings were taken after 24 h of incubation at 30 °C under agitation. (C) Time-kill curve demonstrating the fungistatic activity of VPA. The C. albicans SC5314 strain was exposed to two different concentrations (1,000 and 3,000 µg/ml) for different times (6, 24, and 48 h). CFUs were calculated as described in the Methods section.
To test whether VPA has fungistatic or fungicidal activity against C. albicans at acidic pH, time-kill curve assays were performed. Two high concentrations of VPA, corresponding to 125x (1,000 µg/ml) and 375x (3,000 µg/ml) the MIC for the C. albicans reference strain SC5314 (Table 1), were tested. VPA exhibited a concentration-independent fungistatic activity (Figure 1C). Lower VPA concentrations, ranging from 7.8 (the MIC for the SC5314 strain) to 500 µg/ml, were also tested and the obtained results demonstrate a similar fungistatic activity (results not shown).
Antifungal Activity of VPA against Azole- and Echinocandin-Resistant Strains
Since VPA was highly potent against C. albicans, we wanted to test whether its antifungal activity could be extended to other clinically sensitive and resistant strains of this yeast. Several azole-resistant strains with different resistance mechanisms were selected (Table S1), in addition to echinocandin-resistant isolates. A total of four sensitive and 11 resistant strains (six azole- and five echinocandin-resistant strains) were examined using the broth microdilution assay as specified by the CLSI at both neutral and acidic pHs. The sensitivity of C. albicans isolates to VPA was pH-dependant, and MICs ranged from 3.5 to 15.6 µg/ml for both resistant and susceptible strains (Table 1). The range of MICs was also similar when comparing azole-resistant and echinocandin-resistant clinical strains separately (3.5-15.6 µg/ml). Overall, these results demonstrate that VPA may be of use to tackle therapeutic limitations related to acquired clinical resistance of C. albicans. Furthermore, the comparable VPA sensitivity of susceptible and resistant strains indicates that the mechanisms that confer resistance to azoles and echinocandins are distinct from those that may cause VPA resistance.
Valproic Acid Attenuates Damage of Vaginal Epithelial Cells Caused by C. albicans
To verify whether VPA exerts a protective antifungal activity during host cell invasion, interaction assays of C. albicans with the human vaginal epithelial cell line VK2/E6E7 were performed as described in the Methods section. C. albicans-mediated damage of VK2/E6E7 cells was quantified based on LDH release. Two different concentrations of VPA (7.8 and 78 µg/ml), corresponding to the MIC and 10x the MIC for the C. albicans SC5314 strain, were used. In accordance with our in vitro data, VPA had no significant protective effect at pH 7 (Figure 2). At pH 5, VPA at 7.8 and 78 µg/ml prevented 55 and 100% of VK2/E6E7 damage, respectively, as compared to the control. Intermediate protective activity was observed at pH 6, where 28 and 52% damage reduction was obtained with 7.8 and 78 µg/ml of VPA, respectively. In support of the in vitro data, these results demonstrate that VPA confers a protective antifungal activity during the invasion of vaginal epithelial cells.
VPA Acts Synergistically with Terbinafine in Both Susceptible and Resistant Strains
Different standard antifungals used against C. albicans and other human fungal pathogens were screened to identify drugs that could potentiate the anti-Candida activity of VPA.
Interactions of VPA with 11 other antifungal agents, including azoles (Fluconazole, Voriconazole, Itraconazole, Clotrimazole, Terconazole, and Miconazole), polyenes (Amphotericin B and Nystatin), echinocandins (Caspofungin and Micafungin), and the allylamine Terbinafine, were tested. Based on the FIC index for the clinical strain SC5314, VPA was found to exhibit an apparent synergistic interaction with terbinafine.
VPA Inhibits Biofilm Formation in Both Susceptible and Resistant Strains
The effect of VPA on biofilm formation was evaluated using the metabolic colorimetric assay based on the reduction of XTT, at acidic and neutral pHs. At neutral pH, no anti-biofilm activity of VPA was noticed at any of the tested concentrations for the C. albicans SC5314 reference strain (not shown). In contrast, at pH 4.5, biofilm inhibition was apparent at 1.44 µg/ml of VPA, with ∼5% inhibition as compared to the control (Figure 3). The MIC of VPA on the SC5314 strain was evaluated at 7.2 µg/ml. The effect of VPA on biofilm formation was also tested in two azole-resistant strains (S2 and F5) with different resistance mechanisms, in addition to two echinocandin-resistant isolates (DPL-1008 and DPL-1010). As for the sensitive SC5314 strain, the four resistant strains exhibited a clear reduction in metabolic activity at 1.44 µg/ml of VPA (Figure 3). The MIC values for the azole-resistant strains were similar (2.88 µg/ml VPA) and slightly lower than that of the susceptible SC5314 strain. The two echinocandin-resistant strains DPL-1008 and DPL-1010 were highly sensitive to VPA as compared to the other strains, and their MIC was noticed at 1.44 µg/ml of VPA. These results demonstrate that, in addition to its antifungal activity on planktonic cells, VPA is also active against sessile forms of C. albicans at acidic pH.
Mutants Defective in Vacuolar Functions Are Hypersensitive to VPA
To gain insight into the mechanism of action underlying the antifungal property of VPA, comprehensive regulatory and generalist mutant collections of C. albicans were screened for their sensitivity to VPA. Among the 947 unique mutants that were screened, 55 were confirmed to be hypersensitive to VPA (Table S3). To identify the functional categories that are associated with mutations affecting VPA susceptibility, we performed gene ontology (GO) enrichment analysis. Our data demonstrated that VPA-sensitive mutants are defective in genes related primarily to vacuole transport (p = 1.72e-08) and organization (p = 8.86e-09) (Table 3, Table S4). These include mutants of vacuolar protein sorting genes (vps15, vps34, vps64, and ypt72), of proteins associated with the retromer complex (pep7 and pep8), and of proteins required for vacuole inheritance and organization (cla4, pep12, vam6, and vps41). The requirement of vacuolar functions for VPA tolerance was also reported previously in S. pombe and S. cerevisiae (Deranieh et al., 2015), where genome-wide screens demonstrated that the retromer complex and vacuolar ATPases, respectively, were associated with VPA sensitivity. Taken together, our chemogenetic screen provides a rationale for mechanistic investigation into the effect of VPA on the fungal vacuole.
VPA Alters Vacuole Morphology
Our chemogenetic screen demonstrated clearly that the sensitivity of C. albicans to VPA was exaggerated in mutants of vacuolar transport, organization, and inheritance. The requirement of intact vacuolar pathways for VPA tolerance suggests that VPA might alter the function and/or the integrity of the vacuole.
To verify this hypothesis, the integrity of C. albicans vacuoles was assessed using the vacuole membrane marker MDY-64 in cells treated or not with different concentrations of VPA at pH 4.5. A dominant fraction of non-treated cells internalized the MDY-64 dye and exhibited well-structured vacuoles with two to four compartments comprising discernible lumens (Figure 4A). However, cells treated with either 10 or 50 µg/ml of VPA displayed an altered vacuole structure with a foamy fluorescence pattern and indistinguishable lumens (Figure 4B). These findings suggest that VPA affects the morphology and the integrity of vacuoles in C. albicans.
DISCUSSION
Pathogenic Candida species are adapted to survive in different acidic environments inside their host, such as the vagina, inflammatory foci like abscesses (Park et al., 2012), and the phagolysosomes of neutrophils and macrophages (Erwig and Gow, 2016). Under such acidic conditions, several studies have demonstrated that the in vitro activity of standard antifungals is compromised, as evidenced by the increase of their MICs (Marr et al., 1999; Pai and Jones, 2004; Danby et al., 2012). In the current study, we demonstrated that the antifungal activity of VPA, a histone deacetylase inhibitor widely prescribed as an antipsychotic, is potentiated at an acidic pH that resembles that of the host niches cited above. We also demonstrated that VPA potentiates the antifungal activity of the widely prescribed terbinafine at acidic pH. In this regard, VPA, alone or with terbinafine, may be useful against fungal vaginosis caused primarily by C. albicans. VPA was also found to be effective against both echinocandin- and azole-resistant strains, suggesting that this molecule represents an alternative solution to circumvent VVC or recurrent VVC caused by C. albicans strains that are resistant to standard antifungals. In the current study, VPA was also potent against C. albicans biofilms, in a similar fashion as for planktonic cells and for both sensitive and resistant clinical strains. As with vaginal bacterial pathogens, C. albicans is able to form infective biotic biofilms on vaginal mucosal surfaces (Harriott et al., 2010). Because biofilm growth is impervious to all conventional antifungals, and since the efficiency of these drugs is compromised at acidic pH, VPA may thus represent a promising alternative for anti-biofilm therapy.
Importantly, this work supports a direct clinical repurposing of VPA as an antifungal against VVC or recurrent VVC, given that its safety profile has been extensively characterized in vivo over the past decades of its clinical use in systemic form as an anticonvulsant (Lagace et al., 2004) or anticancer agent (Gupta et al., 2013). VPA also has a broad therapeutic safety margin when used topically (Choi et al., 2013). It does not cause skin irritations such as erythema and edema and has no toxicity to different human cells, including keratinocytes, fibroblasts, and mast cells (Choi et al., 2013). In the current work, we also found that VPA did not impair the growth or the integrity of the vaginal epithelial cells VK2/E6E7, as judged by the LDH cytotoxicity assay (Figure S2). While a whole-animal vaginal model is required to confirm that VPA does not cause vaginal irritation, the aforementioned studies are supportive of a safe topical use of VPA against VVC.
It is intriguing that the antifungal activity of VPA was acidic pH-dependant. This could be explained by the chemical nature of VPA, which is an eight-carbon branched-chain acid with the properties of a weak acid (pKa 4.8). Low pH is expected to decrease its ionization state and increase its liposolubility, which in turn may facilitate its passage through the plasma membrane and its accumulation in the cells. A future structure-guided medicinal chemistry approach, introducing structural changes in VPA that lead to beneficial biological activity in a pH-independent manner, would allow the potential use of this molecule to be expanded from VVC and recurrent VVC to the treatment of oral C. albicans infections and even systemic candidiasis.
In the current study, we undertook a chemogenetic screen to delineate the biological processes that underlie VPA sensitivity in C. albicans. This screen enabled the identification of different vacuole-related functions as being required to tolerate VPA and thus provided a rationale to examine the effect of this molecule on the fungal vacuole. Our data demonstrate clearly that the antifungal activity of VPA is a consequence of the impairment of vacuole integrity and thus illuminate a previously unappreciated mechanism of action of this drug. Recent work in S. cerevisiae indicates that cellular depletion of inositol by VPA disrupts the homeostasis of the vacuolar phosphoinositide PI3,5P2, which compromises V-ATPase activity and proton pumping (Deranieh et al., 2015). This V-ATPase phenotype was rescued by supplementing the growth medium with inositol. Despite the requirement of V-ATPases to tolerate VPA in S. cerevisiae, the authors did not report any alteration of vacuolar morphology by VPA as seen in our investigation. Furthermore, the vacuole defects in C. albicans were not rescued by adding inositol to the growth medium, suggesting that VPA may act via a different mechanism in this pathogenic yeast. Similarly, in S. pombe, genetic screens revealed that mutants of genes operating in Golgi-endosome membrane trafficking and the vacuolar retromer complex were hypersensitive to VPA (Miyatake et al., 2007; Ma et al., 2010; Zhang et al., 2013); however, no apparent alteration of the vacuole was seen in this yeast model.
Regardless of the exact vacuolar process that is targeted by VPA, our study reinforces the fact that pharmacological perturbation of the vacuole leads to fungal growth inhibition and is protective for host cells. Different C. albicans vacuolar proteins have been previously characterized and linked to the ability to infect the host and to control different virulence traits, including biofilm formation, filamentation, and resistance to antifungals. These include, for instance, vacuolar membrane and cytosolic V-ATPases (Vma2, Vma3, and Vph1) (Patenaude et al., 2013; Rane et al., 2013, 2014), proteins mediating vesicular trafficking to the vacuole (Pep12, Vps11, and Vps21) (Palmer et al., 2005; Johnston et al., 2009; Palanisamy et al., 2010; Wachtler et al., 2011), and the vacuolar calcium channel Yvc1 (Wang et al., 2011). This makes the vacuole an ideal therapeutic target for the management of fungal infections. However, the functional resemblance of fungal vacuoles to their human counterpart organelle, the lysosome, raises uncertainty regarding their druggability. Indeed, while the two V-ATPase inhibitors bafilomycin A1 and concanamycin A from Streptomyces exhibit a potent activity against C. albicans, they also compromise the activity of the mammalian V-ATPases (Olsen, 2014). Meanwhile, fungal vacuoles have distinctive proteins, such as the V0-ATPase subunit with no apparent human homolog, that could be specifically targeted for pharmacological intervention in the treatment of fungal infections. In this regard, we demonstrate that VPA had no cytotoxicity on vaginal epithelial cells at concentrations above 10 times the MIC for C. albicans, suggesting that VPA-mediated vacuole alteration is fungus-specific (Figure S2).
In conclusion, we have shown that VPA is a potent antifungal at acidic pH and consequently an attractive therapeutic molecule against vulvovaginal candidiasis. We have also described an unreported effect of VPA on the structural integrity of fungal vacuoles which might be the main cause of its cytotoxicity.
AUTHOR CONTRIBUTIONS
AS designed the experiments; JC, FT, CG performed the experiments; JC, FT, CG, HW, RP, and AS analyzed the data; AS and JC wrote the manuscript with the help of all authors.
Crystal Structure and Characterization of the Dinuclear Cd(II) Complex [Cd(H2O)2(ο-HOC6H4COO)2]2
The structure of a new binuclear cadmium(II) complex, [Cd(H2O)2(Sal)2]2 (Sal = salicylate), has been determined by X-ray crystallography. It was also characterized by elemental analysis, its IR spectrum, and thermogravimetry-differential scanning calorimetry (TG-DSC). It crystallizes in the monoclinic system, space group P2(1)/c, with lattice parameters a = 15.742(3) Å, b = 12.451(3) Å, c = 7.7225(15) Å, β = 96.07(3)° and Z = 4. The two cadmium(II) ions are bridged by two μ2-carboxyl oxygen atoms. Each cadmium atom lies in a distorted capped-octahedron coordination geometry. The thermogravimetry (TG) data indicate that there are four discrete decomposition steps, with two endothermic peaks and one exothermic peak. The final thermal decomposition product is CdO.
Introduction
Cd2+ ions have been found to induce various pathological conditions, such as cardiovascular diseases [1], hypertension and cancer [2]. Cadmium can replace the zinc of superoxide dismutase (SOD), which results in a drop in the biological activity of SOD [3]. In addition, cadmium has some stress-causing activity towards enzymes involved in the tolerance mechanisms of crops [4]. Cadmium(II), being a d10 ion, provides few spectroscopic signatures for structure monitoring; however, the structure of a cadmium complex can be elucidated by X-ray crystallography. In this paper, we report the synthesis and crystal structure of the title compound. Its elemental analysis, FT-IR spectrum, and thermal analysis have also been investigated.
IR Spectra
In the IR spectra, the bands around 3418-3548 cm-1 can be attributed to O-H stretching modes, which are consistent with the presence of water in the crystal [12]. The characteristic absorption bands at 1617 and 1637 cm-1 are the salicylate antisymmetric carboxyl vibrations, while the symmetric carboxyl stretching frequency occurs at 1429 cm-1 [13,14].
Thermal Analysis
The TG/DSC curves of the title compound are presented in Figure 3. They show four discrete weight-loss steps, with the decomposition events mainly taking place at 66.9 °C, 172.3 °C and 512.5 °C, accompanied by two heat-absorption peaks and one heat-liberation peak. On the basis of the weight changes, the first endothermic weight-loss process (8.90%) corresponds to the loss of two water molecules (found 8.90%, calc. 8.52%); the second weight-loss event, an exothermic process, may be related to the loss of one phenol (PhOH) molecule with the breakage of C-C bonds (found: 22.43%, calc. 22.00%). The weight loss of 3.58% from 240 °C to 370 °C is attributed to the loss of one oxygen atom (found: 3.58%, calc. 3.79%). There is a broad endothermic peak at 512.5 °C and ca. 33.93% weight loss in the TG curve between 380 and 540 °C, which is attributed to the loss of the phenol groups, while the residual weight of 31.16% suggests that the residue may be CdO (found 31.16%, calcd. 30.38%). In the temperature range 540-730 °C there is about an 18.21% weight loss, and the corresponding transition may be attributed to the sublimation of part of the CdO [15].

A summary of the key crystallographic information is given in Table 1. The selected crystal of [Cd(H2O)2(ο-HOC6H4COO)2]2 was mounted on a Rigaku Raxis-IV diffractometer using Mo-Kα radiation (λ = 0.71073 Å, T = 293 K) with a graphite monochromator. Intensities were corrected for Lorentz and polarization effects and for empirical absorption, and the data reduction was performed using the SADABS program [16]. The structure was solved by direct methods and all non-hydrogen atoms were refined anisotropically on F2 by the full-matrix least-squares method [17]. The hydrogen atom positions were fixed geometrically at calculated distances and allowed to ride on the parent carbon atoms. The molecular graphics were plotted using SHELXTL [17]. Atomic scattering factors and anomalous dispersion corrections were taken from the International Tables for X-ray Crystallography [18].
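As a cross-check on these assignments, the fragment percentages follow directly from the formula weight of the monomeric unit Cd(H2O)2(ο-HOC6H4COO)2; the short sketch below reproduces the calculated values for the loss of two water molecules and for the CdO residue using approximate atomic masses.

```python
# Approximate atomic masses (g/mol).
MASS = {"Cd": 112.41, "O": 16.00, "H": 1.008, "C": 12.011}

def formula_mass(counts):
    """Molar mass of a fragment given as {element: count}."""
    return sum(MASS[el] * n for el, n in counts.items())

# Monomeric unit Cd(H2O)2(C7H5O3)2 = CdC14H14O8.
unit = formula_mass({"Cd": 1, "C": 14, "H": 14, "O": 8})

water_loss = 100 * formula_mass({"H": 4, "O": 2}) / unit    # two H2O
cdo_residue = 100 * formula_mass({"Cd": 1, "O": 1}) / unit  # final CdO

print(f"calc. loss of 2 H2O: {water_loss:.2f}%")   # ~8.5%
print(f"calc. CdO residue:  {cdo_residue:.2f}%")   # ~30.4%
```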
Figure 2: Packing diagram of the unit cell along the b axis.
Figure 3: TG/DSC curves of the title compound.
Divergence of trafficking and polarization mechanisms for PIN auxin transporters during land plant evolution
The phytohormone auxin, and its directional transport through tissues, plays a fundamental role in the development of higher plants. This polar auxin transport predominantly relies on PIN-FORMED (PIN) auxin exporters. Hence, PIN polarization is crucial for development, but its evolution during the rise of morphological complexity in land plants remains unclear. Here, we performed a cross-species investigation by observing the trafficking and localization of endogenous and exogenous PINs in two bryophytes, Physcomitrium patens and Marchantia polymorpha, and in the flowering plant Arabidopsis thaliana. We confirmed that the GFP fusion did not compromise the auxin export function of all examined PINs by using a radioactive auxin export assay and by observing the phenotypic changes in transgenic bryophytes. Endogenous PINs polarize to filamentous apices, while exogenous Arabidopsis PINs distribute symmetrically on the membrane in both bryophytes. In the Arabidopsis root epidermis, bryophytic PINs have no defined polarity. Pharmacological interference revealed a strong cytoskeletal dependence of bryophytic but not Arabidopsis PIN polarization. The divergence of PIN polarization and trafficking is also observed within the bryophyte clade and between tissues of individual species. These results collectively reveal the divergence of PIN trafficking and polarity mechanisms throughout land plant evolution and the co-evolution of PIN sequence-based and cell-based polarity mechanisms.
INTRODUCTION
Auxin is a crucial regulator of polarity and morphogenesis in land plants (Mockaitis and Estelle, 2008; Smit and Weijers, 2015; Kato et al., 2018; Leyser, 2018; Yu et al., 2022). The auxin gradients and local maxima within tissues coordinate a broad spectrum of plant development, ranging from embryogenesis to organ formation and tropisms (Vanneste and Friml, 2009; Friml, 2021). The establishment of the auxin gradient relies predominantly on directional auxin transport driven by the PIN-FORMED (PIN) efflux carriers. In different tissue types, specific PINs are polarized at different plasma membrane (PM) domains, directly driving the directionality of auxin flow (Adamowski and Friml, 2015). Therefore, given the essential impact of auxin flow in various developmental processes, the function and polarization of PIN proteins are crucial for maintaining the correct pattern of plant growth and patterning (Sauer and Kleine-Vehn, 2019).
PINs are found in all land plants and can be traced back to charophytic green algae (Viaene et al., 2013; Skokan et al., 2019). Functional conservation of PINs in auxin transport has also been demonstrated by exogenously expressing PINs from the green alga Klebsormidium flaccidum, the moss Physcomitrium patens, and the angiosperm Arabidopsis thaliana in transgenic plants and in heterologous systems (Zourelidou et al., 2014; Skokan et al., 2019). Additionally, when exogenous PINs from charophytes or Arabidopsis are overexpressed in P. patens, the transgenic plants show growth inhibition that resembles auxin deprivation (Viaene et al., 2014; Lavy et al., 2016; Tao and Estelle, 2018; Skokan et al., 2019). These observations support the hypothesis that PIN-mediated polar auxin transport has been governing plant development since the emergence of land plants.
PIN polarity regulation has been investigated extensively in the angiosperm Arabidopsis. Canonical PINs contain a long central hydrophilic loop (HL) between two transmembrane domains and are delivered to the PM via the endoplasmic reticulum-Golgi apparatus vesicle trafficking pathway. Depolymerization of actin filaments induces accumulation of PIN-labeled small intracellular puncta near the PM but with no apparent PIN polarity defect (Geldner et al., 2001; Glanc et al., 2018, 2019). Microtubules are involved in the cytokinetic trafficking of PINs but are not required for polarity establishment or maintenance at the PM of non-dividing cells (Geldner et al., 2001; Kleine-Vehn et al., 2008b; Glanc et al., 2019). Notably, disruption of both cytoskeletal networks delays, but does not abolish, AtPIN2 polarization in Arabidopsis epidermal cells (Glanc et al., 2019). This suggests that the cytoskeletal networks participate in but are not strictly essential for PIN polar trafficking, whereas other mechanisms contribute to PIN polar localization. PINs are known to undergo constitutive cycles of endocytosis and recycling, which is modulated by auxin itself (Narasimhan et al., 2020, 2021). This process is essential for their polar distribution (Kleine-Vehn et al., 2011; Doyle et al., 2015).
Phosphorylation of specific sites within the HL region is a critical determinant for the apical-basal polarization pattern of PINs in Arabidopsis epidermal cells. A serine/threonine kinase, PINOID, phosphorylates specific sites on AtPIN2 and leads to its apical localization (Friml et al., 2004). In contrast, when phosphatase 2A dephosphorylates AtPIN2, it counteracts PINOID-dependent phosphorylation and guides the delivery of AtPIN2 to the basal domain of epidermal cells (Michniewicz et al., 2007). The phosphorylation sites targeted by different kinase families are crucial for polar localization and PIN function, and most sites are highly conserved within canonical Arabidopsis PINs (Zwiewka et al., 2019). Because PINs are present in all land plants, one can hypothesize that phosphorylation-based polarity regulation may have been established since the emergence of early land plants. However, it has never been demonstrated that these phosphorylation sites are evolutionarily conserved in early land plants.
PIN polarization has been observed in the moss P. patens, which grows as filamentous protonemata. PpPINA-GFP exhibits polar localization at the tip of protonema cells, but polar localization of PpPINA-GFP is not always conserved in other species (Bennett et al., 2014; Viaene et al., 2014). When PpPINA-GFP is expressed in Arabidopsis root epidermal cells, where AtPIN2-GFP exhibits clear apical localization, PpPINA-GFP localizes to basal and apical sites (Zhang et al., 2019). Furthermore, PINs from the liverwort Marchantia polymorpha and from the green alga K. flaccidum are also mislocalized in root epidermal cells in Arabidopsis (Zhang et al., 2019). This distinctive PIN localization pattern in different species suggests that mechanisms of PIN trafficking and polarization may have diversified after the emergence of land plants. Despite the profound significance of PIN polarization, and the resulting directional auxin transport, for land plant development, current knowledge of PIN trafficking and polarization mechanisms is mainly derived from the angiosperm model A. thaliana.
In this study, we investigated PIN trafficking/polarization mechanisms from an evolutionary perspective. We show that canonical PINs from the bryophytes P. patens and M. polymorpha and the land plant Arabidopsis exhibit high conservation in their transmembrane domains and phosphorylation sites. Endogenous PIN-GFP shows different localization patterns in various developmental contexts, suggesting tissue-specific PIN polarization mechanisms. A cross-species investigation revealed that exogenous PINs can traffic to the PM but fail to enrich at polar domains, unveiling species-specific mechanisms for PIN polarization. This notion was verified by a different dependency on the cytoskeleton for polarization of Arabidopsis PINs and bryophytic PINs. Overall, our results highlight that PIN trafficking and polarization mechanisms underwent complex evolution during the gradual rise of morphological complexity in land plants.
Phosphorylation sites are highly conserved between bryophytic and Arabidopsis PINs
We performed a phylogenetic analysis to better understand the extent of conservation between bryophytic and Arabidopsis PINs. The coding sequences of the single canonical PIN MpPINZ from M. polymorpha, three canonical PINs (PpPINA-PpPINC) from P. patens, and five canonical PINs (AtPIN1-AtPIN4 and AtPIN7) from Arabidopsis were aligned using MEGA X (Kumar et al., 2018). Bryophytic PINs cluster together, and MpPINZ is more closely related to Arabidopsis PINs than the PpPINs (Figure 1A). Arabidopsis AtPIN1 and AtPIN2 exhibit a polar localization pattern at the PM that plays a crucial role in embryogenesis, organ formation, and tropic growth (Krecek et al., 2009; Omelyanchuk et al., 2016). The phylogenetic tree shows that AtPIN1 and AtPIN2 are equally close to bryophytic PINs, and the polarized localization pattern of AtPIN2 has been studied extensively in roots (Abas et al., 2006; Kleine-Vehn et al., 2008a; Glanc et al., 2018). Therefore, we used AtPIN2 as our reference for further alignment analyses. The identity index of coding amino acid sequences showed that full-length AtPIN2 shares around 50% identity with each bryophytic PIN (Figure 1B). We suspected that the central HLs would be more divergent, since the transmembrane domains show high similarity between all examined PINs. Surprisingly, the HL region of AtPIN2 shared over 40% identity with the HL region of MpPINZ (45%), PpPINA (42%), and PpPINB (43%) (Figure 1B). The identity indices for full-length AtPIN1 and its HL showed similar results as AtPIN2 (Supplemental Figure 1).
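As a rough illustration of how such pairwise identity indices can be derived once sequences are aligned, the sketch below computes percent identity between two pre-aligned fragments; the sequences shown are toy placeholders, not the actual PIN alignments produced with MEGA X.

```python
def percent_identity(aligned_a: str, aligned_b: str) -> float:
    """Percent identity over aligned columns, ignoring positions where
    both sequences have a gap."""
    if len(aligned_a) != len(aligned_b):
        raise ValueError("aligned sequences must have equal length")
    matches = compared = 0
    for a, b in zip(aligned_a, aligned_b):
        if a == "-" and b == "-":
            continue  # skip columns that are gaps in both sequences
        compared += 1
        if a == b and a != "-":
            matches += 1
    return 100.0 * matches / compared if compared else 0.0

# Toy example with gapped, pre-aligned fragments (not real PIN sequences).
frag_a = "MITGKDMYDVLAAM-SG"
frag_b = "MITAKDLYDV-AAMQSG"
print(f"identity: {percent_identity(frag_a, frag_b):.1f}%")
```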
The overview of coding sequences for all examined PINs revealed highly conserved transmembrane domains at the N-terminus and C-terminus, connected by a less conserved HL region (Figure 1C). Because polarization of AtPIN2 is tightly associated with the phosphorylation status of the HL region, we used AtPIN2 as a reference to search for and highlight these experimentally identified phosphorylation sites (Sukumar et al., 2009; Zhang et al., 2010; Barbosa et al., 2018). Compared with the transmembrane domains, and despite the relatively lower conservation of the HLs, the four identified phosphorylation sites are fully conserved between Arabidopsis PINs and bryophytic PINs (Figure 1D), which suggests that PIN phosphorylation might be evolutionarily conserved to regulate the intracellular localization of PIN proteins.
The HL regions in AtPIN2, PpPINA, and MpPINZ are less conserved
We used Alphafold2 to predict the structures of Arabidopsis and bryophytic PINs and performed structural alignments to assess their overall structural conservation. These structures closely resembled the crystal structures of AtPIN1 and AtPIN8 (Supplemental Figure 2A and 2B; Jumper et al., 2021; Ung et al., 2022; Yang et al., 2022), which suggested that the Alphafold2 structure predictions were reliable. We compared these structures to the well-characterized and polarized AtPIN1 and AtPIN2 using Alphafold2 for structure prediction and ChimeraX for structure alignments (Jumper et al., 2021; Pettersen et al., 2021). The structures of the transmembrane domains were highly conserved, and the HL regions shared similar folds (Figure 2A, black arrowheads) except for two additional loops in AtPIN2 (Figure 2A, white arrowheads). We next aligned the structures of AtPIN2, PpPINA, and MpPINZ. The predicted transmembrane domains of these three PINs had very high confidence scores in Alphafold2 and were highly conserved, with nearly perfect alignment. However, the HL regions had very low confidence scores and were less conserved, with only one loop sharing partial similarity (Figure 2B and Supplemental Figure 2C, black arrowheads). Conserved phosphorylation sites can be observed by rotating the aligned protein structures presented in Figure 2C-2E. In general, bryophytic PINs possess looser and larger loops compared with AtPIN2 (Figure 2B-2E). The predicted structures are shown individually with the annotated phosphorylation sites indicated in Figure 1D (Figure 2C-2E). The structural conservation of the transmembrane domains implies that bryophytic PINs may traffic to the PM as Arabidopsis PINs do, whereas the loose loops with conserved phosphorylation sites suggest that their polarization pattern may be different.
GFP-fused PIN proteins possess auxin export activity
The sequence and structural analyses of Arabidopsis and bryophytic PINs revealed high conservation in sequence, phosphorylation sites, and structure of the transmembrane domains. We next investigated whether bryophytic PINs are delivered to the PM with polar domain enrichment as Arabidopsis PINs are. We determined the localization pattern of the PIN proteins by fusing GFP to AtPIN1, AtPIN2, PpPINA, and MpPINZ as shown in Figure 3A (Zhang et al., 2019). Auxin export mediated by these PIN-GFP fusions was assessed by subcloning each fusion gene into a moss vector and driving expression with an inducible XVE promoter (Kubo et al., 2013). We generated transgenic moss plants expressing single XVE::PIN-GFP transgenes and verified their genotypes (Supplemental Figure 3). We used these transgenic lines to perform the auxin export assay. In brief, overexpression of PIN-GFPs was induced by β-estradiol for 3 days, followed by radioactive auxin (³H-IAA) treatment for 24 h. The radioactive tissues were then washed twice and incubated in fresh growth medium for another 24 h. The culture medium was collected for ³H scintillation detection (Lewis and Muday, 2009). Wild-type (WT) moss plants were used as an internal control to show basal exportation of ³H-IAA by endogenous PpPINs. In comparison with the WT, all examined PIN-GFP plants showed a higher amount of radioactive auxin in the culture medium, indicating their auxin export activity (Figure 3B).
The function of PIN-GFPs was also confirmed by the growth changes caused by PIN-GFP overexpression in P. patens and M. polymorpha.During early development, P. patens gradually transits its filamentous protonemata from thicker/shorter chloronema cells with perpendicular division planes to thinner/longer caulonema cells with oblique division planes (Rensing et al., 2020).
Here we showed that overexpression of PpPINA-, AtPIN1-, and AtPIN2-GFP led to similar defects in this chloronema-caulonema transition, with a shorter length of the subapical cell and a larger division angle (Figure 3C-3E).Wild-type M. polymorpha has prostrate thalli.However, with overexpression of either MpPINZ-GFP or AtPIN-GFPs (genotypes are confirmed in Supplemental Figure 3), the thallus grew more vertically, as apparent from the side view (Figure 3F), phenocopying the auxin-deficient phenotype (Kato et al., 2017).The angles between the thallus and the horizontal agar were measured to quantify vertical growth (Figure 3G).Overexpression of MpPINZ-GFP caused the most striking phenotype, but overexpression of AtPIN-GFPs also resulted in vertical growth that showed significant differences from the WT.
Our results show that all PIN-GFPs can export auxin at least in the moss system and that overexpression of PIN-GFPs causes phenotypic changes in P. patens and M. polymorpha.
Endogenous PINs exhibit different localization patterns in different tissue types
We utilized a stable moss transgenic line expressing the PpPINA genomic DNA-GFP fusion under its native promoter (pPINA:: PpPINA-GFP) to observe PpPINA-GFP localization in P. patens.The moss P. patens has a filamentous protonema stage and a leafy gametophore stage in its life cycle.The initial protonema cell was regenerated from a detached leaf, and the elongated protonema was imaged in a six-day-old moss colony.PpPINA-GFP localized at the PM of the protonema tip with a clear polarity in both initial and elongated protonemata (Figure 4A).This polarization pattern also appeared in chloronema and caulonema cells.To determine if polar localization of PpPINA-GFP occurs in complex tissues composed of multiple cell layers, we observed its localization in gametophytic leaves.Near the tip of gametophytic leaves, PpPINA-GFP showed clear basal-apical polarization along the leaf axis with notable corner enrichment (Figure 4A) and was evenly distributed on the PM near the base of the leaves (Supplemental Figure 4).
We extended our analysis to PpPINB fused to GFP and driven by its endogenous promoter.Notably, PpPINB-GFP did not show visible polarity at the tip of protonema but had a more even PM distribution with an increased intracellular signal (Supplemental Figure 5).This distribution pattern of PpPINB-GFP differs from a previous study and could be due to reduced expression of our PpPINB-GFP construct since it was driven by the native PpPINB promoter as opposed to overexpression in the previous study (Viaene et al., 2014).The differences between the localization of PpPINA-GFP and PpPINB-GFP suggest that these PINs may be recruited by distinct polarization pathways or have different regulation in the same cell.
The divergence of PpPINA-GFP polarity in filamentous protonema cells and in gametophytic leaves made us wonder whether this tissue-specific polarization of PINs is conserved in other bryophytes.To determine this, we examined the bryophyte and liverwort M. polymorpha, which produces gemmae as the asexual reproductive progenies that consist of multiple cell layers.After water imbibition, single-cell rhizoids emerge from the large rhizoid precursor cells on the epidermis of a gemma (Shimamura, 2016).We generated a p35S::MpPINZ-GFP transgenic line to determine the subcellular localization of the sole canonical PIN in M. polymorpha.Interestingly, MpPINZ-GFP localized on the PM with small intracellular puncta and no apparent polarity in all gemma epidermal cells (Figure 4B, right panel).However, when gemmae were stimulated to grow rhizoids, MpPINZ-GFP accumulated at the protrusion site of emerging rhizoids in the rhizoid precursor cells (Figure 4B, yellow arrowheads).MpPINZ-GFP accumulation was lost in most young and elongated rhizoids shortly after their emergence.However, around 5%-10% of the observed young rhizoids had weak MpPINZ-GFP accumulation at their tips and a polar localization pattern (Figure 4B and Supplemental Figure 6).
The localization pattern of PpPINA-and MpPINZ-GFP in filamentous and complex tissues suggests distinct polarity recognition mechanisms for bryophytic PINs in different tissue types.
To examine whether Arabidopsis PINs show similar polar patterns as bryophytic PINs in different types of tissues, we expressed pPIN2::AtPIN2-GFP in filamentous root hairs and complex epidermal cells.AtPIN2-GFP exhibited apical localization in epidermal cells, which is consistent with previous observations (Figure 4C; Zhang et al., 2019).We next observed AtPIN2-GFP in initial and elongated root hairs to see if localization was consistent with the polar localization of bryophytic PINs at the tip of filamentous tissues.AtPIN2-GFP exhibited polar localization at the tip of initial root hairs (Figure 4C).However, the polarity of AtPIN2-GFP signals diminished in elongated hairs (Figure 4C).The different polarity patterns of PINs in different types of tissue support the notion that regulation of PIN polarization is specialized in different cellular profiles and developmental contexts.
Exogenous PINs are localized to the PM with no defined polarity

Because bryophytic and Arabidopsis PINs demonstrated tissue- and development-specific polarity regulation, we wondered if conserved phosphorylation sites in the HL region are sufficient to drive polarization of exogenous PINs in other species. We utilized the same moss XVE::PIN-GFP transgenic lines generated for the auxin export assay to compare PpPINA-, AtPIN1-, and AtPIN2-GFP. In protonemata, AtPIN1-GFP was evenly distributed on the PM and cell division plane with no polarity at the tips and displayed numerous small intracellular puncta, whereas PpPINA-GFP had the same polarization pattern as when it was driven by its endogenous promoter (Figures 4A, 5A, and 5D).
The signal from AtPIN2-GFP under the same induction condition had a much lower intensity but resembled AtPIN1-GFP localization patterns, so we used AtPIN1-GFP localization for our representative images (Supplemental Figure 7).Short-term weak induction of XVE::AtPIN1-GFP was performed, and it showed no difference in localization, which verified that the localization patterns of AtPIN1-GFP were not caused by overexpression (Supplemental Figure 7; Supplemental Video 1).
To investigate whether these features are conserved in bryophytes, we expressed AtPIN1-GFP using a 35S promoter in M. polymorpha. AtPIN1-GFP exhibited non-polar PM localization with high cytosolic signals in gemma epidermal cells (Supplemental Figure 8A). No visible polar localization was observed in rhizoid precursor cells of emerging rhizoids (Supplemental Figure 8B). MpPINZ-GFP exhibited clear tip polarization in young rhizoids, whereas AtPIN1-GFP was evenly distributed on the PM with homogeneous cytosolic signals (Figure 5B and 5D). To determine if the polar localization pattern of MpPINZ-GFP and the apolar localization pattern of AtPIN1-GFP in M. polymorpha were due to overexpression, we expressed each gene using the endogenous MpPINZ promoter. The fluorescent signal in rhizoids was much weaker, but the localization patterns were consistent for both proteins with either promoter driving expression (Supplemental Figure 9).
Our results indicated that AtPIN1-GFP is delivered to the PM with no visible polarity when expressed in bryophytes. We next wanted to know if Arabidopsis trafficking machinery can polarize bryophytic PINs. To analyze bryophytic PINs, we observed protein localization in the epidermal cells of transgenic Arabidopsis lines expressing pPIN2::AtPIN2-, PpPINA-, and MpPINZ-GFP (Zhang et al., 2019). AtPIN2-GFP exhibited apical localization in epidermal cells, PpPINA-GFP localized at the apical and basal sides, and MpPINZ-GFP mainly localized at the basal side with some lateral localization (Figure 5C and 5E; Zhang et al., 2019). These data suggest that the Arabidopsis trafficking machinery drives PIN proteins to the PM through a generally conserved cellular trafficking pathway, whereas PIN polarization is specialized in different species.
Cytoskeletal networks are important for the polarization of bryophytic PINs
The diversification of PIN polarities in different plant species and tissues suggests that plant cells might utilize distinct machineries to deliver and maintain PIN proteins to the target side on the PM.Cytoskeletal networks guide directional vesicle trafficking in all eukaryotes and play critical roles in the maintenance and establishment of cell polarity in animal cells (Li and Gundersen, 2008).To verify the necessity of the cytoskeletal networks in the polarization of bryophytic PINs and AtPIN2 in their native species, we depolymerized actin filaments or microtubules by treating plant tissues with latrunculin B (LatB) or oryzalin (Ory), respectively.In P. patens, the disruption of actin filaments resulted in the hyperpolarization of PpPINA-GFP, which accumulated at a focal locus at the very tip of the cell (Figure 6A and 6D).Disruption of microtubules resulted in less accumulation of PpPINA-GFP at the tip of protonemata, and PpPINA-GFP appeared to be detached from the PM (Figure 6A).Changes in PpPINA-GFP localization in response to drug treatment demonstrated a requirement for cytoskeletal networks for polarization in filamentous tissues.
We used the same pharmacological interference to investigate whether MpPINZ-GFP polarization in young rhizoids of M. polymorpha relies on the cytoskeletal network.MpPINZ-GFP remained polarized at the tip when actin filaments were disrupted, whereas disruption of microtubules resulted in dislocation of the polarized MpPINZ-GFP (Figure 6B and 6E).These results collectively demonstrate the diversification of cytoskeletal dependency for PIN polarization within the bryophyte clade.
AtPIN2-GFP localization was monitored in the initial root hairs of Arabidopsis to determine if the cytoskeletal network was required for PIN polarization during tip cell growth. Despite attenuation of the AtPIN2-GFP signal at the tip of initial root hair cells, disruption of either actin filaments or microtubules had a minor effect on AtPIN2-GFP polarization (Figure 6C and 6F). Upon cytoskeleton disruption, apical polarization of AtPIN2-GFP in root hair cells was still evident, suggesting that, in contrast to the dependency of the bryophytic PINs on cytoskeletal networks, polarization of Arabidopsis PINs mainly relies on other trafficking or polarity retention mechanisms. These results reinforce the notion that the mechanisms underlying PIN polarization have diversified between bryophytes and vascular plants.
Evolution of sequence-specific determinants of PIN polarity
The flowering plant Arabidopsis has five PM-localized canonical PINs that exhibit different polarities in different developmental and tissue contexts.These differences in polarity help mediate directional auxin fluxes and generate asymmetric auxin distribution for a plethora of developmental processes, which ultimately shape the plant form.The PIN family originated from a single PIN auxin transporter, such as those found in simple filamentous streptophyte algae, but radiated during evolution into PINs with different expression and localization patterns that mediate diverse developmental and physiological processes (Skokan et al., 2019).
Ectopic co-expression of PINs in the same cell type can result in different polarity patterns. For example, AtPIN1 exhibits a basal pattern and AtPIN2 exhibits an apical pattern in root epidermal cells. This demonstrates the simultaneous presence of multiple polarity mechanisms in those cells (Wisniewska et al., 2006). Our results demonstrated that PpPINA and PpPINB endogenously expressed in the protonemata filaments of P. patens present different localization patterns (Figure 4 and Supplemental Figure 4). PpPINA exhibits tip-focused PM localization, whereas PpPINB can be found more spread out on the PM and in the cytoplasm. This polar localization pattern is thought to drive polar auxin transport from the base of the colony toward the tip of filamentous cells and plays an essential role in P. patens development (Thelander et al., 2018). Although the functional importance of this difference remains unclear, it demonstrates that parallel polarity/trafficking mechanisms exist in bryophytes in the same cells, to which different PINs can be recruited. This is presumably based on specific sequence-based signals.
Notably, the sequences of the HL regions of PpPINA and PpPINB are 90.31% identical, which suggests that these signals are encoded within the divergent 10%. Further analysis of the differences between PpPINA and PpPINB or AtPIN1 and AtPIN2 would help to identify the sequence signals required for PIN polarity regulation. Although the identity of the sequence-based signals remains unclear, our observations show that cellular polarity mechanisms and PIN sequence-based polarity signals, which are crucial for diverse developmental roles in flowering plants, began diversifying in bryophytes.
Context-specific determinants of PIN polarity
It is well-known from Arabidopsis that the same PINs show different localization patterns in different contexts (Vieten et al., 2005).For example, AtPIN2 exhibits an apical localization pattern in epidermal cells, while it is localized on the basal side of young cortex cells (Kleine-Vehn et al., 2008a).The observations imply that different cell types possess specific trafficking pathways for the same PIN protein.In line with this, our results show that endogenous PIN proteins are polarized at the tip of apical cells in filamentous cells (e.g.protonemata in P. patens, rhizoids in M. polymorpha, and root hairs in Arabidopsis) (Figure 4A-4C).Notably, MpPINZ-GFP and AtPIN2-GFP signals were diminished when the rhizoids or root hairs elongated.These data collectively demonstrate that PIN polarity is differentially regulated in different developmental contexts.
In complex tissues with multiple cell layers, unlike in filamentous tissues, the PM-localized MpPINZ-GFP did not show polarity in thalli.However, PpPINA-GFP and AtPIN2-GFP presented polar localization at the apical-basal domain of the cells (Figure 4A-4C).These data demonstrate that PIN polarity and trafficking mechanisms have evolved with specific modifications in different tissues and cell types in angiosperms and bryophytes.This likely reflects different requirements for directional auxin transport in different developmental contexts, and it implies coevolution of PIN sequence-based signals and cell-type-specific polar sorting and trafficking mechanisms.
Diversification of PIN trafficking and polarity mechanisms during land plant evolution
The core mechanisms for auxin biosynthesis, auxin signaling, and PIN-mediated auxin transport are conserved across land plants (Kato et al., 2018;Sauer and Kleine-Vehn, 2019;Blazquez et al., 2020).However, bryophytes and vascular plants diverged around 450 million years ago and developed different tissue and organ types.It is unclear how conserved PIN polarity regulation is under such drastic changes that occurred during land plant evolution.Our cross-species studies revealed that exogenous PINs, such as Arabidopsis PINs expressed in bryophytes and bryophytic PINs expressed in Arabidopsis, can traffic to the PM (Figure 5).This suggests that all canonical PINs can be recognized by the general protein transport machinery in other species.When the Arabidopsis PINs are ectopically expressed in bryophytes, they fail to form any specific polarity, whereas bryophytic PINs remain in apical-basal domains in Arabidopsis epidermal cells (Figure 5).These observations hint towards the evolutionary loss of regulatory motifs required for PIN polarization that are present in bryophytic PINs but absent in Arabidopsis PINs.In line with this, the coding sequences of bryophytic PINs are longer than Arabidopsis PINs.The extra sequences are positioned in their HL regions, which are the main regulatory regions for PIN polarization.Our study demonstrates that PIN polarity mechanisms have not been conserved throughout plant evolution.
This hypothesis was also verified by the differences in cytoskeleton requirements for PIN polarization between bryophytes and angiosperms.The polarity of bryophytic PINs was disrupted when cytoskeletal networks were depolymerized, whereas Arabidopsis AtPIN2 was not affected (Figure 6).The most striking result is the hyperpolarization of PpPINA at the tip when actin filaments are disrupted.We hypothesize that focal exocytosis or endocytosis accounting for PpPINA polarity maintenance may rely on actin enrichment at the tip.This finding suggests a gradual shift in the dependence of PIN polarity and trafficking from the cytoskeleton-dependent pathways toward the cytoskeleton-independent pathways during land plant evolution.
Overall, our results demonstrate that different plant species evolved specialized pathways to deliver PINs and maintain their polarity at the PM.This is likely linked to an increasing repertoire of auxin transport developmental roles adopted with increasing morphological complexity during land plant evolution.
Plant growth and transformation
Arabidopsis seeds were surface sterilized and grown on 1/2 Murashige-Skoog (MS) plates. After two days of stratification at 4°C, seedlings were grown under long-day conditions (16 h light, 8 h dark) at 22°C with 100-120 µmol photons m⁻² s⁻¹ of white light. For P. patens, all transgenic and WT plants were cultured on standard moss BCD medium plates in a growth chamber at 24°C under long-day conditions (16 h light, 8 h dark) with 35 µmol photons m⁻² s⁻¹ of white light. For M. polymorpha, WT and all transgenic plants were cultured on 1/2 B5 plates in a growth chamber under long-day conditions (16 h light, 8 h dark) at 22°C with 50-60 µmol photons m⁻² s⁻¹ of white light-emitting diode lighting.
p35S::MpPINZ-, AtPIN1-, and AtPIN2-GFP M. polymorpha plants were generated via the Agrobacterium transformation method described before (Kubota et al., 2013). In brief, the apical meristem region of each two-week-old Takaragaike-1 thallus was removed and cut into four pieces. After culturing on 1/2 B5 with 1% sucrose agar plates for three days, the cut thalli were transferred to 50 ml 0M51C medium with 200 µM acetosyringone (4′-hydroxy-3′,5′-dimethoxyacetophenone) in 200-ml flasks with 130 rpm agitation and cocultured with agrobacteria (optical density at 600 nm = 1) harboring the target construct for another three days. The transformed thalli were washed and plated on 1/2 B5 plates with proper antibiotic selection. Independent T1 lines were isolated, and G1 lines from independent T1 lines were generated by the subcultivation of single gemmalings, which emerged asexually from a single initial cell (Shimamura, 2016). The next generation of G1, called the G2 generation, was used for analyses.
Arabidopsis transgenic lines bearing bryophytic PIN-GFPs under the AtPIN2 promoter control were generated and used as in a previous study (Zhang et al., 2019). For root imaging, seeds were sown on 1/2 MS medium plates, kept at 4°C for two days, and moved to the growth chamber to culture vertically for another four days.
Plasmid construction
Plasmids and primers for construction and genotype confirmation are listed in Supplemental Table 2. For transgenic moss lines with inducible overexpression, the insertion site of the GFP gene into the HL is indicated in Figure 3A, and the PIN-GFP regions were amplified from previously generated plasmids, which used the genomic DNA for PpPINA and AtPIN1 and the coding sequence for MpPINZ and AtPIN2 (Zhang et al., 2019). PIN-GFP was cloned into the Gateway entry plasmid pENTR/D-TOPO as the manufacturer suggested and subcloned into the pPGX8 vector, which contained a p35S-driven β-estradiol-inducible XVE cassette (Nakaoka et al., 2012; Kubo et al., 2013; Floriach-Clark et al., 2021), via a Gateway LR reaction (Invitrogen) according to the manufacturer's recommendation.
To generate p35S::MpPINZ-, AtPIN1-, and AtPIN2-GFP constructs, the same PIN-GFP fragments as mentioned above were amplified with the primers listed in Supplemental Table 2.The amplified fragments were cloned into the pENTR/D-TOPO vector (Invitrogen) using the protocol supplied by the manufacturer.Plasmids with target genes were subcloned into the pMpGWB102 vector containing a 35S promoter (Ishizaki et al., 2015) using a Gateway LR reaction (Invitrogen) according to the manufacturer's recommendation.
Microscopy
Moss protonemata were cultured in glass-bottom dishes covered with BCD agar medium for 6-7 days before microscopy. Live-cell imaging was performed using a Leica SP8X-SMD confocal microscope equipped with a hybrid single-molecule detector (HyD) and an ultrashort pulsed white-light laser (50%; 1 ps at 40-MHz frequency). Leica Application Suite X was used for microscope control, and an HC PL APO CS2 40×/1.20 water immersion objective was used for observing the samples. The following imaging settings were used: scan speed of 400 Hz, resolution of 1024 × 1024 pixels, and standard acquisition mode for the hybrid detector. The time-gating system was activated to avoid autofluorescence emitted by chloroplasts. For the filament growth assay, the imaging dish was supplied with an FM4-64 (Invitrogen) solution for 10-30 min, and a 10× objective lens was used.
Marchantia were observed by picking gemmae from a gemma cup and transferring them into a 24-well plate with 500 µL of liquid 1/2 B5 medium. Gemmae were cultured in the growth chamber for 24 h before each sample was transferred to a slide and observed under a Leica Stellaris 8 system with HyD detectors and an ultrashort pulsed white-light laser (70%; 1 ps at 40-MHz frequency). Leica Application Suite X was used for microscope control, and an HC PL APO CS2 40×/1.20 water immersion objective was used for observing the samples. The following settings were used: scan speed of 400 Hz and resolution of 1024 × 1024 pixels. GFP fluorescence was detected by exciting samples with a 488-nm white light laser and setting the detection range between 500 and 525 nm. The tau-gating model was used to avoid autofluorescence emitted by chloroplasts by harvesting photons with a 1.0- to 10.0-ns lifetime for all Marchantia imaging. For the surface section (rhizoid precursor cell observation), a 5-µm-thick section was set using the z section method with auto-optimization spacing to capture rhizoid protrusion.
For Arabidopsis root imaging, four-day-old seedlings of each indicated genotype were used for fluorescence imaging. After treating roots with liquid MS medium supplied with the indicated chemicals, seedlings were carefully mounted on a slide with growth medium and placed into a chambered coverslip (Lab-Tek) for imaging. For root hair imaging, a 3-µm Z projection image with 1-µm steps was taken around the median plane of the root hair. All fluorescence imaging was performed using a laser-scanning confocal microscope (Carl Zeiss LSM800, 20× air lens). Fluorescence from GFP was detected using a 488-nm excitation source and a 495-545-nm emission filter.
Image quantification
All images were analyzed using Fiji (ImageJ; https://imagej.net/software/fiji/). For the polarization patterns at the tip of filaments in P. patens, a line with 5-pixel thickness was plotted along the PM as depicted in Figures 4 and 5. Representative images for the DMSO control and drug treatments, obtained using the same imaging settings, were used to draw the line. The mean intensity along the line is shown. For the moss phenotype analysis, moss expressing the inducible PIN-GFP construct was cultivated in the imaging dish for five days followed by 1 µM β-estradiol induction for another three days. The cell outlines were stained with FM4-64 for 10-30 min. The line drawing function in Fiji was used to measure the length of the subapical cell, and the line is depicted between the middle points of the two cell division planes. The angle measurement function was applied to examine the angle between the first cell division plane and the horizontal cell outline for division angle measurements.
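As an illustration of these measurements, a minimal Python sketch is given below; it assumes the fluorescence image and two ROI masks (apical and lateral PM) have been exported from Fiji as arrays, and it samples a 1-pixel line rather than the 5-pixel-thick line used in Fiji, so all names and simplifications are ours, not part of the original pipeline.

```python
import numpy as np

def polarity_index(img: np.ndarray, apical_mask: np.ndarray,
                   lateral_mask: np.ndarray) -> float:
    """Ratio of mean PM signal in the apical ROI to that in the lateral ROI."""
    return float(img[apical_mask].mean() / img[lateral_mask].mean())

def line_profile(img: np.ndarray, r0: float, c0: float,
                 r1: float, c1: float, n: int = 200) -> np.ndarray:
    """Intensity profile sampled along a straight line (nearest-pixel sampling)."""
    rows = np.linspace(r0, r1, n).round().astype(int)
    cols = np.linspace(c0, c1, n).round().astype(int)
    return img[rows, cols]
```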
Genomic DNA (gDNA) isolation
The gDNA of transformants was isolated by cetyltrimethylammonium bromide gDNA extraction (Schlink and Reski, 2002). In brief, moss tissues were harvested from one full plate and ground in liquid nitrogen. The ground tissues were then mixed and incubated with cetyltrimethylammonium bromide buffer, followed by the addition of chloroform. After centrifugation, the supernatant was collected and precipitated with isopropanol at −20°C for 1 h.
Pharmacological treatments
Ory (Sigma) and LatB (Sigma) treatments were used to depolymerize microtubules and actin filaments, respectively, in plant cells. The concentration and duration of each treatment for each plant species are described in the main text, and the conditions we used have been shown to efficiently depolymerize cytoskeletal networks in multiple species (Baskin et al., 1994; Baluska et al., 2001; Oda et al., 2009; Vidali et al., 2009; Glanc et al., 2019). For P. patens and Arabidopsis, we used 20 µM LatB or 5 µM Ory for 4 h. The chemicals were diluted in liquid BCD medium and applied to the imaging dishes before imaging.
For M. polymorpha, G2 gemmae from transgenic plants were transferred into each well of a 24-well plate containing 500 µL liquid 1/2 B5 medium and cultured in the growth chamber for 16 h. The chemicals were diluted directly into the medium prior to imaging at the indicated time. The concentrations of Ory (Era et al., 2013) and LatB (Otani et al., 2018) were selected based on previous studies. The gemmae were treated with 2 µM LatB or 10 µM Ory for 2 h.
For Arabidopsis, four-day-old seedlings were submerged in liquid MS medium supplemented with chemical inhibitors and transferred to a separate agar medium for imaging.0.1% DMSO (Duchefa, 10 mM dimethyl sulfoxide) was used as a control for all treatments.
P. patens auxin export assay
The auxin export assay performed with transgenic moss plants was modified based on the protocol developed for Arabidopsis seedlings (Lewis and Muday, 2009). In brief, seven-day-old fresh tissues were transferred to liquid BCDAT growth medium containing 1 µM β-estradiol for four days with gentle shaking. This induction step was followed by treatment with 10 nM ³H-IAA for 24 h. The radioactive tissues were then washed twice with sterile H₂O and cultivated in fresh BCDAT medium for another 24 h. The culture medium was then collected and mixed with ScintiVerse BD cocktail (Fisher, SX18-4) at a 1:30 (v:v) ratio. Auxin export was measured using a scintillation counter (Beckman Coulter Genomics, LS6500).
M. polymorpha thallus growth assay

G2 gemmae were transferred onto 1/2 B5 agar plates and grown for 10 days. Gemmae were imaged under a dissection microscope (SZN71, LiWeng, Taiwan) with a charge-coupled device camera (Polychrome M, LiWeng, Taiwan). To measure the vertical growth angle, an agar cube with an individual plant was cut out from the plate and placed in the middle of a slide. The slide was put on the surface of a laminar flow hood at a fixed distance from the edge, and images were taken using an HTC U11 cell phone camera. The growth angle was measured using ImageJ (https://imagej.net/software/fiji/).
Phylogenetic analysis
The phylogenetic analysis for full-length amino acid sequences of all examined PINs was carried out using MEGA X (Kumar et al., 2018), and the results were imported into interactive tree of life (iTOL) (https://itol.embl.de/) for visual analysis.The evolutionary history was inferred by using the maximum likelihood method and Jones-Taylor-Thornton matrix-based model with default settings (Jones et al., 1992).The alignment and identity index were produced using the online CLUSTAL alignment program with default settings.
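As a rough, scriptable counterpart to this workflow, the sketch below builds a tree from a CLUSTAL alignment with Biopython; note that it uses identity distances and neighbour joining rather than the maximum likelihood/JTT analysis actually performed in MEGA X, and the alignment file name is hypothetical.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical CLUSTAL alignment of the nine canonical PIN protein sequences
alignment = AlignIO.read("canonical_PINs.aln", "clustal")

calculator = DistanceCalculator("identity")      # pairwise identity distances
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
tree = constructor.nj(distance_matrix)           # neighbour joining, not ML/JTT

Phylo.draw_ascii(tree)                           # quick text rendering; iTOL was used for the figures
```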
Figure 1 .
Figure 1.Phosphorylation sites are highly conserved between bryophytic and Arabidopsis PINs.(A) Phylogenetic analysis of canonical PINs from the early-divergent plants P. patens (Pp) and M. polymorpha (Mp) and the representative angiosperm A. thaliana (At).The unrooted tree shows the relationships between different PINs in two representative bryophytes and Arabidopsis.The scale bar represents the number of changes per site.(B) Identity indexes of all PINs compared with AtPIN2 with full-length or only the HL region of coding amino acid sequences.Identity indexes of all PINs compared with AtPIN1 are shown in Supplemental Figure 1.(C) Alignment of PIN amino acid sequences.Identical amino acids are highlighted with blue columns.The red and green boxes highlight the HL regions bearing conserved phosphorylation sites, which are enlarged in (D).(D) Four conserved phosphorylation sites, labeled S1-S4, were verified in previous AtPIN2 studies and are depicted with black frames.Note that S1-S4 are conserved in every examined PIN.
Figure 2 .
Figure 2. The HL regions in AtPIN2, PpPINA, and MpPINZ are less conserved.(A) Structural alignment of AtPIN1 and AtPIN2.The protein structures were predicted using Alphafold2 and aligned using ChimeraX.The structurally conserved regions are indicated by black arrowheads, and the non-conserved regions are indicated by white arrowheads.(B) Structural alignment of AtPIN2 with PpPINA and MpPINZ.Only the transmembrane domains and a single loop align with each other, while the majority of HL regions are not conserved.The four conserved phosphorylation sites are labeled with atomic details in ball-and-stick style.(C-E) Individual protein structures retrieved from (B).The four conserved phosphorylation sites are indicated by red arrowheads.
Figure 3 .
Figure 3. GFP-fused PIN proteins possess auxin export activity.(A) The insertion site of GFP in the indicated PIN proteins.Numbers represent the amino acid position in the respective proteins.(B) The auxin export assay with P. patens transgenic plants.Fresh tissues were cultured in liquid medium supplied with 1mM b-estradiol to induce XVE overexpression.Tissues were then incubated with radioactive H 3 -IAA followed by washing.Radioactive H 3 -IAA exported into the new culture medium was measured using a scintillation detector after one day.Wild-type moss plants treated with the same conditions were used as a control.Ten to fifteen 10-day-old moss colonies were used for one measurement, and the graph shows the mean ± SD from four independent experiments.(C) Representative protonema cells of the indicated genetic background.The cell outline was stained with FM4-64.White arrowheads indicate the first cell division plane, and the yellow double arrow indicates the length of the subapical cell.Scale bar, 100 mm.(D and E) Quantification of the subapical cell length and division angle (W) of the indicated lines.Bold horizontal lines indicate the median, and whiskers indicate the first and third quartile.***P < 0.001, Student's t-test.(F) Top (top panels) and side (bottom panels) views of the indicated M. polymorpha lines.Takaragaike-1 (WT) expands its thallus horizontally on the agar surface.Overexpression of MpPINZ-, AtPIN1-, and AtPIN2-GFP showed vertical thallus growth, which generated a large angle between the lower surface of the thallus and the surface of the agar (Ɵ).Scale bar, 0.5 cm.(G) Quantification of the thallus growth angle (Ɵ) shown in (A).Bold horizontal lines indicate the median, and whiskers indicate the first and third quartile.***P < 0.001, Student's t-test.
Figure 4 .
Figure 4. Endogenous PINs present different localization patterns in different tissues.(A)The localization of PpPINA-GFP in tissues with different complexities.PpPINA-GFP is polarized to the tip of initial and elongated protonema cells.The initial protonema cell is regenerated from a detached leaf, and the representative image shows the maximum projection with a 5-mm-thick Z section.Autofluorescence from the chloroplasts is indicated by asterisks.In elongated protonema cells, the polarity of PpPINA-GFP is plotted using an intensity measurement along the PM, as represented by the yellow arrows.The same measurement is applied to (B) and (C).In complex tissues composed of multiple cell layers (e.g.young leaves in P. patens), PpPINA-GFP is polarized at apical and basal domains (white arrowheads).Scale bars, 10 mm.(B) The localization of MpPINZ-GFP in emerging rhizoids, young rhizoids, and gemma epidermal cells.A representative image with a 5-mm-thick Z section at the gemma surface shows the accumulation of MpPINZ-GFP at the tips of emerging rhizoids, as indicated by yellow arrowheads in rhizoid precursor cells (asterisks).MpPINZ-GFP is polarized at the tip of young rhizoids (center).A middle section image shows the even distribution of MpPINZ-GFP on the PM of gemmae composed of multiple cell layers.Scale bars, 10 mm.(C) AtPIN2-GFP shows a polarized signal at the tip of the initial root hair, but the polarized signal is not observed in elongated root hairs.All imaging details are described in the Methods.The pixel values ranging from 0-255 are represented by the rainbow color.In Arabidopsis epidermal cells, AtPIN2-GFP shows apical polarization, as indicated by white arrowheads.Scale bars, 10 mm for root hairs and 1 cm for epidermis.
Figure 5 .
Figure 5. Exogenous PINs are PM-localized with no defined polarity.(A) Overexpressed PpPINA-GFP is polarized to the tip of protonemata, but overexpressed AtPIN1-GFP is evenly distributed on the PM with intracellular puncta.The PM is stained with the membrane dye FM4-64.The occurrence frequency is indicated in the bottom left corner of each image.Scale bars, 10 mm.(B) Overexpressed MpPINZ-GFP is polarized to the tip of young rhizoids, while overexpressed AtPIN1-GFP shows strong cytosolic signals and weak PM localization with no polarity.The PM is stained with the membrane dye FM4-64.Scale bars, 10 mm.(C) AtPIN2-GFP under its native promoter localizes to the apical side of epidermal cells, but PpPINA-GFP and MpPINZ-GFP are mislocalized to the basal and lateral sites, as indicated by white arrowheads.Scale bar, 1 cm.(D) Intensity plots of representative images for PpPINA-GFP, AtPIN1-GFP in P. patens protonema cells, and MpPINZ-GFP and AtPIN1-GFP in M. polymorpha young rhizoids.(E) Polarity index (ratio of signal intensity at the apical PM/signal intensity at the lateral PM) for apical localization of the indicated PIN-GFP in Arabidopsis epidermal cells.PpPINA-GFP and MpPINZ-GFP are significantly lower on the polarity index.Shown are 12-22 cells from three roots in three independent experiments for each line.***P < 0.001, Student's t-test.
Figure 6 .
Figure 6.Cytoskeletal networks are important for the polarization of bryophytic PINs.(A) PpPINA-GFP is polarized to the tip of protonemata, and disruption of actin filaments with latrunculin (LatB) induced its hyperpolarization (white arrowhead).Disruption of microtubules by oryzalin (Ory) disturbed the polarization of PpPINA-GFP (yellow arrowhead).The occurrence frequency is indicated in the bottom right corner of each image.For P. patens and A. thaliana, tissues were treated with 20 mM LatB or 5 mM Ory for 4 h.Scale bars, 10 mm.(B) MpPINZ-GFP is polarized to the tip of young rhizoids (white arrowheads), and its polarization was only abolished by Ory treatment (yellow arrowhead).The gemmae were treated with 2 mM LatB or 10 mM Ory for 2 h.Scale bars, 10 mm.(C) AtPIN2-GFP is polarized to the tip of initial root hair cells, and disruption of either actin filaments or microtubules did not change its polarization (white arrowheads) but did attenuate the peak signal at the tips, as shown in (F).The pixel values ranging from 0-255 are represented by the rainbow color.Scale bars, 10 mm.(D) The hyperpolarity index of PpPINA-GFP treated with DMSO or LatB was calculated by dividing the intensity at the very tip by the intensity at the curvature side of the tip.(E and F) Intensity plots of representative images for MpPINZ-GFP and AtPIN2-GFP along the tips of the rhizoids in M. polymorpha and root hairs in A. thaliana.
|
v3-fos-license
|
2022-11-24T16:24:02.780Z
|
2022-11-22T00:00:00.000
|
253827252
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fbioe.2022.1042680/pdf",
"pdf_hash": "f358a1367f5322340fd0a9271b2776e3b02784b2",
"pdf_src": "Frontier",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44324",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "a82e05f488c3422ded94bc962a9a1d013faf298e",
"year": 2022
}
|
pes2o/s2orc
|
Label-free multimodal nonlinear optical microscopy reveals features of bone composition in pathophysiological conditions
Bone tissue features a complex microarchitecture and biomolecular composition, which determine biomechanical properties. In addition to state-of-the-art technologies, innovative optical approaches allowing the characterization of the bone in native, label-free conditions can provide new, multi-level insight into this inherently challenging tissue. Here, we exploited multimodal nonlinear optical (NLO) microscopy, including co-registered stimulated Raman scattering, two-photon excited fluorescence, and second-harmonic generation, to image entire vertebrae of murine spine sections. The quantitative nature of these nonlinear interactions allowed us to extract accurate biochemical, morphological, and topological information on the bone tissue and to highlight differences between normal and pathologic samples. Indeed, in a murine model showing bone loss, we observed increased collagen and lipid content as compared to the wild type, along with a decreased craniocaudal alignment of bone collagen fibres. We propose that NLO microscopy can be implemented in standard histopathological analysis of bone in preclinical studies, with the ambitious future perspective to introduce this technique in the clinical practice for the analysis of larger tissue sections.
Introduction
Bone tissue is a specialized connective tissue made of a mineralized matrix in which the organic phase consists of collagen and a small amount of non-collagenous proteins hierarchically arranged, whereas the inorganic phase is mainly constituted by hydroxyapatite crystals. Overall, the amount and the spatial distribution of the diverse tissue components determine its mechanical properties (Ozasa et al., 2019), and alterations of bone microarchitecture and composition result in a variety of common and rare diseases, including osteoporosis and its opposite phenotype, i.e., osteopetrosis (Garnero, 2015).
Due to its peculiar structure and complexity, the study of the bone tissue is challenging. Among state-of-the-art techniques (Foessl et al., 2021), none can give insights into bone biochemical composition in a label-free way, preserving tissue integrity and spatial distribution of the diverse features.
A technology potentially overcoming current limitations is nonlinear optical (NLO) microscopy (Parodi et al., 2020a;Parodi et al., 2020b), which offers fast, high-specificity, and highresolution imaging. NLO techniques combine morphological and functional/chemical information in a label-free fashion, enriching the single observation with multiple complementary contrast mechanisms and overcoming the fundamental limitations often imposed by fluorescent probes, such as cytotoxicity, poor binding specificity, and conflict with natural cellular functions (Jensen, 2012;Marchetti et al., 2019).
NLO microscopy exploits ultrashort (sub-picosecond duration) pulsed lasers in the near-infrared (NIR) wavelength region, thus allowing high penetration depth in tissues. Due to the nonlinear optical generation mechanism, these signals are generated almost exclusively at the focal point of the objective lens, thus overcoming the need for confocal pinholes to remove out-of-focus photons and providing intrinsic three-dimensional sectioning capability (Denk, 1996). Multimodal NLO imaging may exploit a variety of different contrast mechanisms such as two-photon excitation fluorescence (Oheim et al., 2006) (TPEF), second-harmonic generation (Chen et al., 2012) (SHG), and coherent Raman spectroscopy (CRS) .
TPEF is the archetypical multi-photon imaging technique. It is based on the physical mechanism whereby, when two incident photons are simultaneously absorbed by an electron, the latter is promoted from the ground state to an excited state (Figure 1A). After an initial ultrafast internal relaxation within the excited state, the electron radiatively returns to the ground state, thus emitting a photon with energy smaller than twice that of the excitation photons (Figure 1A). Compared to single-photon confocal imaging, TPEF microscopy reduces overall photobleaching and photodamage by confining it to the narrow region around the focal plane (Galli et al., 2014). Endogenous TPEF of intrinsic fluorophores, such as elastin, reduced Nicotinamide Adenine Dinucleotide (NADH) and Flavin Adenine Dinucleotide (FAD), yields physiological and pathological information from biological tissues at subcellular resolution in a completely label-free manner (Liu et al., 2018; Knyaz'kova et al., 2022).
SHG is a second-order non-linear process occurring when two photons at a certain frequency impinge on a noncentrosymmetric material of dimension comparable to the excitation wavelength, such as bundles of collagen fibers, and generate a new photon with twice the frequency and energy ( Figure 1A) (Campagnola and Dong, 2011). Because of its polarization-dependent nature, SHG allows obtaining information about spatial distribution and orientation of the analyzed material (Stoller et al., 2002). While TPEF implies energy absorption in the specimen and radiates from the focal volume in any direction, SHG is a coherent process that does not involve any absorption and propagates collinearly with the excitation beam. TPEF and SHG techniques can be easily combined, being generated with the same laser source, and can be detected separately by applying proper spectral filtering.
Besides multi-photon microscopy techniques, vibrational microscopy is also having a great impact in the biological imaging field (Vanna et al., 2022a). This technique relies on the Raman scattering process and provides a label-free method of accessing the vibrational spectrum of materials and molecules (Vanna et al., 2022b). In this way, the chemical bonds of the molecules can be studied and identified with high biochemical specificity. Stimulated Raman scattering (SRS) along with coherent anti-Stokes Raman scattering (CARS) are the most widely used CRS techniques (Cheng et al., 2022; De la Cadena et al., 2022; Vernuccio et al., 2022). With respect to spontaneous Raman, CRS techniques guarantee high-speed detection thanks to the coherent excitation of the vibrational modes at the sample plane. Both SRS and CARS consist of exposing a sample to two temporally and spatially overlapped laser pulses of different frequencies, ω_p and ω_s, called pump and Stokes, respectively. When the difference ω_p − ω_s is equal to a vibrational mode Ω of the sample, the Raman signal is generated and amplified by the simultaneous in-phase vibration of all molecules in the focal volume (Nandakumar et al., 2009). In SRS, the coherent interaction of pump and Stokes beams with the specimen induces the excitation to a virtual state with consequent stimulated emission to the vibrational level of interest. In this process, a Stokes photon is generated, i.e., the Stokes experiences a stimulated Raman gain (SRG), and simultaneously a pump photon is annihilated, i.e., the pump undergoes a stimulated Raman loss (SRL), as shown in Figure 1A. Moreover, since the generated SRS signal is directly proportional to the concentration of scatterers in the focal volume, it is a valuable solution for extracting quantitative chemical information from the sample even at the very low analyte concentrations that are typical of biological matter. In SRS modality, the SRG is detected as a small variation on top of the orders-of-magnitude larger Stokes beam intensity. This, paired with the use of compact, cost-effective but noisy fiber lasers, presents challenges for real microscopy applications, which can be overcome with balanced detection schemes, providing almost shot-noise-limited performance (Kumar et al., 2012; Riek et al., 2016; Crisafi et al., 2017; Cheng et al., 2021).
Owing to its properties, multimodal NLO imaging has been recently applied to diverse biological contexts (Parodi et al., 2020b), and, in the musculoskeletal tissue, to the development of articular cartilage (He et al., 2017) and skeletal muscle (Sun et al., 2014). Single-modality TPEF imaging was exploited to obtain histological information of bone architecture, along with indication of osteoarthritis, osteomyelitis, and malignancy condition from unstained bone (Yoshitake et al., 2022). Similarly, single-channel third-harmonic generation (THG) microscopy enabled label-free imaging of bone porosity and interfaces (Genthial et al., 2017). On the other hand, simultaneous SHG, THG, CARS, and TPEF nonlinear optical microscopy was tested to image small areas (i.e., 250 × 250 µm) of a canine bone femur, deriving qualitative information about phosphate mineralization, collagen, and bulk morphology (Kumar et al., 2015). Thanks to the short pixel dwell time of 5 ms, fast multimodal microscopy was proved effective for perspective rapid and quantitative investigations concerning relevant biomedical questions in bone research.
We exploited the potential of multimodal NLO microscopy to analyze characteristics of bone composition in a quantitative approach, imaging entire vertebrae of murine models. We hypothesized that, owing to its non-disruptive nature, multimodal NLO microscopy images on large tissue areas might provide new hints into tissue composition and spatial organization, which are inherently linked to major mechanical properties.
In the present work, we applied multimodal NLO microscopy to murine spine sections from either a wild-type (WT) mouse or a genetic mouse model showing bone loss, i.e., the Dpp3 Knock-out (Dpp3 KO) mouse (Menale et al., 2019). The dipeptidyl peptidase 3 (Dpp3) is a ubiquitous zincdependent aminopeptidase involved also in the Keap1/ Nrf2 antioxidant signaling pathway. Mice lacking DPP3 (Dpp3 KO) present sustained oxidative stress and inflammation in the bone microenvironment, overall resulting in bone loss. In humans, recent findings in post-menopausal osteoporotic women supported the critical role played by DPP3 in bone homeostasis and tissue health.
Figure 1B provides a schematic representation of the analytical workflow. Briefly, spine sections were obtained from WT and Dpp3 KO mice counterparts, then examined under a multimodal custom-built microscope (Crisafi et al., 2018) operating in four different modalities: bright field, SHG, TPEF and SRS. The latter was applied to image our samples at both 2850 cm⁻¹ and 2920 cm⁻¹ Raman shifts, corresponding to the main vibrational frequencies of lipids and proteins, respectively. Data were analyzed in the Fiji-ImageJ software (Rueden et al., 2017) and in Python using Numpy (Harris et al., 2020), Scipy and the "NanoImagingPack" library (https://gitlab.com/bionanoimaging/nanoimagingpack). This led us to uncover different biochemical and structural traits in a single image, to quantify them, and to highlight significant differences between bone loss models and control counterparts. Our findings demonstrate that multimodal NLO microscopy performed on large tissue areas is an effective tool for the fast characterization of bone composition, without the need for time-consuming, destructive, or perturbative sample preparation.
Animals
Mice in which the Dpp3 gene was ubiquitously inactivated (Dpp3 KO) have been previously described (Menale et al., 2019). Dpp3 KO and WT mice were group-housed in a specificpathogen-free animal facility, under a 12-h dark/light cycle, with water and food provided ad libitum.
All the procedures involving mice were performed in accordance with the ethical rules of the Institutional Animal Care and Use Committee of Humanitas Research Hospital and with international laws (Italian Ministry of Health, protocol n.07/2014-PR).
Tissue samples preparation
Mice were euthanized by CO 2 asphyxiation; tissues were harvested and fixed in 4% paraformaldehyde (PFA). Bones were processed for embedding in methyl methacrylate (MMA) immediately after fixation, without any decalcification. Sections of MMA-embedded lumbar spine of three different mice per genotype were laid on polylysine-coated fused silica slides, unplasticized and analyzed by multimodal NLO microscopy. On each section, the entire area of a vertebra was analysed.
Multimodal nonlinear optical microscopy
We developed a multimodal nonlinear optical microscope, assembled with off-the-shelf components, able to perform imaging in four different modalities: linear transmission, two-photon excitation fluorescence (TPEF), second harmonic generation (SHG), and stimulated Raman scattering (SRS). We employed a multi-branch Erbium-doped amplified fiber laser to generate a pump beam at 780 nm and a tunable Stokes beam in the 930-1060 nm range with a repetition rate of 40 MHz. In this way, the CH-stretching region (2800-3100 cm⁻¹) of the Raman spectrum is fully covered for SRS imaging. The sole pump beam is used for TPEF and SHG microscopy. Our home-built microscope has an inverted transmission configuration. For excitation and detection, we used two identical Zeiss objectives (×100 magnification, 0.75 numerical aperture, 4-mm working distance, field number 25). The samples were mounted on a three-axis motorized translation stage, made up of a vertical (Z) stage (Mad City Labs Inc: model MMP1) to adjust the focus and a two-axis stage (Standa: model 8MTF-102LS0) along the (X-Y) sample plane to perform raster scanning of the image. A dichroic mirror was used to split the generated nonlinear signals. The TPEF/SHG photons were sent to a photomultiplier tube (Hamamatsu: model R3896) and extracted applying the appropriate set of filters: for the TPEF modality, a FESH0600 (Thorlabs) short-pass filter is used, since it possesses a good transmittance in the 400-600 nm range, thus matching the spectral range of both NADH and FAD autofluorescence (Ruofan et al., 2020), while also rejecting the unwanted second-harmonic generation signal from collagen; for the SHG modality, the FF01-390/18-25 (Semrock) bandpass filter allows for selecting the narrow SHG signal from collagen, centred at 390 nm. In addition, for both modalities, the following combination of filters is also employed to block the fundamental beams: NF785/33, FESH0700 and FESH0750 (Thorlabs). The SRS signals were detected using the in-line balanced detection scheme (Crisafi et al., 2017), so that the transmitted SRS and Stokes photons are sent to a balanced photodiode, which features a responsivity above 0.4 A/W at our wavelengths of interest (Thorlabs: model PDB210A/M). A schematic representation of the microscope is reported in Figure 2. For additional information, the reader can refer to the work of Crisafi and colleagues (Crisafi et al., 2018).
Image acquisition setting
One complete vertebra from each sample was imaged in two consecutive acquisitions comprising four different modalities: linear light transmission, TPEF, SHG, and SRS at 2850 and 2920 cm⁻¹ Raman shifts, resonant with lipids and proteins, respectively (Yoshitake et al., 2022). To account for polarization-dependent signal generation in the SHG modality, we compared the results obtained with the pump beam polarization oriented either parallel or perpendicular to the craniocaudal axis of the spine. The laser power was kept constant at 20 mW for the pump beam and 1.2 mW for the Stokes beam. Images were acquired with a pixel dimension of 1 × 1 µm and a pixel dwell time of 5 ms.
Analysis of collagen content
In each SHG image of WT and KO samples, we defined a region of interest (ROI) comprising only either the cortical or the trabecular bone, disregarding the bone marrow, intervertebral disc, growth plate and areas outside the vertebra (see Supplementary Figure S1). We then computed the average raw SHG pixel intensity over these two ROIs using the Fiji-ImageJ software (Rueden et al., 2017). As SHG signals scale linearly with the density of the scatterers in the voxel, we obtained data that represent the average density of collagen in the cortical and trabecular bone, not affected by their relative extension.
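A minimal numpy sketch of this ROI-averaging step is given below; the image array and boolean masks are assumed to have been exported from Fiji, and all variable names are illustrative rather than part of the original analysis code.

```python
import numpy as np

def mean_shg_density(shg_img: np.ndarray, roi_mask: np.ndarray) -> float:
    """Average raw SHG pixel intensity inside a boolean ROI mask.

    Because the SHG signal scales linearly with the density of scatterers,
    this average acts as a proxy for collagen density that does not depend
    on the area of the ROI.
    """
    return float(shg_img[roi_mask].mean())

# e.g. cortical = mean_shg_density(shg_img, cortical_mask)
#      trabecular = mean_shg_density(shg_img, trabecular_mask)
```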
Analysis of collagen fibres orientation-Sobel algorithm
We applied a Sobel filter implemented in Python to the two ROIs of the cortical and trabecular bone described in the previous paragraph. This algorithm consists in cross-correlating the SHG images with the following two kernels:

$$G_x = \begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix}, \qquad G_y = \begin{pmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{pmatrix} \qquad (1)$$

Denoting by $S_x$ and $S_y$ the images cross-correlated with $G_x$ and $G_y$, respectively, this allowed us to compute the pixel-wise angular orientation Θ of the collagen fiber pattern as (Nixon and Aguado, 2020):

$$\Theta = \arctan\left(\frac{S_y}{S_x}\right) \qquad (2)$$
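A compact Python sketch of this step is shown below; it is written with numpy/scipy rather than the authors' exact code (which also relied on the NanoImagingPack library), so the function and variable names are ours.

```python
import numpy as np
from scipy.ndimage import correlate

KX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
KY = KX.T  # transposed kernel picks up gradients along the other axis

def collagen_orientation(shg_roi: np.ndarray) -> np.ndarray:
    """Pixel-wise fibre-pattern orientation (radians) from an SHG image ROI."""
    sx = correlate(shg_roi, KX, mode="nearest")
    sy = correlate(shg_roi, KY, mode="nearest")
    return np.arctan2(sy, sx)  # Eq. (2), using arctan2 to keep quadrant information

# Orientation histograms can then be compared between WT and Dpp3 KO samples.
```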
Analysis of in vitro collagen production
Primary osteoblasts (50,000 cells/cm² in 12-well plates) were treated with 50 μg/ml of ascorbic acid for 8 h. Then, the cells were fixed with 4% PFA for 20 min at room temperature (RT), stained with 0.1% Sirius red (Sigma Aldrich) in saturated picric acid for 4 h and washed with PBS (Lonza). The stain was solubilized in 300 μL of destain solution (0.2 M NaOH/methanol 1:1) and the optical density was measured at 540 nm with a Synergy™ H4 instrument (BioTek Instruments, Inc.).
Gene expression analysis
Total RNA was extracted from primary osteoblast cell cultures using the PureZOL™ Reagent (Bio-Rad), following the manufacturer's instructions. For RNA extraction from murine flushed long bones (n = 4 WT, 7 KO), the frozen tissue was crushed in a mortar, then transferred in tubes and homogenized in a TissueLyser II instrument (Qiagen) in the presence of PureZOL™ Reagent; afterwards, standard procedures were applied. Reverse transcription was carried out using 1.0 μg total RNA and the High-Capacity cDNA Reverse Transcription Kit (Applied Biosystems). Quantitative PCR (qPCR) was performed using SsoAdvanced™ SYBR® Green Supermix (Bio-Rad) and gene-specific primers as detailed in Table 1. The amplification was performed using the ViiA7 Real-Time PCR Detection System (Applied Biosystems) with the following cycling conditions: cDNA denaturation and polymerase activation step at 95°C for 20 s followed by 40 cycles of denaturation at 95°C for 1 s and annealing at 60°C for 20 s; extension step for 60 cycles at 65°C for 30 s and melting curve analysis step at 65°C-95°C with 0.5°C increment for 2 s/step. The relative gene expression analysis of target genes was conducted following the comparative 2^−ΔΔCT method, and the normalized expression was calculated as arbitrary units (AU) in comparison with pertinent controls.
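A minimal Python sketch of the comparative 2^−ΔΔCT calculation is given below; the CT values in the example are made up for illustration and are not data from this study.

```python
def relative_expression(ct_target: float, ct_reference: float,
                        ct_target_ctrl: float, ct_reference_ctrl: float) -> float:
    """Comparative 2^-ddCT method for qPCR relative expression (arbitrary units)."""
    d_ct_sample = ct_target - ct_reference            # normalise to the reference gene
    d_ct_control = ct_target_ctrl - ct_reference_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Made-up example: CT 24.1 vs 18.0 in the sample and 25.3 vs 18.2 in the control
print(relative_expression(24.1, 18.0, 25.3, 18.2))    # ~2.0-fold higher in the sample
```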
Statistics
Statistical analysis for all the experimental data obtained was performed using the non-parametric Mann-Whitney U test (GraphPad Prism 5.0; GraphPad Software, Inc.). Statistical significance was considered where p < 0.1 (90% CI). All data are presented as mean ± standard error of the mean (SEM).
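For example, a WT vs. KO comparison of any of the measured quantities can be run with SciPy as follows; the two sample lists are placeholders, not the study data:

```python
from scipy.stats import mannwhitneyu

# Placeholder measurements for the two genotypes (illustrative values)
wt = [1.02, 0.95, 1.10, 0.98]
ko = [1.35, 1.28, 1.41, 1.30, 1.22, 1.38, 1.29]

stat, p_value = mannwhitneyu(wt, ko, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")  # compare against the chosen significance threshold
```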
Multimodal nonlinear imaging
The new multimodal NLO microscope that we developed, as described above, was able to map the major biochemical components that constitute murine bone over a large field of view (on the scale of a few mm²) spanning the whole section of a murine vertebra, in a label-free manner. Figures 3A,B show two complete sets of images acquired from a WT and a KO murine bone sample, respectively, using the four complementary imaging modalities, each one providing distinct information about the sample.
The bright-field images, obtained by recording the linearly transmitted Stokes beam, allow us to derive morphological information about the density of biological matter under observation. In the TPEF modality (green channel in Figure 3), the signal is generated from the excitation of intrinsic intracellular fluorophores such as NADH and FAD. Their presence is an indicator of oxidative and glycolytic metabolic mechanisms occurring at the cellular level (Zipfel et al., 2003). The SHG signal (blue in Figure 3) derives from the non-centrosymmetric structure and the remarkable second-order nonlinear susceptibility of collagen fibers (Kröger et al., 2021). In the SRS modality, we could image the concentration of lipids and proteins (magenta and yellow in Figure 3, respectively) by detecting their high-energy vibrational response at 2850 and 2920 cm⁻¹, resonant with the CH2 and CH3 stretching modes, respectively (Ji et al., 2013).
We then combined the four NLO channels into multicolor images (Figure 3C) to investigate the co-localization of the multiple biological species on the same distribution map, thus obtaining a richer and more thorough understanding of the overall tissue composition. For example, by combining the SRS signal of proteins at 2920 cm⁻¹ and the SHG signal of collagen fibers, it is possible to recognize trabecular and cortical bone tissue, due to their dense content of proteins and collagen in the extracellular matrix.
Analysis of collagen content
We employed SHG microscopy to study the collagenous architecture of bone in terms of collagen amount and directionality. The former can be directly measured considering that collagen content is proportional to the SHG signal generated (James and Campagnola, 2021). The latter can be estimated because SHG radiation is polarization sensitive: defining θ as the angle between the polarization direction of the pump field and the molecular axis of a collagen fiber, the induced local dipole moment varies with cos²θ, whereas the total radiation power scales with cos⁴θ (Moreaux et al., 2000). As a consequence, the generated signal is higher when the polarization of the excitation beam is oriented along the longitudinal axis of the collagen fibers, acting as radiating dipoles, than when the light is polarized along their transversal direction (Mostaço-Guidolin et al., 2017). Therefore, we performed experiments with the pump beam polarization oriented either parallel or perpendicular to the craniocaudal axis of the murine spine (Figure 3C). We used linearly polarized rather than circularly polarized light for SHG measurements to simultaneously acquire, in our multimodal approach, the SRS signal exploiting an in-line balanced detection for noise reduction (Crisafi et al., 2017).
Collagen amount
We estimated the average density of collagen in the whole section of a vertebra via the average SHG signal intensity, manually excluding the bone marrow and retaining only the mineralized osseous region (i.e., cortex and trabeculae; see Supplementary Figure S1), for the WT and KO models, measured under the same experimental conditions in terms of laser power, spot size and pulse duration. Results are reported in Figure 4 for parallel and perpendicular polarization of the excitation light with respect to the craniocaudal axis of the murine spine. For both polarizations, the SHG signal was significantly higher in the KO model, suggesting that the collagen content is greater in the KO compared to the WT bone (see also Supplementary Figure S2).
Then we assessed whether standard investigational procedures in bone biology provided results in line with this evidence. Indeed, Menale et al. (2019), who generated and characterized the Dpp3 KO mouse model, found higher ColIa1 gene expression during in vitro osteoblast differentiation, as well as in cultures of primary osteoblasts from Dpp3 KO as compared to WT mice. As further support, here we assessed ColIa1 gene expression in the total RNA extracted from Dpp3 KO and WT flushed bone and found an overexpression trend in the absence of Dpp3 (Figure 5A). We also evaluated in vitro collagen production in Dpp3 KO and WT primary osteoblast cultures, through Sirius red staining and quantification, and found higher collagen production in KO than in WT cultures (Figure 5B). Overall, these results agree with the data from the SHG images and suggest that the lack of Dpp3 is associated with an increased collagen content in the bone tissue.
Analysis of collagen directionality
We evaluated the directionality of collagen fibers thanks to the optical properties of the polarization-sensitive SHG signals previously described. Figure 6 reports the pixel counts, in percentage, as a function of the fiber orientation in the 0°–180° range, employing the Sobel operator (see Methods), for the WT and Dpp3 KO mice. Results obtained for cortical and trabecular regions are plotted in magenta and green, respectively. As is clear from the position of the Gaussian centers in terms of angular orientation [°], collagen fibers exhibit a prevalent orientation at 90°, i.e., along the craniocaudal axis of the murine spine, irrespective of the genotype and of the parallel/perpendicular polarization of the electric field of the excitation laser beam. This is in line with the observation that the pixel counts are higher when parallel-polarized light is employed: the polarization of the impinging field matches the prevalent direction of fibers in the samples, thus maximizing SHG photon conversion.
FIGURE 4
Average SHG signal intensity in mineralized bone (cortical and trabecular bone) in WT and KO mice, for parallelly (left) and perpendicularly (right) polarized excitation laser field. Data were analysed using t test (n = 3 for both genotypes); p values are indicated above each bar plot.
Interestingly, the tissue sections of WT mice show a difference in collagen orientation between the cortical (magenta line) and the trabecular bone (green line) in both light polarizations (first row in Figure 6), as indicated by the fact that the curves of the pixel counts for the cortical and trabecular bone do not overlap. On the other hand, the tissue sections of Dpp3 KO mice (second row in Figure 6) exhibit a comparable fiber orientation in cortical and trabecular regions under both light polarization conditions, as indicated by the fact that the curves of the pixel counts for the cortical and trabecular bone are almost completely overlapping. Therefore, in the WT the normalized amount of collagen fibers aligned along the craniocaudal direction is higher in the cortical than in the trabecular region, while in the absence of Dpp3 there is no difference.
We modeled the data presented in Figure 6 as the sum of a baseline B, representing the portion of collagen fibers randomly oriented in any direction, and a Gaussian function with A representing the peak-baseline amplitude:

f(θ) = B + A exp[−(θ − θ0)² / (2σ²)]

where θ0 is the preferential orientation of the fibers and σ its standard deviation. The values of these parameters and the corresponding coefficient of determination (R²) of the fit are presented in Table 2 and Table 3, for polarization oriented parallel and perpendicular to the craniocaudal direction, respectively. As an indicator of the degree of orientation of the collagen fibers, we also reported in the Tables the percentage alignment ratio, defined as AR = AG/(AG + AB), where AG is the area beneath the Gaussian curve (considering an offset value equal to zero), which is associated with fibers aligned along the vertical direction, and AB is the area underlying the baseline B. Randomly oriented fibers should display a value close to 0%, while this ratio should approach 100% for samples where all fibers are aligned in a single direction.
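A sketch of the fit and of the alignment-ratio calculation, assuming that `angles` (bin centres in degrees) and `counts` (normalized pixel counts) come from the orientation histogram described in the Methods; the use of SciPy and the initial guesses are choices of this sketch, not details specified in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def baseline_plus_gaussian(theta, B, A, theta0, sigma):
    """Model of Figure 6: constant baseline plus a Gaussian peak."""
    return B + A * np.exp(-(theta - theta0) ** 2 / (2.0 * sigma ** 2))

def fit_orientation(angles, counts):
    p0 = [counts.min(), counts.max() - counts.min(), 90.0, 30.0]  # rough initial guess
    popt, _ = curve_fit(baseline_plus_gaussian, angles, counts, p0=p0)
    B, A, theta0, sigma = popt
    area_gauss = A * sigma * np.sqrt(2.0 * np.pi)   # area under the Gaussian (zero offset)
    area_base = B * 180.0                           # area under the baseline over 0-180 degrees
    alignment_ratio = 100.0 * area_gauss / (area_gauss + area_base)
    return popt, alignment_ratio
```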
Comparing the parameters reported in Table 2 and Table 3, we can conclude that: 1) When exciting the sample with light polarized parallel to the craniocaudal axis, the baseline B is ≈23% smaller (0.39 instead of 0.48 on average) with respect to the opposite condition of illumination with light polarization perpendicular to this direction. This indicates that the collagen fibers are predominantly oriented in the vertical direction. 2) Similarly, the standard deviation σ of the Gaussian distribution is ≈22% smaller (27.58° instead of 33.59° on average) when the SHG is excited with light polarized parallel to the craniocaudal axis, suggesting a predominant collagen fiber distribution along this direction. Instead, using the pump beam with polarization oriented perpendicularly, the minor portion of collagen fibers oriented perpendicular to the craniocaudal axis will be more efficiently excited. Accordingly, the alignment ratios AR (Table 2 and Table 3) are higher in the case of the parallel-polarized pump. Furthermore, the AR in the cortical bone is more than 2-fold higher than that in the trabecular bone, especially for WT samples, for both polarizations. This result quantitatively distinguishes the cortical regions from the trabecular ones in terms of the degree of orientation of the collagen fibers. This is not the case for KO samples, where the AR is only slightly higher in the cortical than in the trabecular bone, for both polarizations. This can also be appreciated in Figure 6, where the plots describing collagen orientation in the KO models in the cortical and trabecular bone appear to overlap.
In conclusion, in WT samples the cortical and the trabecular regions feature differences in collagen fiber orientation, due to the different physiological specialization of these bone areas: the cortex, featuring the highest AR values, is mainly responsible for the mechanical strength and load bearing of the vertebrae.
FIGURE 6
Collagen fiber orientation results: the normalized pixel counts are reported in percentage as a function of the fiber orientation in the 0°–180° range, for every combination of polarization orientation (parallel and perpendicular), sample model (WT and KO; n = 3 per genotype) and bone area (cortical and trabecular bone). The solid line represents the average value, and the shaded area the standard deviation.
Lipid and protein content in bone and bone marrow
In WT murine spines, the images generated in the SRS modality at 2850 cm⁻¹ (magenta in Figure 3A) and 2920 cm⁻¹ (yellow in Figure 3A) show that both the lipid and protein content of the tissue appeared slightly more concentrated in the mineralized areas, i.e., cortical and trabecular bone, as compared to the marrow compartment. Conversely, in the KO samples the protein signal at 2920 cm⁻¹ (yellow in Figure 3B) was present in the whole field of view and distributed evenly in the bone and bone marrow, with a small predominance in the former. Interestingly, the lipid SRS signal at 2850 cm⁻¹ showed a more distinctive distribution, also compared to the signals detected in the WT sample. In fact, the lipid content appeared to be dominant in bone, in both the cortical and the trabecular part, differentiating it from the bone marrow.
To quantify these differences, we plotted the average lipid and protein signals registered in the bone normalized over the ones collected in the bone marrow, for both the WT and KO samples (Figure 7). The ratios were larger than one for both genotypes, indicating that, in general, bone contains more proteins and lipids compared to the bone marrow. As for the lipid signal, the bone/bone-marrow ratio was greater in KO than in WT samples. This observation may point to a different skeletal metabolism in the presence or absence of Dpp3. Of note, increasing evidence in the literature shows the importance of cellular metabolism in the molecular control of skeletal cell functions, and the association between metabolic dysregulation and skeletal degenerative diseases and ageing (Van Gastel and Carmeliet, 2021).
FIGURE 7
Ratio between the average signal intensity in the bone and in the bone marrow for lipids and proteins in WT and KO models. Statistical analysis was performed using t test. p-values are indicated above each bar plot (n = 3 for both genotypes).
Therefore, to support the hypothesis raised by the SRS analysis of lipids, we assessed the expression of genes related to lipid transport (CD36, Fabp4, and Fatp1), uptake (Fabp4, Lrp1) and utilization (Cpt1), and to energy metabolism (Pgc1, Pex7, and Glut1) in the flushed bone of WT and Dpp3 KO mice by qPCR and found an altered expression pattern in the KO (Figure 8). Overall, these data add to the hypothesis of altered bone metabolism in the absence of DPP3 raised by the SRS analysis.
In conclusion, a non-conventional analysis of bone uncovered a possible metabolic alteration in this tissue in the absence of Dpp3. A recent clinical study proposed DPP3 as a bone-protective factor, by showing a significant association with femoral neck bone mineral density in post-menopausal osteoporotic women before treatment and a significant reduction in patients compared to controls (Menale et al., 2022). Taken together, these findings deserve further investigation, also considering their possible translational relevance for human bone pathophysiology.
Conclusion
We demonstrated multimodal NLO microscopy as a quantitative, chemically selective, and non-destructive tool to effectively reveal features of bone biochemical composition and morphological arrangement in label-free murine spines, imaging large tissue areas that include complete vertebrae. This advanced optical technology allowed highlighting changes in terms of collagen amount and orientation, along with modifications in the lipid content between tissue samples from a WT and a mutant mouse (Dpp3 KO), the latter modeling a bone loss condition relevant to human pathology.
FIGURE 8
Gene expression analysis of selected genes relevant for lipid transport, uptake, and metabolism, in the flushed bone of WT and Dpp3 KO mice (n = 4 WT and 7 KO). Statistical analysis was performed using Mann Whitney test; p values are indicated above each graph.
Our work provides a proof of concept of the application of multimodal NLO microscopy on entire spine tissue sections to derive typical traits of pathophysiological bone conditions. The quantitative nature of this type of microscopy data is also well suited for computational methods of automatic detection and classification, in the framework of artificial intelligence-driven diagnostics. Also, the presented multimodal NLO microscope was adapted to scan large tissue areas, thus collecting a considerable amount of data per acquired multichannel image, which would serve the need for a rich training dataset to boost computational accuracy. By coupling label-free multimodal NLO imaging, not requiring any time-consuming sample preparation, to machine learning models, trained to predict in real time the probability of bone loss diseases, one can offer a rapid and accurate system to aid and augment clinical decisions in bone histopathology protocols. Validation of this analytical approach on diverse pathological samples will corroborate our conclusion and pave the way to improvement of the standard histopathological and immunohistochemical practice in bone biomedical analysis, through the prospective introduction of advanced NLO microscopy tools in clinical diagnostics (Scott, 1979; Sobel and Feldman, 2015; Gaytan et al., 2020).
Data availability statement
The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.
Ethics statement
The animal study was reviewed and approved by Institutional Animal Care and Use Committee of Humanitas Research Hospital.
Design and synthesis of new indole drug candidates to treat Alzheimer’s disease and targeting neuro-inflammation using a multi-target-directed ligand (MTDL) strategy
Abstract A novel series of indole-based compounds was designed, synthesised, and evaluated as anti-Alzheimer’s and anti-neuroinflammatory agents. The designed compounds were in vitro evaluated for their AChE and BuChE inhibitory activities. The obtained results revealed that compound 3c had higher selectivity for AChE than BuChE, while, 4a, 4b, and 4d showed selectivity for BuChE over AChE. Compounds 5b, 6b, 7c, and 10b exerted dual AChE/BuChE inhibitory activities at nanomolar range. Compounds 5b and 6b had the ability to inhibit the self-induced Aβ amyloid aggregation. Different anti-inflammatory mediators (NO, COX-2, IL-1β, and TNF-α) were assessed for compounds 5b and 6b. Cytotoxic effect of 5b and 6b against human neuroblastoma (SH-SY5Y) and normal hepatic (THLE2) cell lines was screened in vitro. Molecular docking study inside rhAChE and hBuChE active sites, drug-likeness, and ADMET prediction were performed.
Introduction
The most common chronic irreversible neurodegenerative disorder is Alzheimer's disease (AD). It is characterised by memory deterioration, loss of speech, and cognitive impairment in elderly people 1,2. It has been reported that 36 million people in the world were living with dementia in 2010, and that this number will double every 20 years, increasing the number of people with AD to more than 152 million by the end of 2050. It is expected that the care of AD patients will cost about US $2 trillion by 2030 3,4.
According to the literature, the aetiology of AD is not completely known, but the most characteristic pathogenic features of this multifactorial disease are low levels of acetylcholine, β-amyloid (Aβ) deposits, tau-protein (τ) aggregation, oxidative stress, and biometal dyshomeostasis [5][6][7]. A causal role in AD also arises from inflammation. Thus, a characteristic feature of AD is chronic and sustained microglial activation, which results in increased inflammatory mediators, such as cyclooxygenase-2 (COX-2), nitric oxide (NO), tumour necrosis factor α (TNF-α), and interleukin 1β (IL-1β). These mediators lead to neuronal apoptosis and facilitate the propagation of a detrimental neuro-inflammation cycle 3,5.
Until now, there has been no drug to cure AD. The most common FDA-approved therapeutic agents are acetylcholinesterase inhibitors (AChEIs), namely tacrine, rivastigmine, and galantamine 2,3,7. They counteract the action of cholinesterases (ChEs), such as acetylcholinesterase (AChE) and butyrylcholinesterase (BuChE), in the hydrolysis of the neurotransmitter acetylcholine into choline and acetic acid [8][9][10]. Moreover, the most effective drug for treating AD is donepezil (I), but it is effective only for a short period of time, after which the symptoms recur 11.
Other polyphenolic natural products with stilbene structural features are also known. Resveratrol (V) and ferulic acid (VI) have various therapeutic activities, especially as antioxidant, anti-Aβ-aggregation, and anti-inflammatory agents 6,18. Thus, resveratrol (V) was reported to suppress the activation of NF-κB and, as a result, it could inhibit the COX-2 enzyme and exert anti-inflammatory properties 19. Moreover, butein (VIII), a natural product that contains a chalcone part, was reported to modulate neurodegenerative disorders 20,21.
Guided by the above facts, and due to the problems of most clinical AD drugs, such as nausea, vomiting, diarrhoea, and nephrotoxicity [28][29][30], there is an urgent need to apply the multi-target-directed ligand (MTDL) strategy "one molecule, multiple targets" to design and synthesise new drug candidates that can interact with multiple targets involved in the pathogenesis of AD.
The designed compounds were classified into three groups A–C (Figure 1), all containing an indole ring as the main core decorated by a methanesulfonyl group (SO2Me) as a selective COX-2 pharmacophore, besides:
- a stilbene moiety [as in resveratrol (V) and ferulic acid (VI)] – group A;
- a piperazinyl pyrimidine moiety and other secondary amines [to mimic GIBH-130 (II)] – group A;
- a benzyl piperidine ring [to resemble donepezil (I)] – groups A and B;
- a chalcone part and an -NHCOCH2- linker in some prepared derivatives [for anti-inflammatory activity, as in butein (VIII) and compound IX] – group B;
- a hydrazone moiety and thiazole scaffold [as in compounds VIII and IX] – group C.
The synthesised compounds were subjected to: spectroscopic analysis (IR, 1H NMR, DEPT-Q NMR, and mass spectrometry) and elemental analysis to confirm the chemical structures; measurement of their AChE and BuChE inhibitory activities to evaluate their effect on AD; assessment of anti-neuroinflammatory activity through measurement of NO, COX-2, IL-1β, and TNF-α; and evaluation of the cytotoxic effect on human neuroblastoma (SH-SY5Y) and normal hepatic (THLE2) cell lines. Moreover, molecular docking studies and ADMET prediction were investigated.
Results and discussion
Chemistry
Synthetic routes for the development of the novel N-methylsulfonyl indole derivatives are outlined in Schemes 1-3.
In Scheme 2, the chalcone derivatives 5a-c and 6a,b are outlined. Reaction of p-aminoacetophenone (A) with different phenylacetic acid derivatives Ba-c afforded the key intermediates Ca-c. By applying Claisen-Schmidt condensation conditions, compounds Ca-c reacted with the indole carboxaldehyde derivative 2 in absolute ethanol using sodium ethoxide to give derivatives 5a-c in good to high yields.
N-Chloroacetyl derivatives of 3- or 4-aminoacetophenone Da,b were heated under reflux with benzyl piperidine (E) in acetone in the presence of potassium carbonate and a catalytic amount of KI to afford precursors Fa,b in excellent yields through a nucleophilic substitution reaction.
Finally, the target compounds 6a,b were generated by stirring compound 2 with the intermediates Fa,b at room temperature for 24 h in methanol containing KOH.
In Scheme 3, we sought to introduce a hydrazone moiety into the prepared compounds. Thus, the indole carboxaldehyde derivative 2 underwent a condensation reaction with benzohydrazide, phenylacetohydrazide, or p-chlorophenylacetohydrazide in glacial acetic acid to give the indole molecules 7a-c.
The key intermediate 8 was obtained from the reaction of compound 2 with cyanoacetic acid hydrazide under reflux conditions using absolute ethanol as a solvent. Then, the target derivatives 9a-c and 10a,b were afforded by reacting compound 8 either with different arylidene derivatives, giving the pyridine-containing compounds 9a-c, or with ethyl or phenyl isothiocyanate and elemental sulphur in absolute ethanol containing a catalytic amount of triethylamine, to produce 10a,b.
The structures of the synthesised compounds were confirmed with the help of IR, 1 H, and 13 C NMR and mass spectral data (see Experimental part).
Biological evaluation
Acetylcholinesterase and butyrylcholinesterase (AChE and BuChE) inhibition activity results The effects of twenty synthetic compounds were evaluated for AChE and BuChE inhibition using the modified method of Ellman et al. 33 Inhibitory activities were detected and results were expressed as IC 50 (nM) values ( Figure 2). The results of the assay showed that almost all compounds were moderate to strong inhibitors of AChE except 3b, 4c, 5a, 6a, 7 b, 9a, 10a, and 9b with less activity as compared to reference drugs (tacrine and donepezil). On the other side, p-chlorophenyl chalcone derivative 5b exhibited the strongest AChE inhibitory activity as well as in plaques and tangles in AD patients 34 . Also, it was reported that AChE activity decreases progressively in the brain of AD patients, BuChE activity shows some increase. BuChE may replace AChE by hydrolysing brain acetylcholine in some conditions, such as in mice nullizygote for AChE or in AD patients in advanced stages of the disease 34,35 . Based on the mentioned information, 5ub and 6b compounds may be used in both cases, mild and advanced AD cases. Inhibition of Ab1-42 self-induced aggregation AD is a chronic irreversible neurodegenerative disease. It is caused by accumulation of amyloid plaques in the brains of patients suffering from AD 36 . Plaques are mainly composed of Beta-amyloid (Ab) peptides: Ab1-40 and Ab1-42. Recent studies showed that two Beta-Amyloid peptides (Ab1-40 and Ab1-42) exist in brain tissues, cerebrospinal fluid (CSF), and plasma in patients suffering from AD. In particular, aggregated Ab1-42 is considered a validated biomarker for diagnosing AD 37 .
Tacrine was used as a reference drug in this study to evaluate the inhibitory activity of eight selected compounds on Aβ1-42 aggregation. The compounds showed strong activity (IC50 = 5.16-22.40 mM) as compared to tacrine (IC50 = 3.50 mM), as indicated in Table 1. Interestingly, compound 5b showed more potent inhibitory activity (IC50 = 2.50 mM) than tacrine, and compound 6b showed inhibitory activity (IC50 = 4.94 mM) nearly similar to that of tacrine (Table 1).
Nitric oxide (NO) assessment
In AD, increased production of vascular NO, a highly neurotoxic mediator in the CNS, may contribute to the vulnerability of neurons to injury and cell death 38 . Anti-neuroinflammatory activities of the most potent two compounds, 5b and 6b, were evaluated on production of NO in LPS-induced BV2 microglia cell lines. It was observed that compounds 5b and 6b induced a decrease in NO level (4.89 and 4.46 pg/mL, respectively) if compared to a positive control (6.42 pg/mL) ( Figure 3).
Cyclooxygenase-2 (COX-2) assessment
The two most potent compounds, 5b and 6b, were evaluated on production of COX-2 in LPS-induced BV2 microglia cell lines. Previously, it was reported that COX-2 is a key mediator in the inflammatory response and may play a role in neurodegeneration 39 . The results of this study revealed that compounds 5b and 6b induced a decrease in COX-2 levels of about 20 and 14%, respectively, as compared to a positive control ( Figure 4).
Interleukin-1β (IL-1β) assessment
IL-1β is a pro-inflammatory cytokine involved in the pathogenesis of AD 40. Therefore, the activity of derivatives 5b and 6b on IL-1β production in LPS-induced BV2 microglia cell lines was evaluated. They induced a decrease in IL-1β level to about 5% and 48%, respectively, as compared to the positive control (Figure 5).
Tumour necrosis factor-α (TNF-α) assessment
TNF-α is a pro-inflammatory cytokine that has been demonstrated to have a key role in inflammation. TNF-α signalling exacerbates both Aβ and tau pathologies in vivo, according to several lines of evidence based on genetic and pharmacological modifications. Anti-inflammatory therapies, both preventive and interventional, were found to reduce brain damage and improve cognitive function in rodent models of AD. In this work, the results revealed that compounds 5b and 6b showed a remarkable decrease in TNF-α levels, to 53 and 67% compared to the positive control (Figure 6).
Cytotoxicity of synthetic compounds in SH-SY5Y and THLE2 cells
An MTT assay was performed to investigate the effect of the two selected compounds, 5b and 6b, on cell viability using human neuroblastoma (SH-SY5Y) and normal hepatic (THLE2) cell lines. The cells were treated with compounds 5b and 6b and the cytotoxicity was evaluated in comparison with staurosporine. For SH-SY5Y cells, staurosporine had an IC50 of 11.1 mg/mL, while 5b and 6b had IC50 values of 42.8 and 76.6 mg/mL, respectively. For THLE2 cells, staurosporine had an IC50 of 34.6 mg/mL, while compounds 5b and 6b exerted IC50 values equal to 114 and 91.5 mg/mL, respectively (Figure 7).
Molecular docking study
The X-ray crystallographic structures of both rhAChE in complex with donepezil (PDB: 4EY7) and hBuChE in complex with an inden-naphthamide derivative (PDB: 4TPK) were obtained from the Protein Data Bank.
Concerning docking studies inside the rhAChE active site, donepezil, the ligand compound, forms three binding interactions with the rhAChE active site. Thus, its dimethoxyphenyl ring formed an arene-arene interaction with the Trp286 amino acid. Both the -CH2- of the piperidinyl moiety and the phenyl group of the benzyl part could interact with Tyr341 and Trp86 amino acids through arene-H and arene-arene interactions, respectively. Its binding energy score was −17.2793 kcal/mol.
By inspecting the docking results of the tested compounds 3c, 5b, 6b, 7c, and 10b, it was found that their binding energy scores ranged from −31.6883 to −17.0177 kcal/mol. Moreover, they form binding interactions with Trp86, Tyr341, and Trp286 amino acids, the same as donepezil, via arene-arene and arene-H interactions, in addition to H-binding interactions with Tyr124, Phe225, Gly448, and His447 amino acids.
Thus, the binding mode of the most active AChE inhibitor, chalcone derivative 5b (IC50 = 27.54 nM), showed two hydrogen-bonding interactions, between SO2Me/Tyr124 and C=O/Phe225. Moreover, the other chalcone derivative 6b, with the -NHCOCH2- linker and benzylpiperidine pharmacophore, exerted two arene-arene interactions between the phenyl ring of the benzyl part and that of -NHPh with Trp86 and Trp286 amino acids, respectively, besides an arene-H interaction between the -CH2- alkyl part of the benzyl moiety and the Tyr341 amino acid. Additionally, compound 7c, bearing the -NHCOCH2- spacer, displayed an arene-H interaction between the -SO2Me moiety and the Trp286 amino acid. Moreover, the -SO2Me pharmacophore of compound 3c, the trimethoxy stilbene derivative, interacted with the Trp86 amino acid through an arene-H binding mode.
From the thiazole series, compound 10b interacted with Tyr341 and His447 amino acids through an arene-H binding mode with the pyrrole and thiazole rings, respectively. It also exerted an arene-arene interaction between pyrrole/Trp286 and a hydrogen-bonding interaction with the Gly448 amino acid.
Regarding the BuChE active site, it was found that the ligand compound interacts with the hBuChE active site through an arene-arene interaction between the -CH2- group and Trp82 and an arene-cation interaction between the piperidine-NH and Tyr332. Additionally, a hydrogen-bonding interaction between the C=O group and the Gly116 amino acid was observed. The ligand energy-binding score was −16.4403 kcal/mol.
For the tested derivatives, the most active one, 5b (IC50 = 36.85 nM), had an energy-binding score of −19.5216 kcal/mol. It formed three arene-arene interactions, between phenyl ring/Trp82, pyrrole ring/Trp82 and -CO-Ph ring/Tyr332, the same as was observed in the ligand docking study. Additionally, the stilbene derivatives containing a piperidine ring (4a) or a benzyl piperidine moiety (4d), with energy-binding scores of −17.9970 and −11.4498 kcal/mol, respectively, displayed arene-H interactions with Trp82 and Gly116 amino acids.
Thiazole derivative 10b showed a higher binding affinity for hBuChE than the ligand. It formed a hydrophobic interaction with the Trp82 amino acid. It was noticed that the pyridine ring in 4b and the p-chlorophenyl scaffold in 7c were responsible for the hydrophobic interactions inside the hBuChE active site with Thr120 and Trp82 amino acids, respectively.
Compound 6b showed extra interactions with the hBuChE active site. It formed binding interactions with Trp82, Ile69, and Gln71 amino acids.
From the above data analysis, we can conclude that hydrophobic interactions are mainly responsible for the binding process inside both the rhAChE and hBuChE active sites. The most important pharmacophores for activity are the benzyl, phenyl, p-chlorophenyl, pyridine, pyrrole, thiazole, and piperidine rings, besides the -SO2Me, C=O, C=S, -CH2-, and -NHCOCH2- moieties.
The obtained data are summarised in Table 2
ADME study
Predicted physicochemical properties and drug-likeness
To explore the drug-likeness properties of the most active AChE and BuChE inhibitors, 3c, 4a, 4b, 4d, 5b, 6b, 7c, and 10b, compared with the donepezil and tacrine drugs, theoretical descriptors such as molecular weight (MW), number of hydrogen-bond acceptors and donors, number of rotatable bonds, TPSA, percentage of absorption, as well as the lipophilicity indicator logP ("octanol/water" partition coefficient), were evaluated (Table 3).
Violating more than one of Lipinski's parameters predicts possible bioavailability problems for the target compounds as drugs.
The obtained results showed that six of eight tested compounds, stilbene derivatives 3c, 4a, 4b, chalcone derivative 5b, hydrazone derivative 7c and thiazole derivative 10b, obeyed Lipinski's rule with no violation and may meet the criteria for orally active drugs. They had a similar drug-likeness to donepezil and tacrine.
Compounds 4d and 6b had a slightly increased MW, over 500, besides octanol/water partition coefficients of 6.02 and 5.38, respectively, above the acceptable value, which suggests low membrane permeability.
In silico ADME prediction
Pharmacokinetic properties, such as absorption, distribution, metabolism, and excretion, of the most active derivatives 3c, 4a, 4b, 4d, 5b, 6b, 7c, and 10b were determined using in silico ADME prediction. The results were compared to the donepezil and tacrine drugs. As shown in Table 4, all the target derivatives showed high intestinal absorption values ranging from 96.61% to 99.78%, which are close to those of the reference drugs, donepezil (97.95%) and tacrine (96.51%).
Permeability for in vitro CaCo-2 cells was in the low to moderate range (0.60–21.50).
Additionally, low permeability values for in vitro MDCK cells, in the range 0.04–0.43, were noticed.
Moreover, chalcone derivatives 5b and 6b had higher absorption into the CNS than donepezil. Their predicted blood-brain barrier (BBB) values were 0.21 and 0.22, respectively, while that of donepezil was 0.18.
Skin permeability (SP) values ranging from −1.59 to −2.04 were observed for all tested derivatives, compared with −3.04 for the reference drugs. On the other hand, the solubility in pure water of the trimethoxy stilbene derivative 3c was 5.07 mg/L, close to that of donepezil (6 mg/L). From the above results, we conclude that the tested compounds, especially 3c, 4a, and 5b, have good ADME properties and can be further optimised for druggability.
Predicted toxicity properties
To predict the toxicity properties of the most active derivatives 3c, 4a, 4d, 5b, and 6b, the AMES test and mouse/rat carcinogenicity predictions were computed. Additionally, the cardiac toxicity of the selected compounds was checked via hERG inhibition. The standard drugs donepezil and tacrine were used to compare the obtained results (Table 5).
Results showed that the trimethoxy stilbene derivative 3c and the hydrazone derivative 7c resemble donepezil in their mutagenic behaviour in the AMES test and had a negative carcinogenic effect in mice and rats, besides a medium- to low-risk effect as cardiotoxic agents. On the other hand, the chalcone derivative 5b and the thiazole analogue 10b exerted effects similar to tacrine, being carcinogenic-positive in mice and negative in rats and still having medium-risk or ambiguous behaviour as cardiotoxic agents; they differed in being non-mutagenic in the AMES test. All other tested derivatives, 4a, 4b, 4d, and 6b, showed non-mutagenic effects in the AMES test, negative carcinogenic behaviour in mice and rats, and medium risk as cardiotoxic agents. From the above results, the target derivatives may have good characteristics as lead drugs.
Metabolism prediction
An in silico phase I metabolism study can identify inhibitors of cytochrome P450 isoforms, such as CYP1A2, CYP2C19, CYP2C9, CYP2D6, and CYP3A4, and predict the excretion properties of the target compounds by assessing P-glycoprotein (P-gp) substrates. Thus, all tested derivatives, 3c, 4a, 4b, 4d, 5b, 6b, 7c, and 10b, were similar to donepezil and tacrine in that they could not inhibit the CYP1A2 and CYP2D6 isoforms, respectively. They could act as inhibitors of CYP2C9 and CYP3A4, except for 3c and 7c, which did not show any inhibitory activity on the CYP3A4 isoform. On the other hand, only two derivatives, 4d and 6b, could not act as CYP2C19 inhibitors, mimicking the action of both donepezil and tacrine. Regarding P-gp, two of the tested compounds, 4d and 6b, were considered P-gp substrates (Table 6).
Conclusion
A new series of indole-based compounds was designed and synthesised as potent anti-Alzheimer's and anti-neuroinflammatory agents. All the prepared compounds were evaluated in vitro for their AChE and BuChE inhibitory activities. To explore the binding mode of the active compounds, derivatives 3c, 5b, 6b, 7c, and 10b were docked inside the AChE active site, while 4a, 4b, 4d, 6b, 7c, and 10b were chosen for docking into the BuChE active site. Data analysis showed that hydrophobic interactions were mainly responsible for the binding process, besides H-bonding interactions with some important amino acids, such as Trp86, Trp286, Tyr124, Tyr341, Phe225, His447, and Gly448 for AChE, and Trp82, Tyr332, Gly116, Thr120, Ile69, and Gln71 for BuChE. Finally, from the drug-likeness and ADMET prediction results, it was found that six of the eight tested compounds (stilbene derivatives 3c, 4a, 4b, chalcone derivative 5b, and hydrazone-containing compounds 7c and 10b) obeyed Lipinski's rule of five and were considered good candidates for further optimisation to develop new anti-Alzheimer/anti-neuroinflammatory drugs.
Chemistry
For determination of melting points, a Griffin apparatus was used without correction. Moreover, infrared spectra (IR) were recorded on a Shimadzu IR-435 spectrophotometer using KBr discs, and values are reported in cm⁻¹ (Faculty of Pharmacy, Cairo University). Both 1H NMR and 13C NMR (DEPT-Q) were carried out using a Bruker instrument at 400 MHz for 1H NMR and 100 MHz for 13C NMR (Faculty of Pharmacy, Beni-Suef University, Beni-Suef, Egypt and Faculty of Pharmacy, Mansoura University, Mansoura, Egypt), in DMSO-d6 or D2O using TMS as an internal standard; chemical shifts are reported in ppm on the δ scale using DMSO-d6 (2.5) as a solvent. Coupling constant (J) values were estimated in Hertz (Hz). Splitting patterns are designated as follows: s, singlet; d, doublet; t, triplet; q, quartet; dd, doublet of doublets; m, multiplet. A Hewlett Packard 5988 spectrometer (Palo Alto, CA) was used for recording the electron impact (EI) mass spectra. C, H, N microanalysis was performed on a Perkin-Elmer 2400 at the Microanalytical Centre, Cairo University, Egypt and was within ±0.4% of theoretical values. To follow the course of reactions and to check the purity of final products, analytical thin-layer chromatography (TLC) (pre-coated plastic sheets, 0.2 mm silica gel with UV indicator [Macherey-Nagel]) was employed. All other reagents and solvents were purchased from the Aldrich Chemical Company (Milwaukee, WI) and were used without further purification.
General procedure for the synthesis of compounds 3a-c
Indole carboxaldehyde derivative 2 (0.01 mol, 2.23 g), the appropriate phenylacetic acid derivative (0.01 mol), and potassium carbonate (0.01 mol, 1.38 g) were dissolved in acetic anhydride (5 mL). The mixture was stirred at 90 °C for 4-6 h (monitored by TLC). Water (10 mL) was added and the reaction mixture was stirred at 60 °C for 1 h. The reaction mixture was cooled and acidified with 12 N HCl. The aqueous solution was extracted with CH2Cl2 (3 × 10 mL), and the obtained organic layers were combined and evaporated to dryness. The formed residue was crystallised from EtOAc to give compounds 3a-c.
General procedure for the synthesis of compounds 4a-d
A mixture of compound 3b (0.001 mol, 0.37 g) with N,N,N',N'-tetramethyl-O-(1H-benzotriazol-1-yl)uronium hexafluorophosphate (HBTU) (0.001 mol, 0.37 g) in dimethylformamide (2 mL) was stirred for 30 min at room temperature. Then, the appropriate amine (0.001 mol) and a catalytic amount of triethylamine were added. The reaction mixture was stirred for 2-4 h at room temperature (monitored by TLC). Water (10 mL) was added. The product was extracted using ethyl acetate. The combined extract was concentrated. The obtained crude compound was crystallised from 95% ethanol to give a pure form of the desired compounds 4a-d.
General procedure for the synthesis of compounds Ca-c
A mixture of the appropriate compound Ba-c (0.001 mol) and N,N,N',N'-tetramethyl-O-(1H-benzotriazol-1-yl)uronium hexafluorophosphate (HBTU) (0.001 mol, 0.37 g) in dimethylformamide (2 mL) was stirred for 30 min at room temperature. Then, p-aminoacetophenone derivative (A) (0.001 mol, 0.13 g) and a catalytic amount of triethylamine were added. The reaction mixture was
General procedure for the synthesis of compounds 5a-c
To a solution of the appropriate acetophenone derivative Ca-c (0.01 mol) in absolute ethanol (10 mL) containing sodium ethoxide (0.02 g Na metal, 0.01 mol, in 5 mL absolute ethanol), aldehyde derivative 2 (0.01 mol, 1.45 g) was added. The reaction mixture was stirred for 24 h at room temperature. The obtained solution was poured into ice-cold water and neutralised with a few drops of conc. HCl (indicated by litmus paper). The obtained solid was filtered off, dried, and crystallised from acetone to give a pure form of the compounds 5a-c.
Assessment of AChE and BuChE inhibitory activities
The inhibitory efficacy of the synthesised compounds 3a-c, 4a-d, 5a-c, 6a,b, 7a-c, 9a-c, and 10a,b against AChE and BuChE, in comparison with the reference drugs tacrine and donepezil, was investigated using a modified Ellman's test. The reaction of thiocholine with 5,5'-dithiobis(2-nitrobenzoic acid) (DTNB) generates a yellow chromophore that can be quantified at 412 nm 41.
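IC50 values of this kind are commonly obtained by fitting a dose-response model to the per-concentration percent activity; the four-parameter logistic fit below is a generic sketch, not the exact fitting procedure used in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ic50, hill):
    """Percent enzyme activity as a function of inhibitor concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

def estimate_ic50(concentrations, percent_activity):
    """Fit the logistic curve and return the IC50 in the same units as the concentrations."""
    p0 = [0.0, 100.0, np.median(concentrations), 1.0]  # rough starting point
    popt, _ = curve_fit(four_param_logistic, np.asarray(concentrations, float),
                        np.asarray(percent_activity, float), p0=p0, maxfev=10000)
    return popt[2]
```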
Inhibition of Ab1-42 self-induced aggregation
Inhibition of self-induced β-amyloid Aβ1-42 aggregation was assessed for the selected compounds 3c, 4a, 4b, 4d, 5b, 6b, 7c, and 10b in comparison with tacrine. Screening Aβ42 ligands that could prevent aggregation is critical for developing potential therapeutic treatments. In BioVision's Beta-Amyloid 1-42 (Aβ42) Ligand Screening kit, a dye binds to the beta-sheets of an aggregated amyloid peptide, resulting in an intense fluorescent product read at a wavelength of 450 nm using a BIOLINE ELISA reader. In the presence of an Aβ42 ligand, this reaction is impeded/abolished, resulting in a decrease or total loss of fluorescence. This assay is useful for screening Aβ42 ligands for developing potential therapeutic agents against AD 42. The assessment was performed according to BioVision's Beta-Amyloid 1-42 (Aβ42) Ligand Screening Kit Catalog No. K570-100.
Assessments of anti-neuroinflammatory activity
The most active tested compounds, namely 5b and 6b, were selected to be assessed for their effect on NO, IL-1β, TNF-α, and COX-2 production in LPS-induced BV2 microglial cell lines. LPS was used as the positive control. NO plays an important role in neurotransmission, vascular regulation, immune response, and apoptosis. NO is rapidly oxidised to nitrite and nitrate, which are used to quantitate NO. NO was estimated using the Abcam ELISA kit (catalog No. ab65328). Briefly, the enzyme was added to the cell lysate, followed by the cofactor, and incubated at room temperature for 60 min; after that, Griess reagent was added and incubated at room temperature for 10 min; finally, the optical density was measured at 540 nm.
For COX-2, all reagents, samples, and standards were prepared according to the kit instructions; then 100 µl of standard or sample were added to each well and incubated for 2.5 h at room temperature. Then 100 µl of prepared biotin antibody was added to each well and incubated for an hour at room temperature. After that, 100 µl of prepared streptavidin solution was added and incubated for 45 min at room temperature. An aliquot of 100 µl of TMB One-Step Substrate Reagent was added to each well and incubated for 30 min at room temperature, and 50 µL of stop solution was added to each well. Finally, the optical density was read at 450 nm immediately.
For IL-1β, this assay employs the quantitative sandwich enzyme immunoassay technique. A monoclonal antibody specific for human IL-1β has been pre-coated onto a microplate. Standards and samples are pipetted into the wells and any IL-1β present is bound by the immobilised antibody. After washing away any unbound substances, an enzyme-linked polyclonal antibody specific for human IL-1β is added to the wells. Following a wash to remove any unbound antibody-enzyme reagent, a substrate solution is added to the wells and colour develops in proportion to the amount of IL-1β bound in the initial step. The colour development is stopped and the intensity of the colour is measured 43; the assessment was performed as instructed in the IL-1β R&D Systems ELISA kit (catalog No. DLB50).
For TNF-α, the cell lysate was used to assess TNF-α using the MyBioSource ELISA kit (Catalog No: MBS2502004). This assay employs the quantitative sandwich enzyme immunoassay technique 44. Briefly, 100 µl of the samples were added to each well and incubated for 90 min at 37 °C; immediately afterwards, 100 µl of Biotinylated Detection Ab working solution was added to each well. After incubation for 1 h at 37 °C, 350 µl of wash buffer was added to each well; then 100 µl of HRP Conjugate working solution was added to each well and incubated for 30 min at 37 °C. Then 90 µl of Substrate Reagent was added to each well and incubated for about 15 min at 37 °C. Then, 50 µl of Stop Solution was added to each well. Finally, the optical density of each well was detected at 450 nm.
Cytotoxicity on SH-SY5Y and THLE2 cell lines
Cell culture protocol. The microglia cell line BV-2, human neuroblastoma (SH-SY5Y), and normal hepatic (THLE2) cells were obtained from the American Type Culture Collection. Cells were cultured using DMEM (Invitrogen/Life Technologies, Carlsbad, CA) supplemented with 10% FBS (Hyclone), 10 mg/mL of insulin (Sigma, St. Louis, MO), and 1% penicillin-streptomycin. All of the other chemicals and reagents were from Sigma or Invitrogen. Cells were plated (cell density 1.2–1.8 × 10,000 cells/well) in a volume of 100 µL of complete growth medium plus 100 µL of the tested compounds per well in a 96-well plate for 24 h before the MTT assay.
After treatment of the cells with serial concentrations of the compound to be tested, incubation is carried out for 48 h at 37 °C, and then the plates are examined under an inverted microscope before proceeding to the MTT assay 45.
In vitro cell viability assay (MTT assay method). The MTT method is simple, accurate, and yields reproducible results. It was used to investigate the cytotoxicity of 5b and 6b in human neuroblastoma (SH-SY5Y) and normal hepatic (THLE2) cell lines. Cells were seeded in wells at a density of 10⁶ cells/cm². Each test included a blank containing complete medium without cells.
Solutions of MTT, dissolved in medium or balanced salt solutions without phenol red, are yellowish in colour. Reconstituted MTT was added in an amount equal to 10% of the culture medium volume, and the cultures were then returned to the incubator for 2-4 h.
Mitochondrial dehydrogenases of viable cells cleave the tetrazolium ring, yielding purple formazan crystals which are insoluble in aqueous solutions. The crystals are dissolved in acidified isopropanol. The resulting purple solution is spectrophotometrically measured at a wavelength of 570 nm. Measure the background absorbance of multiwell plates at 690 nm and subtract from the 570 nm measurement.
An increase or decrease in cell number results in a concomitant change in the amount of formazan formed, indicating the degree of cytotoxicity caused by the test material.
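The viability calculation described above reduces to a background-corrected absorbance ratio against the untreated control; a minimal sketch with illustrative well readings:

```python
def percent_viability(a570_treated, a690_treated, a570_control, a690_control):
    """Background-corrected MTT viability relative to untreated control wells."""
    treated = a570_treated - a690_treated    # subtract 690 nm background from 570 nm reading
    control = a570_control - a690_control
    return 100.0 * treated / control

print(percent_viability(0.62, 0.05, 1.10, 0.06))  # ~54.8% viable (illustrative values only)
```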
Docking study
To identify molecular features that might be responsible for the biological activity of the synthesised compounds and to predict their mechanism of action, a docking study was performed. The X-ray crystal structures of rhAChE in complex with donepezil (https://www.rcsb.org/structure/4EY7, PDB ID: 4EY7) and of hBuChE with its ligand (https://www.rcsb.org/structure/4tpk, PDB ID: 4TPK) were downloaded from the Protein Data Bank (PDB) of the Research Collaboratory for Structural Bioinformatics (RCSB). All molecular modelling calculations and docking studies were carried out using the Molecular Operating Environment software (MOE 2014.0901). All water molecules were deleted. To ensure the accuracy of the docking protocol, validation was performed by redocking the co-crystallised ligand (donepezil) into the rhAChE active site and N-{[1-(2,3-dihydro-1H-inden-2-yl)piperidin-3-yl]methyl}-N-(2-methoxyethyl)-2-naphthamide into the hBuChE active site, with resolutions of 2.35 and 2.7 Å and energy scores of −17.2793 and −16.4403 kcal/mol, respectively.
Selected target compounds were protonated, energy-minimised with the Merck Molecular Force Field (MMFF94X), and docked into the enzyme active sites using the same protocol as for the ligand compounds. The most stable conformer was chosen and the amino acid interactions were depicted. All docking data are summarised in Table 2.
ADMET study
Predicted molecular properties and drug-likeness
To evaluate the drug-likeness of the most active synthesised target derivatives 3c, 4a, 4d, 5b, and 6b, Molinspiration (2018.02 version) 46 was used to calculate molecular properties such as MW, number of hydrogen-bond acceptors (nON), number of hydrogen-bond donors (nOHNH), partition coefficient (logP), number of rotatable bonds (nrotb), topological polar surface area (TPSA), absorption percentage (%Abs), which was calculated using the formula %Abs = 109 − (0.345 × TPSA), and the number of violations of Lipinski's rule of five (n violations). Both the acceptable values and the predicted results for the target compounds and standard drugs are listed in Table 3.
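These descriptor-based checks are easy to reproduce once the descriptors are available; the sketch below uses illustrative values rather than the descriptors of the actual compounds:

```python
def percent_absorption(tpsa: float) -> float:
    """%Abs = 109 - 0.345 x TPSA, as used in the drug-likeness analysis."""
    return 109.0 - 0.345 * tpsa

def lipinski_violations(mw, logp, h_donors, h_acceptors) -> int:
    """Count violations of Lipinski's rule of five."""
    rules = [mw > 500, logp > 5, h_donors > 5, h_acceptors > 10]
    return sum(rules)

# Illustrative descriptor values only
print(f"%Abs = {percent_absorption(75.0):.1f}")
print("Lipinski violations:", lipinski_violations(mw=480.5, logp=3.9, h_donors=1, h_acceptors=6))
```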
In silico ADME prediction
The PreADMET online server 47 was used to predict the in silico ADME properties of the selected compounds 3c, 4a, 4d, 5b, and 6b, compared with the donepezil and tacrine drugs. Human intestinal absorption (HIA), cell permeability of CaCo-2 and Madin-Darby canine kidney (MDCK) cells, plasma protein binding (PPB), blood-brain barrier penetration (BBB), skin permeability (SP), and pure water solubility were calculated, and the predicted values are listed in Table 4.
Predicted toxicity properties
PreADMET online server 47 was used to predict toxicity properties using the AMES test, rodent carcinogenicity assay (mice and rats), and hERG-inhibition. The obtained results for compounds 3c, 4a, 4d, 5b, 6b, donepezil, and tacrine are recorded in Table 5.
Metabolism prediction
Metabolism prediction for the tested compounds 3c, 4a, 4d, 5b, and 6b was performed using the SwissADME online server 48. The most important parameters used to assess metabolism and excretion were the cytochrome P450 (CYP) isoforms and P-gp. All the obtained data for the tested derivatives and standard drugs are listed in Table 6.
Evaluation of Push and Pull Communication Models on a VANET with Virtual Traffic Lights
It is expected in the near future that safety applications based on vehicle-to-everything communications will be a common reality on the roads. This technology will contribute to improving the safety of vulnerable road users, for example, with the use of virtual traffic light systems (VTLS) at intersections. This work implements and evaluates a VTLS conceived to help pedestrians pass safely through intersections without real traffic lights. The simulated VTLS scenario used two distinct communication paradigms—the pull and push communication models. The pull model was implemented in named data networking (NDN), because NDN natively uses a pull-based communication model, where consumers send requests to pull the contents from the provider. A distinct approach is followed by the push-based model, where consumers subscribe to the information beforehand, and then the producers distribute the available information to those consumers. Comparing the performance of the push and pull models on a VANET with VTLS, it is observed that the push mode presents lower packet loss and generates fewer packets, and consequently occupies less bandwidth, than the pull mode. In fact, for the considered metrics, the VTLS implemented with the pull mode presents no advantage when compared with the push mode.
Introduction
Statistics show that pedestrians are more vulnerable to accidents than other road users. Indeed, in the European Union in 2016, 21% of all traffic fatalities were pedestrians [1]. Traffic signals play an important role in improving vulnerable road user (VRU) safety, because red lights stop cars at intersections so that VRUs (pedestrians, bicyclists) can cross safely. Unfortunately, many intersections have no traffic light systems (TLS). However, there has been a significant increase in the number of connected devices on public roads with the increase of connected vehicles. The use of vehicle-to-everything (V2X) communications is still expanding and improving, but in the future it will certainly be a common technology on the roads. It is expected that safety applications based on V2X communications will emerge, thus contributing particularly to improving the safety of VRUs. Inspired by this near-future reality, this work implements and evaluates a virtual traffic light system (VTLS) conceived to help pedestrians pass safely through intersections without real TLSs. The VTLS is implemented and evaluated using two distinct communication paradigms: the pull and push communication models, as discussed next.
The pull model was implemented with named data networking (NDN) [2], which is a very popular information-centric networking architecture. The goal of NDN is to completely redesign the Internet by replacing internet protocol (IP) datagrams with content chunks as the universal component of transport. Neither IP addresses nor port numbers are used in NDN packets. Communications in NDN are driven by the receivers and involve the exchange of interest and data packets. Basically, a consumer sends an interest packet to the network asking for content, and then a data packet carrying the requested content is returned by the provider. This communication model allows decentralization through in-network caching, making it very appropriate for large-scale environments with highly dynamic topologies, such as VANETs.
Three major data structures are present in the NDN nodes: the Content Store (CS), the Pending Interest Table (PIT), and the Forwarding Information Base (FIB). The CS is a temporary cache of data packets received by the router, and it is used to satisfy future interests. The PIT stores all interests received by the router that have not yet been satisfied. The FIB stores forwarding information, namely the outgoing faces used to forward the interests that match a name prefix. NDN natively uses a pull-based communication model, where consumers send requests to pull the contents from the provider. A distinct communication paradigm is followed by the push-based model: the consumers subscribe to the information in advance, and then the producers distribute (i.e., push) the available information to those consumers, periodically (e.g., monitoring messages) or based on events (e.g., safety warnings).
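To make the pull workflow concrete, the following simplified sketch mimics how an NDN node handles an incoming interest using its CS, PIT and FIB; this is an illustrative toy model, not the actual NDN forwarding daemon logic:

```python
class NdnNode:
    """Toy model of NDN interest/data processing with CS, PIT and FIB tables."""

    def __init__(self):
        self.cs = {}    # name -> cached data packet
        self.pit = {}   # name -> set of faces still waiting for the data
        self.fib = {}   # name prefix -> outgoing face

    def on_interest(self, name, incoming_face):
        if name in self.cs:                        # 1. satisfy from the Content Store
            return ("data", self.cs[name], incoming_face)
        if name in self.pit:                       # 2. aggregate the pending interest
            self.pit[name].add(incoming_face)
            return ("aggregated", None, None)
        self.pit[name] = {incoming_face}           # 3. forward via longest-prefix match in the FIB
        prefix = max((p for p in self.fib if name.startswith(p)), key=len, default=None)
        return ("forward", name, self.fib.get(prefix))

    def on_data(self, name, data):
        self.cs[name] = data                       # cache for future interests
        waiting = self.pit.pop(name, set())        # satisfy every face recorded in the PIT
        return [("data", data, face) for face in waiting]
```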
To help understand the focus of this work, let us consider the following use-case. A pedestrian equipped with a smartphone walks along the sidewalk of a city, where every road intersection has one road-side unit (RSU) implementing a virtual semaphore. To increase the safety of this vulnerable road user (VRU) while passing the intersections, the pedestrian's smartphone communicates with the RSU in order to inform it of his/her position and of the crosswalk that he/she intends to use. After receiving the data from the pedestrian, the RSU sets the state of the virtual traffic light signals, and this information is delivered to the vehicles approaching the intersection controlled by the RSU. If the vehicle receives a red light signal from the RSU, then it must stop at the intersection to let the pedestrian pass safely on the crosswalk. If the vehicle receives a green light signal from the RSU, then it must follow the traffic rules to pass the intersection in the presence of other vehicles. The RSU interaction with pedestrians and cars can follow the pull or the push communication model, as illustrated in Figure 1. In the push model, the pedestrian P sends the data message D to the RSU S, which then sends the virtual traffic light message L to car C. In the pull model, the messages D and L are only sent after being requested by RSU S (through message I) and by car C (through message J), respectively. The pull-based communication model offers some advantages over the push model, namely in terms of bandwidth control and number of data requests. Indeed, the pull model is able to regulate the communications better. For example, in a group of RSUs close enough to each other, interests may be sent out of phase by the RSUs to guarantee that, at any time, only one RSU is sending an interest in that communication domain. This traffic distribution helps to achieve a better use of the channel bandwidth, which in turn helps to decrease packet collisions. Moreover, an RSU is able to ask for specific information from data providers, such as pedestrians. However, as pedestrians can only send data after receiving an interest from the RSU, this imposes some synchronicity on the transmissions of the pedestrians, which can create significant contention on the wireless channel if there is a large number of pedestrians near the RSU. This problem may be less relevant in the push model, because transmissions are asynchronous. The pull model also has the disadvantage of requiring a more complex communication model, which may impact the data delivery delay and the reliability of the system. Indeed, the pull model is more suitable for data without strict timeliness constraints, because a request for the data must be received by the provider before the data are transmitted. Moreover, the pull mode requires twice the number of messages, both for pedestrians sending their information to the RSU and for cars receiving the virtual traffic lights from the RSU. So, if the probability of losing a message in the wireless channel is p (for simplicity, all messages have the same size), then the success probability of a pedestrian sending its information to the RSU, or of a car receiving the traffic lights from the RSU, is (1 − p) in the push mode and (1 − p)^2 in the pull mode. Hence, the mentioned success probability is a factor of (1 − p) lower in the pull mode than in the push mode.
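As a quick numerical check of this reasoning, the snippet below compares the per-exchange success probabilities of the two models for a few illustrative values of the loss probability p (the values of p are arbitrary examples, not measured results).

```python
# Success probability of one pedestrian->RSU (or RSU->car) exchange:
# push mode needs a single message to arrive, while pull mode needs both the
# request and the reply to arrive, hence (1 - p) versus (1 - p)**2.
for p in (0.01, 0.05, 0.10):
    push_ok = 1 - p
    pull_ok = (1 - p) ** 2
    print(f"p={p:.2f}  push={push_ok:.4f}  pull={pull_ok:.4f}  pull/push={pull_ok / push_ok:.4f}")
```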
It is expected that the pull mode will be outperformed by the push mode in the implementation of the VTLS scenario presented in the use-case. However, it is not clear how much worse the performance of the pull mode is compared to the push mode, or whether such performance degradation would prevent the use of the pull mode to implement a VTLS. The goal of this work is to clarify these issues by comparing the performance of the push and pull models on a VANET with virtual traffic lights placed at the road intersections to improve the pedestrians' safety. The implemented VTLS only tries to protect the pedestrians from the vehicles, and not the vehicles from other vehicles.
Related Works
Diverse studies and projects have been developed using vehicle-to-pedestrian communications for the safety of VRUs [3], including vehicular NDNs [4]. However, to the best of our knowledge, only the work indicated in [5] presents a VTLS application for VANETs over NDN. Instead of installing traffic lights at every intersection, an RSU acting as a traffic controller is used at each intersection. The RSU collects the information of the vehicles that have arrived or are going to arrive at the intersection, processes the information, and then sends a message telling every vehicle to pass the intersection or to stop. A geolocation-based forwarding strategy is used to disseminate packets. The authors claim that this was the first design of a smart traffic light system in a vehicular NDN. Nevertheless, the presence of VRUs is not considered in the simulation scenario.
There are other works that address the use of virtual traffic lights in VANETs or intersection signal decision-making algorithms. Originally proposed by Ferreira et al. [6], VTLSs were studied in later works, such as [6-8]. However, these works do not use NDN and do not take into consideration the presence of pedestrians. For example, in the work presented in [6], the vehicles moving in the same direction form a cluster. The cluster leader is the vehicle that is farthest from the intersection, and it is responsible for choosing the priorities and broadcasting the VTL messages to the other vehicles of the cluster. In the paper indicated in [7], each vehicle collects neighbor information to select a leader. Then, the leader creates and maintains the VTL, and broadcasts the traffic signals to the remaining vehicles of the group. In [8], the vehicles send their own information to the cloud infrastructure, which then informs the vehicles to pass or stop at the intersection. An algorithm is proposed in [9] to define the priorities at road intersections with VTLS. Vehicles exchange information, and priority is then given to the vehicle that first arrives at the intersection or to the priority vehicles. An intersection signal control mechanism is proposed in [10], whereby the RSU at the intersection collects the real-time information of far vehicles. Then, after merging this data with the information of the vehicles at the intersection obtained via image acquisition, the waiting times of the traffic flows are predicted for the signal decision-making. The work in [11] proposes an algorithm that assigns vehicles to a group for each lane and calculates the traffic volume and congestion degree using the traffic information of each group through inter-vehicle communications, without requiring cameras. In [12], a dynamic traffic regulation method is proposed, where the driver's willingness is taken into account, using a distributed collective decision mechanism to control the virtual traffic light.
In order to benefit from the one-way delay offered by push-based models, a few works have investigated push-based content retrieval solutions for NDNs [13,14], including the use of long-lived interests [15]. These special interests allow producers to send multiple data packets for a certain time period without requiring additional interests, thus reducing the PIT size and the network traffic. Moreover, the work presented in [16] proposes a mechanism for push-based data dissemination in vehicular NDNs, in which a producer node can inject content without preceding interest packets, thus reducing both the content forwarding and the caching delay. The push mode is also recommended by ETSI to implement a use-case involving traffic lights [17], where VRUs and vehicles continuously broadcast messages at a certain frequency to the RSU. This unit analyzes the crossing status and then transmits the information to the nearby vehicles to let the VRUs cross the crosswalk safely.
Pull-Based Virtual Semaphore
This section presents the implementation of the use-case described previously using the pull-based model. As NDN natively integrates a pull-based model, the pull-based virtual semaphore is implemented in NDN. In the proposed VTLS, the consumer can be static (RSU) or dynamic (vehicles), and the producer can be static (RSU) or dynamic (pedestrians). All exchanges of messages with the RSU are done using direct line-of-sight communications, as the RSU is assumed to be in a strategic position at the intersection, so that all nearby pedestrians and cars can reach it in one hop. The roles of the pedestrians, vehicles, and RSU in the VTLS are discussed in the next sections.
Pedestrians and RSU
The RSU periodically broadcasts an interest to learn about all pedestrians that are less than a certain distance from it. The interest name has the following format: /vtlsId/VRU, where /vtlsId/ is the identification of the domain name of the virtual traffic light system, and "VRU" is the content's directory path. The nonce, the hop limit, the distance range, and the GPS coordinates (or any other form of unique identification) of the RSU are also sent within the interest packet. The range and the GPS coordinates of the RSU are application parameters used to parameterize the data request. These two parameters are sent in the "application parameters" field of the interest packet, and are only used by the pedestrians to know whether the received interest should be ignored or not. The other parameters are sent in the "nonce" and "hop limit" fields of the interest packet, respectively [18]. The hop limit is set to one, so that the pedestrians do not forward any received interest. An interest packet is uniquely identified by the name and the nonce. The nonce is a random number generated by the consumer application and is used to detect duplicate packets.
After receiving an interest from the RSU, the pedestrian calculates the minimum distance to the RSU. If this distance is above the announced range, the received interest is ignored by the pedestrian. Only the pedestrians inside the range and approaching or leaving the intersection controlled by the RSU reply to the interest, sending a data packet with the same name as the interest, along with the following information: the pedestrian identification, the GPS coordinates of the pedestrian, the crossroad heading, and the next sidewalk GPS coordinates. The GPS coordinates, the crossroad heading, and the next sidewalk GPS coordinates of the pedestrian form a tuple that is obtained from the content store (CS) of the pedestrian, which is updated regularly by the NDN daemon running in the smartphone. The "next sidewalk GPS coordinates" identifies the sidewalk that the pedestrian intends to take after passing the intersection. This identification is done through the GPS coordinates of a point of the next sidewalk. The "crossroad heading" is the direction that the pedestrian will take after reaching the intersection, and it can assume the following values: go straight, turn left, and turn right. When the pedestrian is moving toward the intersection, he/she can indicate the planned crossroad heading using, for example, a specific smartphone application. As the pedestrians are configured not to cache the content of any other node, the pedestrian's CS only holds the pedestrian's own content. If the reply of a pedestrian is lost because of a communication problem in the wireless channel, a more up-to-date data packet can be sent in response to the next interest broadcast by the RSU.
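A minimal sketch of this pedestrian-side logic is given below, assuming dictionary-based packets and a simple Euclidean distance check; the field and function names are illustrative assumptions, not the names used in the actual implementation.

```python
import math

def distance(a, b):
    """Euclidean distance between two (x, y) positions, in metres."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def on_rsu_interest(interest, pedestrian):
    """Decide whether this pedestrian should answer the /vtlsId/VRU interest.

    `interest` carries the RSU position and range announced in its application
    parameters; `pedestrian` is the local state kept by the smartphone
    application (all names are illustrative).
    """
    if interest["name"] != "/vtlsId/VRU":
        return None                                   # not for this application
    if distance(pedestrian["position"], interest["rsu_position"]) > interest["range"]:
        return None                                   # outside the announced range
    if not (pedestrian["approaching"] or pedestrian["leaving"]):
        return None                                   # not using this intersection
    # Reply with a data packet carrying the same name as the interest.
    return {
        "name": interest["name"],
        "pedestrian_id": pedestrian["id"],
        "position": pedestrian["position"],
        "crossroad_heading": pedestrian["heading"],   # go straight / turn left / turn right
        "next_sidewalk": pedestrian["next_sidewalk"],
    }

ped = {"id": "p1", "position": (3.0, 1.0), "approaching": True, "leaving": False,
       "heading": "turn right", "next_sidewalk": (10.0, 0.0)}
interest = {"name": "/vtlsId/VRU", "rsu_position": (0.0, 0.0), "range": 4.0}
print(on_rsu_interest(interest, ped))   # within range -> a data packet is returned
```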
Whenever a data packet is received from a pedestrian, the RSU saves the information contained in the data packet in its internal database. So, one interest broadcast by the RSU may generate multiple entries in this internal database, one entry for each distinct reply received from a pedestrian inside the RSU's announced range. Afterwards, the RSU invokes an internal application, named traffic management controller (TMC), which calculates the light signal associated with each crosswalk and with all possible vehicle directions, taking into consideration the pedestrian registers in the internal database. Then, the RSU caches the traffic light signals returned by the TMC in the CS. The entries of the CS are updated whenever the internal database of the RSU changes, for example, when the RSU receives a data packet from a pedestrian. Once a pedestrian has crossed the intersection and starts moving away from it, the RSU is able to detect this situation from the data packet received from the pedestrian and, consequently, the RSU deletes the register in the internal database associated with that pedestrian. The register of a pedestrian is also deleted if no reply is received from that pedestrian during a predefined time interval. When the internal database of the RSU becomes empty, the TMC sets a green light for all crosswalks and directions, and the CS is updated accordingly.
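The RSU-side bookkeeping just described can be sketched as follows. The class, the timeout value, and the toy TMC callback are assumptions used only to illustrate how the internal database and the CS are kept in sync; they are not the actual implementation.

```python
import time

class RsuState:
    """Illustrative RSU bookkeeping for the pull-based VTLS."""

    def __init__(self, run_tmc, entry_timeout=5.0):
        self.db = {}                    # pedestrian_id -> (record, last_seen)
        self.cs = {}                    # (road, heading) -> "red" / "green"
        self.run_tmc = run_tmc          # traffic management controller callback
        self.entry_timeout = entry_timeout

    def on_pedestrian_data(self, record, now=None):
        now = now if now is not None else time.time()
        if record.get("leaving"):
            self.db.pop(record["pedestrian_id"], None)   # pedestrian has crossed
        else:
            self.db[record["pedestrian_id"]] = (record, now)
        self._refresh_cs(now)

    def _refresh_cs(self, now):
        # Drop pedestrians that stopped replying, then recompute the signals.
        self.db = {pid: (rec, seen) for pid, (rec, seen) in self.db.items()
                   if now - seen <= self.entry_timeout}
        if self.db:
            self.cs = self.run_tmc([rec for rec, _ in self.db.values()])
        else:
            self.cs = {}                # empty CS is read as "green everywhere"

    def traffic_light(self, road, heading):
        return self.cs.get((road, heading), "green")

# Trivial TMC used only for the example: red for one (road, heading) pair
# whenever at least one pedestrian register exists.
def toy_tmc(pedestrians):
    return {("road1", "turn left"): "red"} if pedestrians else {}

rsu = RsuState(run_tmc=toy_tmc)
rsu.on_pedestrian_data({"pedestrian_id": "p1", "leaving": False})
print(rsu.traffic_light("road1", "turn left"))    # red
print(rsu.traffic_light("road1", "go straight"))  # green
```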
Vehicles and RSU
When a vehicle approaches an intersection, it periodically sends an interest to the RSU requesting the respective virtual traffic light signal of that intersection. The interest name has the following format: /vtlsId/RSU/roadId1/roadId2, where /vtlsId/ is the identification of the domain name of the VTLS, and "RSU/roadId1/roadId2" is the content's directory path. The component roadId1 identifies the road where the vehicle is located, and roadId2 identifies the road that the vehicle intends to take after passing the intersection. This information can be obtained, for example, from the GPS track, if the destination was defined by the driver at the beginning of the trip, or from the turn signal. If the car is unable to determine roadId2, it sends the interest name "RSU/roadId1/*", and the RSU replies with the traffic lights of all crosswalks. The nonce and the hop limit are also sent within the interest packet. The hop limit is set to one, so that the vehicles do not forward any received interest. The RSU that should reply to the interest can be determined from the components roadId1 and roadId2.
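The following short sketch illustrates how a vehicle could build such interest names and how the crossroad heading could be deduced from roadId1 and roadId2; the road identifiers and the heading table are hypothetical examples rather than the actual network configuration.

```python
def build_vehicle_interest(vtls_id, road_id1, road_id2=None):
    """Build the interest name a vehicle sends to the RSU in pull mode."""
    suffix = road_id2 if road_id2 is not None else "*"   # '*' asks for all crosswalks
    return f"/{vtls_id}/RSU/{road_id1}/{suffix}"

# Illustrative mapping from (incoming road, outgoing road) to crossroad heading
# for one four-way intersection; a real deployment would derive this from the map.
HEADINGS = {
    ("roadN", "roadS"): "go straight",
    ("roadN", "roadE"): "turn left",
    ("roadN", "roadW"): "turn right",
}

def crossroad_heading(road_id1, road_id2):
    return HEADINGS.get((road_id1, road_id2), "unknown")

print(build_vehicle_interest("vtls42", "roadN", "roadE"))  # /vtls42/RSU/roadN/roadE
print(build_vehicle_interest("vtls42", "roadN"))           # /vtls42/RSU/roadN/*
print(crossroad_heading("roadN", "roadE"))                 # turn left
```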
After receiving an interest from a vehicle, the RSU gets the virtual traffic light from the CS, which has one entry for each possible direction a car may follow in the intersection. For example, if an intersection has four crosswalks, as shown in Figure 2, there are three possible directions that a car may take after reaching a crosswalk (left, right, ahead), and so the CS of the RSU contains 4 × 3 = 12 entries. After receiving the traffic light signal from the CS, the RSU replies to the vehicle with a data packet, having the same name of the interest and carrying the traffic light signal in the content. The virtual traffic light signal is zero for green light signal, and one for red light signal (for sake of simplicity, the yellow light signal is not considered in this work). Since the location of the moving pedestrians near the intersection is always changing, the vehicle sends periodically an interest to the RSU in order to update the information of the traffic light signal, as the vehicle approaches the intersection. After receiving a data packet with the same name of the interest sent to the RSU, the vehicle stores the new traffic light signal in the CS and deletes the traffic light signal of the data packet received previously. This information will be used by the vehicle in the intersection to let the nearby pedestrians pass safely. The interests received from the vehicles and the data packets sent by the RSU are ignored by the pedestrians. Also, the data packets received from the pedestrians are ignored by the vehicles.
As each vehicle receives its own traffic light signal, two vehicles on the same road may receive different traffic light signals, depending on their crossroad headings. The crossroad heading is the direction that the vehicle will take after reaching the intersection (go straight, turn left, turn right), and it may be deduced from the components roadId1 and roadId2. In the example of Figure 2, vehicles A and B are both moving straight ahead and receive a green light signal from the RSU, because their routes do not intersect the trajectory of the pedestrian P. However, vehicle C receives a red light signal, because it intends to turn left and this maneuver will intersect the pedestrian on the inferior horizontal crosswalk.
To better understand the meaning and the importance of the crossroad heading, let us consider the example of Figure 3. If the pedestrian P wants to reach the sidewalk S, he/she can do so by taking the two crosswalks marked by (i) the continuous line or (ii) the dashed line. In the first case, the crossroad heading is "turn right," and in the second case, the crossroad heading is "go straight." As each case may imply distinct traffic light signals for the vehicles, the crossroad heading is an important parameter to be taken into consideration by the TMC. For example, if the pedestrian P chooses to follow the continuous line path, then cars A, B, and C receive red light signals, because the trajectories of the three cars intersect the pedestrian's trajectory. However, if the pedestrian P chooses to follow the dashed line path, then all cars may receive green light signals while the pedestrian walks through the superior horizontal crosswalk. However, when the pedestrian reaches the left vertical crosswalk, and if cars A, B, and C have not yet crossed the intersection, then cars A and B receive a red light signal and car C receives a green light, because this car, unlike A and B, does not intersect the pedestrian's trajectory.
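The rule illustrated by Figures 2 and 3 can be seen as a set-intersection test: a maneuver receives a red light whenever one of the crosswalks it crosses is occupied, or about to be occupied, by a pedestrian. The sketch below encodes this idea with hypothetical crosswalk labels and maneuver sets; the actual TMC logic may differ.

```python
# Crosswalks crossed by each (entry road, heading) maneuver at one intersection.
# The identifiers N/S/E/W are illustrative labels for the four crosswalks.
MANEUVER_CROSSWALKS = {
    ("roadW", "go straight"): {"W", "E"},
    ("roadW", "turn left"):   {"W", "S"},
    ("roadW", "turn right"):  {"W", "N"},
}

def signal_for(entry_road, heading, occupied_crosswalks):
    """Red if the maneuver crosses any crosswalk a pedestrian is on or about to use."""
    crossed = MANEUVER_CROSSWALKS.get((entry_road, heading), set())
    return "red" if crossed & occupied_crosswalks else "green"

# A pedestrian is near or on the crosswalk labelled "S":
occupied = {"S"}
for heading in ("go straight", "turn left", "turn right"):
    print(heading, "->", signal_for("roadW", heading, occupied))
```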
Unlike the traditional NDN behavior, the moving nodes (pedestrians, cars) of the scenario with the NDN-based semaphore system do not save the unsatisfied interests in the PIT, nor cache the data of other nodes in the CS, because there is no relevant advantage in doing so. In fact, as a node only sends a packet when it is close to the RSU, the data packet can reach the RSU directly, in one hop. Moreover, as the moving nodes have only one face to receive and transmit packets, there is no role for the FIB. Indeed, packets are usually flooded in NDN-enabled VANETs [19-21].
Flowcharts
The flowcharts of the algorithms used by the RSU, pedestrians, and vehicles in the pull-based scenario are presented next. The algorithms are shown in a simplified way, illustrating only the basic actions. Figure 4 shows the algorithm used by the RSUs in pull mode. After starting up, the RSU sends periodic interests (I pkt) to the nearby pedestrians. When a data packet (D pkt) is received from a pedestrian, the location conditions are checked. If these conditions are valid, the RSU saves the content in the internal database, namely the identification, current sidewalk, next sidewalk, and crossroad heading of the pedestrian. Then, the TMC is called to define the traffic light signals. By analyzing the information contained in the database, the TMC decides the traffic light signals for all routes a car may take after reaching the intersection, and saves this information in the CS. After receiving an interest packet from a vehicle that is going to the intersection, the RSU broadcasts a data packet containing the respective traffic light signal.
Flowchart of Pedestrians and Vehicles
Figure 5 shows the algorithms used by the pedestrians and vehicles in pull mode.
Pedestrian: After starting up the VTLS application, the pedestrian's smartphone waits for external communication messages. If an interest is received from the RSU controlling the intersection that the pedestrian is heading to, and if the pedestrian is at a distance to this RSU lower than the range value announced in the interest, the pedestrian sends the RSU a data packet containing the identification, current sidewalk, next sidewalk, and crossroad heading of the pedestrian. Otherwise, the received message is ignored by the pedestrian.
Vehicle: When the vehicle is at a distance to the RSU lower than a pre-defined value, the vehicle periodically sends an interest to this RSU. When a data packet is received from this RSU, the vehicle saves the received traffic light signal in the CS. When the vehicle gets close to the intersection, it checks the last saved traffic light signal. If this signal is red, the vehicle must stop at the intersection, because there is a pedestrian near or on a crosswalk that intersects the vehicle's trajectory. If it is green, there is no pedestrian near or on a crosswalk that intersects the vehicle's trajectory, and the vehicle should pass the intersection considering only the presence of other vehicles. Recall that the virtual semaphore only tries to protect the pedestrians from the vehicles, and not the vehicles from other vehicles.
Push-Based Virtual Semaphore
This section presents the implementation of the virtual semaphore using the push-based communication model.
Pedestrians and RSU
When a pedestrian, walking to an intersection, is less than a certain distance to the RSU installed at that intersection, the pedestrian's smartphone sends (pushes) periodically a message to this RSU. Only the pedestrians inside the range and moving toward or just leaving the intersection controlled by the RSU are allowed to send messages. The message sent by the pedestrian contains the following information: application identification, "VRU", pedestrian identification, current sidewalk of the pedestrian, next sidewalk of the pedestrian, and crossroad heading. "VRU" is a reserved string, which is used to inform the receiver that the message was sent by a pedestrian. The remaining parameters were already discussed in the pull-based mode.
Vehicles and RSU
The RSU caches the data received from the nearby pedestrians. The RSU periodically calls the TMC, which decides the virtual traffic light signals of the intersection. There is one traffic light signal for each crosswalk of the intersection controlled by the RSU. Whenever there is at least one person on a crosswalk, or a person close to a crosswalk that he/she intends to use, the TMC sets a red signal for that crosswalk. The RSU periodically broadcasts a message containing the set of traffic lights of the intersection, along with the identification of the RSU. The set of traffic lights contains one signal for each crosswalk of the intersection, as defined by the TMC. For example, in the case of Figure 2, the RSU transmits a message containing one red light for the inferior horizontal crosswalk and three green lights for the remaining crosswalks. This set of traffic light signals can be represented, in binary, as 1000. The RSU has no information about the nearby vehicles, because the vehicles do not send any data to it. Consequently, the RSU is not able to define the traffic light signal for a specific vehicle, as it does in the pull-based system.
After receiving the traffic light signals from the RSU, the car determines if its route intersects a crosswalk with red light. If true, then the car stops at the intersection to let pass the pedestrians near or on those crosswalks. Otherwise, the car driver must follow the traffic rules to pass the intersection in presence of other vehicles. Recall that the VTLS implemented in this work was only directed to improve the pedestrians' safety.
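The set of traffic light signals broadcast by the RSU, and the check made by the car, can be sketched with a small bitmask encoding, in line with the binary example 1000 given above. The crosswalk ordering and helper names below are assumptions made only for illustration.

```python
CROSSWALKS = ["north", "east", "south", "west"]   # illustrative fixed ordering

def encode_signals(red_crosswalks):
    """Pack the signal set into a bitmask such as 1000 (1 = red, 0 = green),
    with the first listed crosswalk mapped to the leftmost bit."""
    mask = 0
    for i, cw in enumerate(CROSSWALKS):
        if cw in red_crosswalks:
            mask |= 1 << (len(CROSSWALKS) - 1 - i)
    return mask

def must_stop(mask, route_crosswalks):
    """A car stops if any crosswalk on its route is signalled red."""
    for cw in route_crosswalks:
        bit = 1 << (len(CROSSWALKS) - 1 - CROSSWALKS.index(cw))
        if mask & bit:
            return True
    return False

mask = encode_signals({"north"})           # one red light, three green lights
print(f"{mask:04b}")                       # -> 1000
print(must_stop(mask, {"north", "east"}))  # True: the route crosses a red crosswalk
print(must_stop(mask, {"south", "west"}))  # False: all crossed crosswalks are green
```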
If the car does not receive any reply from the RSU after sending a certain number of interests, the OBU issues an alert informing the driver about the unresponsiveness or inexistence of the RSU at the intersection. In this case, the driver should take full control of the situation.
Flowcharts
The algorithms run by the RSU, pedestrians, and vehicles in the push-based scenario are presented next. The algorithms are shown in a simplified way, illustrating only the basic actions.
Flowchart of RSUs
Figure 6 shows the algorithm used by the RSUs in push mode. When a packet is received from a pedestrian, the location conditions are checked and, if they are valid, the RSU saves the pedestrian information (identification, current sidewalk, next sidewalk, crossroad heading) in the internal database. Afterwards, the TMC is called to determine the traffic light signals, based on the cached information. The RSU then periodically broadcasts the updated traffic light signals to the vehicles.
Pedestrian: After starting up the smartphone application, when the pedestrian is at a distance to the RSU of the intersection lower than a pre-defined value, the smartphone periodically sends a message to the RSU containing the identification, current sidewalk, next sidewalk, and crossroad heading of the pedestrian.
Vehicle: After starting up the VTLS application, the OBU of the vehicle listens for external communication messages. Whenever it receives a message from the RSU of the intersection that the vehicle is rolling toward, and if the vehicle is at a distance to this RSU lower than a pre-defined value, the vehicle extracts the set of traffic light signals contained in the received message and saves it in the internal cache. The vehicle then determines whether its route intersects a crosswalk with a red light. If so, the vehicle stops at the intersection to let the pedestrians on those crosswalks pass. Otherwise, the car driver must follow the traffic rules to negotiate the intersection in the presence of other vehicles.
VTLS Simulation Scenario and Results
To evaluate the VTLS using the pull, and the push-based modes, a set of simulations were carried out on a grid road map with seven horizontal roads and seven vertical roads. The simulation setup is presented in more detail next. Then, the results obtained in the simulations are shown and discussed.
Simulation Setup
The simulation setup regarding the used simulator, the road map configuration, the wireless communications, the VTLS implementation, the simulation parameters, and the simulation runs are presented in the following.
Simulator: The simulator Veins-5.0, modified to allow pedestrian communications, was used. Veins is a vehicular network simulation framework that couples the mobility simulator SUMO-0.32.0 with a wireless network simulator built on the discrete event simulator OMNeT++. Veins has a manager module to synchronize the mobility of the vehicles between the wireless network simulator and SUMO (simulation of urban mobility).
Road map: The simulation was carried out on a grid road map with a size of 7 × 7. The length of each road is around 200 m, and each road connected to an intersection has a crosswalk at that intersection, as shown in Figure 8. The routes of the cars were generated with the SUMO traffic generator (randomTrips.py). This tool was configured to generate a car every 1.0 s to run a minimum trip distance of 2000 m with a maximum speed of 10.0 m/s. The pedestrians walk a maximum distance of 2000 m with a speed between 1.1 and 1.4 m/s. Cars and pedestrians leave the simulation after completing their trips.
Communications: The vehicles and pedestrians used the same communication parameters. All vehicles owned an OBU running WSMP over IEEE 802.11p. All pedestrians used a smartphone also provided with WSMP over IEEE 802.11p. The transmission power was 20 mW, which corresponds to a signal range of around 530 m. The total length of the MAC frames containing the messages was 166 bytes. The transmission bit rate was 6 Mbps, as this value has been generally assumed as the default channel bit rate. No channel switching was used. The simple path loss propagation model was used, and no buildings were considered in the simulation scenario.
Virtual semaphores: Simulations were carried out with twenty-five virtual semaphores, each one placed at the center of the intersection with four roads. For simplicity, the yellow light was not implemented in the semaphores.
Parameters: The values of the parameters used in the simulations are shown in Table 1. The name of each parameter makes its meaning clear, except for the parameters ndn_*, wsm_*, and person_*, which are explained in the following.
In the pull mode (NDN), when a car is at a distance to the road end lower than the value specified in ndn_car_dst_to_road_end, it starts sending interests to the RSU with a periodicity of ndn_car_interest_mesg_period seconds. The RSU sends interests to the pedestrians with a periodicity of ndn_rsu_interest_mesg_period seconds. Only the pedestrians at a distance to the road end lower than person_dst_to_road_end, or at a distance from the road beginning lower than person_dst_from_road_start, reply to the interests. In this way, the RSU is able to know the pedestrians that are approaching the intersection and those that have crossed the intersection and are leaving it. In the push mode, when a pedestrian is at a distance to the road end lower than person_dst_to_road_end, the smartphone of the pedestrian starts sending messages to the RSU with a periodicity of wsm_person_mesg_period seconds. The RSU announces the state of the traffic lights to the cars with a periodicity of wsm_rsu_beacon_tx_period seconds.
So, according to the parameters defined in Table 1, in the push mode, the pedestrians send a message to the RSU every 500 ms, when they are at a distance lower than 4 m to the crosswalk of the intersection. The RSU broadcasts to the nearby cars a message with the traffic light signals every 200 ms. In the pull mode (NDN), each RSU broadcasts an interest to the pedestrians every 500 ms. The pedestrian replies to the received interest when it is positioned at a distance lower than 4 m to the crosswalk of the intersection. Whenever a car is at a distance lower than 18 m to the crosswalk of the intersection, it sends an interest to the RSU every 500 ms asking for the traffic light signals.
Simulation runs: The simulations were run for 500 s for each of the three tested modes (notls, push, and pull), where notls (no TLS) means the absence of virtual traffic light signals, push refers to the push-based mode, and pull refers to the pull-based mode (implemented with NDN). Six distinct ratios R of "number of pedestrians/number of cars" were considered: 2.5, 2.0, 1.5, 1.0, 0.5, and 0.25. For example, a ratio of 2.5 means that the number of pedestrians walking on the sidewalks is 2.5 times higher than the number of cars rolling on the map roads. The ratio R becomes relatively stable after a certain time (~150 s). The results were collected during this stabilized period, which is between 150 s and 500 s. In all simulations, a car enters the simulation every 1.0 s. In order to obtain the specified ratios, the pedestrian generation period was, respectively, 0.26, 0.33, 0.44, 0.66, 1.32, and 2.64 s. For example, a pedestrian generation period of 0.26 means that a pedestrian enters the simulation every 0.26 s. The pedestrian generation period (T) is related to the ratio (R) approximately by the equation T = 0.66/R. Eighteen (3 × 6) simulation runs were performed to build one set of results. The trips of cars and pedestrians were chosen randomly at the beginning of each set of simulations. However, the trips taken by the pedestrians and the cars do not change across the test modes (notls, push, pull) run in the simulation for a given ratio R.
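The relation T = 0.66/R quoted above can be verified directly; the snippet below simply evaluates it for the simulated ratios and reproduces the listed generation periods.

```python
# Pedestrian generation period T (seconds) as a function of the
# pedestrian/car ratio R, with one car generated every 1.0 s.
for R in (2.5, 2.0, 1.5, 1.0, 0.5, 0.25):
    T = 0.66 / R
    print(f"R={R:<4}  T={T:.2f} s")   # 0.26, 0.33, 0.44, 0.66, 1.32, 2.64
```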
Simulations were first run without using the VTLS, i.e., using only the native SUMO strategy to let pedestrians pass on the crosswalks, where they always have priority over the cars. This simulation sets, a priori, the optimal situation for the cars, in terms of the lowest waiting time, to let the pedestrians pass at the intersections. Then, simulations were run using the push and the pull-based modes with the same traffic and pedestrian mobility traces used in the run without VTLS. The simulations using the push and the pull-based modes were run with the native SUMO strategy turned off. In this way, the traffic at the intersections is controlled only by the VTLS in order to let the pedestrians pass safely on the crosswalks.
The results shown next represent the average values obtained after running 35 sets of simulations, where each set includes the notls, push, and pull test modes. So, these 35 sets of simulations correspond to 35 × (3 × 6) individual simulation runs and 35 × 6 distinct pedestrian and car trips.
Results
This section presents the results obtained for the following metrics: traffic queue size, car trip distance, car stop time, and the communication metrics sent packets and packet loss. The traffic queue size is the number of cars queued per road at the intersections. The car trip distance is the distance traveled by all cars, i.e., the summation of the individual car trip distances, in kilometers. The car stop time is the total stopped time of all cars, i.e., the summation of the individual car stopped times, in minutes. The communication metrics are explained later.
Except for the communication metrics, only the results obtained for the cars are presented next. The pedestrians were not considered in the results, because the virtual semaphores do not control the pedestrians. In the graphics presented next, pull denotes the pull-based mode, push denotes the push-based mode, and notls denotes that no virtual semaphores were used. The ratio R is defined as R = number of pedestrians/number of cars. Recall that the notls mode was used only as a reference, because it indicates, a priori, the optimal performance regarding the mobility of the cars.
To give an idea of the number of pedestrians crossing the intersections, Figure 9 shows, for the different ratios R, the maximum, average, and minimum number of persons that crossed the twenty-five intersections with RSUs during a simulation run of five hundred seconds. As in the simulation scenario the cars always give way to the pedestrians on the crosswalks (i.e., a pedestrian never stops at a crosswalk to give way to a car), the graphics are the same for the notls, push, and pull test modes, because the same mobility trace of pedestrians is used in these three test modes.
Figure 10 shows the maximum and average traffic queue sizes found at each road connected to the twenty-five intersections with RSUs. These results were obtained immediately before the end of the simulation (i.e., t ≈ 500 s). The average traffic queue sizes show no significant difference between the push and pull modes. When R is above 1, the average traffic queue sizes become larger with the use of the VTLS than without it. The results also show that, in the three test modes, there were roads at the intersections with queue sizes of at least eight cars.
Figure 11 shows the average distance traveled by each car during the simulation. The graphic is normalized to 100%, which corresponds to 1483.4 m. As expected, the best results were obtained without using the VTLS, where the cars traveled a longer distance than the cars controlled by the virtual traffic light systems. Moreover, the difference between the push and pull modes is negligible. As expected, the distance traveled by the cars decreases as the number of pedestrians increases, since the cars tend to be stopped longer at the crossings to let the pedestrians pass. This situation is more pronounced with the use of the VTLS than without it, which shows that the algorithms used by the VTLS are less efficient than the native SUMO strategy in terms of traffic fluidity at the intersections.
Car Stop Time
Figure 12 shows the average time that each car was stopped at the intersections or jammed in the traffic queues during the simulations. The graphic is normalized to 100%, which corresponds to 106.4 s. Once again, there is no significant difference between the push and pull modes. Considering the results obtained for the trip distance, this is an expected result. Indeed, the stop time of the cars increases with the number of pedestrians, since the cars tend to be stopped longer at the crossings to let the pedestrians pass. This situation is more pronounced with the use of the VTLS than without it.
Communication Metrics
The results obtained for the number of sent packets and the packet loss are presented and discussed next. The curves for the case without VTLS are not shown in the graphics, because in this test mode the number of packets sent by the nodes is always zero.
Sent Packets
Figure 13 shows the total number of packets sent by all nodes (cars, pedestrians, RSUs), as well as the packets sent by the pedestrians, the cars, and the RSUs separately. The graphic is normalized to 100%, which corresponds to 190,742 packets. It is observed that, globally, the pull mode generated on average 3.9 (±0.6) times more packets than the push mode for all ratios R.
The results show that, in the pull mode, the RSUs are the nodes that generate the most packets, followed by the cars. In the push mode, the pedestrians are the nodes that generate the most packets, followed by the RSUs, and the cars generate no communication traffic. In the push mode, the number of packets sent by the RSUs does not change with the increment of the ratio R.
Packet Loss
The signal-to-interference-plus-noise ratio (SNIR) lost packets metric indicates the number of packets lost due to bit errors, caused by packet collisions or noise interference at the destination receivers. A TxRx lost packet is a packet that was neither transmitted nor received, because a packet arrived at the wireless interface precisely when another packet was being sent by this interface. So, the TxRx parameter evaluates how often a wireless interface is receiving and transmitting packets at the same time, causing the loss of both packets. The total number of packets lost (not received) by a node at the lower communication layers is the sum of the SNIR lost packets plus the TxRx lost packets in that node. The percentage of packets lost due to SNIR and TxRx problems is calculated by the expression (SNIRlostPkts + TxRxLostPkts)/(SNIRlostPkts + TxRxLostPkts + recvdPkts) × 100%, where SNIRlostPkts is the number of SNIR lost packets, TxRxLostPkts is the number of TxRx lost packets, and recvdPkts is the number of packets received by the wireless node. This metric can be used as an indirect indication of the bandwidth occupancy of the wireless channel: the higher its value, the more occupied is the wireless channel used by the wireless communication modules. Figure 14 shows the average percentage of SNIR + TxRx lost packets of all nodes (pedestrians, cars, RSUs) using the push and pull modes. The results show indirectly that the wireless channel bandwidth globally available in the simulated scenario is more occupied in the pull mode than in the push mode. Indeed, the difference in the average SNIR + TxRx packet loss between the pull and push modes increases with the ratio R, from 1.3 percentage points (p.p.) (R = 0.25) to 1.9 p.p. (R = 2.5). The results also revealed that the packet losses were almost all caused by SNIR problems, as the losses caused by TxRx problems were negligible (<0.002%).
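The loss metric defined above is a straightforward ratio; the helper below reproduces the formula with hypothetical counter values, purely for illustration.

```python
def snir_txrx_loss_pct(snir_lost, txrx_lost, received):
    """Percentage of packets lost at the lower layers:
    (SNIRlostPkts + TxRxLostPkts) / (SNIRlostPkts + TxRxLostPkts + recvdPkts) * 100."""
    lost = snir_lost + txrx_lost
    return 100.0 * lost / (lost + received)

# Hypothetical counters for one node (not simulation results):
print(round(snir_txrx_loss_pct(snir_lost=120, txrx_lost=1, received=9879), 2))  # -> 1.21
```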
Figure 13. Total sent packets, and packets sent partially by pedestrians, cars, and RSUs, normalized to 100% = 190,742 packets.
Application Message Loss
The average percentage of messages lost at the application layer was also calculated. For simplicity, this metric considers only the messages sent by pedestrians and cars that failed to reach the application layer of the RSUs. The messages sent by the RSUs that failed to reach the application layers of the pedestrians and cars were not considered. The percentage of lost messages is calculated by the expression: (sentPktPed + sentPktCar − recvdPktRSU)/(sentPktPed + sentPktCar) * 100%, where sentPktPed and sentPktCar are the total numbers of messages sent by all pedestrians and all cars, respectively, and recvdPktRSU is the total number of messages received by all RSUs from the pedestrians and the cars (not from other RSUs). So, in push mode, this metric indicates the messages lost by the RSUs from the nearby pedestrians. In pull mode, it measures the messages lost by the RSUs from the nearby pedestrians and cars. The messages received by the RSUs from pedestrians and cars out of range are ignored in this metric. Figure 15 shows the average message loss, in percentage, obtained with the push and pull modes. The average application message loss is lower in the push mode than in the pull mode. This result is somewhat expected taking into consideration the results obtained for the SNIR + TxRx lost packets. The difference in the average message loss between the pull and push modes increases with the ratio R, from 0.031 p.p. (R = 0.25) to 0.19 p.p. (R = 2.5).
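As with the lower-layer loss metric, the application-level expression can be computed directly; the sketch below is a hypothetical illustration with invented message counts, not values from the simulations.

```python
def application_message_loss_percent(sent_ped: int, sent_car: int, recvd_rsu: int) -> float:
    """Percentage of application-layer messages from pedestrians and cars that
    never reached the RSUs:
    (sentPktPed + sentPktCar - recvdPktRSU) / (sentPktPed + sentPktCar) * 100.
    recvd_rsu should count only messages received from in-range pedestrians and
    cars, not messages received from other RSUs.
    """
    sent = sent_ped + sent_car
    if sent == 0:
        return 0.0
    return 100.0 * (sent - recvd_rsu) / sent


# Hypothetical counts: 12000 pedestrian + 3000 car messages sent, 14970 received by RSUs
print(application_message_loss_percent(12000, 3000, 14970))  # 0.2 (%)
```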
Conclusions
When compared with the push mode, the pull mode (implemented by NDN) revealed similar performance in all metrics except the communication metrics (sent packets and packet loss). Indeed, when compared to the push mode, both the SNIR + TxRx packet loss and the application message loss are higher with the pull mode, with maximum differences of 1.9 p.p. and 0.19 p.p., respectively, obtained for the ratio R (number of pedestrians/number of cars) equal to 2.5. Compared to the notls mode, which defines a priori the optimal performance regarding the mobility of the cars, the performances of the push and pull modes are particularly worse for the stop time and the trip distance of the cars when the ratio R is above 0.5. Regarding the traffic queue sizes, no significant difference was observed between the three test modes (notls, pull, push).
The results show that the push mode presents lower packet loss and generates fewer packets, and consequently occupies less bandwidth than the pull mode. In fact, for the considered metrics, the virtual semaphore implemented with the pull mode presents no advantage when compared with the push mode. However, this does not mean that the pull mode should not be considered to implement a VTLS. Indeed, apart from the communication metrics, the performances obtained with the pull and push modes are very similar. Moreover, in pull mode, the performance, in terms of packet collisions, may be improved if the nearby RSUs could somehow regulate the transmission of interests, so that these are sent out of phase to the pedestrians.
This work has considered only the implementation of a VTLS directed at pedestrians' safety. However, it would be convenient for the VTLS to also take the vehicles' safety into consideration. This issue will be tackled in future work.
|
v3-fos-license
|
2020-10-28T19:16:27.310Z
|
2020-10-21T00:00:00.000
|
226273388
|
{
"extfieldsofstudy": [
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://ohiojournalofscience.org/article/download/7121/5782",
"pdf_hash": "79d87d21b8a72d17f48de7655e1d4d512a47471e",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44328",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "b2137293fc6a351797559595decd1632947d0d8b",
"year": 2020
}
|
pes2o/s2orc
|
Effects of Stormwater Management and an Extended Culvert on Stream Health in Dug Run, Allen County, Ohio, USA
Changes in stream hydrology and habitat—associated with urbanization—have impacted diversity, abundance, and movement of both macroinvertebrates and fish. In 2008 the University of Northwestern Ohio began developing the western half of the campus, incorporating stormwater management practices. This provided an opportunity to examine 3 sections of the Dug Run stream that flows through campus: 1 section on the western half of campus that filters stormwater through the soil, and 2 sections on the eastern half of campus which are affected by both urbanization and a culvert that extends under a building and a road. Significant differences in macroinvertebrate Stream Quality Monitoring (SQM) index scores (p < 0.001), fish diversity (p < 0.010), and abundance of Orangethroat Darters (Etheostoma spectabile) (p < 0.001) were observed between the western and the 2 eastern sections of Dug Run. Lower SQM index scores and lower Orangethroat Darter abundances were found in the urbanized sections of the stream, while lower fish diversity numbers were found upstream of the culvert. The western portion of campus, designed to filter stormwater runoff through the soil, was the only section studied with sensitive macroinvertebrates, a higher SQM index score, and a greater abundance of Orangethroat Darters. Publication Date: October 2020 https://doi.org/10.18061/ojs.v120i2.7121 OHIO J SCI 120(2):61-69
FIGURE 1. Graphic showing the 3 sections (A, B, and C) that were studied in Dug Run, at the University of Northwestern Ohio, from fall 2015 to fall 2018. A culvert (approximately 128 m long) extends from the eastern border of section B to the western border of section C. Dug Run flows through this culvert, below a building, parking lot, and road.
FIGURE 2. A comparison of Stream Quality Monitoring (SQM) index scores for macroinvertebrates at 3 sections of Dug Run on the University of Northwestern Ohio campus, fall 2015 to fall 2018. Samples were collected during the months of January, April, July, and September.
INTRODUCTION
Dug Run, a tributary of the Ottawa River in Allen County in northwestern Ohio, flows east to west along the southern border of The University of Northwestern Ohio. The university has been building around Dug Run, but changes in stormwater management regulations-and a desire for more green space-has resulted in visual differences along the stream. Over 80% of the surface on the east end of the campus is impervious due to roads, parking lots, and buildings, with stormwater directed to the stream. Additionally, an approximately 128 m long culvert influences stream flow. The west end of campus, by contrast, was designed to carry stormwater into a series of retention basins. Once in the basins, the stormwater would be filtered by soil before reaching the stream (Patrick J. Beam, Beam Designs, personal communication). This difference in landscape design has provided an opportunity to observe the impact each design is having on Dug Run.
Changes in the land surface during urbanization have altered the type and magnitude of runoff processes (Booth and Bledsoe 2009). These hydrological changes can have dramatic impacts on the organisms living in the streams due to changes in pool-riffle sequences, changes in in-stream velocity, and alterations to in-stream habitat (Paul and Meyer 2001). Urban areas have been found to increase levels of nitrates, conductivity, turbidity, and temperature-while decreasing oxygen levels-in streams: all of these factors can contribute to poorer macroinvertebrate assemblages (Shilla and Shilla 2011). Several studies have shown that increasing the area of impervious surfaces and urban stormwater drainage can have negative impacts on stream biota (Walsh et al. 2005; Wang et al. 2012; Walsh and Webb 2016). Urbanization negatively affects both the diversity and abundance of macroinvertebrates and fish (Wheeler et al. 2005). Darter species have been negatively affected by urbanization (Onorato et al. 2000; Stranko et al. 2010). Kemp and Spotila (1997) captured more Tessellated Darters (Etheostoma olmstedi) in nonurbanized sites in the Valley Creek watershed while sampling during 1993 to 1994. Sampling in the Valley Creek watershed in 2001 to 2002 found no Shield Darters (Percina peltata) and reductions in Tessellated Darters, likely from urbanization (Steffy and Kilham 2006). The Johnny Darter (Etheostoma nigrum), Greenside Darter (Etheostoma blennioides), and Orangethroat Darter (Etheostoma spectabile) have been captured in Dug Run (Ohio EPA 2013). While the Orangethroat Darter is more tolerant of turbid water and silted bottoms than other darter species, the Orangethroat Darter populations have been reduced in areas with heavy silting or where other pollutants become excessive (Trautman 1981).
Not only have changes in runoff negatively impacted stream health by increasing impervious surfaces along the stream, but the addition of culverts in urban areas change stream flow. Culverts tend to channelize streams, increase erosion and sedimentation, and influence water temperature (Vaughan 2002). These changes have negative impacts on stream biota (Khan and Colbo 2008;Favaro et al. 2014), the movement of fish (Benton et al. 2008), the number of fish, and alteration of stream habitat (Wheeler et al. 2005). Changes in stream flow, due to culverts, both impedes movement of indigenous species (Foster and Keller 2011) and lowers light levels that influence the movement of both fish and macroinvertebrates (Jones et al. 2017). Because the culvert in the current study-located on the east end of campus-is approximately 128 m in length and 2.5 m in width, it would affect stream flow, habitat, and light through a long stretch of the stream.
The goal of this project was to determine if macroinvertebrate and fish assemblages in Dug Run differed on 3 sections of the stream due to differences in stormwater management and the presence of an extended culvert. It was hypothesized that (1) Stream Quality Monitoring (SQM) index scores would be higher on the section managed for stormwater runoff, (2) fish diversity scores would differ between the 3 sections, (3) Orangethroat Darter abundance would differ between the 3 sections, and (4) SQM index scores would not change over the course of the study on the west end where stormwater is filtered by soil.
METHODS AND MATERIALS
The data for this study was collected in Dug Run from the fall of 2015 to the fall of 2018. Samples were collected 4 times a year during the months of September, January, April, and July. For this study, the stream was divided into 3 sections for analysis ( Fig. 1). Section A is about 170 m long on the west end of campus. This area is surrounded by grass, athletic fields, and some buildings. In about one-third of this section, the stream was straightened and the banks were sloped back to reduce erosion. Upstream of this section is a narrow woodlot bordering both sides. This end of campus was designed so stormwater does not reach the stream as surface runoff, but rather is absorbed by the ground. Section B is 240 m long and is highly urbanized downstream of a culvert. Buildings and parking lots were built near the stream. Erosion control measures, including netting and rip rap, were installed near the footbridges crossing the stream. Pipes from the parking lots divert stormwater directly to the stream. Section C is 170 m long and is upstream of the culvert. There is a woodlot upstream of the primary section sampled. There are some dry retention basins to prevent stormwater from directly entering the stream, but some stormwater does run into the stream.
To sample macroinvertebrates a kick-seine was placed on the downstream end of the riffle, then one person would use their boots to stir up the streambed in the riffle upstream of the net. After the sediment settled out, the net was lifted, rolled up, and taken to a cloth sheet where it was unrolled. Nets were checked for organisms; for identification, organisms collected were placed in plastic trays filled with water. Once macroinvertebrates were collected and identified, the nets were lifted and the sheets were checked for organisms that moved through the net. Most species were individually counted, but the numbers of highly abundant species such as midge larvae, aquatic worms, and planaria were estimated by counting the number found in a smaller area and multiplying that count based on total area. It was assumed that these estimates were consistent, but abundance was not used in the statistical analyses. Instead, macroinvertebrate SQM index scores were calculated using the Ohio EPA stream quality assessment form (Kopec and Lewis 1983).
Fish were collected in each section using a minnow seine, measuring 1.2 m high × 1.8 m wide with a 5 mm mesh, during 0.5-hour blocks as part of a classroom project. Each 0.5-hour block was considered 1 sample. All fish collected during the sampling period were placed in a bucket. After the sampling period the fish were identified, counted, and released back into the same reach of the stream. Each class was given a different portion of the stream to sample to try to minimize impacts of disturbance and prevent collecting the same fish in subsequent samples. Fish diversity was calculated using the Shannon index (Shannon 1948).
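As a worked illustration of the diversity calculation, the short Python sketch below computes the Shannon index H' = −Σ p_i ln(p_i) from the species counts of one sample; the species names and counts are invented for the example and are not data from this study.

```python
import math
from collections import Counter

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln(p_i)) over the species proportions."""
    total = sum(counts)
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical 0.5-hour seine sample (species -> number of individuals caught)
sample = Counter({"Orangethroat Darter": 12, "Creek Chub": 30, "Bluntnose Minnow": 8})
print(round(shannon_index(list(sample.values())), 3))  # ~0.942
```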
Habitat was only evaluated in the spring of 2017. The qualitative habitat evaluation index (QHEI) was used to assess habitat quality (Rankin 1989), and the Wolman pebble count (Wolman 1954) was used to assess differences in riffle habitat between the 3 sections of the stream. The Kruskal-Wallis H test was used to analyze differences between the 3 sections of stream sampled. Spearman's ρ was used to analyze any trend in the SQM index scores on the west end of campus over the course of the study.
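For readers who wish to reproduce the statistical comparisons, a minimal sketch using SciPy is shown below; the score arrays are placeholder values, not the measurements reported here, and the original analysis software is not specified in the text.

```python
from scipy import stats

# Placeholder SQM index scores for the three stream sections (not the study's data)
section_a = [25, 28, 22, 30, 27]
section_b = [14, 16, 12, 15, 13]
section_c = [11, 13, 10, 12, 14]

# Kruskal-Wallis H test: do the three sections differ?
H, p_kw = stats.kruskal(section_a, section_b, section_c)
print(f"Kruskal-Wallis H = {H:.2f}, p = {p_kw:.4f}")

# Spearman's rho: monotonic trend of section A scores across successive sampling dates
sampling_order = list(range(1, len(section_a) + 1))
rho, p_sp = stats.spearmanr(sampling_order, section_a)
print(f"Spearman's rho = {rho:.2f}, p = {p_sp:.4f}")
```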
RESULTS
A difference in the accumulated total value of macroinvertebrate assemblages was found-from the fall of 2015 to the fall of 2018-between the 3 locations, with a total of 17 groups of macroinvertebrates on the west end of campus (section A), 11 groups downstream of the culvert on the east end of campus (section B), and 9 groups upstream of the culvert on the east end of campus (section C) (Table). There were habitat differences between the 3 sites with substrate and riffle quality having the greatest influence on scores. A thin layer of silt was covering much of the sediment below the culvert.
Sensitive species were only found on the west end of campus (section A). There was a significant difference in macroinvertebrate SQM index scores between each of the 3 sections of the campus (Fig. 2)(H = 23.01; p < 0.001). A significant decline in SQM index scores was also observed over the course of the study on the west side of the campus (ρ = −0.44; p = 0.007), although the recent increase in SQM index scores suggests this trend may not be linear (Fig. 3). There was a significant difference in fish diversity between the 3 sections, although the difference was due to (1) lack of fish diversity upstream of the culvert (Fig. 4)(H = 16.30; p < 0.010), and (2) a significant difference in Orangethroat Darter abundance between each section—with lower abundances in the urbanized sections (Fig. 5)(H = 27.25; p < 0.001).
DISCUSSION
The design of the west end of campus appears to have had a positive impact on the quality of the water in the stream. Although scores still remain low compared to natural areas, possibly due to channel modification throughout the study area, the presence of sensitive species on the west end is an indication of good water quality. Hydrological responses to urbanization have been found to contribute to lower Index of Biotic Integrity (IBI) scores (DeGasperi et al. 2009). Stahley and Kodani (2011) found lower macroinvertebrate scores near parking lots (possibly due to silt, oils, and automotive chemicals), while suburban areas also had depressed macroinvertebrate populations (possibly due to mowing or a lack of natural vegetation). Roy et al. (2014) found that stormwater management approaches that included biofiltration wales, pervious pavement, green roofs, and rain gardens did not translate into changes in biotic health. These results could be due to the length of their study, previous damage to the system, or outside stressors that were not impacted by managing stormwater runoff. This current study indicates that managing for stormwater runoff (on the west end of campus) resulted in the stream attaining higher SQM index scores than a section directly impacted by stormwater runoff (on the east end of campus). However, the lack of natural vegetation, combined with intensive mowing, may have contributed to lower scores than would be present in an undisturbed area of the stream.
Not only were differences observed in macroinvertebrates between the 3 sections of campus, but changes in SQM index scores also occurred over time on the western portion of the campus. Possible causes of decline in SQM index scores include removal of trees during the summer of 2016, natural cycles in insect populations, or disturbances due to quarterly sampling. Rios and Bailey (2006) found forest shade and coverage increased macroinvertebrate richness and diversity. Despite the presumption that aquatic insects do not have outbreaks, Lancaster and Downes (2018) found examples of cyclical patterns in aquatic insects. Increases in SQM index scores at the western portion of campus (section A), observed toward the end of the reporting period, may indicate that at least some of the decline may be due to cyclical patterns in yearly insect populations. Small changes in canopy cover and annual insect cycles could have stronger impacts on SQM index scores at sites without a large number of sensitive species.
Fish diversity did not appear to be affected in the section surrounded by buildings and parking lots; however, the presence of the culvert did appear to reduce diversity by affecting habitat upstream of the culvert. Sediment is being trapped above the culvert; below the culvert the water has scoured the stream down to the bedrock in some places. While research has shown that urbanization affects fish diversity (Tabit and Johnson 2002), the presence of several more tolerant species in the section of Dug Run that was studied, and/or passive sampling, likely contributed to the lack of differences that were found. Similar to the present study, Wellman et al. (2000) found that culverts caused sediment accumulationalthough they did not find this impacted fish diversity. The lack of fish diversity upstream of the culvert on the UNOH site could be due to the length of the culvert. The Orangethroat Darter (one of the more sensitive species found in the current study) was rare in the urbanized sections, suggesting this fish had been negatively affected by pollution from the paved surfaces. A large amount of silt can be seen in the stream, west of the culvert (section B), likely coming from the surrounding parking lots. Other studies have also found urbanization to negatively impact darter species (Tabit and Johnson 2002;Wheeler et al. 2005;Horwitz et al. 2008), and Orangethroat Darters have been negatively impacted in areas with excessive siltation and pollution (Trautman 1981). The Johnny Darter and Greenside Darter have been collected further downstream in Dug Run (Ohio EPA 2013) but were not collected in the current study, possibly due to impacts related to urbanization.
Conclusion
This study provides evidence that stormwater management practices can have positive effects on macroinvertebrates and some fish, although mitigating stormwater alone is insufficient to maintain a highly diverse and healthy population. Booth (2005) suggested restorative land use planning actions required to attain a sustainable ecological goal: creating reserves, minimizing the footprint of road and utility crossings, performing hydrologic rehabilitation such as stormwater infiltration or onsite retention and erosion control, re-establishing the age structure of riparian vegetation, and reconnecting floodplains with their associated channels.
Riparian vegetation influences macroinvertebrate assemblages (Rios and Bailey 2006), with natural riverbanks providing the most suitable habitat for macroinvertebrates as compared to rip rap, fascine, and other bank stabilization efforts (Cavaillé et al. 2018). Vegetation cover can influence temperatures and provide food for macroinvertebrates and fish, help stabilize banks, and reduce erosion that could damage habitat important to aquatic organisms. Maintaining as many natural stream characteristics as possible can help minimize damage to in-stream communities.
|
v3-fos-license
|
2024-03-03T17:51:49.996Z
|
2024-02-28T00:00:00.000
|
268176040
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/journals/immunology/articles/10.3389/fimmu.2024.1361009/pdf",
"pdf_hash": "d86c156c5fbc26bc6ece2af8a9ad07e4c417a3b9",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44330",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "908788f61aee619ffd6c0fbc3426ed2a612cec96",
"year": 2024
}
|
pes2o/s2orc
|
Carcinogenic mechanisms of virus-associated lymphoma
The development of lymphoma is a complex multistep process that integrates numerous experimental findings and clinical data that have not yet yielded a definitive explanation. Studies of oncogenic viruses can help to deepen insight into the pathogenesis of lymphoma, and identifying associations between lymphoma and viruses that are established and unidentified should lead to cellular and pharmacologically targeted antiviral strategies for treating malignant lymphoma. This review focuses on the pathogenesis of lymphomas associated with hepatitis B and C, Epstein-Barr, and human immunodeficiency viruses as well as Kaposi sarcoma-associated herpesvirus to clarify the current status of basic information and recent advances in the development of virus-associated lymphomas.
Introduction
The most consistent risk factors for malignant lymphoma comprise immune dysfunction and infectious agents that are primarily viruses.The concept of virus-induced lymphoma is not new, because viruses are associated with ~15% of all types of cancer (1).The pathogenesis of virus-associated lymphoma is complex and involves viral infection, immune disorders or deprivation of immunity, the tumor microenvironment (TME), and several viral coinfections.The complex biological properties of the virus itself, a delicate balance between viral and host immunity, and difficulties with establishing animal models have hindered research and understanding of the pathogenesis of virus-associated lymphoma.Lymphomaassociated viruses are very diverse (Table 1).Examples are large double-stranded DNA genomes (Epstein-Barr virus, EBV; Kaposi sarcoma-associated herpesvirus, KSHV), small double-stranded DNA genomes (hepatitis B virus; HBV), and positive-sense single-stranded RNA genomes (hepatitis C virus; HCV).Sufficient evidence indicates that human immunodeficiency virus (HIV), EBV and KSHV are pathogenic factors in lymphoma.However, other evidence indicates a possible relationship between HIV and viruses that cause hepatitis (HBV and HCV) and might be more limited and indirect than EBV and KSHV (2)(3)(4).Overall, general pathogenic mechanisms for the development of virusassociated lymphoma have been identified.Viruses can directly infect and transform lymphocytes, and viral antigen products or soluble factors induce chronic B-cell activation and promote transformation.Long-term immunodeficiency, such as that caused by HIV, facilitates viral evasion of the immune response and leads to tumor cloning.Current options for treating virusassociated lymphoma include radiotherapy, chemotherapy, immunotherapy, as well as antiretroviral, antiviral, and targeted therapy.Nevertheless, most virus-associated lymphomas are typically more chemoresistant and have a poorer prognosis than solid tumors.Therefore, a deeper understanding of the molecular mechanisms of virus-associated lymphoma will provide directions to develop targeted therapies.
Epstein-Barr virus
Epstein-Barr virus (EBV) is the most prevalent human oncovirus (5), and > 90% of adults are infected during their lifetime (6).The main mode of transmission of EBV is through oral transmission via saliva, and the current study confirms that the main tropism of EBV is for B cells and epithelial cells, and the presence of EBV has been demonstrated in tumor cells derived from NK/T cells and leiomyosarcoma (7).When EBV was first isolated from a Burkitt lymphoma (BL) cell line in 1964 (8), its association with cancer was widely studied.According to the 2016 WHO classification, EBV is associated with lymphomas, including mature B-cell tumors, mature T-cell and Natural killer (NK)-cell tumors, Hodgkin lymphoma (HL), and post-transplant lymphoproliferative disorders (9).The prognosis is worse for patients with HL and diffuse large B-cell lymphoma (DLBCL) who are EBV + than EBV -. (10) NK/T-cell lymphoma (NKTCL), a rare subtype of EBV-associated non-Hodgkin lymphoma (NHL), has similarly shown poorer outcomes (11).
Epstein-Barr virus structure
Epstein-Barr virus (also known as human herpesvirus 4; HHV-4), belongs to the gamma herpesvirus family.The EBV virion has a diameter of 150-170 nm and consists of a lipoprotein capsule and an icosahedral nucleocapsid, including 162 capsid particles.The viral genome comprises double-stranded DNA of 170 kb.This virus is permanently latent in lymphocytes, free in the cytoplasm as circular DNA and can integrate into cellular chromosomes (12).The life cycle of EBV is biphasic, with lytic replication and a latent phase, and the usual progression of EBV latency in B cells from type III to types II to I has been detailed in a review (13).After infecting resting naïve B cells, EBV enters type III latency, when all latency genes are expressed.The production of highly immunogenic viral proteins triggers a powerful cytotoxic T cell response.Subsequently, the virus restricts gene expression and enters type II latency by expressing Epstein-Barr nuclear antigen (EBNA)-1, latent membrane protein (LMP)-1, and LMP-2.B cells differentiate into memory B cells during this phase.Finally, EBV restricts gene expression to latency type I, where only EBNA-1 and EBV-encoded small RNAs (EBERs) are expressed (14).Table 2 shows EBV gene expression during various latent infections.
Carcinogenic mechanisms of EBV
The range of EBV-associated lymphomas is extraordinarily broad, and each has unique developmental pathways.Differences in EBV gene expression among them reflect the different pathogenic roles of EBV.Despite the current scale of research into the relationship between EBV and lymphoma, the etiological role of EBV is difficult to explain.This is partly because the virus acts differently on various tumors and partly because current disease models do not adequately replicate subtle changes in the virus-host balance among EBV-associated cancers.Moreover, although 95% of adults are persistently infected with EBV, most do not develop EBV-associated lymphomas.Therefore, the virus does not act alone, which warrants further exploration.Therefore, we would like to further summarize the mechanism of EBV-associated lymphoma from the perspective of the virus itself.
Expression of viral protein
Latent proteins are essential for the transformation of normal B lymphocytes into lymphoblastoid cell lines (LCLs), and they are involved not only in driving the overexpression of oncogenes, the silencing of tumor suppressors, the cell cycle, migration, but also in the regulation of adhesion.
EBNA1
EBNA1 is the only viral protein consistently expressed during the latent phase of EBV, and it is indispensable for the replication and maintenance of the latent viral genome. Current studies find that EBNA1 has significant pleiotropic effects (15), including disruption of p53 stability (16)(17)(18) and of promyelocytic leukemia (PML) nuclear bodies (19). EBNA1 also affects several known signaling pathways involved in cell proliferation and apoptosis, including interference with TGF-b signaling (20, 21) and inhibition of NF-kB activity (22). Moreover, previous studies have found that stable or transient expression of EBNA1 leads to oxidative stress, allowing reactive oxygen species to accumulate, with a variety of effects on cell growth and survival, including the induction of apoptosis and DNA damage (23, 24). In particular, EBNA1 can co-immunoprecipitate with Nm23-H1 in lymphocytes, which may contribute to the spread of EBV-associated tumors (25, 26). In fact, EBNA1 is highly antigenic, and T cells targeting EBNA1 are present in infected individuals (27). Clarifying the immunomodulatory role of EBNA1 in the host has therefore long been a focus of attention for researchers and has been comprehensively summarized in a recently published review (28). Most published studies have so far been limited to immune evasion or immunosuppression (28), including the findings that EBNA1 can bind specifically to viral and cellular DNA sequences (29)(30)(31), can both enhance and inhibit the transcription of viral and cellular genes (32, 33), and mediates the maintenance of the EBV genome (34). Recent studies have confirmed the immune evasion ability of EBNA1: EBNA1 can inhibit the expression of the NKG2D ligand and c-Myc genes, and thereby enhance the survival and proliferation of infected cells, by binding to DNA near their transcriptional start sites (35). In another study (36), chromatin immunoprecipitation (ChIP) sequencing of endogenous bromodomain-containing protein 7 (BRD7) in Burkitt lymphoma (BL) showed that EBNA1 targets c-Myc, thereby regulating the viral infection status in coordination with host BRD7. In addition, other studies have found that the expression of Galectin-9 (Gal-9) is positively regulated by EBNA1 at both the mRNA and protein levels (37); Gal-9 has been shown to be a ligand for immune proteins on immune cell subpopulations and is also involved in cell proliferation and differentiation (38).
EBNA2
Many of the viral latent genes are expressed in currently established EBV-infected cell lines. Of high interest, Pich et al. (39) explored in depth the first 8 days of infection using EBV derivatives carrying single mutations and found that EBNA2 plays an important role in activating naïve human B lymphocytes, inducing growth, and facilitating division; in particular, EBNA2 prevented the death of a subpopulation of infected cells. By contrast, EBNA-LP, LMP2A, and the miRNAs have only supportive and auxiliary functions, and even EBNA1, which has been in the spotlight, seems to be nonessential for cell activation in early viral infection. Previous studies have extensively explored the mechanism of action of EBNA2, which is not only a potent activator of transcription of genes such as CD23 (40) and c-Myc (41), but also negatively regulates genes such as BCL6 and Ig (42). Of interest is the earlier finding that restricted expression of EBV latent genes contributes to viral persistence by down-regulating the plasma cell master regulator Blimp1, which induces and maintains the mature B-cell phenotype (43). EBNA2 is also a functional homologue of activated Notch (44), and both c-Myc and activated Notch have oncogenic properties. A recent study by Zhang et al. (45) demonstrated that LMP1 and EBNA2 constitute the minimal set of EBV proteins required for B-cell transformation, emphasizing the important role of EBNA2 in B-cell transformation, even though that study did not investigate the mechanism in depth. EBNA2 is also involved in host immunomodulation through its regulation of miRNAs. In B-cell lymphoma, EBNA2 positively regulates miRNA-21 and negatively regulates the expression of miRNA-146a, which affects the antiviral response of the innate immune system and is involved in EBV-induced B-cell transformation; the detailed mechanism has not yet been published. The study by Anastasiadou et al. (46) found that EBNA2 down-regulated miRNA-34 by recruiting early B-cell factor 1 (EBF1) to its promoter, thereby increasing PD-L1 expression in BL and DLBCL. Other research found that EBNA2 also reduces ICOSL expression by inducing miRNA-24, while maintaining pro-proliferative c-Myc levels, to evade host immune responses (47).
EBNA-LP
Current studies on EBNA-LP are limited.Like EBNA2, EBNA-LP is also expressed early in infection, and EBNA-LP acts mainly as a co-activator of EBNA2 and participates in B-cell transformation by activating viral and cellular transcription (48).In addition, some studies have demonstrated other effects of EBNA-LP.These include regulation of specific alternative splicing (49), promotion of transcription factor recruitment, and involvement in cell growth and survival (50).
EBNA3
The EBNA3 family, consisting of the EBNA3A, EBNA3B, and EBNA3C genes, is thought to be a nonredundant family of EBV genes that likely arose from gene duplication during the evolution of primate gamma herpesviruses (51).The production of EBNA3 proteins is thought to be tightly regulated and, because of their low protein levels and turnover efficiency, these proteins are very stable (52).Interestingly, the EBNA3 family has conflicting roles in carcinogenesis.EBNA3A and EBNA3C promote carcinogenesis, whereas EBNA3B inhibits carcinogenesis (53).EBNA3A stimulates cell proliferation by inhibiting p21 WAF/CIPI , targeting tumor suppressor pathways and altering cell cycle regulation (54).The mechanisms by which EBNA3C promotes lymphoma development are more diverse, including regulation of cyclin D2 (55) and targeting of tumor suppressor pathways (53).The role of EBNA3 family proteins in EBV-associated B-cell lymphomagenesis has been systematically described (51).Numerous synergistic collaborations between the EBNA3 protein families have been recognized, mostly involving cooperation between EBNA3C and EBNA3A or EBNA3B.Only in the absence of EBNA3C is there moderate cooperation between EBNA3A and 3B.The cooperation between the EBNA3 protein families has been described in detail in the review by Styles et al. (56).
LMP1
Among the proteins expressed during EBV latency, LMP1 has attracted great interest. It is expressed in HL, DLBCL, and post-transplant lymphoproliferative disorder (PTLD) (57, 58) and is essential for the transformation of infected B cells into lymphoblastoid cell lines, as has been meticulously reviewed in many previous studies (59) (60, 61) (62). The oncogenic mechanism of LMP1 in EBV-associated lymphomas is very complex. LMP1 not only promotes oncogenic pathways such as nuclear factor-kB (NF-kB), phosphatidylinositol-3-kinase/protein kinase B (PI3K/AKT), mitogen-activated protein kinase (MAPK), and Janus kinase/signal transducer and activator of transcription (JAK/STAT) (63), but also, because of its weak immunogenicity, can bypass targeting by CD8+ T cells and fails to elicit an appreciable immune response in EBV-positive healthy people (62). More importantly, LMP1 is associated with increased expression of PD-L1 in a variety of lymphomas (64), providing new clues for further exploring the immunomodulatory role of LMP1. A recent study by Giehler et al. (65) demonstrated a direct protein-protein interaction between LMP1 and TNF receptor-associated factor 6 (TRAF6), which underlies C-terminal activation region 2 (CTAR2) signaling and the survival of LMP1-transformed B cells, resolving a long-standing question.
LMP2A and LMP2B
LMP2A is expressed in various B-cell malignancies, including HL, PTLD, and BL, but our current studies on the mechanism by which LMP2A promotes lymphomagenesis are not in-depth.Using transgenic mice, Fish et al. (66,67) demonstrated that LMP2A accelerated lymphoma development in vivo by exploiting the role of MYC in the cell cycle, particularly during p27 kip1 degradation.The latest study utilized phosphoproteomics and transcriptomics to further explore the molecular mechanisms by which LMP2A affects Bcell biology, and found that LMP2A down-regulates cyclic checkpoint genes, including CDKN1B(p27) and CHEK1, as well as the tumor suppressor RB1 (68).
The function of LMP2B is largely unknown.Earlier studies demonstrated that LMP2B negatively regulates the function of LMP2A to prevent the transition from latent to lytic EBV replication (69).In addition, LMP2B affects epithelial cell behavior, such as cell adhesion and motility (70).
Genetic instability
Genetic instability is one of the major common features of cancer and can be observed at the chromosomal or genetic level in malignant cells (71).Integration of EBV into the host genome may be a common occurrence in lymphomas, but our understanding of this is limited.On the one hand, the large size of the EBV genome itself makes it difficult to determine the integration site with the host genome and to analyze it further, on the other hand, the highly methylated DNA hinders the mapping of the EBV genome, and not only that, multiple copies of the viral exons can generate interference noise at the integration site (72), which makes it more difficult to study it in depth.Previous studies have demonstrated the integration of EBV in the chromosomal genome of BL (73) and other B-cell lymphomas (74, 75).Takakuwa et al. (71) demonstrated in Raji that integration of EBV into 6q15 resulted in loss of expression of the human Bach2 gene (BACH2) at the mRNA and protein levels.BACH2 has been shown to have a significant inhibitory effect on cellular proliferation, and deletion of BACH2 expression may contribute to the development of B-cell lymphomas, including BL. Related studies have previously analyzed copy number alterations (CNAs) and gene expression profiles of EBV + and EBV -DLBCL samples confirming that EBV + DLBCL has fewer genomic alterations (76).In a recent whole-exome sequencing of EBV + DLBCL, it was shown that a heterogeneous mutational landscape is associated with DNA double-strand breakhomologous recombination repair failure, and genes found to have a high number and frequency of mutations include serine protease 3 (PRSS3), MUC3A and MUC16 (77).A recent study by Zhou et al. (78) demonstrated an elevated frequency of mutations in MYC and RHOA in patients with EBV + DLBCL.An updated mutational map of EBV + DLBCL has been comprehensively characterized, complementing previous studies with recurrent alterations in CCR6, CCR7, DAPK1, TNFRSF21, and YY1 (79), further elucidating the mechanism by which EBV leads to Bcell transformation.
MicroRNAs
EBV was the first virus to detect viral miRNAs (80).The EBV genome encodes 44 mature miRNAs belonging to two distinct classes, BamHI-A region rightward transcript (BART) and Bam HI fragment H rightward open reading frame 1 (BHRF1), which have different expression levels in different EBV latency types (81).Among them, BART transcripts encode 22 miRNA precursors and 40 mature miRNAs, while BHRF1 transcripts express three miRNA precursors to produce four mature miRNAs.Current published literature has demonstrated that EBV-encoded miRNAs play an important role in the development and progression of EBV-associated malignancies, including cell proliferation, apoptosis, invasion, and transformation (82, 83).Moreover, EBV miRNAs can even directly target immune-related genes, allowing infected cells to evade surveillance and destruction of the immune system (84), (85).However, EBV miRNAs have different expression profiles in different cancer types.In EBV-infected DLBCL, all EBV-miRNAs except BHRF1 cluster and EBV-miR-BART15 and -20 could be detected, as demonstrated in Imig et al.And in NK/T-cell lymphomas, the most highly expressed viral miRNAs were miR-BART1-5p, miR-BART5, miR-BART7, miR-BART11-5p, and miR-BART19-3p, accounting for 50% of viral miRNAs and approximately 1% of total miRNAs (86).Studies have described the presence and expression levels of EBV miRNAs and host miRNAs in different lymphomas, with some focusing on patient samples and others on different cell line models for in vitro experiments.EBV microRNA profiles and human microRNA profiles for EBV-associated lymphomas are detailed in a recent study by Soltani et al. (87) What's more, published studies have confirmed that EBV-encoded miRNAs may interfere with host miRNAs, which actually leads to even more complications (87).
EBV miRNAs are essential for regulating the viral life cycle. Iizasa et al. (88) demonstrated early on that EBV-miRNA-BART6-5p targets four sites within the 3'-UTR of human Dicer mRNA and comprehensively affects the maturation of miRNAs, resulting in the overall repression of these molecules, which helps to maintain latent infection. Of particular note, in addition to EBV miRNAs, other EBV-associated products, such as the EBNA1 protein, also contribute to the downregulation of Dicer, as described in detail by Mansouri et al. (89). EBV miRNA biogenesis and action are also affected by adenosine-to-inosine (A-to-I) RNA editing. A-to-I editing of pri-miR-BART6-5p was found in EBV-infected BL to activate the Zta and Rta viral proteins and the EBNA2 viral oncogene, which are essential for lysis and replication, leading to the transition of the viral cycle to type III latency (88). Of interest, EBV-encoded miRNAs are also involved in host cell growth, the cell cycle, and apoptosis. PRDM1/Blimp1 is a major regulator of terminal B-cell differentiation and is well known as a tumor suppressor in aggressive lymphomas. Nie et al. (90) demonstrated that the cellular target of EBV-miRNA-BHRF1-2 is PRDM1 and that inhibition of PRDM1-mediated function confers a growth advantage on EBV-infected B cells, promoting lymphoma development. Another study confirmed that EBV-miRNA-BART9 is involved in the proliferation of nasal NK/T cell lymphoma (NKTCL) by regulating the level of LMP-1 (91).
The success and persistence of any viral infection depends on a complex balance with the host immune system, and EBV miRNAs are also involved in the regulation of the host immune system.(Figure 1) EBV-miRNAs-BART6-3p was found to mediate downregulation of the interleukin-6 receptor (IL-6R) in BL (92), which is involved in regulating key cellular processes, including cell proliferation, survival, and response to host pathogens after dimerization receptor binding to interferon-a, IL-12, or IL-27 (93).In addition, the EBV-miRNAs-BART20-5p were shown to inhibit T-bet translation through secondary inhibition of p53 (94).
The role of EBV-encoded miRNAs in immunomodulation has been exhaustively described (84, 95-97). The latest research has confirmed that, in DLBCL, EBV-miRNA-BHRF1-2-5p targets LMP1 to drive the expression of PD-L1 and PD-L2, exerting context-dependent immune counter-regulation, leading to immune escape and contributing to persistent viral infection (98). In another study, Murer et al. (99) used NOD-SCID gc null (NSG) and HLA-A2 transgenic NSG mice to construct mouse models infected either with an EBV variant lacking viral miRNAs or with wild-type EBV, and found that the viral load in mice infected with the EBV variant lacking viral miRNAs was significantly reduced, and the proliferation frequency of EBV-infected B cells was also decreased. Moreover, depletion of CD8+ T cells led to the formation of lymphomas in the mouse model infected with the viral miRNA-deficient variant, which supports the notion that EBV miRNAs play a major role in immune evasion in vivo and support tumor development. The role of EBV virus-encoded microRNAs in human lymphomas can be found in the review by Navari et al. (82).
Hepatitis B virus
According to the World Health Organization (WHO), 257 million people worldwide have chronic HBV infection, defined as hepatitis B surface antigen (HBsAg) positivity. The geographic epidemiological profile of HBV is clear according to the WHO: the prevalence is 6.1% in Africa, the Western Pacific, and Southeast Asia, and 1.6% in Europe and North America (100). Worldwide, the most common route of transmission of HBV is perinatal, but it can also be transmitted percutaneously and via mucous membranes, as well as through sexual intercourse (101). When infection occurs, the host may experience acute infection with full recovery, chronic infection, or an acute course leading to hepatic failure (102). The relationship between HBV infection and NHL has been explored (103-105). However, HBsAg positivity is not associated with an elevated risk of HL, multiple myeloma (MM), or various types of leukemia (106). Compared with HBsAg-negative DLBCL, HBsAg-positive DLBCL has a younger median age at onset, more frequent splenic or retroperitoneal lymph node involvement, more advanced disease, and significantly worse outcomes (107). The results of other studies are similar (106, 108-110). A meta-analysis of 58 studies revealed that HBV infection leads to a 2.5-fold increased risk of NHL, and data from a stratified analysis suggest a closer association between HBV infection and B-cell than T-cell NHL (111). Why HBV infection is more closely associated with B- than T-cell lymphoma requires elucidation in functional studies.
Hepatitis B virus structure
The hepatitis B virus (HBV) is the prototype of a family of small, enveloped, hepatotropic DNA viruses that infect a narrow host range of mammals and birds and preferentially target hepatocytes (112). After HBV infection of hepatocytes, the HBV genome is delivered into the nucleus, where it is repaired to form covalently closed circular DNA (cccDNA), which is then used as a template for the transcription of viral RNA. cccDNA is highly stable in the nucleus of infected hepatocytes, which is why chronic hepatitis B is difficult to cure completely (113). The HBV genome contains four overlapping open reading frames (ORFs), four promoters, two enhancer elements (EN1 and EN2), a polyadenylation site for viral RNA transcription, and several cis-acting signals for DNA replication. The ORFs P, S, C, and X on the negative strand encode, respectively, the DNA polymerase, the HBsAg protein, the core and pre-core proteins, and the X protein (HBx) (114).
Such DNA viruses have the unusual feature of replicating through RNA intermediates and can integrate into the host genome.
Carcinogenic mechanisms
The biological mechanisms through which HBV infection causes lymphoma are unclear. Those specific to HBV-associated lymphoma have been inferred primarily from studies of HBV-associated hepatocellular carcinoma (HCC) and of HCV-associated lymphoma. We emphasize that both the humoral and the cellular immune systems are important for viral clearance (115), as both are activated by HBV infection and exert antiviral effects. Both arms of the immune system might destroy host cells that are already infected with HBV. Therefore, the potential role of HBV in the development of lymphoid disease might be very complex. Various hypotheses have been proposed to explain the mechanisms through which HBV causes lymphoma, and these are summarized below (Figure 2).
Chronic antigenic stimulation
The hypothesis that chronic antigenic stimulation causes lymphomas remains controversial. Chronic local antigen-stimulated immune responses caused by HBV infection might be associated with the development of lymphoma (116). A large 14-year follow-up cohort study in Korea (106) consistently associated HBsAg positivity with an elevated risk of NHL, suggesting that chronic infection promotes the development of lymphoma. The risk of B-NHL is not increased in individuals previously infected with HBV or vaccinated against HBV (117, 118). Nucleic acid sequences specific to HBV have been detected in peripheral blood mononuclear cells and hematopoietic tumor cells of HBsAg-positive patients (3, 119, 120), which might result in chronically stimulated B cells that transform into B-cell NHL. Peripheral blood mononuclear cells (PBMCs) derived from patients with chronic HBV infection have immortalization potential when cultured in vitro (121). New cells identified in the peripheral blood of some patients with non-lymphoid chronic HBV infection were later confirmed to be of B-cell origin. Moreover, the immunophenotype of these cells was similar to that of most HBsAg-positive B-cell NHLs. This supported the relevance of HBV-induced B-cell NHL, although none of the patients developed lymphoma during more than 1 year of follow-up. Furthermore, a 42.1% and 65.5% bias towards the Immunoglobulin Heavy Variable 4-34 (IGHV4-34) heavy-chain and Immunoglobulin Kappa Variable 4-1 (IGKV4-1) light-chain genes, respectively, in HBsAg-positive DLBCL exceeded that in normal peripheral blood B cells and B-cell NHLs (107). However, these results were contradicted by a study that found no evidence of biased IGHV gene usage or of stereotyped third complementarity-determining regions (CDR3) (122). Unlike classical antigen-driven hepatitis C virus-associated lymphoma, the chronic antigen stimulation model seems less applicable to HBV-associated DLBCL.
Genomic instability or mutation
Hepatitis B viral DNA is integrated into the chromosomal DNA of lymph node cells (123). A genome-wide investigation of HBV integration in HCC found that HBV integration alters chromosomal stability and gene expression, and shortens the overall survival of infected individuals (124). Approximately 50% of woodchuck hepatitis virus (WHV) integrations involve the myelocytomatosis oncogene (MYC) family of genes, affecting these proto-oncogenes in woodchuck models of HCC with chronic WHV infection (125). In fact, HBV integration is common, occurring in 80%-90% of HBV-associated HCC (126, 127). The expression of six of these genes is increased in NHL whereas that of HDAC4 is not (128), suggesting that HBV integration leads to the cis-activation of primary oncogenes rather than the inactivation of tumor suppressor genes. However, no evidence of HBV DNA integration into the tumor genome has been found in either HBV-associated FL (129, 130) or DLBCL (122). A trend towards an increased genome-wide mutational load has been identified by whole-genome or whole-exome sequencing in HBsAg-positive cases, involving the coding regions of Signal Transducer And Activator Of Transcription 6 (STAT6), AT-Rich Interaction Domain 1A (ARID1A), and Guanine Nucleotide-Binding Protein Subunit Alpha-13 (GNA13). Furthermore, the most significantly mutated pathways were those associated with HBV infection, followed by the Forkhead Box O (FoxO), Wingless/Integrated (Wnt), Janus Kinase/STAT (JAK-STAT), B-Cell Receptor (BCR), Phosphatidylinositol-3 Kinase (PI3K), and Nuclear Factor Kappa B (NF-κB) signaling pathways.
Expression of viral protein
The HBx protein encoded by the X gene was once named the "viral oncoprotein." This protein is involved in hepatocyte transformation through regulation of the cell cycle and pleiotropic activity on DNA repair and signaling pathways (131-133). The expression of HBV antigens, especially the HBx protein, is abundant in HBV-positive DLBCL sera (103). These findings are consistent with the significantly elevated HBx levels in HCC due to stable HBV integration (124, 134). The HBx protein inhibits p53 in hepatocytes, which leads to abnormal hepatocyte division and HCC (135, 136). A similar mechanism in B cells might contribute to malignant transformation and the development of B-cell NHL (3). Among the various activities of HBx, its transactivation activity might play a crucial role in carcinogenesis. Interaction between HBx and the acetyltransferase CREBBP/p300 facilitates the recruitment of these cofactors to the CREB-responsive promoter, which leads to the activation of gene expression (112). A Chinese study of HBV-associated FL found significantly upregulated CREBBP-binding genes in HBsAg-positive compared with HBsAg-negative FL (129). This could explain the low dependence of HBsAg-positive FL on CREBBP mutations in that study, as the interaction between HBx and CREBBP/p300 might mimic the role of mutant CREBBP during the early stages of lymphoma. The contribution of HBx to the pathogenesis of lymphoma remains obscure, and further investigation is needed to verify its mechanism of action.
Tumor microenvironment
The tumor microenvironment is a complex system of cellular and subcellular components with reciprocal signaling pathways that play key roles in carcinogenesis (137). Tumorigenesis is dependent on the TME, and the stroma is uniformly and inappropriately activated in cancer, thus contributing to the malignant features of tumors (138). Chronic and persistent HBV infection induces immune cell dysfunction and T-cell exhaustion, as well as the extensive activation and production of numerous cytokines, chemokines and growth factors that constitute a sophisticated TME that might affect cancer development (139, 140). Hepatitis B surface antigen-positive FLs might have an altered TME with increased infiltration of cluster of differentiation (CD)8+ memory T cells, CD4+ Th1 cells, and M1 macrophages, and increased T-cell exhaustion (129). This is consistent with similar findings in HCC associated with HBV.
The unique biological characteristics of HBV complicate the study of its pathogenic mechanisms, and the available animal models have various strengths and weaknesses. This might explain, to some degree, the limited progress of investigations into HBV-related lymphoma.
Hepatitis C virus
An estimated 71.1 million people worldwide are infected with HCV, with an annual incidence of 1.75 million (141). The most common routes of HCV transmission are blood transfusions, health care-related injections and injecting drug use (142). Most people (75-80%) will develop chronic infection after exposure to HCV, and clinically apparent acute hepatitis C occurs in fewer than 25% of cases (142). In addition to infecting hepatocytes, HCV can infect other cells, such as lymphocytes (143). A possible association between HCV infection and NHL was first described in 1994 (144). A study of 150,000 patients with HCV in the USA found that HCV infection increased the risk of lymphoma by 20%-30% (145). Epidemiological data show no, or only a slight, increase in the risk of T-cell NHL and HL (146, 147), while the strongest evidence is for B-cell NHL (148). A meta-analysis found that the prevalence of HCV infection in patients with B-cell NHL is ~15% (149), and others have reached similar conclusions (150-152). The histological subtypes of NHL most closely associated with HCV infection are marginal zone lymphoma (MZL), lymphoplasmacytic lymphoma, and DLBCL (153-156). Clinically evident HCV-positive NHL usually occurs after more than 15 years of infection (157), and patients with HCV-positive DLBCL usually have higher International Prognostic Index (IPI) scores and LDH levels (158, 159).
Hepatitis C virus structure
The life cycle of HCV begins with the binding of HCV to specific entry factors on hepatocytes, after which the virus is internalized into the cytoplasm. Subsequently, its genomic RNA is released and used for polyprotein translation and viral replication (143). HCV is a small, enveloped, positive-sense, single-stranded RNA virus belonging to the genus Hepacivirus of the Flaviviridae family. The icosahedral envelope particles are 56-65 nm in diameter (160), whereas the viral core is ~45 nm (161). The HCV genome is a positive single-stranded RNA comprising ~9,600 nucleotides. It encodes a single open reading frame (ORF) flanked by 5′ and 3′ untranslated regions (UTRs). The HCV polyprotein encoded by this single ORF is ~3,000 amino acids long and undergoes co-translational and post-translational processing by cellular and viral proteases to form three structural proteins (core, E1, and E2), an ion channel protein (p7), and the nonstructural (NS) proteins NS2, NS3, NS4A, NS4B, NS5A, and NS5B. The structural proteins are located at the N-terminus of the polyprotein, whereas the NS proteins are located towards the C-terminal end (162).
Carcinogenic mechanisms
Integration of the single-stranded RNA nucleic acid sequences of HCV into the host genome appears to be impossible owing to the absence of a reverse transcriptase. Therefore, HCV indirectly exerts oncogenic effects by modulating the host immune system (163). Liver cells and lymphocytes share the HCV receptor CD81 (164, 165). CD81-mediated activation differs from other B-cell stimuli because it induces the preferential proliferation of naïve B cells. Expression of the C-X-C motif chemokine receptor 3 (CXCR3) is upregulated in CD81-activated B lymphocytes, but decreased when the cells are stimulated with different substances (166). This interaction between HCV and the immune system might underlie the immune and lymphoproliferative disorders that frequently accompany chronic HCV infections. Three theories might explain HCV-driven transformation (Figure 3).
Chronic antigenic stimulation
The defined pathogenic link between chronic Helicobacter pylori infection and the development of gastric mucosa-associated lymphoid tissue (MALT) lymphoma suggests that chronic antigenic stimulation can determine the likelihood of NHL (167). Notably, the regression of MALT lymphoma after H. pylori eradication makes this possibility more plausible (168). Similarly, splenic lymphoma regresses after antiviral therapy that eradicates HCV (169). About 10% of patients with type II mixed cryoglobulinemia (MC) develop overt B-NHL after 5-7 years of follow-up (170), and HCV is a major etiological factor in MC and might also be the cause of its evolution to overt NHL (171-173). HCV-associated type II MC expresses immunoglobulins encoded mainly by the germline VH1-69 and VkA27 genes. A preference for the VH1-69/VkA27 combination in HCV-associated lymphomas is consistent with a possible role of antigen selection in the expansion of B-cell clones (174). In addition, B-cell receptors expressed by lymphomas in patients infected with HCV rarely react with viral proteins (175). Notably, the highly biased stereotyped BCR sequences of HCV-positive B-NHL have also been found in other HCV-associated B-cell malignancies (176). This confirmed that HCV-associated lymphoma cells originate from precursors with autoimmune properties rather than from B cells that express antiviral BCRs.
The HCV envelope protein E2 can bind to CD81 expressed on B cells (164). This receptor is upregulated in HCV infection and MC and positively correlates with viral load (177). Moreover, CD81 forms a complex with CD19 and CD21 in human B cells (178, 179), and engagement of the B-cell antigen receptor (BCR) together with any component of this complex decreases the threshold required for BCR-mediated B-cell proliferation (180). E2-bound CD81 is also involved in activating the transcription factor NF-κB, which subsequently increases the expression of the Bcl-2 protein, thus enhancing B-cell survival and protecting human B lymphocytes from Fas-mediated apoptosis (181). In addition, HCV E2 binding to CD81 on human B cells leads to activation and subsequent proliferation through the c-Jun N-terminal kinase pathway (166). Furthermore, HCV E2 binding to CD81 directly prevents the functional activation of NK cells, providing an effective immune escape strategy for the virus (182). Overall, the interaction between HCV and CD81 promotes chronic infection and facilitates the development of HCV-associated B-cell lymphoma.
Hit-and-run theory
Some evidence indicates that intracellular viral replication is not required for tumor transformation (183). The hit-and-run theory suggests that viruses play a predisposing role in cancer formation and that the viral genome can be completely lost after the host cell has accumulated numerous mutations (184). This mechanism was suggested for HCV (185). Infection with HCV results in a 5-10-fold increase in the frequency of mutations in the Ig heavy chain, B-cell Lymphoma 6 (BCL-6), Protein 53 (p53) and β-catenin genes in HCV-infected B-cell lines and in HCV-associated peripheral blood mononuclear cells, lymphomas, and HCC in vitro. The authors concluded that HCV induces a mutator phenotype by causing changes in proto-oncogenes and oncogenes that successively lead to oncogenic B-cell transformation, even when the virus might have already left the cells. The same group also conducted RNA interference experiments and found that HCV induced the error-prone DNA polymerases ζ and ι and activation-induced cytidine deaminase. All of these together contribute to an increased mutation frequency, complementing the oncogenic mechanisms by which HCV causes lymphoma. Some controversy remains regarding the clinical applicability of these findings, as they have not been confirmed in vivo (186, 187). Infection with HCV stimulates nitric oxide (NO) production by activating the inducible NOS (iNOS) gene through the viral core and NS3 proteins (188). Nitric oxide causes DNA breaks and enhances DNA mutations. The HCV core protein binds to NBS1 and inhibits formation of the Mre11/NBS1/Rad50 complex, thus affecting Ataxia Telangiectasia-Mutated (ATM) activation and inhibiting DNA binding by repair enzymes (189). Infection with HCV inhibits multiple DNA repair processes and leads to chromosomal instability, which explains its oncogenicity from a different perspective.
Expression of viral protein
Hepatitis C viral RNA and protein were detected in an established HCV-infected B-NHL cell line in vitro using RNase protection assays and immunoblotting (190). That study confirmed that HCV can infect primary human hepatocytes, PBMCs and established Raji B-cell lines in vitro, indicating that HCV can replicate in B cells. Ample evidence supports the notion that intracellular viral proteins contribute to oncogenic transformation. Interferon regulatory factor-1-null (irf-1(-/-)) mice with inducible and persistent expression of HCV structural proteins (irf-1/CN2 mice) have been established (191). These mice have a high incidence of lymphoma and lymphoproliferative disorders. The HCV core and E2 proteins are responsible for the expression of interleukin (IL)-2, -10, and -12, as well as the induction of Bcl-2 in the presence of nucleocapsid proteins, in the context of complex signaling networks in these mice (191). Another transgenic mouse model expressing the HCV core protein frequently developed follicular center cell-type lymphoma, and HCV core mRNA was detected in lymphoma tissues (192). Transgenic RzCD19Cre mice express the complete HCV genome in B cells (193). However, the incidence of DLBCL in RzCD19Cre mice was only 25%. The incidence of B-cell lymphoma correlated significantly with serum levels of the soluble interleukin-2 receptor α subunit (sIL-2Rα) only in the RzCD19Cre mice.
MicroRNA and cytokines
MicroRNAs (miRNAs) are small non-coding RNAs that regulate gene expression in a sequence-specific manner at the post-transcriptional level (194). They play roles in controlling various biological functions such as developmental patterning, cell differentiation, proliferation, genomic rearrangement and transcriptional regulation (195). MicroRNA-26b is significantly downregulated (P = 0.0016) in HCV-positive splenic marginal zone lymphoma (SMZL); the resulting loss of miR-26b-mediated inhibition of never in mitosis gene A (NIMA)-related kinase 6 (NEK6) might have oncogenic potential in HCV-associated SMZL (196). MicroRNA-26b is relevant not only in the specific setting of HCV-associated SMZL, but also in HCV-associated NHL more broadly, including MZL and DLBCL (197). Overall, these findings suggest that miRNA network dysregulation is involved in the development of HCV-associated lymphomas.
Cytokines are small glycoproteins and peptides that usually have relatively short half-lives and act via autocrine and paracrine signaling. Cytokines mediate interactions between immune and non-immune cells in tumors and can promote or inhibit cancer cell growth (198). B-cell activating factor (BAFF) is a key survival factor for B cells that is upregulated during HCV infection (199). An excess of BAFF in the absence of protective tumor necrosis factor (TNF) leads to a high incidence of lymphoma in BAFF transgenic mice, suggesting that BAFF functions in promoting B-cell malignancy (200). Notably, other cytokines and growth factors, including IL-6, IL-17, IL-10 and TGF-β, also contribute to B-cell proliferation in HCV infection (201-203).
However, the molecular mechanisms underlying the development of HCV-associated lymphomas remain poorly understood.The prevailing views are not mutually exclusive and might involve parallel pathways leading to HCV-associated lymphoma, as it is likely that a combination of translational conditions is required to eventually lead to the development of lymphoma.Additional bridging studies combining in vivo and ex vivo investigations are required to further explore this topic.
Human immunodeficiency virus
It is estimated that 38.6 million people are currently infected with HIV-1 worldwide, that some 25 million people have died, and that heterosexual transmission remains the dominant mode of transmission, accounting for about 85% of all HIV infections (204). HIV infects multiple immune cell types that carry CD4 and the CXCR4/CCR5 co-receptors, including helper T cells and macrophages. If untreated, it may also infect microglia and astrocytes of the nervous system (205). An association between HIV and aggressive lymphoma was initially reported in 1982 (206). NHL and HL are the most prevalent malignancies among patients infected with HIV, with relative risks 60-200-fold and 8-10-fold higher, respectively, than in individuals without HIV infection (207, 208). The WHO classification system recognizes subtypes of HIV-NHL (9b). Over 95% of these malignancies are of B-cell origin, including DLBCL and BL, whereas plasmablastic, T-cell, and primary effusion lymphomas are rare, and primary central nervous system (CNS) lymphoma is a very rare B-cell subtype that was more prevalent during the early stages of the AIDS epidemic. These lymphomas have high-grade features such as typically late presentation, extra-nodal involvement, and a marked tendency to involve the gastrointestinal tract, CNS, liver, bone marrow and perinodal soft tissues (209). Despite the introduction of highly active antiretroviral therapy (HAART) and the improved survival of patients infected with HIV during the past 20 years, malignant lymphoma remains a leading cause of morbidity and mortality (210).
Human immunodeficiency virus structure
The two types of HIV isolates are type 1 (HIV-1) and type 2 (HIV-2). The globally predominant pathogen of AIDS is HIV-1, whereas HIV-2 is restricted to certain areas of West and Central Africa (211). Human immunodeficiency virus forms spherical, membrane-enveloped, pleomorphic virions, 1,000-1,500 Å in diameter, that contain two copies of a single-stranded, positive-sense RNA genome (212). The virus is characterized by the structural genes gag, pol, and env (211). As in other retroviruses, the gag gene encodes the structural proteins of the core (p24, p7, and p6) and matrix (p17), and the env gene encodes the viral envelope glycoproteins gp120 and gp41. The pol gene encodes enzymes that are essential for viral replication.
Carcinogenic mechanisms
It is generally accepted that HIV causes chronic antigenic stimulation and immune dysregulation. However, the high incidence of lymphoma in HIV-positive patients despite the introduction of HAART suggests that incomplete immune reconstitution or factors unrelated to immune dysfunction also play causative roles. Although HIV-1 infects a subpopulation of human cells, namely CD4+ cells, soluble HIV-1 proteins that are detectable in the serum of infected individuals invade and/or bind to receptors on uninfected cells, including B lymphocytes and endothelial cells. These proteins interfere with host gene expression and other cellular processes, ultimately leading to cellular transformation and the development of HIV-associated lymphomas. This section summarizes current mainstream views (Figure 4).
Chronic antigen stimulation and cytokines
Although HIV infection is characterized by a reduction in the function or number of CD4+ T cells (213), the markedly increased B-cell activation in HIV infection is primarily driven by the abnormal production of B cell-stimulating cytokines such as IL-6 and by chronic antigenic stimulation. Elevated levels of circulating free immunoglobulin light chains in patients at increased risk of HIV-associated lymphoma might represent a marker of polyclonal B-cell activation (214). In addition, evidence indicates a skewed IGHV repertoire in specific HIV-NHL categories. Heterogeneous expression of IGHV genes in HIV-NHL might be related to specific pathways of antigenic stimulation (215).
Serum levels of IL-6, IL-10, C-reactive protein (CRP), soluble (s) CD23, sCD27, and sCD30 are significantly higher in patients with HIV-NHL than in HIV-positive or AIDS controls after adjusting for CD4+ T-cell counts (216). The CD40 ligand (CD40L) can insert itself into the surface of HIV-1 particles budding from activated CD4+ T cells (217), and CD40L-bearing HIV activates B cells, which leads to secretion of the cytokines IL-6, IL-10, IL-12 and TNF-α (218) in a way that mimics physiological stimulation. The role of CD40L in cancer has been detailed in a review (219). The HIV-1 trans-activator of transcription (Tat) induces the expression of IL-6 and IL-10 at the cellular level. Findings were similar at the whole-organism level in transgenic mice (220), and numerous spleens from Tat-transgenic mice harboured malignant lymphomas of B-cell origin. HIV Tat also enhances the intrinsic antibody diversification machinery by increasing activation-induced cytidine deaminase (AID)-induced somatic mutations in the variable heavy chain (VH) region of human B cells (221), which might lead to genome-wide mutations in malignant B cells among patients with HIV.
FIGURE 4 Mechanism of HIV causing lymphoma development.
Mice transgenic for a defective HIV-1 provirus lacking part of the gag-pol region overexpress the HIV proteins p17, gp120, and negative regulatory factor (Nef), and subsequently develop B-cell lymphoma (222). This supports the pathogenic role of aberrant HIV protein and B cell-stimulating cytokine expression during lymphoma formation. Indeed, the HIV-1 matrix protein p17 persists in the germinal center after drug-mediated suppression of HIV-1, and its variants (vp17s) activate Akt signaling and promote the growth of transformed B cells. This protein might also upregulate LMP-1 in B lymphocytes infected with EBV, leading to lymphoma development (223). Thus, HIV infection can directly contribute to lymphoma formation. The oncogenic effects of HIV-1 proteins have been reviewed in detail elsewhere and are not discussed herein.
Immunodeficiency status
With regard to immunity, although multiple mechanisms may contribute to the development of lymphoma in HIV-infected individuals, two appear to be mainly involved: (1) loss of immunoregulatory control of EBV and/or KSHV; and (2) chronic B-cell activation due to immune dysfunction caused by HIV infection. The cooperation of HIV, EBV, and KSHV in the pathogenesis of lymphoma and the resulting microenvironmental abnormalities have been reviewed in detail elsewhere (225, 226). Table 3 shows the associations between HIV-associated lymphoma and EBV and KSHV infections. It has long been shown that B-cell activation and immature phenotypic changes in vivo are accompanied by polyclonal Ig production in HIV-infected individuals (228). Notably, recent studies suggest that HIV may contribute to lymphomagenesis by acting directly on B lymphocytes as a key microenvironmental factor. Various HIV-encoded proteins, including gp120, may trigger and maintain abnormal activation of B cells and abnormal secretion of cytokines such as IL-6 and IL-10, as discussed in other subsections on HIV-associated lymphomas in this paper. Perhaps it is time to revisit the second immune-related mechanism.
Abnormal DNA rearrangements and genetic abnormalities
Retroviruses damage DNA via various mechanisms, such as genome integration, replication, inflammation, and direct interaction of viral proteins with DNA, and HIV might integrate randomly into the human genome. However, a pattern of integration into duplicated Alu elements and introns of Breast Cancer Gene 1 (BRCA1) has been identified (229), supporting the tendency of HIV-1 to integrate near the Alu class of human repetitive elements (230).
A genome-wide analysis of 57 HIV-associated lymphomas found that genes associated with fragile sites, such as Fragile Histidine Triad (FHIT; FRA3B), WW domain-containing oxidoreductase (WWOX; FRA16D), Deleted in Colon Cancer (DCC; FRA18B), and Parkinson Protein 2 (PARK2; FRA6E), are frequently inactivated by interstitial deletions in HIV-NHL, and that the prevalence of FHIT alterations is significantly higher in HIV-DLBCL (231). Among these, FHIT, WWOX and DCC are tumor suppressor genes that are frequently inactivated in various human malignancies (232-234). Thus, HIV might act directly at the genomic level to promote the pathogenesis of HIV-NHL, and this effect is partially independent of the expression of viral oncogenes. Human immunodeficiency virus induces c-myc dysregulation in B cells, and levels of viral RNA and myc expression correlate (235). Expression of the highly oncogenic transcription factor c-myc is enhanced at the transcriptional and translational levels in the presence of the HIV-1 Tat protein (236).
Other factors
Viruses and their components manipulate the expression of host miRNAs and play important roles in cancer pathogenesis. Hsa-miR-200c-3p is significantly downregulated in HIV-associated BL, and the zinc finger E-box-binding homeobox epithelial-mesenchymal transition (EMT) transcription factors ZEB1 and ZEB2 are upregulated and actively help to promote tumor metastasis and invasion (237). Moreover, miRNA-21 is significantly elevated in peripheral B cells of patients infected with HIV, suggesting that it might contribute to the maintenance of B-cell hyperactivation (238). A proteomic analysis of plasma proteins from AIDS-NHL recently identified 20 host proteins and a set of protein combinations that might serve as biomarkers for the pathogenesis of AIDS-NHL (239). This indicates a new direction towards a better understanding of the pathogenesis of HIV-associated lymphoma.
Kaposi sarcoma-associated herpes virus
This virus (human herpesvirus-8, HHV-8) is the causative agent of Kaposi sarcoma (KS) and is associated with the lymphoproliferative disorders primary effusion lymphoma (PEL) and the plasmablastic form of MCD (240, 241). The other types of lymphoma associated with KSHV are KSHV-positive large B-cell lymphoma not otherwise specified (NOS) and GLPD. The geographic distribution of KSHV is variable, with the prevalence of infection being highest in sub-Saharan Africa (seropositivity > 50%), intermediate in Mediterranean, Middle Eastern, and Caribbean countries (seropositivity 20%-30%), and lowest in Asia, Europe, and the USA (seropositivity < 9%) (242). At present, the transmission route of KSHV is not completely clear, but it is believed that infection occurs mainly through saliva (243). Several studies have shown that KSHV can infect almost any type of cell, including epithelial cells, monocytes, macrophages, dendritic cells, T cells and fibroblasts (243). Primary effusion lymphoma is a rare HIV-associated non-Hodgkin lymphoma (NHL) that accounts for ~4% of all HIV-associated NHL. This type of lymphoma tends to localize in the pleural space, pericardium, and peritoneum. It is morphologically variable, with a null lymphocyte immunophenotype and evidence of KSHV infection (244). It is aggressive, rapidly progressive, and associated with a high mortality rate; the average survival of patients with PEL is 2-6 months (245).
Carcinogenic mechanisms
KSHV has evolved to produce a large number of viral gene products that intricately subvert normal cellular pathways.The proteins encoded by KSHV that are thought to have transformative and oncogenic properties include latent proteins, which increase the survival and proliferation of infected cells, and lytic proteins, which are thought to mediate tumor growth.Due to space constraints, this section only summarizes the main mechanisms.
Viral proteins
LANA
The mechanisms underlying KSHV carcinogenesis remain unclear. Analysis of infected cells by immunofluorescence and immunohistochemistry confirmed that LANA is one of the latent proteins consistently present in all KSHV-infected tumor cells of Kaposi's sarcoma, PEL and MCD (251). As a multifunctional protein, LANA is involved in the regulation of transcription, chromatin remodeling, episome maintenance, DNA replication, and the control of latency and lytic-phase reactivation. In addition, LANA is also involved in cell cycle regulation, as described in the review by Wei et al. (251). LANA binds to and inactivates the tumor suppressor proteins TP53 and retinoblastoma (RB1), thereby deregulating cell growth (252). LANA expression also affects MYC levels by binding to its negative regulator GSK-3β, and thus promotes lymphomagenesis (253). Based on current knowledge, LANA appears to provide at least part of the basis for the formation of KSHV-associated lymphomas.
Viral cyclin
Viral cyclin (ORF72) is a viral homolog of cyclin D (254) and plays an important role in lymphomagenesis through several functions. Physiologically, cyclin D forms a complex with the cyclin-dependent kinases (CDK) 4 and 6 that phosphorylates the retinoblastoma protein (Rb) and leads to the release of E2F transcription factors (255). The KSHV vCyclin interacts with CDK6 to promote cell cycle progression (256, 257). Moreover, the vCyclin/CDK6 complex can phosphorylate nuclear histone chaperones, leading to genomic instability (258).
vFLIP
vFLIP is the viral homologue of cellular FLIP. Transgenic mice expressing vFLIP exhibit B-cell transdifferentiation and acquire the ability to express histiocyte/dendritic cell markers (259). These mice have hematological features typical of PEL and MCD. It has previously been found that vFLIP prevents apoptosis by upregulating NF-κB (260). In addition, the study by Lee et al. demonstrated that vFLIP can protect cells by preventing autophagy, further maintaining latency (261).
miRNAs
KSHV miRNAs are generated from 12 pre-miRNA transcripts in the latency region, ultimately producing at least 17 mature miRNAs (263). The biogenesis of KSHV miRNAs and their role in the development of KSHV-associated malignant tumors have recently been described in detail (242, 264). Among the large number of miRNAs encoded by KSHV, KSHV-miRNA-K11 is particularly notable because it shows significant homology to cellular miRNA-155 (265). MiRNA-155/bic overexpression can be observed in many human B-cell lymphomas (266), and its overexpression can induce B-cell lymphomas in mice (267).
Conclusions
Research on virus-driven lymphomagenesis initially focused on the direct transforming activity of single viral oncogenic products. However, cooperation among different viruses also plays crucial roles in the development, survival, and dissemination of lymphoid malignancies. Therefore, many studies have targeted the relationships among the microenvironment, oncogenesis, tumor growth, and dissemination. How EBV and KSHV support each other in terms of persistence and lymphomagenesis has been explained in recent reviews (268, 269). A relationship between EBV and HCV replication markers has not been identified in patients with AIDS (270), in contrast to other known coinfections. Indeed, HCV and HBV coinfection inhibits HCV replication, whereas HCV and HIV co-infection stimulates HCV replication and exacerbates HIV-associated immunosuppression, and EBV and HIV co-infection stimulates HIV replication in CD4+ T cells (271, 272). All of this complicates understanding of the mechanisms through which co-infection causes carcinogenesis. Further elucidating and characterizing the mechanisms of viral induction of lymphoma is a considerable challenge that will require an integrated multidisciplinary approach involving epidemiologists, molecular biologists, and immunopathologists.
FIGURE 1 EBV
FIGURE 1 EBV miRNAs are involved in regulating the host immune response. Biogenesis of EBV-encoded miRNAs is dependent on host mechanisms and comprehensively controls the antiviral adaptive immune response of infected B cells. Immediately after infection, the viral DNA genome is circularized and virally encoded coding and non-coding RNAs are expressed. EBV miRNAs support immune evasion at multiple levels. 1) EBV miRNA-BHRF1-2-5p targets the viral antigen LMP1, driving the expression of PD-L1 and PD-L2 and facilitating viral persistence in host cells. 2) EBV miRNAs also effectively interfere with MHC class I-mediated antigen presentation by targeting the antigen transporter protein TAP2, which is a target of miR-BHRF1-3 and miR-BART17. 3) EBV miRNAs inhibit the expression of lysosomal enzymes (IFI30, LGMN, and CTSB), of which IFI30 and LGMN are under the control of miR-BART1 and miR-BART2, respectively, while CTSB is controlled by miR-BART2 and miR-BHRF1-2, thereby inhibiting antigen presentation to CD4+ T cells via MHC class II. 4) EBV miRNA-BART20-5p inhibits T-bet translation, with secondary inhibition of p53. 5) EBV miRNAs also control the expression of inflammatory cytokines (IL-12, IL-6, and IFN-α), thus inhibiting cytokine-mediated immune responses. 6) miR-BHRF1-3 reduces the secretion of the NK cell ligand CXCL-11, allowing infected B cells to evade immune attack by NK cells and T cells. 7) EBV acts in trans on uninfected macrophages in tumors by secreting exosomes, promoting lymphoma development. CXCL-11, C-X-C motif chemokine ligand 11; ER, endoplasmic reticulum; TCR, T-cell receptor; MHC, major histocompatibility complex; NKG2D, natural killer group 2D; MICB, MHC class I chain-related molecule B.
FIGURE 2
FIGURE 2 Mechanism of HBV causing lymphoma development.
FIGURE 3
FIGURE 3 Mechanism of HCV causing lymphoma development.
TABLE 2
EBV viral gene expression during different types of latent infection.
TABLE 3
Lymphomas in patients infected with HIV include pathological subtypes with different virus-specific associations.
|
v3-fos-license
|
2024-06-26T06:16:56.596Z
|
2024-06-24T00:00:00.000
|
270710386
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00405-024-08758-y.pdf",
"pdf_hash": "a1be02a28e0258bb45ce81d814821f50964fe7f9",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44331",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "61f18ac4920963c8ab7e860ab1cedb257bfd6808",
"year": 2024
}
|
pes2o/s2orc
|
Patterns and timing of recovery from facial nerve palsy after nerve-sparing parotid surgery: the role of neuromuscular retraining
Objectives Among the complications of parotid surgery, facial palsy is frequent and carries a high functional and social impact for the patient. There are few data on the efficacy of facial neuromuscular retraining (FNR) in patients with facial palsy after parotid surgery, and no data exist on its impact on the timing and extent of recovery. Material and methods A retrospective study was conducted on patients undergoing FN-sparing parotid surgery and suffering from postoperative facial palsy. Among 400 patients undergoing surgery between July 2016 and May 2023, those with preservation of the FN and onset of facial palsy were selected. Nerve function was evaluated during a 2-year follow-up using the House-Brackmann (HBs) and Sunnybrook (SBs) scales. Results A total of 46 patients undergoing partial or total parotidectomy were included. At discharge, 18 patients (39.1%) had grade IV to VI paralysis according to the HBs, and the mean SBs value was 54. At 2 and 6 months after surgery, the average Sunnybrook value increased to 76.5 and 95.4, respectively. After 12 months, no patients with grade IV to VI paralysis were present in our cohort. Two years after surgery, only five patients (10.9%) had persistent grade II paralysis according to the HBs. Conclusions Our study supports the efficacy of FNR in the rehabilitation of facial paralysis after nerve-sparing parotidectomy. The greatest functional improvement is achieved within the first 6 months of rehabilitation. A significant improvement is still detected up to 18 months, supporting the importance of prolonged rehabilitation for patients without complete recovery after the first year.
Introduction
Transitory facial nerve (FN) palsy is among the most relevant complications of parotid surgery, with an incidence varying widely between 10 and 65% of cases [1, 2]. Dissection of the salivary parenchyma from the facial nerve branches can cause nerve functional impairment, even though the continuity of the fibers is preserved. An apparently intact FN sheath may hide non-functioning or interrupted axons (neurapraxia or axonotmesis) or even neurotmesis, caused by nerve traction or manipulation during surgery, which increases the nerve stimulation threshold and clinically determines facial nerve palsy (FNP) [3]. In such cases, since axonotmesis is potentially reversible, the chance of spontaneous complete recovery is high. However, in cases with neurotmesis the healing process can last several months and occasionally FN function may not return to normal. A number of therapeutic approaches have been applied to post-surgical FNP, ranging from medical therapy to FN motor rehabilitation, including Neuromuscular Retraining (NMR), a rehabilitation strategy based on active small, slow, symmetric movements and passive external and intraoral facial massages [4]. While several authors have focused on risk factors for iatrogenic FNP [1, 2, 5] and on surgical rehabilitation techniques in case of intraoperative nerve section, very few have focused on the impact of post-operative treatments on the recovery of FN palsy with an intact FN. In particular, the role of NMR after nerve-sparing parotid surgery has never been investigated.
The aim of this study was to assess the pattern and timing of recovery from FNP in a cohort of patients undergoing parotid surgery with FN anatomic preservation and rehabilitated with NMR.The impact of different clinical variables on facial nerve recovery was investigated to identify prognostic factors for complete recovery of nerve function.
Patients
This is a retrospective review of patients treated with partial or total parotidectomy between July 2016 and May 2023 at the Departments of Otorhinolaryngology-Head and Neck Surgery of the University Hospitals of Modena and Bologna, two tertiary referral centres. Parotid surgery was performed in all cases using a standard approach through a modified Blair incision, under continuous intraoperative facial nerve monitoring with the Nerve Integrity Monitor (NIM) system (Medtronic, USA). Surgery of the parotid gland was classified according to the European Salivary Gland Society (ESGS) classification system [6]. Patients who developed FNP despite intraoperative preservation of facial nerve integrity and who were sent to rehabilitation with NMR were included. Patients with normal facial function after surgery, those who underwent intraoperative section of one or more branches, and those who refused FN rehabilitation were excluded.
Assessment of post-operative facial nerve function
All patients were assessed at our institutional FNP rehabilitation clinic by a multidisciplinary team made up of an otolaryngologist and a speech therapist dedicated to FNP, within a mean time of 15 days from surgery (range 2-37). The severity of the facial deficit was classified according to the House-Brackmann rating scale (HBs) and the Sunnybrook scale (SBs). The first evaluates the capacity of movement of one side of the face, and of the different branches of the facial nerve, according to six grades, from I = normal facial function to VI = complete FNP [4, 7, 8]. The SBs is more complex but more specific, as it provides a global evaluation derived from the assessment of the individual areas of the face compared with the unaffected side [9]. It evaluates resting symmetry, the symmetry of voluntary movements, and the presence or absence of synkinesis. The SBs ranges from 0 to 100, with 100 representing completely normal facial function [4, 7, 10].
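To make the 0-100 range of the Sunnybrook composite concrete, a minimal sketch of how such a composite score is commonly computed is shown below. It is written in Python for illustration only and is not the validated scoring sheet used in the study; the item counts, weights and clamping to 0-100 are assumptions based on the published structure of the scale.

```python
def sunnybrook_composite(voluntary_items, resting_items, synkinesis_items):
    """Illustrative Sunnybrook-style composite score (not the official scoring sheet).

    voluntary_items  : five standard expressions, each rated 1-5 (5 = complete movement)
    resting_items    : three resting-symmetry items (eye, cheek, mouth), 0 = symmetric
    synkinesis_items : five synkinesis ratings, each 0-3 (3 = severe)
    """
    assert len(voluntary_items) == 5 and len(resting_items) == 3 and len(synkinesis_items) == 5
    voluntary_score = sum(voluntary_items) * 4    # maximum 100
    resting_penalty = sum(resting_items) * 5      # weighting assumed from the published scale
    synkinesis_penalty = sum(synkinesis_items)    # maximum 15
    composite = voluntary_score - resting_penalty - synkinesis_penalty
    return max(0, min(100, composite))            # clamped to the 0-100 range quoted in the text


# Example: moderate palsy with mild synkinesis on the affected side
print(sunnybrook_composite([3, 3, 4, 2, 3], [1, 0, 1], [1, 0, 1, 0, 0]))  # 48
```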
Facial palsy rehabilitation program
The rehabilitation method through NMR was applied in the selected cohort of patients, regardless of the grade of postoperative facial palsy. The patient was educated to perform selective motor control strategies, first in the presence of the dedicated speech therapist and then independently at home. The main objectives of this program are the reduction of synkinesis and of hypertonic contraction of the facial muscles, and the improvement of facial symmetry. Moreover, the use of mirror exercises allows the sensory input to enhance the central control of the movements of single mimic muscles, and thus promotes neural adaptation. The facial massages are effective in preventing post-paretic syndrome, and the small movements allow a reduction of the recruitment and hyperactivity of adjacent muscles, with an improvement in fine muscular coordination. The slow pattern of contraction allows the patient to have adequate feedback, by carefully observing the movement and correcting the velocity or the strength of contraction, as required. Finally, the symmetry of movements allows a physiologic activation of the affected side and avoids muscular over-contraction on the healthy side [10]. For the purpose of the present study, grade II or III FNP according to the HBs was defined as "mild", while FNP equal to or worse than grade IV according to the HBs was considered "severe". After the initial treatment meeting, during which the patient was taught the NMR methodology, the patient had to practice the exercises every day at home. A complete clinical assessment of nerve function was performed after surgery, at the beginning of rehabilitation (T0), and at 2, 6, 12, 18 and 24 months after surgery (namely T2, T6, T12, T18 and T24). At each follow-up visit, the HBs and SBs were applied, and data were prospectively recorded in a digital database.
Statistical analysis
The statistical analysis was conducted with SPSS 19.0 for Windows (IBM Inc., USA) and JASP for Windows, version 0.16.3.0. The normal distribution of continuous variables was assessed by means of the Shapiro-Wilk test, and distribution parameters (mean, median, standard deviation, and range) were calculated. Comparisons between categorical variables were carried out with the Chi-square test or Fisher's exact test, as appropriate. Comparisons between continuous variables with normal distribution were made with Student's T-test, while the Mann-Whitney test was used for those without normal distribution. The temporal evolution of facial palsy evaluated with the SBs or the HBs was assessed with repeated-measures ANOVA. Variances were tested with Levene's test for equality of variances. The multivariate analysis was conducted with linear regression, considering the global value of the SBs or HBs at the last available follow-up. Linear regressions for the Sunnybrook scale at intermediate timepoints (e.g., T2) were conducted to check for differences among variables after observing different trends in the repeated-measures ANOVA curves. Differences were considered statistically significant for p values ≤ 0.05, with the confidence interval set at 95%.
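The analysis pipeline described above can be illustrated with the short Python sketch below. This is not the code used by the authors (the study used SPSS and JASP); it simply applies equivalent tests from scipy and statsmodels to a small synthetic dataset, with invented column names such as patient, timepoint, SB and severe_at_T0.

```python
# Hypothetical re-implementation of the described analysis in Python.
# Requires: numpy, pandas, scipy, statsmodels.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
timepoints = ["T0", "T2", "T6", "T12", "T18", "T24"]

# Synthetic long-format data: one Sunnybrook score per patient per timepoint.
rows = []
for pid in range(46):
    severe = pid < 18                      # roughly the reported share of severe palsies
    base = 35 if severe else 65
    for i, tp in enumerate(timepoints):
        score = min(100, base + 12 * i + rng.normal(0, 5))
        rows.append({"patient": pid, "timepoint": tp, "SB": score, "severe_at_T0": severe})
df = pd.DataFrame(rows)

# Normality of the 2-month scores (Shapiro-Wilk), variance check, then a two-group test.
sb_t2 = df[df.timepoint == "T2"]
print(stats.shapiro(sb_t2["SB"]))
severe_scores = sb_t2[sb_t2.severe_at_T0]["SB"]
mild_scores = sb_t2[~sb_t2.severe_at_T0]["SB"]
print(stats.levene(severe_scores, mild_scores))
print(stats.ttest_ind(severe_scores, mild_scores))   # or stats.mannwhitneyu(...) if non-normal

# Repeated-measures ANOVA on the temporal evolution of the Sunnybrook score.
print(AnovaRM(df, depvar="SB", subject="patient", within=["timepoint"]).fit().anova_table)

# Linear regression for the score at an intermediate timepoint (e.g. T2).
print(smf.ols("SB ~ severe_at_T0", data=sb_t2).fit().summary())
```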
Ethical committee
This retrospective multicentric study was approved by the Internal Review Boards (IRB) of the University Hospitals of Modena and Bologna (901/2021/OSS/AOUMO; 160/2022/OSS/AOUBO). The study was performed according to the Declaration of Helsinki.
Results
Among the 400 patients who underwent parotid surgery in the considered period at the two institutions, 46 patients (16 men and 30 women) met the inclusion criteria and were followed for a mean period of 12 months (range 2-24 months). The mean age at surgery was 54 years (range 17-86). Table 1 reports the details of the surgery performed. The FN was reported as responding at the intraoperative final stimulation in 44 patients, while in 2 cases it was not responding. Unfortunately, the intensity of the final stimulation in mV was not retrievable for all patients.
The histological diagnosis of the surgically removed lesions consisted of neoplastic lesions in 45 patients (98%), of which 12 were malignant and 33 benign, as detailed in Fig. 1. It is important to specify that the patients with malignancies included in the study did not undergo sacrifice of the facial nerve.
Figure 2 reports the evolution of the facial palsy distribution according to HBs from the first evaluation to the end of follow-up, while Fig. 3 shows the trend of the mean value of SBs during follow-up.At the first evaluation at the FNP rehabilitation clinic, the average Sunnybrook rating was 54.6 and 28 patients (60%) had a mild FNP.A worsening of the FNP compared to the immediate post-operative time was detected using the HBs.The topographical distribution of facial paralysis involved cervical-facial branches in 30 patients, temporo-facial branches in 2 patients, and both major nerve trunks in 14 patients.
At the 2-month follow-up, FN function was reassessed in 34 patients, and the mean value of the Sunnybrook scale had increased to 76.5. The number of patients not evaluated at 2 months after surgery was due to geographical constraints; however, it should be noted that these same patients underwent NMR at home as per the speech therapist's recommendation. Specifically, 45.7% of patients (21/34) had a mild degree of paralysis and 6 of them (6/34, 13%) had already reached complete recovery. Six months after surgery and NMR, the mean Sunnybrook value was 95.4. Furthermore, 21 patients had fully recovered from facial paralysis, and the highest degree of deficit in the cohort was grade III according to the HBs (24 patients grade II and 1 patient grade III). At the 12-month follow-up, the full recovery rate of FN function was 72% (33/46) and the worst degree of paralysis was grade II according to the HBs (13/46). At the 18- and 24-month follow-up, the full recovery rates were 85% and 89%, respectively. Two years after surgery, only five cohort patients (10.9%) had persistent grade II paralysis according to the HBs. According to the T-test with paired samples (Table 2), the improvement in facial nerve function according to the SBs and HBs was statistically significant when comparing paired follow-up periods up to 18 months (p < 0.05). On the contrary, when comparing FN function between 12 and 18 months, and between 18 and 24 months, no significant difference was detected. When considering the Sunnybrook scale with the independent-samples T-test, a statistically significant recovery of patients with moderate-severe paralysis was observed in the first 2 months of rehabilitation (Table 3). In summary, after an early worsening between discharge and the first evaluation, facial function improved significantly at each follow-up up to 18 months.
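As a small worked example of how recovery rates of the kind reported in this paragraph can be tabulated from a patient-level table, the following pandas sketch computes the share of patients reaching House-Brackmann grade I at each timepoint. The tiny inline dataset and column names are invented for illustration and are not the study data.

```python
import pandas as pd

# Hypothetical patient-level House-Brackmann grades (1 = full recovery) per timepoint.
data = pd.DataFrame({
    "patient":   [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "timepoint": ["T2", "T12", "T24"] * 3,
    "HB_grade":  [3, 1, 1, 4, 2, 1, 2, 1, 1],
})

# Full recovery rate per timepoint = share of patients with HB grade I.
recovery = (
    data.assign(recovered=data["HB_grade"].eq(1))
        .groupby("timepoint", sort=False)["recovered"]
        .mean()
        .mul(100)
        .round(1)
)
print(recovery)  # percentage of fully recovered patients at each timepoint
```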
Medical therapy
The rehabilitation process was supported by medical therapy based on tapering oral steroids (prednisone 1 mg/kg/day) for a mean time of 13 days (range 7-30 days) and vitamin B complex oral supplements for a mean time of 20 days (range 15-30 days). This therapy was administered in most of the patients (82.6%) with moderate-severe FNP according to the HBs.
We observed a significant difference in recovery after 2 months, in favour of patients not treated with medical therapy, which, however, disappeared after 6 months. The difference in the Sunnybrook global value between patients who did and did not undergo medical treatment was tested with the Mann-Whitney test at 2 months, at 6 months, and at the other timepoints. The p value was 0.045 at 2 months and 0.89 at 6 months.
Univariate and multivariate analysis of other predictors
To identify the possible factors with an impact on facial palsy improvement during the follow-up, the following variables were included in the univariate analysis: severity of the paralysis, age at the time of surgery, sex of the patient, type of parotidectomy, and responsiveness of the facial nerve to the NIM at the end of surgery. The severity of the palsy at T0 was a significant predictor of recovery at 2 months, even when weighted for patient's age, whereas the same variable did not prove to be a significant predictor at all other timepoints.
Discussion
Patients with mild vs. severe paralysis according to HBs are reported in Fig. 4, which shows a difference between the average HB values of the patients with mild and severe FP until the 2-month follow-up. At the 6-month follow-up the two groups showed a similar HBs grading distribution. This was confirmed by the independent-samples t-test analysis on mean values of SBs, for which a steep recovery of patients with severe paralysis was found within the first two months, reaching similar average SBs values between the two groups at the 6-month evaluation (Tables 2 and 3). Eventually, the functional recovery of the FN during the different timepoints of the follow-up was not influenced by the age at the time of surgery, the sex of the patient, the type of parotidectomy, the responsiveness of the FN to NIM at the end of surgery, or the medical treatment of the paralysis. In addition, the multivariate analysis with linear regression showed no significant impact of the type of surgery, radiotherapy, sex, age, or time from surgery to the beginning of rehabilitation at any timepoint from 2 to 24 months. The only variable which resulted in being a significant predictor at 2 months was mild (HB ≤ 3) vs. moderate-severe palsy (HB > 3) at T0 (Fig. 4).
Fig. 4 Trends of recovery from facial nerve palsy with neuromuscular retraining, comparing patients affected by mild (equal or less than grade III) to severe (higher than grade III) facial palsy
Facial nerve palsy is a major complication of parotid surgery, which can occur even in case of intraoperative preservation of nerve integrity. The lack of voluntary (motor) and involuntary (emotional) facial expression, deficits in stomatognathic functions (phonation, chewing, swallowing, yawn, smile, bite) and eyelid function are among the consequences of FNP. Furthermore, failure to treat FNP may be associated with incomplete or aberrant nerve regeneration and the development of dysfunctional facial movements, which could further impair the patient's quality of life. To maximize the patient's functional recovery, some rehabilitation strategies have been developed, among which NMR. This rehabilitation system was developed in the Netherlands in the 1970s as a non-surgical therapy for facial movement recovery and the prevention of muscle atrophy, whether used individually or as a complement to surgical treatment [11]. Through NMR, based on sensory stimulation, passive and active muscle exercises, and biofeedback, it is possible to progressively promote the reorientation of neural connections and the development of new ways of controlling the facial muscles, by strengthening existing synapses or through synaptogenesis. The therapeutic intervention strategy for acute FNP includes four basic therapeutic phases: (i) initiation; (ii) facilitation; (iii) movement control; (iv) relaxation.
This retrospective multicentre study evaluates the evolution of FNP in patients who underwent parotid surgery with intraoperative nerve preservation, rehabilitated with the NMR methodology. During the follow-up, a trend of improvement in both HBs and SBs values was observed. At first, different grades of paralysis according to the HBs scale were recorded (as shown in Fig. 2), while after 6 months only mild paralysis (grades II and III HBs) was identified, and at the end of the follow-up complete resolution of the paralysis was obtained in 89.1% of cases. A statistically significant difference in the values recorded between paired follow-up sessions was found during the first year after surgery for both facial grading systems. In addition, a statistically significant improvement in FNP was found in the following interval (12-18 months), while in the last interval considered (18-24 months) no improvement was shown for either facial grading system. These results allow some interesting considerations regarding the timing and the percentage of recovery to be expected in the rehabilitation from FNP. According to different studies, in most cases of mild FNP spontaneous improvement in the first year after the onset of the paralysis is possible [12]. Thereafter, spontaneous improvements are limited, and other therapeutic strategies should be offered to the patients [13]. Data regarding the pattern of recovery could enhance preoperative counselling: assessing the impact of facial nerve rehabilitation (FNR) on different degrees of facial nerve paralysis (FNP) could elucidate whether such treatment is appropriate even in mild cases and how the frequency of the rehabilitation sessions could be targeted to the patient [14,15]. In agreement with the literature, this study confirms that even with NMR rehabilitation maximum improvement is achieved within the first year, but with the application of NMR treatment there is a possibility of improvement even up to 18 months. Beyond 18 months, it is appropriate to evaluate the grading of paralysis achieved by the single patient to define the continuation of treatment and to consider surgical approaches (dynamic or static techniques) or chemodenervation.
Another important consideration can be made by comparing mild and severe paralysis trends. From the analysis of the trends of mean values according to SB, a substantial difference between the two patient groups existed only until the first follow-up performed at 2 months. This finding suggests that, during NMR rehabilitation treatment, patients with moderate-severe paralysis experienced rapid improvement within the first 6 months of rehabilitation, and thereafter maintained a trend comparable to that of patients with mild-to-moderate paralysis, as evidenced by the similar mean SB scores. This finding is meaningful, because one can expect a complete recovery of FP in cases of mild neural injury, while complete recovery is less constant when severe facial palsy occurs after surgery, and some degree of synkinesis could be expected in these patients in the long term. The occurrence of complete or almost complete recovery at 18 and 24 months in the moderate to severe FP group supports the efficacy of NMR in reducing or preventing synkinesis, as shown by other studies.
It was found that the portion of the face most affected by paralysis is the one innervated by the cervical-facial branch of the nerve, at the level of the marginalis mandibulae nerve. The preponderance of paralysis at this site is probably due to the anatomical peculiarities of this nerve branch (thin diameter, long course, and lack of anastomotic arches with other branches), according to literature data [13][14][15][16]. Among the other characteristics considered, the patient's gender and the age at the time of surgery proved to be variables not significantly correlated with recovery of the facial nerve palsy according to the SB scale. Advanced age is commonly a risk factor for the development of post-operative complications, while young patients, even with post-parotidectomy complications, have better functional reserves and healing abilities than older patients [17]. From this evidence it might seem that NMR exploits patterns of recovery of nerve function that are not strictly dependent on gender and age, even if further studies are required to confirm such results. A more significant improvement could be observed in young patients, and in this case a modulation of NMR intensity could be proposed according to age. Analysis of the trend of paralysis during follow-up did not demonstrate a direct correlation with the type of intervention performed either. This suggests that the pattern of improvement could be influenced more by the degree of the paralysis at onset than by the extent of surgery. Finally, all patients included in this study were surgically monitored using the NIM. Recovery during follow-up was investigated by comparing patients responsive and non-responsive to NIM at the end of surgery; however, no statistically significant correlations were found between these two groups of patients.
Our study also has some limitations. Firstly, the lack of a control cohort, which would have allowed us to compare the results obtained with a group of patients who did not undergo NMR, in order to determine the real added value of this treatment in the selected patients. Thus, a comparison between NMR and other rehabilitation methods may be desirable in future studies. In its present form, this study represents a snapshot of the improvement of FNP in a cohort of strictly selected patients who underwent the same type of rehabilitation, across a 24-month period. Furthermore, the limited number of patients included in the study, as well as the slight heterogeneity related to the different follow-up of each patient and the lack of a validated objective tool for the assessment of facial nerve palsy, represent other important limitations. In the future, controlled studies including larger cohorts of patients may elucidate how NMR could be tailored to the different patients affected by iatrogenic FNP.
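As a rough illustration of the statistical comparisons reported earlier (paired t-tests between consecutive follow-up evaluations and an independent-samples t-test between mild and moderate-severe palsy groups), the sketch below shows how such tests are typically run in Python; the scores, group sizes and choices shown here are illustrative assumptions, not the study's data or analysis code.

```python
from scipy import stats

# Hypothetical Sunnybrook-like scores; higher = better facial function.
sb_2_months = [45, 52, 60, 38, 70, 55, 48, 62]
sb_6_months = [68, 75, 82, 60, 90, 78, 70, 85]

# Paired t-test between two consecutive follow-up evaluations of the same patients.
t_paired, p_paired = stats.ttest_rel(sb_2_months, sb_6_months)

# Independent-samples t-test between mild and moderate-severe groups at one timepoint.
sb_mild = [70, 75, 80, 72, 78]
sb_severe = [40, 45, 52, 38, 47]
t_ind, p_ind = stats.ttest_ind(sb_mild, sb_severe, equal_var=False)

print(f"paired: t={t_paired:.2f}, p={p_paired:.4f}")
print(f"independent: t={t_ind:.2f}, p={p_ind:.4f}")
```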
Conclusion
In our cohort, NMR was associated with a complete recovery rate as high as 89.1% and a near-complete recovery in the remaining patients. The negligible rate of synkinesis in the long term, even in patients with severe post-operative FP, may support the use of NMR in patients with FP after parotid surgery. The greatest rate of functional recovery is achieved within the first 6 months of rehabilitation, but significant improvement is observed until 18 months. Thus, a longer rehabilitation time may be beneficial in patients with incomplete recovery after 12 months. Further controlled studies are warranted to assess the real impact of NMR on these patients. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Fig. 2 Evolution of facial palsy according to the House-Brackman scale (HBs) during the follow-up
Fig. 3 Evolution of facial nerve palsy according to the Sunnybrook scale, during the follow up
Funding
Open access funding provided by Università degli Studi di Modena e Reggio Emilia within the CRUI-CARE Agreement.
Table 1
Surgical data regarding the operated patients. ESGS: The European Salivary Gland Society
Table 2
Paired t-test according to paired facial nerve function evaluations during follow-up, using the House-Brackman and Sunnybrook scales
Table 3
Independent-samples t-test of facial nerve function using the
|
v3-fos-license
|
2022-02-11T16:13:22.687Z
|
2022-02-09T00:00:00.000
|
248849474
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ece3.8921",
"pdf_hash": "081879ce9cc948415b6a6408b4818c8946955bb4",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44335",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"sha1": "c1faea4224b6859e9017129916bc1652e7cc89e6",
"year": 2022
}
|
pes2o/s2orc
|
Dracula’s ménagerie: A multispecies occupancy analysis of lynx, wildcat, and wolf in the Romanian Carpathians
Abstract The recovery of terrestrial carnivores in Europe is a conservation success story. Initiatives focused on restoring top predators require information on how resident species may interact with the re-introduced species as their interactions have the potential to alter food webs, yet such data are scarce for Europe. In this study, we assessed patterns of occupancy and interactions between three carnivore species in the Romanian Carpathians. Romania houses one of the few intact carnivore guilds in Europe, making it an ideal system to assess intraguild interactions and serve as a guide for reintroductions elsewhere. We used camera trap data from two seasons in Transylvanian forests to assess occupancy and co-occurrence of carnivores using multispecies occupancy models. Mean occupancy in the study area was highest for lynx (Ψwinter = 0.76, 95% CI: 0.42–0.92; Ψautumn = 0.71, CI: 0.38–0.84) and wolf (Ψwinter = 0.60, CI: 0.34–0.78; Ψautumn = 0.81, CI: 0.25–0.95) and lowest for wildcat (Ψwinter = 0.40, CI: 0.19–0.63; Ψautumn = 0.52, CI: 0.17–0.78). We found that marginal occupancy predictors for carnivores varied between seasons. We also found differences in predictors of co-occurrence between seasons for both lynx-wolf and wildcat-wolf co-occurrence. For both seasons, we found that conditional occupancy probabilities of all three species were higher when another species was present. Our results indicate that while there are seasonal differences in predictors of occupancy and co-occurrence of the three species, co-occurrence in our study area is high. Terrestrial carnivore recovery efforts are ongoing worldwide. Insights into interspecific relations between carnivore species are critical when considering the depauperate communities they are introduced in. Our work showcases that apex carnivore coexistence is possible, but dependent on protection afforded to forest habitats and their prey base.
| INTRODUCTION
Terrestrial carnivores are some of the most imperiled species today due to their large home range requirements, high metabolic demands, sensitivity to habitat fragmentation, and persecution by humans (Crooks, 2002;Palomares & Caro, 1999;Ripple et al., 2014;Woodroffe & Ginsberg, 1998). Carnivores can also be important top-down regulators in ecological communities (Beschta & Ripple, 2009;Ripple & Beschta, 2006;Ripple & Beschta, 2012). The loss of key carnivore species can have devastating ecosystem effects (Effiom et al., 2013;Ripple et al., 2014) and changes in abundance or occurrence of carnivores can trigger trophic cascades (Ripple & Beschta, 2012). As such, the recovery of apex predators as a conservation tool to restore ecosystem functions (termed trophic rewilding) has become increasingly popular (Jørgensen, 2015;Seddon et al., 2014). Trophic rewilding is an ecological restoration strategy used to promote self-regulating ecosystems (Svenning et al., 2016).
Rewilding efforts in the context of apex predators requires not only an understanding of their ecological interactions within the carnivore guild but also the broader context of these interactions including sources of anthropogenic impacts. Many apex predators readily reestablish in human-dominated landscapes and exhibit potential coexistence with humans (Chapron et al., 2014;Lamb et al., 2020). Although the effects of apex predator recovery in natural landscapes are relatively well understood, there are significant knowledge gaps regarding the effects of their recovery in shaping species interactions (both intraguild and across trophic levels) in human-dominated landscapes (Dorresteijn et al., 2015;Kuijper et al., 2016). Interactions between carnivores are complex in nature and are integral to shaping the ecology and structure of wildlife communities. Therefore, examining such interactions in landscapes that harbor viable carnivore populations may provide important insights into the effects of carnivore recovery on the mesocarnivore communities that often dominate landscapes where apex predators have been eliminated.
Grey wolf (Canis lupus) and Eurasian lynx (Lynx lynx) are top predators in many temperate ecosystems in Europe and Asia, but their co-occurrence has been severely limited by extirpation of one species (most often wolf). This is particularly the case for most of Western and Central Europe due to a long history of human habitation and persecution of carnivore species. Both wolves and Eurasian lynx are recovering in Europe's landscapes (Chapron et al., 2014; Kaczensky et al., 2013), either through natural range expansion (wolf) or reintroductions and population augmentation (lynx). The European wildcat (Felis silvestris) is a mesocarnivore that was once common in Europe, has also been extirpated across much of its range, and is currently at the core of reintroduction programs in some European Union states. In this context, the Romanian Carpathians represent one of the few natural areas in Europe that still harbor intact viable populations of all three species and serve as a stronghold for carnivore populations in Europe, despite common anthropogenic influences (hunting, forestry, farming, and livestock production) (Popescu et al., 2016; Salvatori et al., 2002).
While no work has been conducted on understanding the spatial relations and interactions between these three species simultaneously, research exists on pairwise interactions between species, particularly for lynx and wolf. Lynx and wolf are sympatric across most of their range and there is some diet overlap between them.
Research addressing coexistence between these species differs in its findings, but recent studies looking at spatial interactions between these species in Europe found that these two apex predators coexist and competition between them is low (Wikenros et al., 2010). In Poland, lynx and wolf territories overlap, and researchers concluded that the co-occurrence of these two species was facilitated by heterogeneous habitat and specialization on different prey. These predictors, habitat heterogeneity and diet, also explain competitive interactions between canids and felids in North America, with a lack of interference competition in heterogeneous habitat. Therefore, we expect to observe similar coexistence (high co-occurrence) and little evidence of interference competition (neutral or positive conditional occupancy values) between lynx and wolf in our study area.
Additionally, we expect to observe differences in co-occurrence based on seasonal changes in these species' behaviors. For example, the daily movement distances of male lynx are greater during the mating season (January-March) and for female lynx are greater during periods of extensive kitten care (May-August) (Jedrzejewski et al., 2002), which could cause increased interactions with wolves as lynx cover a larger geographic area during these periods. Research on wildcats is scarce, but a study conducted in the Jura Mountains of central Europe found no evidence of avoidance between lynx and wildcat (Hercé, 2011). No published research examines interactions between wildcats and wolf. Given the size difference between wolf and wildcats and their different diets, it is likely that the relationship between wildcats and wolf will be similar to that of wildcats and lynx.
KEYWORDS: carnivores, coexistence, human-dominated, interactions, landscapes, multispecies occupancy
TAXONOMY CLASSIFICATION: Conservation ecology; Population ecology; Spatial ecology; Trophic interactions
In this study, we aimed to address these knowledge gaps by studying the intraguild interactions of two apex carnivores, the Eurasian lynx and the grey wolf, and a mesocarnivore, the wildcat, in the Romanian Carpathians using multispecies occupancy models (Rota et al., 2016). Unlike traditional occupancy modeling, multispecies occupancy models allow for the estimation of co-occurrence probabilities for more than two species and do not assume asymmetric interactions (i.e., dominant and subordinate species). This is useful for estimating co-occurrence probabilities between species for which there is no a priori knowledge about interspecific relationships or for which there is no obvious dominant or subordinate species. Multispecies occupancy models also allow for the estimation of marginal occupancy (occupancy of a single species irrespective of other species) and conditional occupancy (occupancy of a single species based on the presence or absence of another species) probabilities in relation to variables of interest (e.g., altitude). This approach has been used effectively to assess habitat use and interspecific interactions of carnivores in a variety of landscapes (Dechner et al., 2018; Lombardi et al., 2020; Van der Weyde et al., 2018). Previous research on lynx-wolf and lynx-wildcat interactions suggests a high capacity for coexistence, low interspecific competition, and little to no intraguild killing.
However, this research is limited and there has been no work on wolf-wildcat dynamics or on interactions of lynx, wildcat, and wolf in the same region. Additionally, none of the published studies in Europe have been conducted in an area with a fully intact carnivore guild, whereas the Romanian Carpathians have viable, reproducing populations of many large carnivores and meso-carnivores that have not been extirpated (see study area). This information is crucial to understanding the effects of apex predators on mesocarnivores and the carnivore guild. By using a multispecies occupancy approach, we can analyze complex intraguild interactions and better understand competition and coexistence patterns. Results can elucidate variables and thresholds important for occurrence and coexistence of elusive species and help inform management or reintroduction efforts. Our specific objectives were as follows: (1) evaluate seasonal predictors for occupancy of each species, (2) characterize the spatial relationships (co-occurrence) of each species in winter and autumn, and (3) identify predictors that facilitate co-occurrence. Specifically, we analyzed the effects of potentially dominant apex carnivores on the occupancy and detection of a mesocarnivore to understand the potential impacts that reintroductions of apex predators may have on smaller carnivores. We also evaluated seasonal changes in marginal and co-occurrence probabilities to better understand how species persist and interact under different environmental conditions.
| Study area
The study area is situated in the Southern Carpathians, Romania, covering 1200 km 2 in the eastern part of the Făgăraș Mountains, Piatra Craiului, and parts of Leaota Mountains (Figure 1). The altitude of the study area ranges from 600 to 2400 m; forests cover most of the area (62%), along with a mosaic of urban-rural landscape and agriculture with significant areas of natural vegetation (22%), and alpine grasslands and subalpine shrubs (16%) (Iosif et al., 2022).
| Camera trapping and environmental variables
We divided the study area into a grid of 2.7 × 2.7 km cells ( Figure 1) and removed cells with more than ⅔ of their area exceeding 1800 m altitude and cells more than ½ of their area covered by urban landscape features. From the remaining cells, we sampled every other cell, when it was not possible to reach a selected cell, we used an adjacent cell. Each sampled cell contained a trap station, randomly located within the cell. We conducted two seasons of monitoring: (1) December 17th, 2018, to March 31st, 2019 (winter) and (2) October 9th, 2019, to January 15th, 2020 (autumn). We installed 64 camera trap stations during winter, and 76 during autumn, with high spatial overlap between seasons ( Figure 1). Each trap station had two opposite cameras installed at a height of 40 to 60 cm positioned toward animal paths. We used two camera models per trap station, a CuddeBack C1 Model 1279 with white flash for highquality color pictures in night conditions, and a Bushnell Trophy infrared camera. Camera traps were installed on animal trails along mountain ridges, mid-slopes, upper valleys, and bottom of slopes to detect carnivores at various altitudes/habitats. Camera traps were installed 1-2 weeks prior to the start of monitoring to account for additional anthropogenic disturbance from the camera installation process. We checked camera trap stations every two weeks to replace batteries and SD cards.
At each camera trap location, we recorded the presence or absence of anthropogenic disturbance (i.e., logging or settlements) as a binary variable for species detection and occurrence. We also recorded altitude (m) via GPS and extracted distance to stream (m), distance to settlement (m), and distance to roads (m) from the camera trap location using Geographic Information Systems (ArcGIS 10.7, ESRI, Redlands CA). Within a 500-meter buffer around each camera trap location (Lombardi et al., 2020), we calculated the density of local roads (km/km²), the proportion of forested area and a terrain ruggedness index (TRI) (Riley et al., 1999). Full covariate descriptions and summaries are available in Table 1.
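For concreteness, a minimal sketch of the Riley et al. (1999) terrain ruggedness index for a single cell is given below; the 3 × 3 elevation window is invented, and the snippet only illustrates the quantity summarized within the 500-m buffers, not the GIS workflow actually used.

```python
import numpy as np

def tri(window):
    """Terrain ruggedness index of the centre cell of a 3x3 elevation window (m).

    TRI is the square root of the summed squared elevation differences
    between the centre cell and its eight neighbours (Riley et al., 1999).
    """
    centre = window[1, 1]
    diffs = window - centre          # the centre cell itself contributes zero
    return float(np.sqrt(np.sum(diffs ** 2)))

# Hypothetical 3x3 digital elevation model window (metres)
dem_window = np.array([[910.0, 925.0, 940.0],
                       [905.0, 920.0, 935.0],
                       [900.0, 915.0, 930.0]])
print(round(tri(dem_window), 1))     # larger values indicate more rugged terrain
```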
| Occupancy modeling
We implemented a multispecies occupancy model of two or more interacting species (Rota et al., 2016) in program R 3.5.1 (R Core Team, 2021) via package unmarked (Fiske & Chandler, 2011) to explore how environmental and anthropogenic variables affect the marginal occupancy (occupancy without accounting for interactions with other species), co-occurrence (overlap in marginal occupancy between species), and conditional occupancy (effects of each species presence on other species detection and occupancy) of lynx, wildcat, and wolf in the Romanian Carpathians. Unlike traditional cooccurrence models, multispecies occupancy models do not require a priori assumptions of asymmetric interactions; therefore, species were not considered dominant or subordinate to one another (Rota et al., 2016). Data from the two seasons were analyzed separately, and sessions were divided into 14-day sampling occasions, with the winter and autumn seasons having eight and seven sampling occasions respectively. Camera trap photos were cataloged by FCC staff and volunteers, and the date, time, location, and species identification were recorded for each animal detection (Iosif et al., 2022).
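The sketch below illustrates how camera-trap records can be collapsed into the 14-day detection occasions described above; station names, species records and dates are hypothetical, and the actual workflow (cataloguing by FCC staff and building unmarked data frames) is not reproduced here.

```python
from datetime import date

def detection_history(records, species, start, n_occasions, occasion_days=14):
    """Build a per-station 0/1 detection history for one species.

    records: iterable of (station, species, date) tuples.
    """
    stations = sorted({r[0] for r in records})
    hist = {s: [0] * n_occasions for s in stations}
    for station, sp, d in records:
        if sp != species:
            continue
        occ = (d - start).days // occasion_days
        if 0 <= occ < n_occasions:
            hist[station][occ] = 1
    return hist

records = [("ST01", "lynx", date(2018, 12, 20)),
           ("ST01", "lynx", date(2019, 1, 9)),
           ("ST02", "wolf", date(2019, 2, 2))]
print(detection_history(records, "lynx", date(2018, 12, 17), n_occasions=8))
# {'ST01': [1, 1, 0, 0, 0, 0, 0, 0], 'ST02': [0, 0, 0, 0, 0, 0, 0, 0]}
```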
Covariates were checked for correlation using Pearson's correlation tests and Pearson's chi-squared test (for numerical and factors respectively), those with high correlations r > .7 were not included in the same models for the same parameter. We first explored combinations of five detection covariates for species-specific detection probabilities (Table 1) by comparing models with the same marginal occupancy parameterization for each species. Detection covariates were kept the same for all three species as we did not have a biological reason to vary them between species. We also included the latent presence/absence of every other species as species-specific detection covariates (e.g., lynx detection predicted by the presence/ absence of wildcat and wolf). Although multispecies occupancy models do not assume asymmetric interactions between species, we wanted to explore the possibility that dominant species could exist in our system and affect the presence of other species. Therefore, we also included species-specific detections of lynx as a function of the latent presence/absence of potentially dominant wolf, and wildcat as a function of lynx and wolf. From these models, we determined a best model for each season based on Akaike information criterion (AIC), using R package MuMIn (Bartoń, 2020). We included the top detection covariates in the models exploring marginal occupancy and co-occurrence. We then ran a series of models to assess the marginal occupancy of our three species using environmental and anthropogenic variables ( Table 1) that were determined a priori and we hypothesized it would affect the marginal occupancy of each species. The candidate set of marginal occupancy models was similar for both seasons, models were only removed if variation in covariates was not great enough to allow estimation (i.e., models produced NAs or unreasonable estimates and standard errors). We compared the marginal occupancy models for each season using AIC to identify the best covariates explaining occupancy of each individual species. Using the top covariates from the marginal occupancy analysis, we ran a series of additional candidate models that reflected a priori hypotheses regarding pairwise co-occurrence between lynx and wildcat, lynx and wolf, and wildcat and wolf, and compared the models using AIC and biological relevance (Table S1). Due to data limitations (small sample size), we did not implement a three-species co-occurrence parameterization.
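As a toy illustration of how the Rota et al. (2016) parameterisation links natural parameters to the occupancy quantities reported below, the following sketch computes marginal, co-occurrence and conditional probabilities for a two-species case; the parameter values are invented, and this is not a substitute for fitting the models in unmarked.

```python
import numpy as np
from itertools import product

def state_probs(f1, f2, f12):
    """Latent-state probabilities for two species under a log-linear
    (multivariate Bernoulli) parameterisation with natural parameters
    f1, f2 (single-species terms) and f12 (pairwise interaction)."""
    states = list(product([1, 0], repeat=2))            # (species1, species2)
    log_w = [f1 * s1 + f2 * s2 + f12 * s1 * s2 for s1, s2 in states]
    w = np.exp(log_w)
    return dict(zip(states, w / w.sum()))

p = state_probs(f1=0.8, f2=0.4, f12=0.6)                   # invented values
psi_sp1 = p[(1, 1)] + p[(1, 0)]                             # marginal occupancy of species 1
psi_both = p[(1, 1)]                                        # co-occurrence probability
psi_sp1_given_sp2 = p[(1, 1)] / (p[(1, 1)] + p[(0, 1)])     # conditional occupancy
print(round(psi_sp1, 2), round(psi_both, 2), round(psi_sp1_given_sp2, 2))
```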
| Marginal occupancy
Mean occupancy for both seasons was highest for lynx (Ψwinter = 0.76; Ψautumn = 0.71) and wolf (Ψwinter = 0.60; Ψautumn = 0.81) and lowest for wildcat (Ψwinter = 0.40; Ψautumn = 0.52). In autumn, marginal occupancy of wolf decreased with terrain ruggedness (Figure 2f), and lynx occupancy increased with forest cover (Figure 2b), while wildcat occupancy decreased with forest cover (Figure 2d).
| Co-occurrence
We also found differences in predictors of co-occurrence between seasons for both lynx-wolf and wildcat-wolf co-occupancies.
In winter, lynx-wolf and wildcat-wolf co-occurrence were predicted by forest cover (Figure 3b,c), but in autumn, co-occurrence for both pairs was predicted by terrain ruggedness (Figure 3e,f). Lynx-wildcat co-occurrence was predicted by terrain ruggedness in both winter and autumn and was positively associated with it in both seasons (Figure 3a,d), although in autumn the relationship was less linear (Figure 3d). In contrast, both lynx-wolf and wildcat-wolf co-occurrence were negatively associated with terrain ruggedness in autumn (Figure 3e,f). In winter, wildcat-wolf co-occurrence was negatively associated with forest cover, while lynx-wolf co-occurrence was positively associated with forest cover, but only at >75% forest cover (Figure 3e,f).
| Conditional occupancy
In the winter season, we found that occupancy probabilities of all three species were higher when another species was present, regardless of the species (Figure 4). However, the occupancy probability of wildcat decreased with increasing forest cover when either lynx or wolf were present (Figure 4), potentially a signal of mesopredator exclusion by apex predators in areas of higher suitability. Similarly, in autumn, all species tended to co-occur, but this relationship was dependent on terrain ruggedness. Occupancy probabilities for both felids, lynx and wildcat, increased with terrain ruggedness when the other felid species was present and decreased when the other species was absent (Figure 5). We observed the inverse relationship for both felids when considering the presence/absence of wolf, such that occupancy probabilities for lynx and wildcat decreased with increased terrain ruggedness when wolf were present and showed a positive relationship with terrain ruggedness when wolf were absent (Figure 5). The presence of lynx and wildcat appeared to have no effect on wolf occupancy.
| Detection probabilities
For both seasons, the models that included the latent presence/absence of a potentially dominant species as a detection covariate performed significantly better than those that did not (∆AIC > 5). The top models for each season did not vary in their detection covariates; both models included distance to stream and the latent presence/absence of all species as species-specific detection covariates.
For both seasons, lynx, wildcat, and wolf detections were positively associated with the presence of the other two species (Table S1).
| DISCUSSION
The results from our multispecies occupancy model of lynx, wildcat, and wolves in the Romanian Carpathians indicate that while there are seasonal differences in predictors of occupancy and co-occurrence of the three species, co-occurrence of the three species in our study area is high during both seasons. We identified useful predictors of marginal occupancy for each species: in winter, these were local road density (lynx and wolf) and altitude (wildcat), while in autumn the best predictors of marginal occupancy were forest cover (lynx and wildcat) and terrain ruggedness (wolf). We found that co-occurrence was influenced by the environmental variables forest cover and terrain ruggedness in both winter and autumn. Overall, in this heavily forested landscape, results from our study indicate that these species coexist but shift patterns of habitat use and co-occurrence seasonally.
| Determinants of occupancy
In winter, local road density was the most important predictor of occupancy for wolf, with higher road density associated with a lower probability of wolf occupancy (Figure 2e). Higher local road density in our study area is associated with higher human disturbance (e.g., limited logging) and habitat fragmentation; this corroborates findings from Jedrzejewski et al. (2004) forests. In our study area, the proportion of forest was not an important predictor of wolf occupancy in either season, even though multiple studies have found it to be an important habitat characteristic for wolf (Jedrzejewski et al., 2004;Zlatanova & Popova, 2013) This may be due to the characteristics of our study area which is heavily forested (mean proportion forest =0.78 and 0.75 for winter and autumn monitoring sessions, respectively); thus, forest cover is not a limitation to wolf occurrence. In autumn, terrain ruggedness was the most important predictor of wolf occupancy; when terrain ruggedness index was >200 (moderately to highly rugged areas) the probability of wolf occupancy declined steeply ( Figure 2f). This can be explained by the fact that wolf's main prey source in Romania, wild boar (Sin et al., 2019), was documented to prefer less fragmented areas with large beech forest stands in autumn and early winter (Fonseca, 2008). Additionally, red and roe deer, which are also important prey for wolves, are known to move corridors and for hunting and movement within their home range (Bailey, 1993;Bragin, 1986;Gordon & Stewart, 2007;Kerley et al., 2002;Matyushkin, 1977;Rabinowitz et al., 1987). Our results suggest that, in winter, Eurasian lynx are more likely to occupy areas with higher densities of local logging roads; these roads, which in our area are mostly unpaved, dirt roads, may provide easier access to resources within lynx home ranges due to decreased complexity of terrain and decreased snow depth/harder snowpack from vehicle travel. We did not observe this relationship with wildcat, however.
Rather, there was a negative relationship between density of local roads and wildcat occupancy in autumn (Figure 2d), which could be an artifact of body size; most documented examples of felids utilizing roads for movement within their home ranges was with larger bodied species (>11 kg). We also did not observe this relationship in winter; however, this is likely an outcome of the importance of altitude for wildcat occupancy, which has a negative relationship ( Figure 2c). Higher altitudes are associated with greater snow depth, and while lynx are well adapted to move in deep snow and altitude was not important for lynx occupancy, wildcats have physical limitations that make travel through deep snow more difficult. A study in Switzerland had similar findings whereby wildcats moved to areas free of snow in winter and spring and moved back to high elevations in summer (Mermod & Liberek, 2002). Similarly, in North America, the relationship between Canadian lynx (Lynx canadensis) and bobcat (Lynx rufus) is mediated by snowpack, with the distribution of the less snow-adapted, the bobcat, being limited by snow depth at the northern edge of its range (Morin et al., 2020;Reed et al., 2017). Our results for marginal occupancy of lynx, wildcats, and wolf provide insights into both habitat selection and spatial relations for these elusive carnivores in Romania. Our results suggest lynx may use roads for movement, a practice common for other felids of similar body size, but not described in this species. Additionally, we provide further support for previous findings on habitat selection and occupancy for these three European terrestrial predators.
| Determinants of co-occurrence
In winter and autumn, co-occurrence for lynx and wolf was high, indicating that both species have similar habitat requirements. In winter, we found an effect of forest cover on the co-occurrence of lynx and wolf; co-occurrence increased with the proportion of forest cover >0.75. Roe deer are an important shared prey of lynx and wolf, and roe deer abundance is lower in areas with high forest cover (Melis et al., 2009). Higher lynx-wolf co-occurrence in areas expected to have lower roe deer abundance indicates that lynx and wolf are likely partitioning prey resources, which would reduce competition. In our study area, wolf also prey on wild boar and red deer (Sin et al., 2019). In autumn, terrain ruggedness was a negative predictor of co-occurrence for lynx and wolf, such that predicted co-occurrence was ~0 for the highest values of terrain ruggedness.
This relationship is driven by the negative relationship between marginal occupancy for wolf and terrain ruggedness, which is related to prey movements and availability as explained above (Fonseca, 2008;Sin et al., 2019) (Figure 2c). Because marginal occupancy for wolf is ~0 at high terrain ruggedness, co-occurrence for lynx and wolf is low as well. Additionally, co-occurrence between wolf and wildcat decreased with terrain ruggedness in autumn (Figure 3f) due to the low marginal occupancy for wolf at high terrain ruggedness. In winter however, co-occurrence of wolf and wildcat was predicted by proportion of forest such that increasing forest cover resulted in lower co-occurrence (Figure 3c). In both seasons, the co-occurrence of lynx and wildcat increased with terrain ruggedness, but the relationship was stronger in winter (Figure 3a,d). This relationship also provides further evidence that the negative relationship observed for lynx and wolf co-occurrence and terrain ruggedness was driven by wolf marginal occupancy.
| Management and conservation implications
The positive effect of wolf and lynx presence on detection of one another, the high levels of co-occurrence in winter, and the high levels of conditional occupancy in both seasons (higher occupancy probability when the other species is present) for lynx and wolf provide little evidence of interference competition between these apex predators. This suggests that carnivore species may aggregate in certain habitats during winter, potentially driven by prey availability. This corroborates findings from other studies assessing interactions between co-occurring felids and canids that overlap in resource use. Our results suggest that the presence of a resident apex predator should not affect introduction efforts, given that the prey base can support both species and releases occur in highly forested but less topographically fragmented areas. Additionally, our findings also suggest that apex predators have little negative effect on the mesocarnivore, the wildcat. This information is useful for management given that wolves are recolonizing their former range in Europe (Chapron et al., 2014). Our findings suggest that wolf would not have negative impacts on wildcat given enough suitable habitat is available. In summary, studying intraguild interactions in an intact system has enabled us to observe and quantify interspecific interactions among carnivores where they have co-existed and co-evolved for centuries. This provides insight into their potential long-term dynamics for areas where they are recovering naturally or through rewilding efforts. While our study did not include the summer season, our results from two separate and partially overlapping autumn and winter seasons suggest that competition between lynx, wildcat, and wolf is low. However, additional information on the richness and abundance of the prey base, and on the spatial and temporal relations between predators and their prey, can augment these findings and provide additional management insights in the context of rewilding.
ACKNOWLEDGMENTS
We thank Piatra Craiului National Park Administration and the Hunting Associations Bârsa, Jderul, and GTS Muntenia, for permissions to undertake fieldwork. We thank Liviu Ungureanu, Călin Șerban, and rangers of the Foundation Conservation Carpathia for help with camera deployment and checking. We thank Ken Kellner for continued support with R code for the multispecies occupancy models. Travel for MD to Romania was provided by the Ohio University College of Arts and Sciences.
|
v3-fos-license
|
2021-07-03T06:17:01.270Z
|
2021-06-22T00:00:00.000
|
235709491
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://www.mdpi.com/2304-8158/10/7/1447/pdf",
"pdf_hash": "cb48775f35860fcdf978d8820379b3a0cce8f987",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44336",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"sha1": "f5edb228ad635cba832fb2d72be792f5ba2f2cb0",
"year": 2021
}
|
pes2o/s2orc
|
Non-Alcoholic Pearl Millet Beverage Innovation with Own Bioburden: Leuconostoc mesenteroides, Pediococcus pentosaceus and Enterococcus gallinarum
The appropriate solution to the problem of quality variability and microbial stability of traditional non-alcoholic pearl millet fermented beverages (NAPMFB) is the use of starter cultures. However, potential starter cultures need to be tested in the production process. We aimed to identify and purify bioburden lactic acid bacteria from naturally fermented pearl millet slurry (PMS) and assess their effectiveness as cultures for the production of NAPMFB. Following the traditional Kunun-zaki process, the PMS was naturally fermented at 37 °C for 36 h. The pH, total titratable acidity (TTA), lactic acid bacteria (LAB), total viable count (TVC) and the soluble sugar were determined at 3 h interval. The presumptive LAB bacteria were characterized using a scanning electron microscope, biochemical tests and identified using the VITEK 2 Advanced Expert System for microbial identification. The changes in pH and TTA followed a non-linear exponential model with the rate of significant pH decrease of 0.071 h−1, and TTA was inversely proportional to the pH at the rate of 0.042 h−1. The Gompertz model with the mean relative deviation modulus, 0.7% for LAB and 2.01% for TVC explained the variability in microbial growth during fermentation. The LAB increased significantly from 6.97 to 7.68 log cfu/mL being dominated by Leuconostoc, Pediococcus, Streptococcus and Enterococcus with an optimum fermentation time of 18 h at 37 °C and 4.06 pH. L. mesenteroides and P. pentosaceus created an acidic environment while E. gallinarum increased the pH of the pearl millet extract (PME). Innovative NAPMFB was produced through assessment of LAB from PMS to PME fermented with L. mesentoroides (0.05%) and P. pentosaceus (0.025%) for 18 h, thereby reducing the production time from the traditional 24 h.
Introduction
Fermentation is an ancient method of food preservation and due to its nutritional value as well as a variety of sensory attributes, it is popular in many cultures [1]. Furthermore, fermentation destroys undesirable components, resulting in food safety, extension of product shelf life, protein and carbohydrate digestibility, dietary fibre modification and enhancement of vitamins and phenolic compounds [1,2]. However, the traditional fermentation process is spontaneous and uncontrolled while the products are obtained under local climatic conditions, resulting in variable sensory characteristics and quality [1]. Innovative fermentation technology of the traditional production processes could solve the problem of food safety and malnutrition in some countries where poverty, malnutrition and infant mortality are common.
Production of Pearl Millet Slurry and Fermentation
The production process for pearl millet slurry is detailed in Figure 1. The pearl millet flour (200 g) was hand mixed with 250 g water and left to hydrate for 3 h at ambient temperature (approximately 25 °C). The hydrated paste was divided into two unequal portions (¼ and ¾). The ¾ paste was gelatinised with 1000 mL boiling water and cooled to 40 °C. The ¼ paste was hand mixed with 10 g ground ginger, 30 g sprouted rice flour and 50 mL cold water. The two portions (¼ and ¾) were mixed. Aliquots (45 mL) of the slurry were distributed into sterilized 100 mL Schott bottles and left to ferment at 37 °C for 36 h in a water bath with a shaker set at 32 rpm. Samples were drawn at 3 h interval during the fermentation and analysed for pH, total titratable acidity, total soluble sugar and microbial population.
Physicochemical Analysis of Pearl Millet Slurry during Fermentation
The pH of the pearl millet slurry (PMS) (10 mL) was measured in triplicate using a Hanna Edge glass electrode pH meter standardised with pH buffer solutions of 4, 7 and 10. The total titratable acidity (TTA) was determined in triplicate by titrating 10 mL of the fermenting pearl millet slurry with 0.1 M NaOH using phenolphthalein as an indicator until a light pink colour appeared. The TTA was expressed as percent lactic acid [13]. Equation (1) was used to calculate the acidity, 0.1 M NaOH being equivalent to 90.08 mg lactic acid:

TTA (% lactic acid) = (mL NaOH × N NaOH × M.E) / (volume of sample × 1000) × 100 (1)

where mL NaOH = volume of NaOH (mL), N NaOH = molarity of NaOH, M.E = the equivalent factor of lactic acid, being 90.08 mg, 1000 = factor used to convert the M.E, which is normally in mg, to grams, and 100 is used to express the lactic acid concentration as a percentage. The method of AOAC 982.14 as described by [14] was used to determine the total soluble sugars in pearl millet slurry (PMS) during fermentation.
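For illustration, Equation (1) can be implemented directly as below; the titration values in the example are made up, and the function simply mirrors the calculation described above.

```python
def tta_percent_lactic_acid(v_naoh_ml, n_naoh, v_sample_ml, me_mg=90.08):
    """Total titratable acidity as % (w/v) lactic acid (Equation (1))."""
    lactic_acid_g = v_naoh_ml * n_naoh * me_mg / 1000.0   # mg -> g conversion
    return lactic_acid_g / v_sample_ml * 100.0

# Example: 5.2 mL of 0.1 M NaOH used to titrate a 10 mL slurry sample
print(round(tta_percent_lactic_acid(5.2, 0.1, 10.0), 2))  # ~0.47 % lactic acid
```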
Enumeration of Bacteria in Pearl Millet Slurry during Fermentation
Pearl millet slurry (PMS) (45 mL) was added into 100 mL Schott bottles and thoroughly mixed by shaking for 1 min. Dilutions of PMS were carried out by transferring 10 mL to a bottle containing 90 mL sterile ¼-strength Ringer solution [14,15] to give a 10:100 dilution, followed by a 10-fold serial dilution from 10⁻¹ to 10⁻¹⁰. Each dilution was sub-cultured in triplicate. A portion of the sample dilution (1 mL) was added into 15 × 100 mm plastic Petri plates containing cooled molten agar, mixed and left to solidify. Lactic acid bacteria (LAB) were plated on deMan Rogosa and Sharpe (MRS) agar (Merck HG00C107.500) [13,16] under anaerobic conditions using an Anaerobic Gas-Pack system and anaerobic indicator strips at 30 °C for 48 h [13,17,18]. The total viable count (TVC) was enumerated on plate count agar (PCA) [Merck HG 0000C6.500] and incubated aerobically at 37 °C for 48 h. After incubation, Petri plates with colonies between 30 and 300 were counted. All microbiological data were expressed as the logarithm of colony-forming units per mL (log CFU/mL).
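A small sketch of the plate-count arithmetic implied above (counts between 30 and 300 colonies converted to log CFU/mL) is shown below; the colony count and dilution used in the example are hypothetical.

```python
import math

def log_cfu_per_ml(colony_count, dilution, plated_volume_ml=1.0):
    """Convert a plate count to log10 CFU/mL.

    dilution is the dilution of the plated aliquot (e.g. 1e-6 for the
    10^-6 tube); plated_volume_ml is the volume poured or spread.
    """
    cfu_per_ml = colony_count / (dilution * plated_volume_ml)
    return math.log10(cfu_per_ml)

# Example: 85 colonies on a plate from the 10^-6 dilution, 1 mL pour plate
print(round(log_cfu_per_ml(85, 1e-6), 2))   # ~7.93 log CFU/mL
```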
Isolation and Identification of Lactic Acid Bacteria in Pearl Millet Slurry during Fermentation
Pearl millet slurry (PMS) (45 mL) was homogenized in centrifuge tubes using vortex at 5 speeds for 30 s and 1 mL was transferred aseptically into a 9 mL of 1 4 strength Ringer solution and mixed thoroughly. Serial dilutions (10-1 to 10-4) were carried out and a 0.1 mL portion of the appropriate dilutions spread onto deMan Rogosa and Sharpe (MRS) agar plates. Besides, 1 mL of the serial dilution (10-1 to 10-4) was pour-plated into MRS agar. Each dilution was cultured in triplicate. The plates were incubated anaerobically for 48 h at 30 • C. Distinct colonies grown on and/or in MRS plates with 30-300 colonies were harvested and sub-cultured on to fresh MRS agar and incubated for 48 h at 30 • C. Presumptive lactic acid bacteria (LAB) colonies were further sub-cultured in triplicates on MRS agar plates and anaerobically incubated for 48 h at 37 • C.
Presumptive LAB isolates on MRS agar were examined for Gram reaction, catalase reaction, production of CO 2 from glucose using hot loop test and gas production using 3% H 2 O 2 [19]. Cell morphology was examined by a compound microscope and scanning electron microscope (SEM). The growth of isolates at 4, 10, 45 • C and 6.5% NaCl concentration in MRS agar were evaluated after 48 h. The colonies were identified using Vitek 2 compact system. The VITEK 2 Advanced Expert System gram-positive (GP) cards for microbial identification were used to identify the isolates (Enterococcus, Lactococcus, Leuconostoc, Pediococcus, Streptococcus and Vagococcus) to species while anaerobic cards were used to identify Lactobacillus to species. Vitek 2 compact system uses the principle of flurogenic method for microbial identification using a 64 well cards. The test cards used for microbial identification are divided into Gram-negative (GN) cards, Gram-positive (GP) cards, anaerobic (ANC) cards, Neisseria and Haemophilus (NH) and yeast (Yst). An inoculum from isolated pure cultures were homogenised in a 3 mL of 0.45% NaCl saline of pH 4.5-7.0 using a sterile swab to the density equivalent of 0.5-0.63 McFarland standard. The turbidity was verified using Vitek 2 DensiCheck Plus equipment calibrated with 0, 0.5, 2 and 3 McF standards. The homogenised specimens in test tubes together with the selected GP/ANC cards were placed into cassette, scanned and loaded into the Vitek 2 compact system and run following the manufacturer operating procedure. The cards were filled with the homogenised specimens by vacuum created within the equipment, sealed and placed into the machine incubator (35 • C) [20,21]. The cards were exposed to a kinetic fluorescence measurement every 15 min for 2-8 h and the results read against GP and ANC Foods 2021, 10, 1447 5 of 21 database in the equipment, and the results were made available automatically while the cards ejected into waste container.
The isolates were grown in 500 mL deMan Rogosa and Sharpe broth at 30 • C for 60 h. The broths were hand-mixed thoroughly and 2 mL of the broth mixed with 1 mL of 10% skim milk [22] in 5 mL bench-top freeze-dryer vials. The samples were frozen in an ultra-freezer (Glacier, −86 • C ultralow temperature freezer) at −76 • C for 12 h then freeze-dried using BenchTop-Pro with Omnitronics (VirTis SP Scientific) freeze dryer. The dried samples were sealed under vacuum and stored in the freezer at −18 • C.
Lactic Acid Bacteria Preparation for Scanning Electron Microscope Imaging
The scanning electron microscope images were used to verify the identified lactic acid bacteria based on morphology. The methods of [23,24] were used to obtain images of lactic acid bacteria (LAB) using a scanning electron microscope (SEM). LAB colonies were grown in MRS broths at 30 • C for 36 h. The broth was mixed thoroughly for 1 min and few drops (4-5) placed on 0.45 µm filters and then left to air dry at room temperature for 30 min. The specimens were then fixed using 2.5% glutaraldehyde in phosphate-buffered saline (PBS) with a pH of 7.2 for 30 min at 4 • C. The specimens were fixed using osmium tetroxide (OsO 4 ) for 1 to 2 h before dehydration in a series of ascending different ethanol concentration (30,50,70, 80 and 100%) for 15 min at each concentration. The final stage in 100% ethanol was repeated twice. The specimens were then critically-point dried at 1072 psi and 31 • C, then coated with gold before viewing under Zeiss MERLIN FE-SEM. Beam conditions during imaging were 5 kV accelerating voltage, 250 pA probe current, with a working distance of approximately 4 mm.
Experimental Design for the Effect of Bioburden Lactic Acid Bacteria on Pearl Millet Extract
The three isolated lactic acid bacteria (L. mesenteroides, P. pentosaceus and E. gallinarum) from the pearl millet slurry were assessed for their effect on pearl millet extract (PME), a modification of the pearl millet slurry extraction method to reduce production time. Pearl millet extract (produced by hydrating pearl millet flour with water (1:10)), with 15% sprouted rice flour, 10% ground ginger and 0.6% pectin, was pasteurized, cooled to 40 °C and inoculated with L. mesenteroides, P. pentosaceus and E. gallinarum. The inoculation followed a randomized three-level augmented factorial design (19 runs), each culture at two levels (0.05, 0.1%) with three center points, to determine the optimum culture. Each design run was conducted in triplicate. The inoculum was fermented for 18 h at 37 °C. The generalized linear model (Equation (2)) was used to determine the effect of the purified cultures of lactic acid bacteria (LAB) on the pH, total titratable acidity (TTA) and viscosity of the beverage. The model obtained was used to simulate 1000 cases using Monte Carlo simulation to establish the influence of the LAB on the pH, TTA and viscosity.

Y = β0 + β1X1 + β2X2 + β3X3 + β12X1X2 + β13X1X3 + β23X2X3 (2)

where Y represents the estimated parameter response (pH, lactic acid or viscosity), β0 represents the overall mean (intercept), β1, β2 and β3 are the main effects for L. mesenteroides, P. pentosaceus and E. gallinarum, respectively, β12, β13 and β23 are the interactive effects of the lactic acid bacteria, and X1, X2 and X3 represent the independent factors L. mesenteroides, P. pentosaceus and E. gallinarum, respectively.
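The sketch below shows one way Equation (2) and the Monte Carlo step could be reproduced with ordinary least squares; the design levels and pH responses are placeholder numbers, not the experimental data, and the original analysis was run in a dedicated statistical package rather than with this code.

```python
import numpy as np

def design_matrix(x1, x2, x3):
    """Main effects and two-way interactions of Equation (2)."""
    return np.column_stack([np.ones_like(x1), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3])

# Placeholder inoculation levels (%) of L. mesenteroides (x1),
# P. pentosaceus (x2) and E. gallinarum (x3), with an invented pH response.
x1 = np.array([0.05, 0.10, 0.05, 0.10, 0.05, 0.10, 0.05, 0.10, 0.075])
x2 = np.array([0.05, 0.05, 0.10, 0.10, 0.05, 0.05, 0.10, 0.10, 0.075])
x3 = np.array([0.05, 0.05, 0.05, 0.05, 0.10, 0.10, 0.10, 0.10, 0.075])
y  = np.array([4.10, 4.05, 3.98, 4.02, 4.08, 4.03, 3.95, 4.00, 4.04])

beta, *_ = np.linalg.lstsq(design_matrix(x1, x2, x3), y, rcond=None)

# Monte Carlo: 1000 random factor combinations within the tested range
rng = np.random.default_rng(1)
sim = rng.uniform(0.05, 0.10, size=(1000, 3))
y_sim = design_matrix(sim[:, 0], sim[:, 1], sim[:, 2]) @ beta
print(beta.round(2), round(float(y_sim.mean()), 3))
```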
Effect of L. mesenteroides and P. pentosaceus on the pH, Total Titratable Acidity (TTA) and Viscosity of the Pearl Millet Extract
A two-level factorial design for L. mesenteroides and P. pentosaceus each at two levels (0.05 and 0.10%) augmented with a centre point was used to evaluate their effects on the acceptability (benchtop sensory), pH, total titratable acidity and viscosity of the PME. The experimental design was run randomly in triplicate. A benchtop sensory revealed that the taste of the beverage was not acceptable. Thereafter, L. mesenteroides and P. pentosaceus were used in combination at 0.05% each and combination at 0.05 and 0.025%, respectively. A benchtop sensory was used to evaluate the taste of the beverage.
Production of Optimal Non-Alcoholic Pearl Millet Beverage
Pearl millet extract (PME) (pearl millet flour with water (1:10)), (1000 mL) was weighed into a 5 L plastic beaker and blended with 0.6% pectin, 0.1% sodium citrate, 1% sunflower lecithin and 5% white sugar at 6600 rpm for 7 min using a Silverson L4RT homogenizer while slowly adding the dry ingredients. The mixture was pasteurised in a pot at 85 • C for 15 min and hot-filled into 100 mL Schott bottles. The bottles were rapidly cooled to 25 • C in ice blocks and tap water. The extract was then aseptically inoculated with a mixture of L. mesenteroides (0.05%) and P. pentosaceus (0.025%); fermented for 18 h at 37 • C. The resulting non-alcoholic pear millet beverage (NAPMB) was then chilled at 4 • C until required.
Determination of the Viscosity of Non-Alcoholic Pearl Millet Beverage
The change in viscosity of pearl millet beverage over time was determined using Rheolab QC (Anton Paar) with temperature device C-PTD 180/AIR/QC and measuring system CC27. The beverage (18 mL) was poured into an upward projected sample cup and analyzed following the manufacturer's instruction at 5 • C and 22 • C for 5 min. In all runs, the shear stress (τ) was set at 20 Pascal. The average of the triplicates was used.
Data Analysis
The results were reported as mean ± standard deviation of three triplicate runs. Multivariate Analysis Of Variance (MANOVA) was used to determine the mean difference between treatments at p = 0.05. Duncan's multiple range test was used to separate means where differences exist using version 23 of IBM SPSS (IBM, 2015). The statistical relationships between the dependent variables pH, total titratable acidity, total viable count, yeast and mould during the fermentation of pearl millet slurry for the production of non-alcoholic pearl millet beverage were determined using Pearson correlation
Effect of Fermentation Time on the pH and Total Titratable Acidity (TTA) of Pearl Millet Slurry
The physicochemical and microbial characteristics of the pearl millet slurry during fermentation are outlined in Table 1. The changes in pH and TTA followed a non-linear exponential model as in Equation (3):

y = a − b exp(−ct) (3)

where a = horizontal asymptote; b = a − y intercept, i.e., the difference between the horizontal asymptote and the value of y when t = 0; c = the rate constant (h⁻¹); and t = fermentation time (h). The models for pH and TTA accounted for 97.1% and 98.1%, respectively, of the variability in the pH and TTA. The changes in pH and total titratable acidity of pearl millet slurry (PMS) over the 36-h fermentation are shown in Figure 2. There was a significant (p < 0.05) decrease in pH during the fermentation, ranging from 6.37 to 3.77 in 36 h, due to the increase in the population of lactic acid bacteria (LAB), which fermented glucose to lactic acid and carbon dioxide. The pH kinetics of the fermented millet slurry are indicated in Table 2. The rate of pH decrease during fermentation was 0.071 h⁻¹ with a lower asymptote of 3.38. At the beginning of fermentation, the LAB were in the lag phase (0-3 h); thereafter the organisms exponentially produced significant acid until 21 h, followed by a stationary phase (24-30 h). The decrease in pH could be due to the build-up of hydrogen ions as microorganisms break down starch. Meanwhile, the stationary phase could have been caused by the exhaustion of nutrients and the build-up of waste by LAB. These results were in agreement with the report [9] that a decrease in pH during the fermentation of Kunun-zaki was caused by the formation of organic acid from carbohydrates and other food nutrients.
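A minimal sketch of fitting Equation (3) by non-linear least squares is given below; the pH series is an illustrative stand-in consistent with the reported range (6.37 to 3.77 over 36 h), not the measured data, so the recovered parameters only approximate the reported a, b and c.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_model(t, a, b, c):
    """Equation (3): y = a - b * exp(-c * t)."""
    return a - b * np.exp(-c * t)

t = np.arange(0, 39, 3, dtype=float)                      # 0, 3, ..., 36 h
ph = np.array([6.37, 6.09, 5.59, 5.41, 5.05, 4.60, 4.06,
               3.98, 3.93, 3.88, 3.84, 3.80, 3.77])       # illustrative values

popt, _ = curve_fit(exp_model, t, ph, p0=[3.4, -3.0, 0.07])
a, b, c = popt
print(f"a = {a:.2f}, b = {b:.2f}, c = {c:.3f} per hour")
```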
The total titratable acidity (TTA) [expressed as % lactic acid] increased from 0.12% at the start of fermentation to 0.53% at the end of 36 h. The TTA kinetics of the fermented millet slurry is indicated in Table 1. The rate of TTA increase during fermentation was 0.042 h −1 with a horizontal asymptote of 0.663%. There was a significant (p < 0.05) change in TTA over the 36 h fermentation time. This could be attributed to the decrease in pH as the concentration of acid increased. The increase in LAB produced more lactic acid from the fermentation of sugars. The increase in acidity could be the cause of a sweet-sour taste of non-alcoholic pearl millet beverage in agreement with [25] during the fermentation of Masvusvu and Mangisi. Also, after 18 h of fermentation, there was no significant change in the pH and TTA. This is in agreement with the LAB growth curve which peaked after 18 h. Thus, the optimum fermentation time for the slurry could be 18 h at 37 • C with the pH expected to be 4.06.
Soluble Sugar Kinetics of Pearl Millet Slurry during Fermentation
The main soluble sugar identified in pearl millet slurry (PMS) was glucose, which ranged from 0.54 to 2.05%. Figure 3 shows a significant (p < 0.05) increase in glucose content up to 27 h, after which there was a quadratic drop due to the breakdown of the pearl millet starch. The glucose kinetics (R² = 0.947) could be expressed as Glucose (%) = 0.584 + 0.107t − 0.002t², where t = fermentation time in h. The quadratic model accounted for 94.7% of the variability in glucose. The increase in glucose during PMS fermentation could be attributed to the decrease in starch caused by the action of α- and β-amylase activities. During fermentation, enzymes hydrolyse starch to produce the monomeric sugar glucose. Although there was an increase in glucose content from the onset of fermentation, the glucose did not significantly increase after 20 h. This could be due to the acidification (low pH) of the slurry, which terminates the activity of alpha-amylase through the build-up of tannins [26]. Tannins are natural polyphenols found in most cereal grains. They can act as antioxidants together with phytic acid and phenols [27]. Similarly, [26] reported glucose as the main soluble sugar which gradually increased in the first 20 h during the fermentation of pearl millet flour for the production of Lohoh bread. [28] also identified 0.5% glucose during the fermentation of Kunun-zaki. Therefore, fermentation time affected the glucose content of pearl millet slurry.
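As a small check of the quadratic kinetics quoted above, the sketch below fits a second-order polynomial to synthetic glucose values generated around the reported curve and recovers the time of peak glucose (the vertex, roughly 27 h); the data points are simulated, not the measured sugars.

```python
import numpy as np

t = np.arange(0, 39, 3, dtype=float)
rng = np.random.default_rng(0)
# synthetic glucose (%) scattered around the reported curve 0.584 + 0.107 t - 0.002 t^2
glucose = 0.584 + 0.107 * t - 0.002 * t ** 2 + rng.normal(0.0, 0.05, t.size)

c2, c1, c0 = np.polyfit(t, glucose, deg=2)     # fitted quadratic coefficients
t_peak = -c1 / (2.0 * c2)                      # vertex of the parabola
print(round(c0, 3), round(c1, 3), round(c2, 4), f"peak at ~{t_peak:.1f} h")
```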
Kinetics of Lactic Acid Bacteria and Total Viable Microbes in Pearl Millet Slurry during Fermentation
The lactic acid bacteria (LAB) and total viable count (TVC) were modelled using the Gompertz equation (Equation (4)), as modified by [29].
where K = initial level of bacterial count (log CFU/mL), A = increase in log CFU/mL between time = 0 and the maximum population density at the stationary phase (log CFU/mL), µmax = maximum growth rate (Δlog (CFU/mL)/h), λ = lag time (h) and t = fermentation time (h). The model parameters are detailed in Table 3. The goodness-of-fit was evaluated by the mean relative deviation modulus E%, the relative percent difference between experimental (O) and predicted (P) values, E% = (100/n) × Σ|O − P|/O, which was 0.7% for LAB and 2.01% for TVC. The model explained the variability in microbial growth and could be used to describe the trend in growth during fermentation. The growth pattern of pearl millet slurry during fermentation is shown in Figure 4. There was an apparent lag time of 3.9 h, after which the growth of LAB significantly (p < 0.05) increased until 18 h. This trend is in agreement with those reported by [30] during the preparation of Chibwantu. A similar concept was explained by [31]. Although most LAB tolerate low pH, certain strains may have been retarded [32]. The growth of Leuconostoc and lactic streptococci rapidly drops the pH during fermentation to 4.0-4.5 and then retards their growth, thus giving way to subsequent bacteria. Lactobacilli spp. and Pediococcus spp. succeeded Leuconostoc bacteria during fermentation, resulting in their growth retardation when the pH reached 3.5. These results are similar to those reported for spontaneous fermentation of millet by [16].
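Equation (4) itself is not reproduced in this extract. Assuming the widely used Zwietering-type re-parameterisation of the Gompertz curve, which is the form usually meant when citing [29], a minimal sketch of the growth model and of the deviation modulus used to judge the fit is given below; the parameter values are illustrative only (the 3.9-h lag and the plateau near 8.1 log CFU/mL come from the text, while K, A and µmax are assumed, since Table 3 is not reproduced here).

```python
import math

def gompertz(t, K, A, mu_max, lam):
    """Modified (Zwietering-type) Gompertz growth curve, assumed form:
    N(t) = K + A * exp(-exp(mu_max * e / A * (lam - t) + 1))."""
    return K + A * math.exp(-math.exp(mu_max * math.e / A * (lam - t) + 1))

def deviation_modulus(observed, predicted):
    """Mean relative percentage deviation modulus: E% = 100/n * sum(|O - P| / O)."""
    n = len(observed)
    return 100.0 / n * sum(abs(o - p) / o for o, p in zip(observed, predicted))

# Illustrative LAB parameters (log CFU/mL); only lam and the ~8.1 plateau come from the text.
K, A, mu_max, lam = 6.4, 1.7, 0.4, 3.9

times = [0, 3, 6, 9, 12, 15, 18]
predicted = [gompertz(t, K, A, mu_max, lam) for t in times]
print([round(p, 2) for p in predicted])
# deviation_modulus() would be applied against the observed counts in Table 3.
```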
The lag phase was followed by the exponential increase of LAB from 6.76 (9 h) to 7.87 log CFU/mL (12 h) in 3 h, which then accelerated to the highest count of 8.10 log CFU/mL after 15 h. During this growth phase, the cells surviving the acidic environment could be growing and dividing at the maximum rate. There was a slight decrease in the LAB to 7.79 log CFU/mL (18 h), then the organisms remained stationary for 9 h (18-27 h) with an average of 7.95 log CFU/mL. Since this is a batch fermentation, the growth of organisms could have been limited by depletion of nutrients, build-up of inhibitory metabolites or end-product (lactic acid), and/or shortage of biological space. Thereafter, there was a death phase as the cells started to decrease from 7.97 (30 h) to 7.68 log CFU/mL (36 h). A similar trend of LAB growth was reported by [23] during the fermentation of Umqombothi.
There was a significant (p < 0.05) growth in total viable count (TVC) over the 36-h fermentation period (Figure 4b). The TVC cells accelerated from 6.98 log CFU/mL at the onset to 7.38 log CFU/mL in 3 h. This was followed by a significant (p < 0.05) exponential increase to 7.92 log CFU/mL (6 h). The lag phase was not visible during the growth of TVC. This may be caused by the rapid growth of mixed microbes which dominated the spontaneous fermentation of the pearl millet slurry. At this stage, certain bacteria other than LAB could be growing at a faster rate. This could also have been caused by mixed microbes not taking long to adapt to the new environment. The growth went into a stationary phase which lasted for 6 h (6-12 h). The numbers of cells during the death phase were reduced from 7.88 to 7.51 ± 0.04 log CFU/mL (12-15 h). The decrease in cells may be due to the build-up of lactic acid caused mostly by LAB. There was a significant (p < 0.05) reduction in cells (death phase) after 27 h for 3 h, followed by an acceleration phase for 3 h (30 h). Bacteria not tolerating low pH could have caused a decrease in TVC. The shift-up and shift-down could also be caused by the environmental conditions resulting in competition for survival among different species of LAB. In particular, the extended stationary phase could have led to cell reduction (death) with no new nutrients fed into the system. These results are similar to those reported by [25] during the fermentation of Masvusvu and Mangisi.
There was a very strong, negative linear relationship between TTA and pH of the beverage during fermentation (r = −0.975, p < 0.05). Meanwhile, the TTA had a moderate positive relationship with the lactic acid bacteria (LAB) count (r = 0.440, p < 0.05). The pH had a moderate negative relationship with the LAB count (r = −0.535, p < 0.05). The LAB count had a weak and a very weak positive relationship with the TVC and YM, respectively. Meanwhile, the TVC had a weak positive relationship with the YM. A moderate to strong linear relationship existed between the TTA and pH, and between the LAB count and pH. These results further indicated that during succession fermentation of glucose by LAB to lactic acid, the pH dropped due to the build-up of hydrogen ions. The decrease in pH thus increased the TTA.
Lactic Acid Bacteria Associated with Pearl Millet Slurry Fermentation
The isolates identified from pearl millet slurry (PMS) during fermentation over 36 h are shown in Table 4. Lactic acid bacteria (LAB) from the genera Leuconostoc, Pediococcus and Enterococcus were the main species involved in the fermentation. Leuconostoc mesenteroides ssp. dextranicum (Figure 5a), characterized by lenticular coccoid cells in chains, and Leuconostoc pseudomesenteroides were identified at the beginning of fermentation between the pH of 5.59 (0 h) and 6.37 (6 h). Leuconostoc's presence at the beginning of fermentation may be attributed to their growth condition at pH 6.0-6.5. This is identical to the study by [33], who identified L. pseudomesenteroides from Oshashikwa, a traditionally fermented milk in Namibia. The organisms were responsible for the initiation of lactic acid fermentation. These heterolactic organisms produce carbon dioxide and organic acids which rapidly lower the pH of the beverage to 4.0 or 4.5 and inhibit the development of undesirable microorganisms. The carbon dioxide produced replaces the oxygen, making the environment anaerobic [34] and suitable for the growth of subsequent organisms such as Lactobacillus. Besides, the anaerobic environment created by the CO2 has a preservative effect on the beverage since it inhibits the growth of unwanted bacterial contaminants [35].
As reported by [36], L. pseudomesenteroides is widely present in many fermented foods such as dairy, wine and beans, while L. mesenteroides is associated with sauerkraut and pickled fermented products [34]. The organism produces dextrans and aromatic compounds (diacetyl, acetaldehyde, and acetoin) which could contribute to the taste and aromatic profile. These organisms were isolated by [37] from fermented Greek table olives. Pediococcus pentosaceus cells were tetra-cocci and smooth, as shown in Figure 5b, and were isolated at 0, 9, 18 and 36 h of fermentation, similar to the report by [14,38]. Although it grows between pH 4.5-8, the optimum growth is between pH 5.0 and 6.5. MRS agar was developed for LAB growth but is selective for lactobacilli; Leuconostoc spp. especially may or may not grow. The growth of Pediococcus spp., like that of Leuconostoc spp. and Streptococcus spp., is enhanced considerably in a microaerobic environment with 5% CO2. The anaerobic Gas-Pack system produces between 4 and 10% CO2 which, if above 5%, could have slowed or inhibited the growth of Pediococcus. Microaerobic organisms also require an oxygen content typically between 2-10%, whereas the anaerobic Gas-Pack system usually creates an oxygen content of <1%. The low level of oxygen could have affected their growth or reduced their number at certain times. The genus Pediococcus belongs to the family Lactobacillaceae in the order Lactobacillales, growing at an optimum pH of 4.5-8.0. They can produce bacteriocins (antimicrobial agents), which are used as a food preservative. The bacteriocins produced inhibit the growth of Gram-positive bacteria since they attack the cytoplasmic membrane of the cell, which in Gram-negative cells is protected by a polysaccharide layer [35]. Streptococcus thoraltensis cells were cocci and were also present after 6 h of fermentation (Figure 5d). The presence of S. thoraltensis could be through contamination of pearl millet grains and/or utensils. The organism was isolated from animal intestinal tracts of swine [39]. Several enterococci (Table 2) were isolated throughout the fermentation at different times (3-30 h) between pH 3.81 and 6.09. They became active between 12 and 30 h and are known to grow well between pH 4 and 9.6 [40]. All enterococci identified were ovoid and appeared in pairs or long chains (Figure 5). The organisms are responsible for the development of flavours due to their glycolytic, proteolytic and lipolytic activities. They have probiotic activities and have the potential as bio-preservatives. In general, enterococci are ubiquitous and are found in the environment and gastrointestinal tract of healthy animals and humans [40]. These organisms are used as starter cultures in the fermentation of food since they create unique sensory properties [40] and contribute to texture and safety [41]. Enterococcus casseliflavus (Figure 5f) and Enterococcus gallinarum (Figure 5c) were identified after 3 and 15 h. E. casseliflavus has been isolated from olive brines and traditional fermented food and used as a starter culture [42,43]. E. gallinarum was isolated between 12 and 30 h at pH 3.81 to 4.68, similar to [43], who identified the organisms in Nigerian traditional fermented foods. They have lipolysis, proteolysis, bile-tolerating and low-pH-tolerating properties. They have hydrophobic properties and produce bacteriocins that will inhibit food pathogens and spoilage microorganisms [43]. E. faecium (Figure 5g) was detected at 12, 15, 18 and 21 h, while E. faecalis was detected after 30 h. E. faecium and E. faecalis are reported to be probiotics, but their source may be through contamination.
The author [33] also reported the isolation of E. faecium from traditionally fermented milk Omashikwa. However, as reported by [40], they are suspected to be pathogenic to humans and are resistant to antibiotics.
The biochemical properties of presumptive lactic acid bacteria (LAB) isolates are shown in Table 5. All the isolates were Gram-positive, catalase-negative and did not produce gas from glucose. All cells were cocci and cocci-oval in morphology and showed no growth at 4 °C. At 45 °C there was no growth of the isolates except E. faecium. The inability of all the LAB isolates to grow at 4 °C could demonstrate increased glycolytic activity, which could lead to the increased production of lactic acid [44]. However, the same report by [44] stated that the growth of Lactococcus at low temperature resulted in reduced production of lactic acid due to the reduced glycolytic activity. The inability to grow at high temperature could mean that the LAB strain has a high growth rate and lactic acid production. Their inability to grow at 45 °C is in disagreement with the report in [44]. The differences could be due to the period of incubation, which was 2-4 h in that report, whereas in this study samples were incubated for 48 h. The isolates grew at 10 °C and at 6.5% NaCl concentration, except for E. avium. The growth of all LAB isolates except E. avium in 6.5% salt concentration indicated that the LAB strains could be used as commercial starter cultures. During the commercial production of lactic acid by LAB strains, alkali could be added to increase the pH and prevent an excessive decrease in pH [44]. However, the same report mentioned that growing LAB strains in the presence of salt could lead to the loss of turgor pressure, affecting the physiology, enzyme activity, water activity and metabolism of the cell. These physiological properties could be used to confirm the ability of the LAB isolates to be used as starter cultures. The LAB of importance during fermentation of pearl millet slurry were from the genera Leuconostoc, Pediococcus and Enterococcus. Thus, for further studies, L. mesenteroides, P. pentosaceus and E. gallinarum were chosen as starter cultures to ferment pearl millet extract.

The pH, titratable acidity and viscosity of fermented pearl millet extract (PME) as affected by the isolated bioburden lactic acid bacteria are detailed in Table 6. The generalized linear model (GLM) for the effect of L. mesenteroides, P. pentosaceus and E. gallinarum is shown in Table 7. L. mesenteroides, P. pentosaceus, E. gallinarum, the interaction between L. mesenteroides and P. pentosaceus and that between L. mesenteroides and E. gallinarum had a significant effect (p ≤ 0.05) on the pH, whereas the interaction between P. pentosaceus and E. gallinarum did not. There was a significant (p < 0.05) increase in the pH by L. mesenteroides, P. pentosaceus and the interaction between L. mesenteroides and E. gallinarum. E. gallinarum and the interaction effect of L. mesenteroides and P. pentosaceus caused a significant (p < 0.05) decrease in the pH. The interaction effect of P. pentosaceus and E. gallinarum caused a non-significant decrease in the pH of the beverage. Monte Carlo simulation of 1000 cases with uniform input distributions using the GLM indicated that E. gallinarum consistently increased the pH of the extract. The pH of 95% of the cases was below 3.66. The sensitivity analysis indicates the degree to which the pH is sensitive to the lactic acid bacteria. The correlation tornado chart shows that pH is most strongly positively correlated with E. gallinarum. Overall, all the pure cultures had a significant (p < 0.05) effect on the pH of the pearl millet extract, with P. pentosaceus having the highest contribution (973.9%), followed by E. gallinarum (655.5%) and lastly L. mesenteroides (132.7%). Lactic acid bacteria (LAB) in general can tolerate a wide range of pH in the presence of organic acids such as lactic acid. L. mesenteroides grows early during food fermentation and is then superseded by the growth of other LAB. During LAB fermentation, carbohydrates are broken down into lactic acid, which allows for the growth of acidophilic bacteria such as P. pentosaceus and E. gallinarum [45]. L. mesenteroides and P. pentosaceus were responsible for the creation of an acidic environment while E. gallinarum increased the pH of the beverage. The increase in the pH could be caused by the autolysis of E. gallinarum as a result of the unfavorable acidic (pH 3.32 to 3.90) growth environment.
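The Monte Carlo exercise described above can be sketched as follows. The GLM coefficients and the uniform ranges assumed for the inoculum levels are hypothetical placeholders (the fitted coefficients are those in Table 7, which is not reproduced here), so the sketch only illustrates the procedure: draw uniform inputs, propagate them through the model, and read off the 95th percentile of the simulated pH.

```python
import random
import statistics

random.seed(1)

# Hypothetical GLM coefficients for pH ~ intercept + LM + PP + EG + LM:PP + LM:EG
# (placeholders only; not the fitted values reported in Table 7).
COEF = {"intercept": 3.5, "LM": 0.8, "PP": -1.2, "EG": 2.0, "LM_PP": -0.9, "LM_EG": 0.7}

def predicted_ph(lm, pp, eg):
    return (COEF["intercept"] + COEF["LM"] * lm + COEF["PP"] * pp + COEF["EG"] * eg
            + COEF["LM_PP"] * lm * pp + COEF["LM_EG"] * lm * eg)

# 1000 cases with uniform input distributions for the culture levels (assumed 0-0.05%).
cases = [predicted_ph(random.uniform(0, 0.05),
                      random.uniform(0, 0.05),
                      random.uniform(0, 0.05)) for _ in range(1000)]

p95 = statistics.quantiles(cases, n=20)[18]  # 95th percentile of the simulated pH
print(f"95% of simulated cases fall below pH {p95:.2f}")
```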
L. mesenteroides (a heterolactic bacterium) produced the least acid, unlike the homolactic bacterium P. pentosaceus. Heterolactic bacteria produce about 50% lactic acid, 25% acetic acid and ethyl alcohol and 25% CO2, whereas homolactic bacteria produce mainly lactic acid [46]. The CO2 produced replaces the oxygen present in the beverage and creates an anaerobic environment which allows the growth of subsequent anaerobic bacteria [46]. This is in agreement with [47], who reported that the growth of Pediococcus spp. dominated the latter stages of fermentation of maize. P. pentosaceus was responsible for the rapid acidification of dough. The addition of amylase-rich sprouted rice flour was necessary since chance LAB fermentation requires the enzymes to saccharify the grain starch [47]. The pH of the PME is expected to decrease as more lactic acid accumulates during fermentation, but E. gallinarum increased the pH based on the Monte Carlo simulation. Thus, E. gallinarum is not a promising culture for fermenting PME.
The generalized linear model for the main effects of L. mesenteroides, P. pentosaceus and E. gallinarum and their interactions on the total titratable acidity (TTA) of pearl millet extract is shown in Table 8. All the cultures had a significant influence (p ≤ 0.05) on the TTA of the pearl millet extract. The interaction between L. mesenteroides and P. pentosaceus caused a significant (p < 0.05) increase in the TTA. The TTA of 95% of the cases was below 0.60%. The correlation tornado chart shows that TTA is most strongly negatively correlated with L. mesenteroides. P. pentosaceus had a high contribution (526.3%) to the TTA, followed by E. gallinarum (137.3%), then L. mesenteroides (72.54%). The TTA was measured as the total lactic acid produced from the fermentation of starch and sugars by LAB. During fermentation, the homolactic bacteria P. pentosaceus and E. gallinarum produced mainly lactic acid, whereas L. mesenteroides produced lactic acid, CO2 and acetic acid/ethyl alcohol. Thus, P. pentosaceus and E. gallinarum contributed highly to the production of lactic acid. This was in agreement with [48], who reported an increase in TTA during the fermentation of non-alcoholic beverages from cereals. However, based on the generalised linear model and Monte Carlo simulation, E. gallinarum caused a significant increase and decrease in the pH and TTA, respectively. This is not desired for beverage fermentation; hence, the culture was eliminated. Table 9 shows the generalized linear model for the main effects of L. mesenteroides, P. pentosaceus and E. gallinarum on the viscosity of PME. L. mesenteroides, P. pentosaceus, E. gallinarum, the interaction between L. mesenteroides and P. pentosaceus, the interaction between L. mesenteroides and E. gallinarum, the interaction between P. pentosaceus and E. gallinarum and the interaction between L. mesenteroides, P. pentosaceus and E. gallinarum had a significant influence (p ≤ 0.05) on the viscosity of the beverage. The interaction between L. mesenteroides and P. pentosaceus, the interaction between L. mesenteroides and E. gallinarum, the interaction between P. pentosaceus and E. gallinarum and the interaction between L. mesenteroides, P. pentosaceus and E. gallinarum significantly (p < 0.05) increased the viscosity of the PME. Meanwhile, the decrease in the viscosity was caused by L. mesenteroides, P. pentosaceus and E. gallinarum. The interaction between L. mesenteroides, P. pentosaceus and E. gallinarum caused a thicker beverage than the effect of all other lactic acid bacteria (LAB). A similar effect was seen in the beverage with L. mesenteroides, whereas P. pentosaceus caused the increase in viscosity. Monte Carlo simulation indicated that 95% of the cases have a viscosity of less than 7.80 mPa·s. The correlation tornado chart shows that viscosity is most strongly positively correlated with P. pentosaceus. P. pentosaceus had the highest contribution (74.01%) to the viscosity, followed by E. gallinarum (50.41%) and L. mesenteroides (45.98%).
Effect of Different Purified Lactic Acid Bacteria on the Viscosity of Pearl Millet Extract
During cereal fermentation, LAB break down starch into simpler sugars, resulting in a decrease in viscosity. The viscosity of the beverage is affected by factors such as the pH, the type of microorganisms and whether the microorganisms involved in fermentation have amylase enzymes to hydrolyze starch into dextrins and sugars. In this study, high-amylase sprouted rice flour (SRF) enhanced the decrease in the viscosity of the beverage. Thus, the breakdown of starch by SRF and LAB had several desirable effects on the viscosity and nutritional quality. These results are in agreement with [49], who reported a decrease in viscosity after fermentation of a traditional fermented beverage (Boza) at 20 °C. L. mesenteroides, P. pentosaceus and E. gallinarum, and the interaction between P. pentosaceus and E. gallinarum, caused a decrease in the viscosity of the beverage, which is desired for a beverage. However, E. gallinarum could not be used since it causes an increase in the pH and a decrease in the TTA of the beverage. Therefore, going forward, L. mesenteroides and P. pentosaceus were selected for the production of the beverage.
Non-Alcoholic Pearl Millet Beverage (NAPMB) Produced Using Pure Cultures of Lactic Acid Bacteria (LAB)
When the cultures (L. mesenteroides and P. pentosaceus) were used individually to ferment PME at 0.05% each and in combination at 0.05%, L. mesenteroides alone produced a beverage with better taste compared to P. pentosaceus. The combination of L. mesenteroides (0.05%) and P. pentosaceus (0.025%) produced an acceptable beverage. [50] reported that Pediococcus spp. are responsible for the production of diacetyl, which results in a 'buttery' aroma; hence the reduction of P. pentosaceus to 0.025% produced an acceptable beverage. The pearl millet extract [1:10 (flour:water)] mixed with pectin (0.6%), sunflower lecithin (0.1%) and sodium citrate (0.1%) and fermented with L. mesenteroides (0.05%) and P. pentosaceus (0.025%) for 18 h produced a stable non-alcoholic pearl millet beverage. The next article will report on the physicochemical and nutritional quality of the beverage. These plant-originated Lactobacillus and Pediococcus strains have been reported to display in vitro probiotic effects including acid and bile tolerance, high levels of antioxidant activity, and strong adhesion to HT-29 cells [51].
Conclusions
The natural fermentation of pearl millet slurry was dominated by lactic acid bacteria (LAB) and contaminants, and their survival was in succession due to the increase in lactic acid. L. pseudomesenteroides, L. mesenteroides ssp. dextranicum, E. gallinarum and P. pentosaceus were the main fermenting LAB. An optimal non-alcoholic pearl millet beverage could be produced by fermenting the slurry for 18 h at 37 °C, with an expected pH of 4.06. Lactic acid bacteria (LAB) are associated with total titratable acidity (TTA), which could be used as an indicator for the survival of LAB. The pearl millet extract (1:10 (flour:water)) mixed with pectin (0.6%), sunflower lecithin (0.1%) and sodium citrate (0.1%) and fermented with L. mesenteroides (0.05%) and P. pentosaceus (0.025%) for 18 h produced a stable non-alcoholic pearl millet beverage. The study has shown that a beverage similar to a traditionally prepared beverage can be produced under controlled conditions with controlled quality. The identified LAB could be developed as starter cultures for industrial production. The beverage could be industrialised and made available to urban and semi-urban dwellers. More work could be done to confirm these strains at a molecular level and to improve the taste to the desired profile.

Institutional Review Board Statement: Not applicable as the study did not involve human or animal subjects.
Informed Consent Statement: Not applicable.
Data Availability Statement:
No new data were created or analysed in this study. Data sharing does not apply to this article.
|
v3-fos-license
|
2017-09-12T18:48:55.169Z
|
2015-12-23T00:00:00.000
|
30445893
|
{
"extfieldsofstudy": [
"Engineering"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://iopscience.iop.org/article/10.1088/1742-6596/664/8/082017/pdf",
"pdf_hash": "2b9ee1996d84db608bd836e3d8d5889838cee7b5",
"pdf_src": "IOP",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44340",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "932a8af7226a79fa9ede5001ae58c5e8792204d5",
"year": 2015
}
|
pes2o/s2orc
|
The ATLAS Trigger Core Configuration and Execution System in Light of the ATLAS Upgrade for LHC Run 2
During the 2013/14 shutdown of the Large Hadron Collider (LHC) the ATLAS first level trigger (L1) and the data acquisition system (DAQ) were substantially upgraded to cope with the increase in luminosity and collision multiplicity, expected to be delivered by the LHC in 2015. Upgrades were performed at both the L1 stage and the single combined subsequent high level trigger (HLT) stage that has been introduced to replace the two-tiered HLT stage used from 2009 to 2012 (Run 1). Because of these changes, the HLT execution framework and the trigger configuration system had to be upgraded. Also, tools and data content were adapted to the new ATLAS analysis model.
Introduction
The ATLAS experiment is one of four major experiments at the Large Hadron Collider (LHC) [2] at CERN. The ATLAS detector [3] relies on high-precision tracking and calorimetry subdetectors with many million read-out channels to precisely capture collision events. Due to constraints both in the bandwidth of the read-out system as well as the permanent (offline) storage capacity, it is not feasible to record all collisions, most of which are from well-understood physics processes. Hence, ATLAS employs a multi-staged trigger system to select events of particular interest.
During the first run of the LHC from 2009-2012 (Run 1), the LHC provided proton-proton collisions at a rate of 20 MHz and the ATLAS trigger was configured as a three-stage system. The first stage, the Level-1 trigger (L1), was implemented using custom electronics and read out low-granularity calorimeter and muon spectrometer data. It reached a decision within 2.5 µs to reduce the rate to 75 kHz. In addition, L1 also identified the locations in the detector (Regions-of-Interest, RoIs) where the interesting activity occurred that led to accepting the event. The RoIs guided the second stage, the Level-2 trigger (L2), which read out full-granularity data only from the RoIs to perform a partial reconstruction and further reduce the event rate to about 4 kHz within 40 ms. In case of acceptance by L2, the entire event data was read out by the Event Filter (EF), and a final decision on the basis of a fully assembled event was made within 4 s. The L2 and EF stages, which ran on a commodity PC farm, were also collectively referred to as the High Level Trigger (HLT) and used similar software as the offline reconstruction.
After a two-year-long shutdown (LS1) in 2013-2014, the LHC will resume operation in summer 2015 at an increased center-of-mass energy of 13 TeV and increased instantaneous luminosity of up to 1.6 × 10³⁴ cm⁻² s⁻¹. The resulting higher trigger rates and event data sizes, the latter stemming from a larger number of simultaneous proton-proton collisions referred to as pileup, pose a challenge to the trigger and data acquisition system (TDAQ) of the experiment, which operated in some areas well beyond its design values already in Run 1. The design of the trigger for Run 2 was simplified to a two-stage system by merging L2 and EF into a unified HLT stage. Basic figures of merit for this system are an L1 output rate of 100 kHz and an HLT event writing rate of 1 kHz. During the shutdown a number of upgrades to the TDAQ system have been undertaken to meet these new requirements, a selection of which is presented in this document. A schematic view of the Run 2 trigger system is shown in Figure 1.
Level 1 Trigger
For Run 2 the Level 1 trigger design has been extended by a topological trigger module (L1Topo) [4]. While the L1 strategy in Run 1 could only use the multiplicity of candidate trigger objects identified by the Level 1 calorimeter and muon trigger hardware, this module is capable of selecting events based on topological relationships between these candidate objects. Interesting selection criteria include the invariant mass of multiple trigger objects, the scalar transverse momentum sum and angles between L1 trigger objects. The module receives RoI data at a rate of 1 Tb/s which is then processed in less than 100 ns using algorithmic firmware, which is loaded onto on-board FPGAs. Data indicating which algorithms have passed are then passed on to the Central Trigger Processor (CTP).
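As an illustration of the kind of quantity evaluated by the topological algorithms, the invariant mass of two (approximately massless) trigger objects can be formed from their ET, η and φ alone; the sketch below is only meant to show the arithmetic, not the firmware implementation.

```python
import math

def invariant_mass(et1, eta1, phi1, et2, eta2, phi2):
    """m^2 = 2 * ET1 * ET2 * (cosh(d_eta) - cos(d_phi)) for massless objects."""
    d_eta = eta1 - eta2
    d_phi = math.remainder(phi1 - phi2, 2 * math.pi)  # wrap into [-pi, pi] for clarity
    return math.sqrt(2 * et1 * et2 * (math.cosh(d_eta) - math.cos(d_phi)))

# Example: two L1 candidates with ET in GeV
print(f"m ~ {invariant_mass(40.0, 0.5, 0.1, 35.0, -0.8, 2.9):.1f} GeV")
```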
The increase in pileup in Run 2 will result in higher occupancies in the calorimeter system. Consequently, the L1 calorimeter trigger (L1Calo) [5] has been upgraded with a new preprocessing module. The new module will be capable of correcting for pileup using a dynamic pedestal correction. Additionally, the output data format has been extended from simple hit counts to more descriptive Trigger Objects (TOBs) which provide candidate η, φ, and ET information to the topological trigger. Similarly, the Level 1 muon trigger (L1Muon) received a firmware upgrade to send coarse η, φ and pT information to the L1Topo modules.

The central trigger processor (CTP) [6] receives inputs from all other L1 components and ultimately decides whether an event passes the first trigger stage. The internal bus has been overclocked to allow twice as many thresholds from the L1Calo system to be included in the selection. The L1 decision is made using a number of predefined trigger items, logical combinations of the input signals that describe criteria for accepting events such as thresholds, multiplicities, and flags set by the topological trigger. For Run 2 the number of trigger items has been increased from 256 to 512 to allow a more refined selection. The CTP has also been upgraded to receive inputs from the new L1Topo module. Also, it can now be run with up to three partitions, concurrently running instances of the CTP software. Only one of the partitions is interfaced to the HLT and DAQ systems, while the other two are intended for commissioning and calibration. To support this partitioning, a new control software architecture was developed.
High Level Trigger
The high level trigger and data acquisition system have been thoroughly upgraded for Run 2. Most notably, the L2 and EF stages were merged into a single HLT stage, resulting in greater flexibility in how the event read-out and triggering is organized. During Run 1, resources in the HLT computer farm were dedicated specifically to either the L2 or the EF stage. In Run 2, all computing resources will act as unified HLT nodes that will execute both the RoI-based limited read-out decision making as well as the full event assembly and decision. In such a scheme the event-building stage can be scheduled dynamically at any point in the data processing, as shown in Figure 3(a). During Run 1 a stringent limiting factor was the rate of data requests from processing nodes to the read-out system PCs (ROS). First, the Level-2 process would request full-granularity data from the RoIs and, after L2 acceptance, the event-building process would request the entire event data (including the RoI data already requested by L2). With merged HLT processing nodes, data needs to be requested only once from the ROS, hence saving network bandwidth and decreasing the ROS data request rate. Finally, by having only a single kind of node, the load-balancing capabilities of the computer farm are significantly improved.
The network providing the nodes with the read-out data has been restructured to reflect the merged HLT design. In Run 1 each ROS had links into two separate networks. The data collection network provided the L2 processing units with the read-out data. Upon accept the entire event was read out by dedicated event building nodes and the fully built events were distributed via a second back-end network to the EF nodes. For Run 2, the design was simplified into a single network with a 6 Tb/s bandwidth. A new generation of ROS PC was equipped with two 10 Gb/s links into the data collection network via which the HLT computing nodes can request event data. Also, the new ROS machines now hold new read-out buffer input cards (ROBIN) which will be able to sustain higher access rates.
Trigger Configuration
The trigger configuration is a system to describe both the hardware and software trigger components. The trigger menu is a high-level description of the physics signatures that are to be recorded and is compiled in close collaboration with the physics working groups. The menu relates physics objects and multiplicities to specific algorithms in the trigger software. For the L1 stage, the menu lists the 512 trigger items that have been built from the L1 trigger objects as well as the L1Topo algorithms. For the HLT the menu is organized into approximately 2000 trigger chains (twice the Run 1 value) that each describe a sequence of algorithms that need to be executed in order to test for a certain physics signature. HLT algorithms are classified as being either of feature extracting (FEX) or hypothesis testing (HYPO) type. While the former attempt to reconstruct physical objects such as tracks or calorimeter energy clusters, the latter evaluate the quality of the reconstructed object to mark it as satisfying the chain's selection criterion. In an electron chain, for example, FEX algorithms might reconstruct track and calorimeter clusters, while a subsequent HYPO algorithm might check if the cluster and track are consistent with the electron hypothesis based on e.g. the cluster shape or transition radiation detected in the tracker. For certain chains, the trigger rate can be too high to write every passing event to disk; therefore, a prescale factor p may be applied so that a passing event is recorded only with a probability of 1/p, reducing the output rate by a factor of p. As prescaling is applied before the algorithms are executed, the execution time is also lowered. The trigger configurations for L1 and HLT are stored in a relational Oracle database (TrigDb) which can be viewed and modified via the TriggerTool, a graphical user interface. For Run 2 the database schema, shown in Figure 2, has been upgraded, amongst other things, to incorporate the L1Topo configuration. Also, the architecture and interface to the TriggerTool have been significantly improved. Based on the database, any trigger configuration can be uniquely identified by three keys. The supermaster key (SMK) uniquely identifies the menu, while the L1 and HLT prescale keys (L1PSK/HLTPSK) identify the prescale sets for their respective trigger stage. While each ATLAS run (typically corresponding to one LHC proton beam fill) uses a single SMK, multiple prescale keys are applied to optimize bandwidth usage as the beam intensity drops over the course of the run. In Run 2 the prescale application process will be simplified and automated in order to maximize the data taking efficiency.
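A prescale of p can be realised either as a deterministic counter (accept every p-th candidate) or as a random acceptance with probability 1/p. The minimal sketch below illustrates the latter and the resulting rate reduction; it is only a schematic, not the actual ATLAS implementation.

```python
import random

def prescaled_accept(p, rng=random.random):
    """Accept a chain candidate with probability 1/p (p >= 1)."""
    return rng() < 1.0 / p

# Rough rate check: a chain prescaled by 100 keeps about 1% of its candidates.
trials = 100_000
accepted = sum(prescaled_accept(100) for _ in range(trials))
print(f"accepted fraction ~ {accepted / trials:.4f} (expected 0.01)")
```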
HLT Steering
When processed by the HLT, the execution of the algorithms in the correct order is driven by the HLT Steering software framework. After applying the prescales for the event at hand, the remaining chains are evaluated in a data-driven manner. Starting from the L1 items as seeds, for each chain the next available step is executed. A step consists of a list of FEX algorithms and a final HYPO algorithm and is marked as passing based on the latter's response. As soon as a step of a chain fails, that chain is marked as not passing and is not further considered. Chains for which all steps have succeeded are marked as passed and the event is recorded in the case of one or more passing chains. Several chains may share individual steps and the steering system caches results in order to avoid multiple passes over the same data. The first chain passing also triggers the retrieval of all remaining data from the ROSes. Upon acceptance of the event by the HLT, the trigger information, including the decision as well as all objects reconstructed by the trigger algorithms, is serialized into binary form and included in the event data which is written to offline storage. For Run 2 the HLT algorithms have been rewritten to make use of the merged HLT design. Generally, L2-type algorithms, requesting full-granularity data only for parts of the event, along with more comprehensive EF-type algorithms still exist. The corresponding steps, however, are now considered part of the same chain.
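The chain evaluation logic described above can be pictured with a small sketch: chains are lists of steps, a step fails as soon as its hypothesis fails, failed chains are dropped, and feature-extraction (FEX) results are cached so that steps shared between chains are computed only once. The class and function names are illustrative and do not correspond to the ATLAS steering API.

```python
from collections import namedtuple

# features: list of (name, fex_fn) pairs; hypo: fn(list_of_features) -> bool
Step = namedtuple("Step", ["features", "hypo"])

def run_chains(chains, event):
    """Evaluate {chain_name: [Step, ...]} on one event with FEX caching and early rejection."""
    cache = {}
    passed = []
    for name, steps in chains.items():
        ok = True
        for step in steps:
            feats = []
            for fex_name, fex in step.features:
                if fex_name not in cache:      # shared FEX results are computed only once
                    cache[fex_name] = fex(event)
                feats.append(cache[fex_name])
            if not step.hypo(feats):           # step fails -> chain is dropped
                ok = False
                break
        if ok:
            passed.append(name)
    return passed                              # the event is recorded if this list is non-empty

# Toy example: one calorimeter FEX shared by two "electron" chains with different thresholds.
calo = ("calo_cluster", lambda ev: ev["cluster_et"])
chains = {
    "e20": [Step([calo], lambda f: f[0] > 20.0)],
    "e60": [Step([calo], lambda f: f[0] > 60.0)],
}
print(run_chains(chains, {"cluster_et": 42.0}))   # -> ['e20']
```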
Multiprocessing at the HLT
With clock speeds of processors stagnating in recent years, computational power has been gained mostly from increasing the number of cores on a single die. Many-core processors such as new generations of Intel's Xeon family of processors, which the HLT uses, already have up to 24 real and 48 virtual cores. Volatile memory capacities, however, have not been increasing at a similar rate. This poses a challenge to the execution model in the HLT, which has relied on running multiple independent processes, each with their own memory address space, to exploit the fact that every event can be processed independently; largely, the HLT is an embarrassingly parallel problem. In this scheme the total amount of memory is the limiting factor in determining the number of concurrent processes that can be run on a machine, each of which would require around 2 GB. Fortunately much of the memory needed by the HLT processes is not unique to the event. Examples of such common data are the magnetic field maps, the detector geometry, and run conditions data. The amount of data unique to the event is a moderate 300 MB. To make optimal use of this, ATLAS has reworked its multiprocessing setup for Run 2 to make use of the copy-on-write feature of the Linux kernel which allows forked processes to use common memory as long as they do not modify it (when they do, the modified memory pages are copied and only then use additional memory). In the implementation at the HLT, shown in Figure 3(b), a single process, HLT₀, will be started and undergo various initialization procedures to set up the shared data. After initialization, the process will fork into many copies of itself (HLT₁, ..., HLTₙ). All these processes may now share the common memory and will then independently process events. Using this feature, the memory available in the node is utilized in a much more efficient manner, allowing an increased number of HLT processes to run on a single node.
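The fork-after-initialisation pattern can be illustrated with the POSIX-only sketch below: the mother process builds the large read-only structures once, the forked children inherit them via copy-on-write, and only pages a child modifies consume extra memory. This is a schematic of the mechanism, not the ATLAS implementation.

```python
import os

# Expensive, read-only initialisation done once in the mother process
# (stand-in for field maps, geometry, conditions data).
shared_conditions = list(range(1_000_000))

children = []
for worker_id in range(4):
    pid = os.fork()
    if pid == 0:
        # Child: reads the shared data without copying the pages (copy-on-write).
        checksum = sum(shared_conditions[:10])
        print(f"worker {worker_id} (pid {os.getpid()}) sees checksum {checksum}", flush=True)
        os._exit(0)   # leave the child without running the parent's cleanup code
    children.append(pid)

for pid in children:
    os.waitpid(pid, 0)
```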
Data Scouting
The reconstruction algorithms running in the HLT are very similar to the offline reconstruction algorithms used to process the events once they are written to disk. In fact, in many cases a large amount of code is shared between online and offline algorithms. Consequently, the efficiency and resolution with which physics objects such as jets, electrons and muons are reconstructed online is almost as good as when done offline. This opens up an intriguing opportunity for triggers with prohibitively large rates and hence large prescale factors. Normally, an HLT-accepted event is classified into one or more data streams and the entire event data is written to disk in its raw form awaiting offline reconstruction. For Run 2, a new data stream type, the data scouting stream, has been added. Instead of recording the entire event data, in this stream only collections of the physics objects reconstructed by the trigger are written to disk. Writing only a fraction of the event data enables the experiment to run high-rate triggers in an unprescaled configuration. The data-scouting streams will provide high-statistics samples that can be used both for calibration purposes and for certain searches for new physics.
Offline Analysis Tools
An essential component of the core trigger software is to provide tools to access the trigger information associated with reconstructed offline data. With the increased instantaneous luminosity in Run 2 it is expected that ATLAS will record data of the order of 100 fb⁻¹ per year. To prepare for these unprecedented data sizes, the event data model (EDM) of the offline software has been adapted. The main objective of the new EDM was to make the output data format of the reconstruction, the xAOD, natively readable in the data analysis framework ROOT [7]. As this was not possible during Run 1, ROOT-readable derivative datasets were produced using significant amounts of computing and storage resources, which would not be feasible in Run 2. The trigger is the only online software component that has been affected by the EDM change, since the HLT reconstruction algorithms also use the offline EDM. Therefore, the online software was adapted in order to enable serialization of xAOD data.
For offline analysis, the main access to trigger information is provided by the TrigDecisionTool. This tool has been adapted to enable use both in the reconstruction software framework (Athena) and in a ROOT-only analysis environment.
Trigger Cost Monitoring
To ensure an efficient operation of the trigger, the computational cost of the trigger system must be well understood. Characteristics such as execution time or the number of data requests are crucial pieces of information when preparing new trigger configurations, during data-taking as well as for later analysis. The Trigger Cost Monitoring framework was developed to record this information for the entire trigger system and calculate the cost of each trigger alone and in combination with others, taking into account data access and algorithm execution caching. A web-based interface, shown in Figure 4, is available to quickly access the results of the cost calculation. An important use case for the cost monitoring is the prediction for the rates of individual trigger chains which in turn can inform the composition of the trigger menu. For Run 2 the cost monitoring has also been updated to reflect the merged HLT design.
Conclusion
The trigger of the ATLAS experiment is a crucial component to select the most interesting collision events provided by the LHC in light of limited bandwidth and storage capacities. It has performed with remarkable success during the first round of data-taking from 2009 to 2012. To ensure a similar performance under the even more challenging conditions of Run 2, the system has been thoroughly upgraded. The L1 trigger is now able to select a larger variety of physics signatures on the basis of a wider set of characteristics, notably topological information. The HLT architecture has been simplified to a single streamlined trigger stage. This change has also been reflected in a new and improved data acquisition system ready to sustain higher access rates and larger bandwidth demands. The online trigger configuration and execution software was rewritten to adapt to and make use of the merged design. Finally, the offline software now supports the new ATLAS event data model and data analysis outside of the reconstruction software framework.
|
v3-fos-license
|
2024-04-03T15:16:47.477Z
|
2024-04-01T00:00:00.000
|
268869390
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://zookeys.pensoft.net/article/115260/download/pdf/",
"pdf_hash": "8f769f38a706cd73522ca9072ed0e6f20e314213",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44341",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"sha1": "2a9c4bf2b5d038e6b8a2b17064745f91383b02d8",
"year": 2024
}
|
pes2o/s2orc
|
Two new records and description of a new Perinereis (Annelida, Nereididae) species for the Saudi Arabian Red Sea region
Abstract Annelid biodiversity studies in the Red Sea are limited and integrative taxonomy is needed to accurately improve reference libraries in the region. As part of the bioblitz effort in Saudi Arabia to assess the invertebrate biodiversity in the northern Red Sea and Gulf of Aqaba, Perinereis specimens from intertidal marine and lagoon-like rocky environments were selected for an independent assessment, given the known taxonomic ambiguities in this genus. This study used an integrative approach, combining molecular with morphological and geographic data. Our results demonstrate that specimens found mainly in the Gulf of Aqaba are not only morphologically different from five other similar Perinereis Group I species reported in the region, but phylogenetic analysis using available COI sequences from GenBank also revealed different molecular operational taxonomic units, suggesting an undescribed species, P. kaustiana sp. nov. The new species is genetically close to, and shares a similar paragnath pattern with, the Indo-Pacific-distributed P. helleri, in particular in Area III and Areas VII–VIII. Therefore, we suggest it may belong to the same species complex. However, P. kaustiana sp. nov. differs from the latter mainly in the shorter length of the postero-dorsal tentacular cirri, median parapodia with much longer dorsal tentacular cirri, and posteriormost parapodia with much wider and greatly expanded dorsal ligules. Additionally, two new records belonging to P. damietta and P. suezensis are reported for the Saudi Neom area; these species were previously described only for the Egyptian coast (Suez Canal) and are distributed sympatrically with the new species, but apparently not with each other.
Introduction
Based on genetic databases (i.e., BOLD and GenBank), and despite the recent advances in integrative studies focused on polychaetes (i.e., Nygren et al. 2010; Villalobos-Guerrero et al. 2021; Teixeira et al. 2023), there are still many taxonomic ambiguities and unidentified annelid species in some groups of Nereididae (i.e., Martin et al. 2021; Elgetany et al. 2022). Perinereis Kinberg, 1865 is one of the most diverse genera in this family, currently including between 97 (Wilson et al. 2023) and 106 (WoRMS Editorial Board 2024) valid species distributed worldwide. From these, approximately 16 species are reported for the Arabian Peninsula (Ocean Biodiversity Information System, OBIS; Mohammad 1971; Wehe and Fiege 2002). Due to apparent similar paragnath patterns, overall body features and lack of detailed systematic studies, Perinereis species are often problematic to identify to the species level (Bakken and Wilson 2005; Yousefi et al. 2011). This has led to informal denomination of species complexes and recognition of geographic morphs and varieties such as the P. cultrifera (Grube, 1840) species group (type locality: Naples, Italy; Scaps et al. 2000) and the P. nuntia (Lamarck, 1818) species group (type locality: Gulf of Suez, Egypt) (Wilson and Glasby 1993; Glasby and Hsieh 2006; Sampértegui et al. 2013), both reported for the Red Sea (OBIS). Thanks to molecular data, it is now easier to screen for potential new species with apparently similar morphotypes. Recent evidence comparing populations from different regions has shown that when specimens differ genetically, further analysis of the diagnostic morphological features often leads to the recognition of distinct features that were previously overlooked (i.e., Sampértegui et al. 2013; Teixeira et al. 2022b). A recent review on meiofauna (Cerca et al. 2018) and recent polychaete studies (i.e., Abe et al. 2019; Tilic et al. 2019; Martin et al. 2020), including from Nereididae (Glasby et al. 2013; Sampieri et al. 2021; Teixeira et al. 2022a, b), also demonstrate that cryptic and pseudo-cryptic species often have geographically restricted distributions, with the range of cryptic species being smaller than the parent morphospecies.
The Egyptian side of the Red Sea has been the focus of an increasing number of polychaete studies, either reviewing existing species groups (i.e., Villalobos-Guerrero 2019) or describing new species that were previously considered cryptic (i.e., Elgetany et al. 2022). The northern Saudi Arabian Red Sea and Gulf of Aqaba, despite being expected to host a large biodiversity (Roberts et al. 2002; DiBattista et al. 2016), has seen comparatively few biodiversity studies involving molecular techniques, particularly for polychaetes. To address this gap, and document the invertebrate biodiversity of the region, a bioblitz was conducted in the Neom region (northern Saudi Arabian Red Sea and Gulf of Aqaba) to document the local biodiversity, with emphasis on mobile invertebrates and cryptobenthic fish. As part of this effort, this study used a molecular approach, combined with morphological and geographic data, to investigate Perinereis samples collected from marine intertidal and lagoon-like rocky environments of the northern Red Sea. In particular, we aimed to assess species distributions and to investigate whether the specimens collected belonged to the existing P. cultrifera group, the P. nuntia group, or to other similar Perinereis species reported for the region, or whether they represented undescribed species.
Sampling effort
The NEOM bioblitz sampling campaign surveyed 38 shallow and coral reef sites up to 25 meters depth and some intertidal habitats, along the northern region of the Saudi Arabian Red Sea and Gulf of Aqaba (Neom area).This initiative aims to initiate a biodiversity inventory of marine benthic invertebrates (mainly mobile) and cryptobenthic fish in the Red Sea using DNA barcoding and metabarcoding.Only intertidal marine and lagoon-like rocky environments were considered for the purpose of this study, in order to perform an independent assessment within Perinereis, given the known taxonomic ambiguities in several species within the genus from this particular habitat.
Table 1 details the number of original specimens collected at each sampling location, which corresponds to the same number of COI sequences analysed. The number of COI sequences from Perinereis species publicly available in GenBank, the respective sampling areas, and references are also detailed in Table 1 and were used for comparison purposes. The collected Red Sea Perinereis specimens were deposited at the NTNU University Museum, Trondheim, Norway (NTNU-VM, Bakken et al. 2024; vouchers: NTNU-VM-86010 to NTNU-VM-86044). Perinereis oliveirae specimens are deposited at the Biological Research Collection of the Department of Biology of the University of Aveiro, Portugal (CoBI at DBUA; curated by Ascensão Ravara: aravara@ua.pt; vouchers: DBUA0002494.02.v01 and DBUA0002494.02.v02). Specimens that were exhausted in the DNA analysis were assigned only the Process ID from the BOLD systems (http://v4.boldsystems.org/), corresponding to MTPNO009-23 (Gulf of Aqaba, Magna). Some specimens were preserved in 96% ethanol and others in formalin, with a respective tissue sample preserved in ethanol for molecular work (detailed in Suppl. material 1).
DNA extraction, PCR amplification, and alignments
DNA sequences of the 5' end of the mitochondrial cytochrome c oxidase subunit I (mtCOI-5P) were obtained for all the collected Perinereis specimens and used for the main analysis. A representative number of specimens per location for the new species was also sequenced for the mitochondrial 16S rRNA and the D2 region of the nuclear 28S rRNA, for future reference purposes.
DNA extraction was performed using the QuickExtract DNA Extraction Solution (Lucigen), with 50 µl of the reagent per Eppendorf tube. The tubes were then placed in a heat block at 65 °C for 30 min, followed by an additional 2 min at 98 °C. Depending on the specimen size, only a small amount of tissue (i.e., a single parapodium) or the posterior end of the worm was used.
PCR reactions were performed using a premade PCR mix from VWR, containing 10 µl per tube of Red Taq DNA polymerase Master Kit (2 mM, 1.1×), 0.5 µl of each primer (10 mM), and 1 µl of DNA template, in a total reaction volume of 12 µl. Table 2 displays the PCR conditions, primers, and sequence lengths for the different markers. Amplification success was screened on a 1% agarose gel, using 1 μl of PCR product. Successful PCR products were then purified using the Exonuclease I and Shrimp Alkaline Phosphatase (ExoSAP-IT, Applied Biosystems) protocol, according to the manufacturer's instructions. Cleaned-up amplicons were sent to the KAUST Sanger sequencing service for forward sequencing.
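As an illustration of the reaction arithmetic above, the sketch below scales the per-reaction volumes into a master mix for a batch of samples. It is not part of the published protocol; the 10% pipetting excess and the helper name are assumptions added for this example.

```python
# Minimal sketch (not part of the original protocol): scales the per-reaction
# PCR volumes quoted in the text to a master mix for a batch of samples.
# The 10% pipetting excess is an assumption, not a value from the study.

PER_REACTION_UL = {
    "Red Taq Master Kit": 10.0,   # 2 mM, 1.1x premix
    "forward primer":     0.5,    # 10 mM stock
    "reverse primer":     0.5,    # 10 mM stock
}
TEMPLATE_UL = 1.0                 # DNA template, added individually per tube
TOTAL_UL = 12.0                   # final reaction volume
assert sum(PER_REACTION_UL.values()) + TEMPLATE_UL == TOTAL_UL


def master_mix(n_samples: int, excess: float = 0.10) -> dict:
    """Return master-mix volumes (µl) for n_samples, with a pipetting excess."""
    factor = n_samples * (1.0 + excess)
    mix = {reagent: round(vol * factor, 1) for reagent, vol in PER_REACTION_UL.items()}
    mix["total master mix"] = round(sum(mix.values()), 1)
    mix["template per tube"] = TEMPLATE_UL
    return mix


if __name__ == "__main__":
    for reagent, vol in master_mix(24).items():
        print(f"{reagent}: {vol} µl")
```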
Phylogenetic analysis and MOTU clustering
For comparison purposes, GenBank COI sequence data from P. marionii (Audouin & Milne Edwards, 1833), P. vallata (Grube, 1857), P. helleri (Grube, 1878), and the outgroup Alitta virens (M. Sars, 1835) completed the final dataset (Table 1, Suppl. material 1). The phylogenetic analysis was performed through maximum likelihood (ML) for the entire dataset. Best-fit models were selected using the Akaike Information Criterion in MEGA. The phylogenetic analysis was executed with 500 bootstrap runs using the General Time-Reversible model with gamma-distributed rates and a proportion of invariable sites (GTR+G+I). The final version of the tree was edited with the software Inkscape v. 1.2 (https://www.inkscape.org).
Three delimitation methods were applied to obtain Molecular Operational Taxonomic Units (MOTUs): the Barcode Index Number (BIN), which makes use of the Refined Single Linkage (RESL) algorithm available only in BOLD (Ratnasingham and Hebert 2013); the Assemble Species by Automatic Partitioning (ASAP; Puillandre et al. 2021), implemented in a web interface (https://bioinfo.mnhn.fr/abi/public/asap/asapweb.html) with default settings using the Kimura 2-parameter (K2P) distance matrix; and, lastly, the Poisson Tree Processes (bPTP; Zhang et al. 2013), performed in a dedicated web interface (https://species.h-its.org/) using the ML phylogeny obtained above, for 500,000 MCMC generations with 25% of the samples discarded as burn-in.
The mean genetic distances for mtCOI (K2P; Kimura 1980) within and between MOTUs were calculated in MEGA.
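For readers who want to reproduce such distance summaries outside MEGA, the following minimal Python sketch implements the Kimura 2-parameter distance and the within/between-MOTU means. It is a simplified illustration (aligned sequences assumed, gaps and ambiguity codes skipped, no saturation or variance handling), not the MEGA implementation used in this study.

```python
import math
from itertools import combinations

PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}


def k2p_distance(seq1: str, seq2: str) -> float:
    """Kimura 2-parameter distance between two aligned sequences.
    Sites containing gaps or ambiguity codes are ignored."""
    transitions = transversions = valid = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a not in "ACGT" or b not in "ACGT":
            continue
        valid += 1
        if a == b:
            continue
        if (a in PURINES and b in PURINES) or (a in PYRIMIDINES and b in PYRIMIDINES):
            transitions += 1
        else:
            transversions += 1
    if valid == 0:
        raise ValueError("no comparable sites")
    p, q = transitions / valid, transversions / valid
    # d = -1/2 ln[(1 - 2P - Q) * sqrt(1 - 2Q)]; saturation (log of <= 0) not handled here
    return -0.5 * math.log((1 - 2 * p - q) * math.sqrt(1 - 2 * q))


def mean_within(group: dict) -> float:
    """Mean K2P distance within one MOTU, given as {id: aligned sequence}."""
    d = [k2p_distance(s1, s2) for s1, s2 in combinations(group.values(), 2)]
    return sum(d) / len(d) if d else 0.0


def mean_between(group_a: dict, group_b: dict) -> float:
    """Mean K2P distance between two MOTUs."""
    d = [k2p_distance(s1, s2) for s1 in group_a.values() for s2 in group_b.values()]
    return sum(d) / len(d)
```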
Morphological analysis
Specimens were studied using a Leica stereo microscope (model M205 C).Stereo microscope images were taken with a Flexacam C3 camera.Compound microscope images of parapodia and chaetae were obtained with a Leica DM2000 LED imaging light microscope, equipped with a Flexacam C3 camera, after mounting the parapodia on a slide preparation using Aqueous Permanent Mounting Medium (Supermount).Parapodial and chaetal terminology in the taxonomic section follows Bakken and Wilson (2005) with the modifications made by Villalobos-Guerrero and Bakken (2018).The final figure plates were edited with the software Inkscape v. 1.2.
For measuring the length of the dorsal ligules, not only the lengths of the tips were considered, but the proximal part of the ligules was also included (e.g., Conde-Vela and Salazar-Vallejo 2015; Villalobos-Guerrero and Carrera-Parra 2015; Teixeira et al. 2022b). Following Hutchings et al. (1991), a specimen is described as having a greatly expanded dorsal notopodial ligule posteriorly only if the dorsal ligule is more than two times as long as the ventral ligule. For the analysis of variation, only complete specimens were considered; total length (TL), length up to chaetiger 15 (L15), and width at chaetiger 15 (W15) were measured with a millimetre rule under the stereomicroscope. The number of chaetigers (NC) was also taken into consideration. TL was measured from the anterior margin of the prostomium to the end of the pygidium, and W15 was measured excluding parapodia. Measurements of the length of the antennae (AL), palps (PL), dorsal cirri (DCL), dorsal ligule (DLL), ventral cirri (VCL), ventral ligule (VLL), median ligule, the length and width of the head (HL and HW, respectively), and the length of all four tentacular cirri, including the longest one (postero-dorsal cirri, DPCL), were also retrieved. The heterogomph falciger blade size comparison (short, long, and extra-long) was based on Wilson et al. (2023). Spiniger serration was assessed based on the comparison between P. cultrifera (lightly serrated) and P. rullieri (coarsely serrated) from Pilato (1974).
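The ratio-based criteria described above can also be tabulated programmatically. The short sketch below applies the "greatly expanded dorsal ligule" rule (dorsal ligule more than 2× the ventral ligule) to hypothetical measurements; the field names and values are illustrative only and do not correspond to any dataset from this study.

```python
# Illustrative only: tabulates the ligule ratios described above and applies the
# "greatly expanded" criterion (dorsal ligule > 2x the ventral ligule, following
# Hutchings et al. 1991 as cited in the text). All values are hypothetical.

from dataclasses import dataclass


@dataclass
class PosteriorParapodium:
    dorsal_ligule_len: float   # DLL, mm
    ventral_ligule_len: float  # VLL, mm
    median_ligule_len: float   # mm

    @property
    def dorsal_to_ventral(self) -> float:
        return self.dorsal_ligule_len / self.ventral_ligule_len

    @property
    def greatly_expanded_dorsal_ligule(self) -> bool:
        # criterion from the text: dorsal ligule more than 2x as long as the ventral ligule
        return self.dorsal_to_ventral > 2.0


# hypothetical example measurements
p = PosteriorParapodium(dorsal_ligule_len=0.9, ventral_ligule_len=0.28, median_ligule_len=0.35)
print(round(p.dorsal_to_ventral, 2), p.greatly_expanded_dorsal_ligule)
```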
Paragnath counts were performed to compare patterns with other morphologically similar Group I Perinereis species (Hutchings et al. 1991).Pharynx paragnath terminology follows Bakken et al. (2009) and paragnath description of areas VII and VIII follow Conde-Vela (2018).
Terminology for molecular vouchers follows Pleijel et al. (2008) and Astrin et al. (2013).Overall description follows a similar structure to those of Villalobos-Guerrero (2019).Dates of sample collection follow the DD/MM/YY format.
Phylogenetic analyses
The phylogenetic reconstruction recovered ten MOTUs of Perinereis (Fig. 1A), the delimitation of which is cohesively supported by the three species-delimitation tests applied, except for MOTU 1 and GB1, which are clustered together by the ASAP method. Sequences from P. fayedensis and P. anderssoni are not present in BOLD and have no associated BIN.
Taxonomic account
Distribution and habitat. Confined to the northeastern Red Sea (Duba, Shushah Island) and the Gulf of Aqaba (Magna) so far. Type locality: Saudi Arabia, Gulf of Aqaba: Magna region (marine site), 28°26'57.3"N, 34°45'35.4"E. Specimens were collected both in lagoon-like environments and at fully marine sites in rocky areas, usually among coarse-grained sand under rocks. Apparently more abundant and easier to find at marine sites in the Gulf of Aqaba. Can be found in sympatry with P. damietta (Fig. 1B, C) and P. suezensis (Fig. 1B, D), the latter two species as described by Elgetany et al. (2022).
Etymology. The species designation pays tribute to the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia, a globally recognized graduate-level research institution. This naming honours KAUST's substantial and enduring contributions to marine science, particularly in advancing our understanding of the Red Sea over the course of more than a decade. Through its dedicated research efforts, KAUST has significantly enriched the scientific community's knowledge of this unique marine environment.

Description. Specimens used: NTNU-VM-86011 (holotype) and NTNU-VM-86015 (paratype), both preserved in 96% ethanol, stored at the NTNU University Museum (Norway, NTNU-VM).
Head (Fig. 2A, B, E, J): Prostomium pyriform, 1.2× wider than long; 2.5× longer than antennae.Palps with a round or conical palpostyle (Fig. 2A); palpophore longer than wide, subequal to the entire length of prostomium.Antennae separated, gap half of antennal diameter (Fig. 2E); tapered, less than half the length of the palpophore.Eyes black, anterior and posterior pairs well separated (Fig. 2J).Anterior pair of eyes oval shaped, as wide as antennal diameter; posterior pair of eyes round or oval shaped, subequal width to anterior pair.Distance between the anterior eyes 1.25× longer than posterior ones.Nuchal organs covered by the tentacular belt.
Pharynx: Pair of dark brown curved jaws with 7-8 denticles; two longitudinal canals emerging from the pulp cavity, both in the mid-section of the jaw (Fig. 2C).Pharynx consisting of maxillary and oral rings with conical shaped paragnaths (Fig. 2A, B).Maxillary ring: Area I = two small paragnaths arranged in a longitudinal line (Fig. 2F).Area II = Cluster of 5-7 small paragnaths (Fig. 2F).Area III = central patch of nine small paragnaths, lateral patches with two small paragnaths each (Fig. 2D).Area IV = 13 small paragnaths arranged in wedge shape without any bars (Fig. 2D).Oral ring: Area V = a triangle of three large paragnaths (Fig. 2E).Area VI (a+b) = two narrow bar-shaped paragnaths, one on each side, displayed as a straight line (Fig. 2E).Areas VII-VIII = 20-24 small paragnaths in total; Area VII, ridge region with two transverse paragnaths, furrow regions with two longitudinal paragnaths each (Fig. 2G); Area VIII, ridge regions with one paragnath each, furrow regions with two longitudinal paragnaths each (Fig. 2G).
Remarks. Some nereidid species groups can have similar morphological features, including paragnath patterns, that may cause misidentifications. The COI clade of the new species revealed no GenBank match based on the BLAST tool. Perinereis kaustiana sp. nov. and a sequence belonging to a specimen from Malaysia identified as P. helleri (type locality: Bohol, Philippines) are not only sister to each other and phylogenetically close (Fig. 1A; 19.9 ± 2.4% K2P COI distance), but they also seem to share the same paragnath sizes, shapes, and patterns (Park and Kim 2017: 255, fig. 4e; sampled in South Korea; no molecular data available), including in Area III, with the presence of lateral patches with two paragnaths each (Fig. 2D), and the same paragnath arrangements in the furrow and ridge regions of Areas VII-VIII (Fig. 2G). This makes them morphologically very similar and possibly members of the same cryptic complex, which could range from the Red Sea to the Indo-Pacific based on the available COI data. However, P. kaustiana sp. nov. seems to differ from P. helleri in some key features: shorter postero-dorsal tentacular cirri, reaching up to chaetiger 9, instead of the reported chaetiger 16 for P. helleri; median parapodia with much longer dorsal cirri (3×) compared to the ventral ones; and posteriormost parapodia with a much wider dorsal ligule (2.5-3.0×) than the median ligule (Fig. 3C, I) and a greatly expanded dorsal ligule (3× longer than the ventral ligule). Based on the parapodia drawings from Hutchings et al. (1991: 255, fig. 9; Syntype ZMB Q3464), the ratio between dorsal and ventral cirri in P. helleri is subequal to slightly longer than the ventral cirri throughout the body, and the posteriormost dorsal ligules have double the width of the median ones and are slightly expanded (up to 2× the length of the ventral ligules; Table 4). Furthermore, P. helleri from Hutchings et al. (1991) does not seem to possess ligules with finger-like ending tips.

Table 4. Comparison of selected characters between P. kaustiana sp. nov. and the most morphologically similar species reported for the Arabian Peninsula and the Mediterranean Sea that lack DNA data; the Indo-Pacific P. helleri is also included. Morphological details of the paragnath patterns for the P. cultrifera and P. rullieri species complexes also include partial data from topotypical specimens belonging to the private collection of the first author, to be published in the future. Sources: Mohammad 1971; Hutchings et al. 1991; Pilato 1974.
Other species with similar paragnath patterns are Perinereis anderssoni (Kinberg 1865: 167-179; Park and Kim 2017: 255, fig. 4d) and Perinereis rullieri (Pilato 1974: 25-36, figs 1-4), which share the same small-sized paragnaths as P. kaustiana sp. nov.; the former two species, however, possess only one paragnath in each lateral patch of Area III, and their paragnaths in Areas VII and VIII are usually arranged in two regular rows, without any discernible pattern in the furrow or ridge regions. Perinereis anderssoni is reported from the Atlantic coast of the American continent (type locality: Rio de Janeiro, Brazil), while P. rullieri is apparently restricted to the Mediterranean Sea (type locality: between Aci Trezza and Augusta, eastern coast of Sicily, Italy). Moreover, the morphologically similar lineages found within the Perinereis cultrifera (Grube 1840: 74, fig. 6; Hutchings et al. 1991: 253-254, fig. 8a-c) species complex, including P. euiini (Park and Kim 2017: 252-260, figs 1, 2, 4a, b, 5, tables 1, 4, described from South Korea), differ from P. kaustiana sp. nov. in their overall larger paragnath sizes, the lack of any lateral patches in Area III, and the presence of shorter heterogomph falcigers (Park and Kim 2017: 254, fig. 2L). Specimens of Perinereis cultrifera from Lobo et al. (2016) were misidentified and are in fact P. oliveirae (Horst 1889: 38-45, plate 3; Fauvel 1923: 354, fig. 138 e-k), the latter characterised by the presence of three paragnaths in the lateral patches of Area III, a feature absent in P. cultrifera. Perinereis oliveirae is described from the northern Iberian Peninsula, having also very long bar-shaped paragnaths in Areas VI and very short tentacular cirri compared to the length of the head (reaching chaetigers 1 and 2). These features were confirmed based on the two P. oliveirae specimens from this study and on samples from the private collection of the first author.
Discussion
Our molecular data provide compelling evidence for the existence of a new, deeply divergent, and completely sorted species within the Perinereis species Group I in the Red Sea. At first glance, P. kaustiana sp. nov. can easily be misidentified as the well-known and allegedly cosmopolitan P. cultrifera, due to the classic two bar-shaped paragnaths in Areas VI and the proximity to the Mediterranean Sea. This might be the reason the latter is usually reported for the Red Sea (Wehe and Fiege 2002; Bonyadi-Naeini et al. 2018; OBIS), but a greater sampling effort in the central and southern Red Sea regions is needed to confirm this. Morphological features, such as the paragnath arrangement, as well as the length of the tentacular cirri and the ratios within the parapodia, also allowed the distinction of P. kaustiana sp. nov. from other similar species (see taxonomic key and Tables 4, 5). Upon careful morphological examination, P. kaustiana sp. nov. is morphologically closer to the Indo-Pacific P. helleri than to the European P. cultrifera, based mainly on paragnath patterns, particularly in Areas III (Fig. 2D) and VII and VIII (Fig. 2G), and on the similar length of the falciger blades. The paragnath features in Areas VII and VIII lend support to the taxonomic importance of highlighting faint ridges and furrows in the ventral oral ring for certain Perinereis species (Conde-Vela 2018), which are usually not accounted for in species descriptions because no apparent pattern is found (i.e., Teixeira et al. 2022a). Perinereis kaustiana sp. nov. and P. helleri are also phylogenetically closely related (Fig. 1A), despite being divergent lineages, with genetic distances in the range used for delimiting polychaete species (i.e., Kvist 2016; Lobo et al. 2016; Nygren et al. 2018). This situation, together with the absence of, or only subtle, previously overlooked morphological differences, resembles cryptic lineages within a species complex (Teixeira et al. 2022b, 2023), and further sampling efforts between the Red Sea and the Indo-Pacific region are needed to assess this.
* No chaetae data available for P. striolata.
The new species is so far unique to the northern Red Sea and apparently easy to find on the rocky beaches of the Gulf of Aqaba. Considering the high rate of endemism in the Red Sea (DiBattista et al. 2016), this species may indeed be endemic to this sea, although further sampling across this region and the Indo-Pacific area might prove it to be more widespread. In the remaining sampling sites further south, along the northern Saudi coast, P. kaustiana sp. nov. is outcompeted by the sympatrically distributed Perinereis nuntia species group, which seems to be the dominant coastal annelid in the region (Fig. 1B). The latter is also a species complex comprising several different species, recently revised by Villalobos-Guerrero (2019). Our specimens initially identified as belonging to the P. nuntia complex revealed at least two different morphotypes, which after further morphological (mainly based on paragnath patterns, Fig. 1C, D) and molecular review corresponded to the species recently described by Elgetany et al. (2022) from the neighbouring Egyptian coast (Suez Canal), namely P. damietta (Fig. 1C) and P. suezensis (Fig. 1D). These species are sympatric with P. kaustiana sp. nov., but apparently not sympatric with each other in the studied region (Fig. 1B). Perinereis damietta (which is morphologically more similar to P. heterodonta Gravier, 1899 than to P. nuntia, according to Elgetany et al. (2022)) was found mainly in lagoon-like environments, whereas P. suezensis was found only in fully marine areas. Perinereis kaustiana sp. nov. shared both marine and lagoon-like habitats, with all three sampled species found in intertidal coarse-grained sand, under rocks or cobbles. As speculated by Elgetany et al. (2022), P. damietta seems to have a slightly wider habitat preference, since some of our specimens (from Al Muwaileh lagoon) also occurred subtidally, attached to small rocks at approximately 1 m depth.
Figure 1. Phylogenetic tree and MOTU distribution for the three sampled Red Sea Perinereis species. A Maximum likelihood phylogeny based on COI sequences, with information regarding the different MOTU delineation methods; numbered MOTUs (1-4) contain original sequences from Perinereis specimens analysed in this study; MOTUs "GB" are based on Perinereis sequences mined from GenBank; MOTU "OUTG" corresponds to the rooted outgroup, Alitta virens; bootstrap values lower than 80% are not displayed. B Red Sea MOTU distribution; each coloured pie corresponds to a unique species and its respective abundance proportion; larger pie charts indicate a higher number of sympatric species; species from the Suez Canal are based on mined GenBank sequences from Elgetany et al. (2022), with abundance proportions based on type material. C Perinereis damietta, focus on prostomium and pharynx, dorsal view, specimen NTNU-VM-86031. D Perinereis suezensis, focus on prostomium and pharynx, dorsal view, specimen NTNU-VM-86032. E Perinereis kaustiana sp. nov., focus on prostomium and pharynx, dorsal view, specimen NTNU-VM-86011. Scale bars: 500 μm (C-E).
Table 1. Species, number of sequences (n), geographic location, and the respective GenBank COI accession numbers for the original material and for sequence data used from other studies.
Table 2. Primers and PCR conditions used in this study.
Mitocans Revisited: Mitochondrial Targeting as Efficient Anti-Cancer Therapy
Mitochondria are essential cellular organelles, controlling multiple signalling pathways critical for cell survival and cell death. Increasing evidence suggests that mitochondrial metabolism and functions are indispensable in tumorigenesis and cancer progression, rendering mitochondria and mitochondrial functions as plausible targets for anti-cancer therapeutics. In this review, we summarised the major strategies of selective targeting of mitochondria and their functions to combat cancer, including targeting mitochondrial metabolism, the electron transport chain and tricarboxylic acid cycle, mitochondrial redox signalling pathways, and ROS homeostasis. We highlight that delivering anti-cancer drugs into mitochondria exhibits enormous potential for future cancer therapeutic strategies, with a great advantage of potentially overcoming drug resistance. Mitocans, exemplified by mitochondrially targeted vitamin E succinate and tamoxifen (MitoTam), selectively target cancer cell mitochondria and efficiently kill multiple types of cancer cells by disrupting mitochondrial function, with MitoTam currently undergoing a clinical trial.
Introduction
Mitochondria are dynamic intracellular organelles with their own DNA (mitochondrial DNA, mtDNA). They have multiple important functions, including controlling adenosine triphosphate (ATP) generation, metabolic signalling, proliferation, redox homeostasis, and promotion/suppression of apoptotic signalling pathways. Genetic and/or metabolic alterations in mitochondria contribute to many human diseases, including cancer [1]. Glycolysis was traditionally considered the major source of energy in cancer cells, consistent with the so-called "Warburg effect" first suggested almost a century ago, referring to the elevated uptake of glucose that characterizes the majority of cancers; however, the mitochondrial function known as oxidative phosphorylation (OXPHOS) has recently been recognized to play a key role in oncogenesis [2,3]. In addition, cancer cells uniquely reprogram their cellular activities to support their rapid proliferation and migration, as well as to counteract metabolic and genotoxic stress during cancer progression [4]. Thus, mitochondria can switch their metabolic phenotypes to meet the challenges of high energy demand and macromolecular synthesis [5]. Moreover, cancer cell mitochondria have the ability to flexibly switch between glycolysis and OXPHOS to improve survival [2]. Furthermore, electron transport chain (ETC) function is pivotal for mitochondrial respiration, and ETC function is also necessary for dihydroorotate dehydrogenase (DHODH) activity, which is essential for de novo pyrimidine synthesis [6]. Recently, the importance of mitochondria in intercellular communication has been further supported by observations that mtDNA within whole mitochondria is mobile and can undergo horizontal transfer between cells. Our group discovered that cancer cells devoid of their mtDNA, and therefore lacking their tumorigenic potential, could re-gain this potential by acquiring mtDNA from host cells via horizontal transfer of whole mitochondria (Figure 1).

Figure 1 (legend excerpt). In mtDNA-deficient ρ0 cancer cells, signalling between mitochondria and the nucleus is dampened; reduced levels of the transcription coactivator PGC1α/β lead to low transcriptional activity of nuclear respiratory factor-1 (NRF1), resulting in low levels of nuclear-encoded proteins imported into mitochondria and in mitochondrial dysfunction. (C) Mitochondrial transfer from host cells leads to increased PGC1α/β levels with increased NRF1 transcriptional activity, allowing appropriate levels of nuclear-encoded mitochondrial proteins to be imported into mitochondria and mitochondrial function to be recovered.
We have recently proposed the term 'mitocans', an acronym derived from the terms mitochondria and cancer, for a group of compounds with anti-cancer activity exerted via their molecular targets within mitochondria, some mitocans being selective for malignant tissues [10]. This classification has been used by others, as exemplified by a recent paper [11]. These agents, which target mitochondria and their various functions, contribute to novel anti-cancer strategies with high therapeutic potential. These strategies include agents that target the ETC and OXPHOS, glycolysis, the tricarboxylic acid (TCA) cycle, apoptotic pathways, reactive oxygen species (ROS) homeostasis, the permeability transition pore complex, mtDNA, as well as DHODH-linked pyrimidine synthesis [12,13]. An increasing number of studies focus on delivering anti-cancer drugs to mitochondria to treat cancers, and this innovative approach holds great hope for the development of new, efficient anti-cancer therapeutics [14][15][16][17].
Targeting Mitochondrial Metabolism
Mitochondrial metabolism is highly complex and involves multiple functions and signalling pathways. The major functions of mitochondria are the production of ATP via OXPHOS and the formation of metabolites needed to meet the bioenergetic and biosynthetic demands of the cell. Mitochondria are also central to a wide variety of vital cellular processes including apoptosis, maintenance of calcium homeostasis, redox signalling, steroid synthesis, and lipid metabolism. In addition, mitochondria have the ability to alter their bioenergetic and biosynthetic functions to meet the metabolic demands of a cell via a cross-talk with other sub-cellular organelles, in particular the nucleus, but also the endoplasmic reticulum [18]. Accumulating evidence suggests that mitochondrial functions, including bioenergetics, biosynthesis, and signalling, are essential for tumorigenesis. Therefore, targeting mitochondrial metabolism presents a broad spectrum of strategies to fight cancer. There are two basic modes of communication between mitochondria and the rest of the cell: anterograde and retrograde signalling. Anterograde signalling denotes signal transduction from the cytosol (and the various components contained by it) to mitochondria, while retrograde signalling refers to signal transduction from mitochondria to the cytosol. There are multiple mechanisms of retrograde signalling, including the release of metabolites and ROS. In many types of cancer cells, large amounts of ROS are produced by the mitochondrial ETC through oxidative metabolism. Mitochondrial ROS then activate signalling pathways proximal to mitochondria to promote cancer cell proliferation and tumorigenesis [19].
Targeting Mitochondrial Electron Transport Chain Function
Mitochondrial ETC comprises four complexes (I-IV) that transfer electrons and engage in redox reactions. The transfer of electrons along the ETC is coupled to the pumping of protons from the matrix to the intermembrane space by complexes I, III, and IV. A functional ETC supports OXPHOS activity and ATP generation, which is essential for tumorigenesis (as well as for normal cell function). Since the majority of ATP in tumour cells is produced by mitochondria [20,21], targeting mitochondrial ETC function and ATP production might be an effective strategy for cancer therapy. It was reported that drugs blocking mitochondrial ATP production induce cell death in poorly perfused tumours in nutrient-poor environments with limited glucose and oxygen [22]. These tumours show a strong dependence on OXPHOS for ATP generation [12,13]. The natural product papuamine was shown to deplete intracellular ATP in non-small cell lung cancer (NSCLC) cells by causing mitochondrial dysfunction, and to increase mitochondrial superoxide generation with ensuing induction of NSCLC cell apoptosis [23].
Many ETC inhibitors, including metformin, tamoxifen, α-tocopheryl succinate (α-TOS) and 3-bromopyruvate (3BP), act by disrupting the function of respiratory complexes of ETC and by inducing production/accumulation of high levels of ROS to kill cancer cells [24,25]. Metformin, an anti-diabetic drug, has been shown to possess an anti-cancer effect by targeting mitochondrial ATP production without invoking toxicity in normal tissues [18]. The anti-tumorigenic effect of metformin via the inhibition of the mitochondrial respiratory complex I (CI) was also demonstrated [26][27][28]. Tamoxifen is used for the treatment of both early and advanced estrogen receptor-positive (ER+) breast cancer in pre-and post-menopausal women [29]. Recent discoveries suggest that tamoxifen inhibits oxygen consumption via suppression of mitochondrial CI-dependent respiration, which is linked to its anti-cancer activity [30]. Lee and colleagues found that the combination of gossypol and phenformin showed an anti-cancer effect in NSCLC via inhibition of aldehyde dehydrogenase and CI function, and efficiently reduced OXPHOS [31]. Kurelac and colleagues reported that inhibition of mitochondrial CI as an anti-cancer approach yielded promising results in subcutaneous osteosarcoma xenografts [32].
Our group reported mitochondrial complex II (CII) as a novel target for cancer therapy. We showed that α-TOS, an efficient anti-cancer agent, inhibits succinate quinone oxidoreductase (SQR) and succinate dehydrogenase (SDH) activity of mitochondrial CII by interacting with the proximal and distal ubiquinone (UbQ)-binding site [33,34]. Gracillin, a mitochondria-targeted anti-tumour agent, was shown to have broad-spectrum inhibitory effects on the viability of a large panel of human cancer cell lines by disruption of CII function via abrogating SDH activity [35].
Other than inhibition of CI or CII function, some anti-cancer compounds affect mitochondrial complex IV (CIV) or ATPase (CV) activity, inhibiting cancer cell respiration and ATP production. A small molecule VLX600 was reported to be active against colon cancer by disrupting the function of CIV, by suppressing the expression of its subunit 1, the COX-1 [36]. Tigecycline, an FDA-approved agent targeting leukemic cells, significantly decreased the activity of CI and CIV of cancer cells and selectively killed leukaemia stem and progenitor cells, while sparing normal hematopoietic cells [37].
Gamitrinib, a small-molecule inhibitor of ATPase selectively accumulated in mitochondria, diminished mitochondrial ATP production and displayed anti-tumorigenic properties in experimental models of cancer [38]. Mitotane has been used for the treatment of adrenocortical cancer and elicits its anti-cancer effects via inhibition of mitochondrial respiration. It was also used to target mitochondria and to induce apoptosis in thyroid cancer treatment [39].
Targeting Tricarboxylic Acid (TCA) Cycle
The TCA cycle, also known as the Krebs cycle, is located in the mitochondrial matrix in eukaryotic cells. It comprises a series of chemical reactions used by aerobic organisms to release stored energy via oxidation of acetyl-CoA derived from carbohydrates, fats, and proteins. The TCA cycle is a source of electrons that feed into ETC to drive the electrochemical proton gradient required for ATP generation. Its intermediates are used for biosynthesis of various macromolecules. This is exemplified by glutamine, a major carbon source that replenishes the TCA cycle intermediates and sustains their utilization for biosynthesis in tumour cells [40]. It is converted into glutamate, which is further converted into α-ketoglutarate that is required in a range of processes, including generation of the TCA cycle reducing equivalents NADH and FADH2, which are used by ETC to generate ATP [16]. Many cancer cells exhibit addiction to glutamine, such that targeting glutamine catabolism could be a plausible anti-cancer strategy. Specific glutaminase inhibitors, such as tetrahydrobenzo derivative-968 and BPTES, inhibit glutamine catabolism and delay tumour growth in experimental cancer models [41,42]. Inhibiting the conversion of glutamate to α-ketoglutarate can also suppress tumour growth [43,44]. Isocitrate dehydrogenase 1 and 2 (IDH1, IDH2) catalyze the conversion of isocitrate to α-ketoglutarate, playing a critical role in tumorigenesis [11]. IDH1 and IDH2 have been found mutated in multiple human cancers [18], rendering them as promising targets for anti-cancer therapy. Inhibitors of IDHs including 3BP, dichloroacetate, AGI-5198, and AGI-6780, possess high anti-cancer potential in a broad range of cancer types [10,12,45,46].
Targeting Glycolysis and OXPHOS
Cancer cells efficiently use both glycolysis and OXPHOS for their energy needs. Moreover, malignant cells have the ability of flexibly switching between glycolysis and OXPHOS, and this feature plays a major role in multiple modes of resistance to oncogenic inhibition [3,12]. Agents that target both glycolysis and OXPHOS may be considered as potentially efficient anti-cancer therapeutics. Combining glycolytic inhibitors together with mitochondria-targeted agents synergistically suppresses tumour cell proliferation [13].
Hexokinase II (HKII) is a major hexokinase isoform overexpressed in cancer cells, with an important role in maintaining glycolytic activity. It also associates with the voltage-dependent anion channel (VDAC) on the mitochondrial outer membrane, which has a function in apoptosis. As such, the inhibition of HKII will not only inhibit glycolysis but may also suppress the anti-apoptotic effect of the HKII-VDAC interaction. FV-429, an inhibitor of hexokinase, strongly induced apoptosis in cancer cells both by inhibition of glycolysis via suppression of HKII and by impairing mitochondrial function via interference with the HKII-VDAC interaction, leading to activation of mitochondria-mediated apoptosis [2]. As mentioned previously, metformin is a drug commonly used to treat diabetes, but it also has the ability to suppress multiple types of cancer [47]. It was shown that metformin inhibits HKII in lung carcinoma cells, leading to decreased glucose uptake and phosphorylation [48]. Combining metformin with 2-deoxyglucose (2-DG), a glycolysis inhibitor, depleted ATP in a synergistic manner and showed a strong synergy for the combined therapeutic effect in pancreatic cancer cells [13]. Mitochondria-targeted carboxy-proxyl (Mito-CP) in combination with 2-DG led to significant tumour regression, suggesting that the dual targeting of mitochondrial bioenergetic metabolism and glycolysis may offer a promising chemotherapeutic anti-cancer strategy [3]. When combined with 2-DG, the anti-cancer effect of the BH3 mimetic ABT737 was significantly potentiated in human ovarian cancer cells [49]. Inducing mitochondrial uncoupling, whereby the mitochondrial membrane potential is dissociated from ATP formation, is a new strategy with potential anti-cancer activity, as it promotes pyruvate influx into mitochondria and reduces the activity of various anabolic pathways. Indeed, the induction of mitochondrial uncoupling inhibits cell proliferation and reduces the clonogenicity of cultured colon cancer cells [50].
In recent years, there has been an upsurge in research focusing on reprogramming cancer cells via the understanding of their metabolic 'signatures'. Alterations in mitochondrial bioenergetics and impaired mitochondrial function may serve as effective targeting strategies, such as in triple-negative breast cancer (TNBC), where hormone receptors are absent and endocrine therapy is inefficient. Glucose starvation of MDA-MB-231 and MCF-7 breast cancer cells provoked decreased mitochondrial respiration. Glucose starvation also sensitized MDA-MB-231 cells to apoptosis and decreased their migratory potential [51].
Targeting Mitochondrial Redox Signalling Pathways and ROS Homeostasis
Tumour cells can alter their redox balance and deregulate redox signalling to support malignant progression and to gain resistance to treatment [51]. They increase their antioxidant capacity to counterbalance the increased production of ROS [52,53], which permits them to generate high levels of ROS to activate proximal signalling pathways that promote proliferation and would not otherwise induce cancer cell death or senescence. Mitochondria produce high levels of ROS that are functional for multiple signalling networks underlying tumour proliferation, survival, and metastatic process [54]. Disturbing redox signalling pathways and breaking up ROS homeostasis in cancer cells could be used in cancer therapy. Thus, strategies aimed at altering redox signalling events in tumour cells and intended to disable key antioxidant systems in the presence of ROS inducers may represent promising new anti-cancer treatments [55].
Targeting Redox-regulating Enzymes and ROS Production
ROS are short-lived molecules with unpaired electrons derived from partially reduced molecular oxygen that are constantly generated, transformed, and eliminated via a variety of cellular processes, including metabolism, proliferation, differentiation, immune system regulation, and vascular remodelling [56]. The level of ROS is critical for cell survival and cell death. At moderate concentrations, ROS activate cancer cell survival signalling, while a high level of ROS can cause damage and induce apoptosis. The ETC is the major site of ROS production, and high levels of ROS released due to interference with ETC complexes cause cellular damage. Promoting mitochondrial ROS production to induce cancer cell death could thus enhance the efficacy of chemotherapy [47,55]. Oxymatrine was reported to efficiently kill human melanoma cells by generating high levels of ROS [12]. Capsaicin, casticin, and myricetin display anti-cancer activity by increasing ROS generation, leading to the disruption of the mitochondrial transmembrane potential in cancer cells [12]. A novel mitochondria-targeted fluorescent probe, BODIPY-TPA (in which triphenylamine, TPA, is coupled to the fluorophore), was shown to induce apoptosis in gastric cancer via disruption of the mitochondrial redox balance and ROS accumulation [15].
Of high biological relevance, nicotinamide adenine dinucleotide phosphate (NADPH) oxidases are enzymes that catalyze the production of O2•− or H2O2 using NADPH as a reductant [57]. The ETC uses NADH and FADH2 to generate O2•− by means of univalent reduction of molecular oxygen, resulting in electron leakage during mitochondrial respiration [58][59][60]. NADPH generation occurs in mitochondria from one-carbon metabolism [61], which is initiated by serine hydroxymethyltransferase 2 (SHMT2). As a key enzyme in serine/glycine biosynthesis and one-carbon metabolism, SHMT2 was shown to play a role in tumour growth and progression in many cancer types [62]. Thus, lowering SHMT2 levels decreased tumour growth [63]. Another enzyme involved in mitochondrial one-carbon metabolism, methylenetetrahydrofolate dehydrogenase/cyclohydrolase (MTHFD2), may represent a viable therapeutic target in cancer, since the loss of MTHFD2 increases ROS levels and sensitizes cancer cells to oxidant-induced cell death [64]. Furthermore, targeting mitochondrial one-carbon metabolism enzymes together with other therapies known to increase ROS may have potential benefit in cancer treatment [65]. In addition, Wang and colleagues synthesized the binuclear Re(I) tricarbonyl complexes ReN and ReS, which accumulate in mitochondria and cause oxidative stress and mitochondrial dysfunction, and which have been shown to slow down the bioenergetic rate and inhibit tumour growth [66].
Targeting Mitochondrial Apoptotic Signalling Pathways
The intrinsic apoptotic signalling pathway refers primarily to mitochondria-mediated apoptotic pathways, in which Bcl-2 family proteins (e.g., Bcl-2, Bcl-xL and Bax) play pivotal roles. The intrinsic apoptotic signalling pathway is mediated by insertion of the pro-apoptotic proteins Bax/Bak into the outer mitochondrial membrane. Subsequently, cytochrome c is released from the mitochondrial intermembrane space into the cytosol [67]. Cytochrome c combines with Apaf-1 and procaspase-9 to form the apoptosome, which triggers caspase-9 activation followed by the activation of caspase-3, leading to cell death [68,69]. Bcl-2 and Bcl-xL are anti-apoptotic proteins which prevent the release of cytochrome c and protect cells from apoptosis [70]. Targeting Bcl-2 family proteins can therefore be used as an anti-cancer strategy via activation of the apoptotic signalling pathway in cancer cells. Navitoclax, TW-37, GX15-070, and BM-1197 are Bcl-2 or Bcl-xL inhibitors with anti-cancer activity in a broad range of cancer types [12]. Venetoclax, another Bcl-xL inhibitor (a BH3 mimetic), has been approved for use in patients with lymphoma and chronic lymphocytic leukaemia [16,71]. Other compounds such as gossypol, navitoclax, ABT-737 and α-TOS act as mimetics of the Bcl-2 homology-3 domain to kill cancer cells via activation of post-mitochondrial apoptotic signalling [10]. Matrine was used to treat acute lymphoblastic leukaemia via ROS generation, and the agent significantly up-regulates the pro-apoptotic protein Bax and down-regulates the anti-apoptotic Bcl-2 protein [72]. ECPU-0001, an efficient tumoricidal agent, exhibited impressive anti-cancer activity translatable to the treatment of lung adenosarcoma by targeting the Bcl-2-associated intrinsic pathway of apoptosis [73]. SWNH treatment was reported to alter the expression of multiple mitochondrial apoptotic pathway-associated proteins and induced apoptosis in hepatoblastoma cells [74]. Silver(I) phosphine acts as an effective chemotherapeutic drug, killing malignant esophageal cells by targeting the mitochondrial intrinsic cell death pathway via lowering ATP levels, altering ROS activity, and depolarizing the mitochondrial membrane, which leads to the release of cytochrome c and activation of caspase-9 [75].
Since Akt/PKB can inactivate pro-apoptotic factors such as Bad and procaspase-9 [76], activation of the kinase has been related to increased resistance of prostate cancer cells to apoptosis [77]. Akt/PKB activates the IκB kinase (IKK), which is a positive regulator of the survival transcription factor NFκB, and it has been shown that Akt/PKB links NFκB to modulation of anti-apoptotic effects in lymphoma cells [78]. Recent research has focused on targeting the m-TOR/PI3K/Akt signalling pathway to induce cancer cell apoptosis. Zhu et al. found that Galangin increased expression of Bax and cytochrome c and decreased expression of Bcl-2, resulting in the demise of renal cancer cells. It may also inhibit migration and invasion of kidney cancer cells and suppress the expression of several important proteins of the PI3K/Akt/m-TOR signalling pathway [79]. Pterostilbene exerted potent anti-tumour effects in HeLa cervical cancer cells by disrupting mitochondrial membrane potential, apoptosis induction, and targeting the m-TOR/PI3K/Akt pathway [80]. In addition, icariin was shown to inhibit the growth of human cervical cancer cells by inducing apoptosis and autophagy via the m-TOR/PI3K/Akt pathway [81].
p53 Signalling Pathway
The tumour suppressor protein p53 has emerged as a key regulator of metabolic processes and metabolic reprogramming in cancer cells. p53 engages in the mitochondrial cell death machinery and plays an important role in cell survival and function [82]. p53 has been shown to modulate mitochondria-linked programmed cell death [83,84]. One of its 'targets' is the pro-apoptotic protein Bax, whose expression is controlled by p53 [85]. Proline dehydrogenase, a p53-inducible inner-mitochondrial membrane flavoprotein linked to electron transport for anaplerotic glutamate and ATP production, is a unique mitochondrial cancer target. N-PPG-like inhibitors of proline dehydrogenase could suppress multiple types of breast cancer cell growth [86]. Qin et al. reported that tacrine platinum(II) complexes exhibited cytotoxic activity in NCI-H460, Hep-G2, SK-OV-3, SK-OV-3/DDP and MGC80-3 cancer cells and induced cell apoptosis by means of activation of the p53 signalling pathway and dysfunction of mitochondria [87].
EGFR-Targeting via Mitochondria-Mediated Apoptosis
The novel recombinant EGFR-targeting β-defensin Ec-LDP-hBD1 displays both selectivity and enhanced cytotoxicity against cancer cells by inducing mitochondria-mediated apoptosis and exhibiting high therapeutic efficacy against EGFR-expressing carcinoma xenografts. This novel format of β-defensin, which induces mitochondrial-mediated apoptosis, is likely to play an active role in EGFR-targeting cancer therapy [88].
Mitochondrial Fission
Mitochondria are dynamic organelles frequently undergoing fission and fusion cycles to maintain their integrity. Disruption of mitochondrial dynamics plays a role in cancer progression. Therefore, proteins involved in regulating the homeostasis of fission and fusion are potential targets for cancer treatment. mDIVI1 is an inhibitor of the mitochondrial fission protein DRP1, which can be used for elimination of cancer stem cells [89]. IR-783, a near-infrared heptamethine cyanine dye, has been reported to exert anti-cancer effects; linked to this, IR-783 was shown to cause induction of mitochondrial fission in MDA-MB-231 and MCF-7 cells, and to lower the levels of ATP [90].
Targeting Mitochondrial DNA (mtDNA)
Cancer is characterised by altered energy metabolism involving not only genetic alterations in nDNA but also mtDNA mutations and changes in mtDNA copy number [91][92][93][94]. It has been shown that somatic mtDNA alterations or a low mtDNA copy number promote cancer progression and metastasis via activation of mitochondrial retrograde signalling [95,96]. Eliminating mtDNA limits tumorigenesis [97]; emerging studies from our group have shown that mtDNA plays an essential role in cancer progression, such that mtDNA-depleted cancer cells fail to form tumours, and these cells have to acquire mtDNA from the host by means of horizontal transfer of whole mitochondria to regain their tumorigenic ability [7,8,98] (Figure 1). Importantly, mitochondrial transfer has also been found to occur following mitochondrial damage by chemotherapy and radiation treatment, to better protect cancer cells from aberrant physiology [99][100][101]. Therefore, targeting mtDNA and/or blocking mitochondrial transfer presents a novel strategy that may overcome drug resistance and enhance cancer therapy.
It has been reported that cyclomethylated Ir(III) complexes can intercalate into mtDNA and induce mtDNA damage, followed by a decline of mitochondrial membrane potential, suppression of ATP generation, and disruption of mitochondrial energetics and metabolic status, eventually causing cancer cell apoptosis [102]. Additionally, using an mtDNA-depletion model, we found that DHODH-driven pyrimidine biosynthesis is an essential pathway which links respiration to tumorigenesis, demonstrating that DHODH could be a potential wide-spectrum target for cancer therapy [9].
Mitochondria-Specific Anti-Cancer Drug Delivery
As mentioned, mitochondria are plausible targets for anti-cancer strategies. Agents that target mitochondrial metabolism, the ETC, apoptotic pathways as well as other mitochondrial-linked signalling pathways, show efficient anti-cancer potential. Many anti-cancer drugs (doxorubicin, cisplatin, paclitaxel, resveratrol) are already known to act within the membrane and the matrix of mitochondria [103,104]. Delivering drugs directly to mitochondria greatly enhances their anti-cancer efficacy [105]. Therefore, mitochondria-oriented delivery of anti-cancer drugs has become a focus of recent research, with the expectation to improve anti-cancer efficiency of chemotherapeutics and to overcome drug resistance. Currently, there are two well-known approaches for mitochondrial drug delivery: direct conjugation of the targeting ligand/moiety to drugs and attachment of the targeting ligand to a nanocarrier [106].
Direct Conjugation of Mitochondria-Targeting Ligands to Drugs
A number of direct conjugates have been reported for mitochondrial delivery of anti-cancer drugs using various targeting moieties, including lipophilic cations (triphenylphosphonium; rhodamine 123; and dequalinium) and peptides (mitochondria-penetrating peptide (MPP), mitochondria-targeting sequence (MTS) peptide, and Szeto-Schiller (SS) peptides) [106]. In this paragraph, we will focus on the lipophilic cations targeting moieties.
There are multiple mechanisms and techniques to deliver drugs into mitochondria. A well-known approach is based on the higher mitochondrial membrane potential of cancer cells compared to that of their cytosol and of non-cancer cells, which allows selective targeting of cancer cell mitochondria [107]. Triphenylphosphonium (TPP), a delocalized lipophilic cation, is a frequently used mitochondria-targeting molecule, and a number of studies have used this ligand to develop mitochondria-targeted anti-cancer drugs (Figure 2). Our group developed and tested TPP-conjugated drugs due to TPP's strong mitochondrial targeting ability; these agents belong amongst the 'mitocans' that we defined earlier [10]. More specifically, TPP-tagged mitocans are agents which selectively accumulate in the mitochondria of cancer cells, mostly causing ROS generation with ensuing apoptotic cell death (Figure 3) [10]. Within this class of compounds, we have synthesized to date mitochondria-targeted vitamin E succinate (MitoVES), mitochondria-targeted tamoxifen (MitoTam) and mitochondria-targeted metformin (MitoMet), all of which show superior anti-cancer activity compared to the parental compounds. MitoVES disturbs the function of mitochondrial CII, while MitoTam and MitoMet target mitochondrial CI, and in all cases this results in the formation of high levels of ROS that leads to cancer cell death [15,[108][109][110][111][112]. Of note, MitoTam has been tested in a Phase 1 trial (EudraCT 2017-004441-25) with promising outcomes, and we are currently extending this into a Phase 2 trial.
Figure 3 (legend). The classes of mitocans comprise the following, as enumerated from the outside of the mitochondria towards the matrix. Class 1: hexokinase inhibitors; Class 2: BH3 mimetics and related agents that impair the function of the anti-apoptotic Bcl-2 family proteins; Class 3: thiol redox inhibitors; Class 4: agents targeting VDAC and ANT; Class 5: compounds targeting the mitochondrial electron transport chain; Class 6: hydrophobic cations targeting the MIM; Class 7: compounds that affect the TCA; and Class 8: agents that interfere with mtDNA. Class 9 (not shown) includes agents acting on mitochondria whose molecular target has not thus far been described [10].
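To make the membrane-potential argument above concrete: for a monovalent lipophilic cation such as TPP+, the Nernst relation (standard electrochemistry, not taken from this review) predicts roughly a ten-fold increase in equilibrium accumulation for every ~61.5 mV of potential at 37 °C, which is why hyperpolarised cancer cell mitochondria can concentrate such cations several-hundred-fold. The sketch below computes this ratio; the example potentials are assumed, illustrative values.

```python
import math

F = 96485.33212   # Faraday constant, C/mol
R = 8.314462618   # gas constant, J/(mol K)


def nernst_accumulation(delta_psi_mv: float, temp_c: float = 37.0, charge: int = 1) -> float:
    """Equilibrium accumulation ratio [inside]/[outside] of a cation across a
    membrane with potential delta_psi_mv (inside negative, given as a positive number)."""
    temp_k = temp_c + 273.15
    return math.exp(charge * F * (delta_psi_mv / 1000.0) / (R * temp_k))


# Illustrative potentials (assumptions for the example, not measured values from the review)
for label, mv in [("plasma membrane, 60 mV", 60),
                  ("mitochondrial inner membrane, 150 mV", 150),
                  ("hyperpolarised mitochondria, 180 mV", 180)]:
    print(f"{label}: ~{nernst_accumulation(mv):,.0f}-fold accumulation")
```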
A number of researchers have used TPP+ conjugation to deliver anti-cancer drugs to mitochondria. Bryant and colleagues reported that Hsp90-TPP showed a 17-fold increase in mitochondrial accumulation compared with Hsp90 itself, and that "mitochondrial Hsp90" efficiently killed both primary and cultured acute myeloid leukaemia cells [113]. Han and colleagues synthesized TPP-doxorubicin (TPP-Dox) and found that it was taken up at a higher rate than free Dox by MDA-MB-435 Dox-resistant cells, indicating that the TPP-Dox conjugate was able to overcome drug resistance [114]. Two phenol TPP-derivatives were shown to have remarkable cytotoxic activity against different cancer cell lines, with lower toxicity against normal cells [115]. Chlorambucil is an anti-cancer agent that damages DNA. Millard and colleagues synthesized a TPP-chlorambucil conjugate and found that it accumulated in mitochondria, leading to mtDNA damage and significant suppression of tumour progression. TPP-chlorambucil showed about an 80-fold enhancement of cancer cell-killing activity in a panel of breast and pancreatic cancer cell lines that are largely insensitive to the parent drug [116]. Dual-fluorescent mitochondria-targeting F16-TPP analogues also showed a promising therapeutic effect in cancer cells [117]. Modification of a pro-apoptotic peptide with two mitochondria-targeting TPP moieties caused its efficient accumulation in the mitochondria of cancer cells, inducing mitochondrial dysfunction and triggering mitochondria-dependent apoptosis to efficiently eliminate cancer cells [118]. Wang and Xu reported that TPP-coumarin, as a novel mitochondria-targeted drug, effectively inhibited HeLa cell proliferation and triggered apoptosis by promoting ROS generation and mitochondrial Ca2+ accumulation [119].
Recently, photodynamic therapy (PDT) has been proven to be a minimally invasive and highly efficient therapeutic strategy of cancer treatment. TPP was used in the development of a group of photosensitizers to enhance their cancer cell uptake efficacy and mitochondrial localization. Noh and colleagues developed MitDt, a mitochondrial targeting photodynamic therapeutic agent, by conjugating the heptamethine mesoposition of a cyanide dye with TPP. The PDT effects of MitDt are amplified after laser irradiation because mitochondria are susceptible to ROS which triggers anticancer effects [17]. The cationic TPP-octahedral molybdenum cluster complex was shown to rapidly The classes of mitocans comprise the following, as enumerated from the outside of the mitochondria towards the matrix. Class 1: hexokinase inhibitors; Class 2: BH3 mimetics and related agents that impair the function of the anti-apoptotic Bcl-2 family proteins; Class 3: thiol redox inhibitors; Class 4: agents targeting VDAC and ANT; Class 5: compounds targeting the mitochondrial electron transport chain; Class 6: hydrophobic cations targeting the MIM; Class 7: compounds that affect the TCA; and Class 8: agents that interfere with mtDNA. Class 9 (not shown) includes agents acting on mitochondria, whose molecular target has not been thus far described [10].
A number of researchers have used TPP+ conjugation to deliver anti-cancer drugs to mitochondria. Bryant and colleagues reported that Hsp90-TPP showed a 17-fold increase in mitochondrial accumulation compared to Hsp90 itself, and that "mitochondrial Hsp90" efficiently killed both primary and cultured acute myeloid leukaemia cells [113]. Han and colleagues synthesized TPP-doxorubicin (TPP-Dox) and found that it was taken up at a higher rate than free Dox by MDA-MB-435 Dox-resistant cells, indicating that the TPP-Dox conjugate was able to overcome drug resistance [114]. Two phenol TPP-derivatives were shown to have remarkable cytotoxic activity against different cancer cell lines, with lower toxicity against normal cells [115]. Chlorambucil is an anti-cancer agent that damages DNA. Millard and colleagues synthesized a TPP-chlorambucil conjugate and found that it accumulated in mitochondria, leading to mtDNA damage and significant suppression of tumour progression. TPP-chlorambucil showed about an 80-fold enhancement of cancer cell-killing activity in a panel of breast and pancreatic cancer cell lines that are largely insensitive to the parent drug [116]. Dual-fluorescent, mitochondria-targeting F16-TPP analogues also showed a promising therapeutic effect in cancer cells [117]. A modification of a pro-apoptotic peptide with two mitochondria-targeting TPP moieties caused its efficient accumulation in mitochondria of cancer cells, inducing mitochondrial dysfunction and triggering mitochondria-dependent apoptosis to efficiently eliminate cancer cells [118]. Wang and Xu reported that TPP-coumarin, a novel mitochondria-targeted drug, effectively inhibited HeLa cell proliferation and triggered apoptosis by promoting ROS generation and mitochondrial Ca2+ accumulation [119].
Recently, photodynamic therapy (PDT) has proven to be a minimally invasive and highly efficient therapeutic strategy for cancer treatment. TPP has been used in the development of a group of photosensitizers to enhance their cancer cell uptake and mitochondrial localization. Noh and colleagues developed MitDt, a mitochondria-targeting photodynamic therapeutic agent, by conjugating TPP to the heptamethine meso-position of a cyanine dye. The PDT effects of MitDt are amplified after laser irradiation because mitochondria are susceptible to ROS, which triggers anti-cancer effects [17]. The cationic TPP-octahedral molybdenum cluster complex was shown to rapidly internalize into HeLa cells and accumulate in their mitochondria, triggering an intensive phototoxic effect upon 460 nm irradiation [120]. Lei and colleagues reported that TPP-porphyrin photosensitizers with photodynamic activity exert significant phototoxicity towards human breast cancer cells at concentrations at which "dark toxicity" is negligible [121]. The AgBiS2-TPP nanocomposite has also been reported as applicable in photothermal therapy and was demonstrated to have high anti-cancer activity [122]. Besides TPP-conjugated drugs, rhodamine derivatives and guanidine-drug conjugates also act as mitochondria-targeting anti-cancer agents that accumulate in mitochondria based on their lipophilic and cationic properties [123][124][125][126][127].
Mitochondria-Targeting Ligands and Nanocarriers (Mitochondria-Targeted Nanocarriers)
Nanocarriers have been explored as vehicles that carry drugs and deliver them to target areas of tissues, thereby enhancing drug efficacy and reducing toxicity. Nanocarriers include micelles, polymers, carbon-based materials, liposomes, metallic nanoparticles, and dendrimers, all of which have been developed for applications particularly in the field of chemotherapeutic drug delivery [106,128]. The size of the nanocarriers should be small, ideally within the range of 10-200 nm in diameter, so that they can deliver drugs to otherwise inaccessible sites within various tissues. For mitochondrial targeting, a nanoparticle needs to be tagged with a targeting ligand, which preferentially delivers drugs to mitochondria. However, in some cases, the nanoparticle itself can act as a mitochondria-targeting agent, based on its properties. Similar to the concept of direct targeting, cationic ligands such as dequalinium (DQA) and TPP are often attached to nanocarriers to generate mitochondria-targeted nanocarriers (MTNs). These MTNs can overcome solubility, selectivity, and resistance issues of individual drugs, and accumulate primarily in mitochondria to improve their therapeutic effect. The first nanomaterials applied for mitochondrial targeting were DQA micelles (DQAsomes), which exhibit liposome-like self-assembly properties in aqueous solutions [129,130]. They have higher cell-killing activity in cancer cells compared to normal cells, resulting from selectively enhanced ROS generation, disruption of the mitochondrial transmembrane potential, and blockade of ATP synthesis [131,132].
Doxorubicin (Dox) is one of the first choices of chemotherapeutic drugs applied to the nanocarrier delivery system. Liu and colleagues prepared Dox-loaded TPP-lonidamine self-assembled nanoparticles (NPs), which contain polyethylene glycol groups to enhance their circulation in blood for more extended periods. The NPs showed greater cytotoxicity in both drug-sensitive and drug-resistant cancer cells compared to Dox [133]. Using a hydrazone bond, a hyaluronic acid-Dox-TPP conjugate was prepared to specifically deliver TPP-Dox to mitochondria. A cell uptake study showed more significant mitochondrial accumulation of the NPs in MCF/ADR (Adenocarcinoma) cells, and further cytotoxicity and anti-tumor studies confirmed their enhanced efficacy compared to free Dox and TPP-Dox conjugates [134].
Zhang et al. used glycyrrhetinic acid-attached graphene oxide with Dox as a model drug for dual targeting of mitochondria and the cell membrane, owing to the ability of glycyrrhetinic acid to interact with the mitochondrial respiratory chain and its high binding affinity for protein kinase C (PKC) α, which is overexpressed in certain cancer types [135][136][137]. Carbon quantum dots (CQDs) have been used as fluorescent probes for bioimaging/biolabeling and biosensing due to their stable and robust fluorescence and low toxicity [138]. Mitochondria-targeting Dox-loaded CQD nanoparticles are expected to overcome drug resistance. D-α-Tocopherol polyethylene glycol succinate (TPGS), an inhibitor of the permeability glycoprotein (Pgp, a multidrug resistance protein), was included in the NP to inhibit Pgp expression in drug-resistant cancer cells. TPP was conjugated to TPGS, which coated the CQDs. The cytotoxicity results revealed that Dox-loaded CQD NPs had a five-fold lower IC50 value in drug-resistant MCF7 cells compared to free Dox [139]. Furthermore, it has been reported that DQA-Dox-containing micelle NPs have up to 5-fold greater tumour suppression effects than free Dox in a Dox-resistant tumour model [140].
Lee and colleagues reported the formation of aggregates of a TPP-tagged coumarin probe (TPP-C) in aqueous solution. With the encapsulation of Dox into the TPP-C NPs, the anti-cancer drug was efficiently delivered to the mitochondria and exerted considerable cytotoxicity toward cancer cells [141]. Lonidamine (LND) can act on mitochondria and inhibit energy metabolism in cancer cells and has therefore been used together with chemotherapeutic drugs for synergistically enhanced therapeutic efficacy. However, its use is hindered by poor solubility and slow diffusion in the cytoplasm. Aqueously dispersible NPs containing TPP and LND plus Dox were prepared for synergistic cancer treatment and for overcoming drug resistance. TPP-LND-DOX NPs promote the mitochondrial apoptotic pathway and contribute to overcoming drug resistance in cancer therapy [133]. Similarly, TPP-linked lipid-polymer hybrid NPs (DOX-PLGA/CPT) were coated with an acidity-triggered cleavable polyanion (PD) to form DOX-PLGA/CPT/PD structures. The negative surface charge of DOX-PLGA/CPT/PD prevented their rapid clearance from the circulation and improved their accumulation in tumour tissue via the enhanced permeability and retention effect. Hydrolysis of amide bonds in PD in weakly acidic tumour tissue leads to the conversion of DOX-PLGA/CPT/PD to positively charged DOX-PLGA/CPT, which eventually accumulates in tumour mitochondria. This results in targeting of mtDNA, induction of tumour cell apoptosis, and overcoming of the Dox resistance of MCF-7/ADR breast cancer cells [142].
Based on mesoporous silica nanoparticles (MSNs), a novel enzyme-responsive, multistage-targeted anti-cancer drug delivery system possessing both CD44-targeting and mitochondria-targeting properties was developed by Naz and colleagues [143]. First, TPP was attached to the surface of the MSNs; Dox was then encapsulated into the pores of the MSNs, followed by capping with the tumour-targeting molecule hyaluronic acid (HA). The final product consists of Dox-loaded, TPP-attached, HA-capped mesoporous silica nanoparticles (MSN-DPH). MSN-DPH, preferentially taken up by cancer cells via CD44 receptor-mediated endocytosis, primarily accumulated in mitochondria and efficiently killed cancer cells while exhibiting much lower cytotoxicity to normal cells [143]. In addition, a novel delivery platform based on tetrahedral DNA nanostructures (TDNs) that enables mitochondrial import of Dox for cancer therapy was designed by Yan and colleagues. The peptide 3KLA was conjugated to TDNs to efficiently target mitochondria. The 3KLA-TDNs exhibited highly efficient Dox accumulation in mitochondria, leading to an effective release of cytochrome c, upregulated expression of pro-apoptotic proteins and reduced expression of anti-apoptotic proteins, resulting in activation of the mitochondria-mediated apoptotic pathway and enhanced anti-cancer efficacy of Dox [144].
Conclusions
Mitochondria, with their various functions, have become novel targets for anti-cancer strategies. Targeting mitochondrial metabolism, including electron transport chain function, redox signalling pathways and ROS homeostasis, as well as apoptotic signalling pathways, has become a major focus for researchers (Table 1). Mitochondrial DNA has also been reported to play a critical role in tumorigenesis; therefore, targeting mtDNA has opened a new direction in anti-cancer therapy. Moreover, delivery of anti-cancer drugs to mitochondria is of high clinical relevance, since it can enhance drug selectivity for cancer cells, overcome drug resistance and considerably promote anti-cancer activity. A prime example is the CI-targeting MitoTam [12], currently in clinical trials and thus far showing an excellent therapeutic and toxicity profile. We have finalised the Phase 1/b clinical trial of MitoTam and are preparing for a Phase 2 trial, likely combining MitoTam with another anti-cancer therapeutic. Overall, the development of mitocans and other mitochondria-targeting treatments and strategies has great potential for future anti-cancer therapies.
Identification of somatic mutations in cancer through Bayesian-based analysis of sequenced genome pairs
Background The field of cancer genomics has rapidly adopted next-generation sequencing (NGS) in order to study and characterize malignant tumors with unprecedented resolution. In particular for cancer, one is often trying to identify somatic mutations – changes specific to a tumor and not within an individual’s germline. However, false positive and false negative detections often result from lack of sufficient variant evidence, contamination of the biopsy by stromal tissue, sequencing errors, and the erroneous classification of germline variation as tumor-specific. Results We have developed a generalized Bayesian analysis framework for matched tumor/normal samples with the purpose of identifying tumor-specific alterations such as single nucleotide mutations, small insertions/deletions, and structural variation. We describe our methodology, and discuss its application to other types of paired-tissue analysis such as the detection of loss of heterozygosity as well as allelic imbalance. We also demonstrate the high level of sensitivity and specificity in discovering simulated somatic mutations, for various combinations of a) genomic coverage and b) emulated heterogeneity. Conclusion We present a Java-based implementation of our methods named Seurat, which is made available for free academic use. We have demonstrated and reported on the discovery of different types of somatic change by applying Seurat to an experimentally-derived cancer dataset using our methods; and have discussed considerations and practices regarding the accurate detection of somatic events in cancer genomes. Seurat is available at https://sites.google.com/site/seuratsomatic.
Prior Selection for Seurat Genotype Priors
The priors used for the genotype in the normal genome are the SNP frequencies for human diploid chromosomes, as calculated by Li et al. (2009): π_het = 0.001, π_var = 0.0005, and π_ref = 1 - (π_het + π_var) = 0.9985. π_somatic and π_LOH are high-end estimates of the frequency of somatic events, given that the mutation profile of each individual cancer can vary wildly even within subtypes. At 0.0001 per position, these priors correspond to an expectation of roughly 300,000 events throughout the (approximately 3 × 10^9 bp) human genome.
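To make concrete how priors of this magnitude enter a matched tumour/normal comparison, the following Python sketch computes a posterior probability of a somatic SNV from read counts. It is only an illustration under stated assumptions, not the actual Seurat implementation: the prior values are those quoted above, whereas the binomial read-count model, the assumed variant-allele fractions and the 1% error rate are simplifications introduced solely for this example.

```python
# Minimal illustrative sketch of a Bayesian tumour/normal genotype comparison
# using the priors quoted above. NOT the actual Seurat implementation: the
# binomial likelihood, the variant-allele fractions and the 1% error rate are
# assumptions made only for this example.
from scipy.stats import binom

PRIORS = {
    "germline_ref": 0.9985,   # pi_ref: normal is homozygous reference
    "germline_het": 0.001,    # pi_het: heterozygous germline SNP
    "germline_hom": 0.0005,   # pi_var: homozygous germline variant
    "somatic":      0.0001,   # pi_somatic: tumour-specific mutation
}

# Expected variant-allele fraction under each state ("absent" reflects sequencing error).
VAF = {"absent": 0.01, "het": 0.5, "hom": 0.99}

# Hypotheses: (prior, VAF in normal, VAF in tumour).
HYPOTHESES = {
    "germline_ref": (PRIORS["germline_ref"], VAF["absent"], VAF["absent"]),
    "germline_het": (PRIORS["germline_het"], VAF["het"],    VAF["het"]),
    "germline_hom": (PRIORS["germline_hom"], VAF["hom"],    VAF["hom"]),
    "somatic":      (PRIORS["somatic"],      VAF["absent"], VAF["het"]),
}

def posterior_somatic(normal_alt, normal_depth, tumor_alt, tumor_depth):
    """Posterior probability that a site carries a somatic SNV (toy model)."""
    weights = {
        name: prior
        * binom.pmf(normal_alt, normal_depth, vaf_n)   # likelihood of the normal reads
        * binom.pmf(tumor_alt, tumor_depth, vaf_t)     # likelihood of the tumour reads
        for name, (prior, vaf_n, vaf_t) in HYPOTHESES.items()
    }
    total = sum(weights.values())
    return weights["somatic"] / total if total > 0 else 0.0

# Example: a clean normal sample and roughly 40% variant reads in the tumour.
print(posterior_somatic(normal_alt=0, normal_depth=60, tumor_alt=18, tumor_depth=45))
```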
Usage
Seurat is a command-line Java application, packaged as a stand-alone JAR file. It is compatible with any operating system platform that is supported by the Sun Java 1.6 runtime (including Linux, Windows, and Mac OS X).
Seurat can be executed using a command prompt (terminal) window of the operating system, by moving to the directory of the JAR file and executing the following command:
-go <gene_out> Tab-delimited text output for non-focal events. Most large event analyses require the 'refseq' argument below.
Optional:
--indels Enable somatic insertion/deletion calling. Default = false.
-refseq <refseq_file> Name of RefSeq transcript annotation file. If specified, gene-wide events can be detected, and SNVs/LOH events will be annotated with the gene name.
-mbq <integer> Minimum base quality required to consider a base for calling. Default = 10.
-mmq <integer> Minimum mapping quality for reads to be considered in the pileup. Default = 10.
-ref <true/false> If true, only reference-matching homozygous positions are allowed on the normal, for SNV discovery. Reduces false positives due to faulty alignments. Default = true.
-alpha <integer> Alpha parameter of the beta-distribution used for evaluating homozygosity likelihood. Default = 1.
-beta <integer> Beta parameter of the beta-distribution used for evaluating homozygosity likelihood. Default = 701.
--both_strands Whether or not variant evidence needs to appear on both strands on the tumor in order to be considered. Default = false.
-coding_only Reduces full-genome and transcript analyses to coding regions of genes. Requires the refseq argument. Default = false.
-mm Maximum number of mismatches against the reference that are allowed in a read. Reads surpassing this number are filtered out. Can be used as an attempt to salvage 'dirty' BAMs containing large numbers of problematic and unlikely alignments, usually due to bugs in the aligner software. Default = 3.
-pileup Enable full pileup output for each call in the VCF file. Default = false.
-mcv <integer> The minimum per-sample coverage required to attempt a call at a locus. Default = 6.

Usage notes / Known Issues

-We recommend the use of the GATK Indel Realigner to jointly process the normal and tumor DNA BAMs for Seurat, as we have empirically found that it reduces indel false positive counts significantly.
-We have found that the Base Quality recalibrator provides a minimal accuracy improvement.
-We do not recommend the use of the base alignment quality (BAQ) (which can be enabled if needed with the '-baq' argument). BAQ currently appears to be causing a significant drop in sensitivity.
-We do not recommend the use of the Variant call recalibrator on Seurat results, as the tool was not designed for somatic calls.
-Seurat accepts most global GATK arguments that can affect its functions. For more information on the GATK framework, please visit http://www.broadinstitute.org/gsa/wiki/index.php/The_Genome_Analysis_Toolkit.
-Seurat does not support the '-nt' option for running multiple threads within GATK. However, the GATK interval option ("-L") can be used to split the data into "bins" that can run simultaneously.
-If Seurat runs without any errors, but the output files do not contain any calls, please check the following: a) Read group tags ("@RG") are required for all BAMs that are provided; BAMs without RG tags will be ignored (more accurately, any reads that are not assigned in a read group will not be used for analysis). If your BAMs were not generated with RG tags, you may use the Picard tool AddOrReplaceReadGroups to add them. Please note (b) below if you have to add read groups manually.
b) The same sample name ("SM") cannot be used on read groups belonging on both the normal and the tumor samples. GATK currently uses sample names to group alignments together, so if they are identical between datasets, they will be merged in-memory.
-BAM files for analysis must match exactly on their header's sequence names, sequence order, and sequence length.
-For more information on how GATK handles BAM files, please refer to http://gatkforums.broadinstitute.org/discussion/1317/collected-faqs-about-bam-files

The TYPE tag in the INFO field describes the somatic event that was detected.
The ALT genotype describes either the variant detected in the tumor genome (in case of a somatic SNV event), or the variant allele that is lost in the tumor (in case of an LOH event). The strings "<INS>" and "<DEL>" represent indels.
Large event list (-go):
A simple two-field tab-delimited text file in the following format: [
The Newly Normed SKT Reveals Differences in Neuropsychological Profiles of Patients with MCI, Mild Dementia and Depression
The SKT (Syndrom-Kurztest) is a short cognitive performance test assessing deficits of memory and attention in the sense of speed of information processing. The new standardization of the SKT (2015) aimed at improving its sensitivity for early cognitive decline due to dementia in subjects aged 60 or older. The goal of this article is to demonstrate how the neuropsychological test profile of the SKT can be used to provide valuable information for a differential diagnosis between MCI (mild cognitive impairment), dementia and depression. n = 549 patients attending a memory clinic (Nuremberg, Germany) were diagnosed according to ICD-10 and tested with the SKT. The SKT consists of nine subtests, three for the assessment of memory and six for measuring attention in the sense of speed of information processing. The result of the SKT test procedure is a total score, which indicates the severity of overall cognitive impairment. Besides the summary score, two subscores for memory and attention can be interpreted. Using the level of depression as a covariate, statistical comparisons of SKT test profiles between the three patient groups revealed that depressed patients showed more pronounced deficits than MCI patients in all six attention subtests. On the other hand, MCI patients displayed significantly greater mnestic impairment than the depressed group, which was indicated by significant differences in the memory subscore. MCI and dementia patients showed similar deficit patterns dominated by impairment of memory (delayed recall) with MCI patients demonstrating less overall impairment. In sum, the SKT neuropsychological test profiles provided indicators for a differential diagnosis between MCI and beginning dementia vs. depression.
Introduction
Dementia and depression are the most frequent psychiatric disorders of old age [1]. Both affect quality of life of patients in a more fundamental way and to a much greater extent than many somatic diseases [2]. Depression is also considered a serious risk factor for developing dementia [3,4]. In addition, dementia and depression share a diagnostic deficit. Dementia is often only diagnosed in more advanced stages showing higher degrees of functional impairment [5]. Worldwide, patients suffering from depression frequently are not correctly diagnosed; therefore, in many countries less than 10% of depressed subjects receive adequate treatment [6].
Due to an overlap in symptoms, a valid differential diagnosis between dementia and depression is sometimes difficult to establish: Depressive disorders in old age are associated with cognitive impairment in 40% to 60% of patients [7]. Conversely, about 40% of dementia patients develop depression symptoms [8,9]. Accordingly, among the differential diagnoses of dementia, in the first place the ICD-10 [10] lists depressive disorders, which can show characteristics of incipient dementia with memory impairment, slowed thinking and lack of spontaneity. In the same way, the DSM-5 [11] recommends inspecting the cognitive profiles of patients suggesting memory and executive impairment as typical for Alzheimer's disease, whereas nonspecific and more variable test performance could be expected in major depression. In accordance with this perspective, a number of reviews state a lack of clarity in the neuropsychological profiles of depressive disorders [12,13]. However, other authors consider impairment in speed of information processing, attention or executive functions as cognitive core features of depressed older patients [7,[14][15][16].
Since the cognitive deficits associated with depression are less pronounced than those found in dementia [17][18][19], making a differential diagnosis becomes much more difficult when it is not full-blown dementia but "mild cognitive impairment" (MCI) that has to be differentiated from depression. For almost 30 years, MCI has been conceptualized as a transitional phase between normal aging and dementia; it is discussed as a clinical condition with a high prognostic value for future dementia development, mostly towards Alzheimer's dementia [20][21][22]. The diagnostic differentiation of MCI and depression is further complicated by the existence of several MCI subtypes (amnestic vs. non-amnestic, single vs. multiple domains) causing a potential variety of neuropsychological performance patterns. Furthermore, nearly one third of MCI patients also will develop depression symptoms [23]. Overall, a wide range of disturbed cognitive functions may be expected in both MCI and depressed subjects. Consequently, attempts to differentiate between MCI and depression by means of psychometric tests often have failed [18,19,24].
Against this background, the present study compared the neuropsychological profiles of patients with MCI, mild dementia and depression tested with the SKT according to Erzigkeit [25]. The SKT (acronym for Syndrom-Kurztest; however, this German term is outdated and not used anymore) is a short cognitive performance test assessing memory and attention, the latter in the sense of speed of information processing. Thus, the SKT addresses exactly those two cognitive domains that are considered to be primarily impaired in patients with mild dementia and depressive disorders, respectively. Furthermore, given the fact that amnestic MCI is the most frequent MCI subtype [24,26], it was expected that patients with MCI or mild dementia would show greater deficits in the memory section of the SKT, while depressed patients would be more impaired in subtests measuring speed of information processing.
Samples
The present study included all patients referred between 2000 and 2005 to the Memory Clinic of Nuremberg General Hospital fulfilling the following criteria: (1) age 60 years or older, (2) diagnosis of mild cognitive impairment (MCI, in accordance with the consensus criteria according to Winblad et al. 2004 [20]), mild dementia (Alzheimer type, mixed type or vascular dementia; ICD-10 codes F00 or F01) or depressive disorder (ICD-10 codes F32 or F33) and (3) complete assessment with all SKT subtests. As an indicator of the clinical severity of MCI and mild dementia, assignment to stages 3 (MCI) or 4 (mild dementia) of the Global Deterioration Scale (GDS, [27]) was required. Exclusion criteria were (1) age below 60 years, (2) all other diagnoses than the ones required for inclusion, e.g., other forms of dementia (dementia in Parkinson disease and amnesic syndromes due to substance use), other forms of depression (e.g., adjustment disorders or post-traumatic stress disorders) and (3) not being able to complete all SKT subtests (e.g., due to reduced motor abilities, due to not being able to understand the test instructions or being unfamiliar with numbers).
Measures
The SKT is a cognitive test developed and published in Germany [28] assessing impairment of memory and attention, the latter in the sense of speed of information processing. The SKT comprises nine subtests, three of which refer to visual memory (immediate and delayed recall and recognition memory), while the remaining six subtests measure processing speed. An overview of the subtests and the tasks to be completed is given in Table 1; the test materials are shown in Figure 1. The maximum performance time for each subtest is limited to 60 seconds, so that the total administration time will be approximately 10 to 15 minutes. In the attention/speed subtests, the patient is instructed to work as fast and accurately as possible. In the memory subtests, all correct answers given within 60 seconds will be scored. The test was developed in five parallel forms (A to E) for repeated test administration even within short time intervals. In addition to a total summary score, the evaluation also provides subscores for separately interpreting memory and attention performance.
Since its publication in 1977, the SKT has been revised three times. The last revision carried out in 2015 was undertaken to establish new test norms for age groups 60 years and older to improve the sensitivity of the SKT for early cognitive decline due to Alzheimer's disease or other neurocognitive disorders [25]. In a first step, more than 1000 non-demented community dwelling subjects aged between 60 and 91 years were tested with the SKT. On the basis of this data set, conditional expected values were calculated for each of the nine SKT subtests using multiple regressions taking into account age, gender and level of intelligence. Based on the deviations from the predicted performance, in a second step, norm scores of 0, 1 or 2 were defined depending on the size of the deviation of the actual performance from the predicted performance (higher scores indicating greater cognitive impairment). The SKT total summary score ranging between 0 and 18 is obtained by adding the deviation scores (i.e., norm scores) of the nine subtests and is visualized in a traffic light system. Total scores between 0 and 4 indicate "age-appropriate cognitive performance" (green), scores between 5 and 10 points suggest "mild cognitive impairment" (MCI, yellow) and values between 11 and 18 substantiate a "suspicion of beginning dementia" (red). It must be noted that the SKT total summary score can be reliably interpreted in case of a homogeneous test profile, i.e., the memory and attention domain are affected to a similar extent. In case of profile heterogeneity, the summary scores should be interpreted with caution and the severity of impairment should also be assessed separately for the two domains.
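For readers who want to see the arithmetic of this scoring scheme, a minimal sketch is given below. It only reproduces the summation of the nine norm scores and the traffic-light banding described above; the regression-based derivation of the individual 0/1/2 deviation scores (which depends on age, gender and intelligence) is not modelled here, and the caveat about heterogeneous profiles still applies. The assignment of subtests II, VIII and IX to the memory domain follows the description given later in this article.

```python
# Illustrative sketch of the SKT scoring scheme described above (not the official
# SKT-Analyser spreadsheet). Memory = subtests II, VIII, IX; attention/speed =
# the remaining six subtests, as described in the text.

SUBTESTS = ("I", "II", "III", "IV", "V", "VI", "VII", "VIII", "IX")
MEMORY_SUBTESTS = {"II", "VIII", "IX"}

def skt_summary(norm_scores):
    """norm_scores: dict mapping each subtest label to a deviation score of 0, 1 or 2."""
    if set(norm_scores) != set(SUBTESTS):
        raise ValueError("norm scores for all nine SKT subtests are required")
    if any(score not in (0, 1, 2) for score in norm_scores.values()):
        raise ValueError("each norm score must be 0, 1 or 2")

    memory = sum(v for k, v in norm_scores.items() if k in MEMORY_SUBTESTS)
    attention = sum(v for k, v in norm_scores.items() if k not in MEMORY_SUBTESTS)
    total = memory + attention  # 0 (no impairment) to 18 (maximum impairment)

    if total <= 4:
        band = "green: age-appropriate cognitive performance"
    elif total <= 10:
        band = "yellow: mild cognitive impairment"
    else:
        band = "red: suspicion of beginning dementia"
    return {"memory": memory, "attention": attention, "total": total, "band": band}

# Example: pronounced deficit in delayed recall (subtest VIII) with mild slowing elsewhere.
print(skt_summary({"I": 1, "II": 1, "III": 0, "IV": 1, "V": 1,
                   "VI": 0, "VII": 1, "VIII": 2, "IX": 1}))
# -> total 8, "yellow: mild cognitive impairment"
```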
Besides the SKT, the standard test battery of the Nuremberg Memory Clinic comprised the CERAD-NP [29] and two different depression scales [30,31]. Furthermore, relatives rated the patient using the Bayer-ADL scale to stage functional capacities [32] and the Neuropsychiatric Interview [33] to assess behavioral disturbances occurring in dementia. The diagnostic classification of a given patient was made taking into account information from different sources (anamnesis, medical examination, neuropsychology, everyday functioning, and neuroimaging or laboratory results).
Statistics
The raw scores of the nine SKT subtests were converted into norm scores using the EXCEL program "SKT-Analyser-v10.xlsm" [25]. From these, the subscores for memory and attention as well as the SKT total summary score were calculated by adding the corresponding subtest scores (SKT subscores and total summary score are also included in the program printout). Using one-way analyses of variance, differences in the nine subtests, the two subscores and the SKT total summary score were checked for statistical significance between the three study groups. Pairwise group comparisons were based on the Tukey test. A Bonferroni correction for multiple comparisons was not carried out, as the focus of the analyses presented here was on the comparative examination of test profiles and less on the detection of robust group differences. Comparisons of SKT profiles across the nine subtests and the memory and attention subscores were performed using multivariate analyses of variance for repeated measures. To assess the effect of depression on SKT scores, Pearson correlation coefficients were computed between depression scores [30,31] and the SKT summary score, the SKT memory and attention subscores and the norm scores of the nine SKT subtests. Moreover, we repeated the analyses of variance controlling for depression to establish a "pure" metric of cognitive impairment unbiased by affective disturbances. Since we used two different depression scales, we calculated the mean of their transformed z-scores [30,31]. Furthermore, a receiver operating characteristic (ROC) analysis was employed to compute areas under the curve (AUC) for each of the three diagnostic groups using the SKT norming sample comprising 1053 non-demented community-dwelling subjects aged between 60 and 91 as a reference group. All analyses were carried out with the statistics program IBM SPSS Statistics (Version 20, Armonk, NY, United States) and were based on a completely anonymized data set. The study was registered in the study centre of the Nuremberg General Hospital as a quality assurance measure according to § 27/4 of the Bavarian Hospital Law.
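For illustration, the sketch below shows how two of the analysis steps described here could be reproduced outside SPSS: building the depression covariate as the mean of z-standardized scores from the two depression scales, and estimating an ROC AUC for one diagnostic group against the norming sample. All numeric values are placeholders, not study data, and the snippet is not the authors' actual analysis pipeline.

```python
# Illustrative sketch (not the authors' SPSS workflow) of the depression covariate
# and the ROC analysis described above. All values are placeholder data.
import numpy as np
from scipy.stats import zscore
from sklearn.metrics import roc_auc_score

# Two depression scales measured on the same patients (placeholder values).
scale_a = np.array([12.0, 25.0, 8.0, 30.0, 18.0])
scale_b = np.array([5.0, 11.0, 3.0, 14.0, 9.0])
depression_covariate = (zscore(scale_a) + zscore(scale_b)) / 2.0

# ROC analysis: SKT total scores of non-demented controls (label 0) vs. one
# diagnostic group (label 1); higher SKT scores indicate greater impairment.
skt_controls = np.array([2, 3, 1, 4, 5, 2, 7, 3])
skt_patients = np.array([9, 12, 6, 14, 10, 8])
labels = np.concatenate([np.zeros_like(skt_controls), np.ones_like(skt_patients)])
scores = np.concatenate([skt_controls, skt_patients])

print("depression covariate:", np.round(depression_covariate, 2))
print("AUC:", roc_auc_score(labels, scores))
```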
Results
Of the 1362 patients assessed between 2000 and 2005 in the Nuremberg Memory Clinic, a total sample of n = 549 fulfilled the inclusion and exclusion criteria (see Section 2.1). The patients were distributed among the three diagnostic groups as follows: 172 patients were diagnosed with MCI, 166 patients were diagnosed with dementia (F00.0 or F00.1: 89 patients, F00.2: 39 patients and F01: 38 patients), 211 patients suffered from first manifested or recurrent depression (F32: 150 patients and F33: 61 patients). Diagnoses were based on ICD-10 [10]. Sociodemographic data and SKT results (subtests, subscores and SKT total score) of the three study groups were compiled together with the results of the group comparisons in Table 2. The SKT test profiles of the three study samples are depicted in Figure 2.
As Figure 2 illustrates, the MCI and dementia groups show peaks in their SKT profiles in subtest VIII, which examines the delayed recall of objects. In contrast, depressed patients reveal their most striking performance deficits in the speed subtests IV, V and VII. Furthermore, Figure 2 indicates less overall cognitive impairment in MCI and depressed subjects when compared to dementia patients. SKT total summary scores for the MCI and depressed groups displayed values between 8 and 9 points; they do not differ statistically between both groups (see Table 2). However, striking differences can be detected in their subtest profiles. While mean scores in memory subtests II, VIII and IX of MCI patients are consistently lower than those of subjects with dementia (the level of significance was reached for subtest VIII), depressed patients show more pronounced deficits than MCI patients in all six speed tests (with only the difference in subtest VI reaching significance; statistical tendencies (p < 0.10) were found for subtests I and IV). Subsequently, the memory subscore indicated significantly greater cognitive impairment in the MCI group and the attention subscore in the depression group (p < 0.05 each).
The comparison of the SKT profiles between the three diagnostic groups included in the study across all nine subtests revealed a highly significant interaction effect 'diagnosis × subtest' (Pillai's Trace = 0.121 with F (16, 1080) = 4.34, p < 0.000) in a multivariate analysis of variance with repeated measures (MANOVA), which indicates an overall difference of test profiles. Subsequent pairwise comparisons performed to identify the source of this interaction effect revealed a marginally non-significant interaction (Pillai's Trace = 0.045 with F (8, 329) = 1.59, p = 0.054) for the comparison MCI vs. dementia, indicating a relative similarity of the subtest profiles between these two groups. The two remaining contrasts, MCI vs. depression and dementia vs. depression, were again significant with respect to the interaction term 'diagnosis × subtest' (MCI vs. depression: Pillai's Trace = 0.066; F (8, 329) = 3.31, p < 0.001; depression vs. dementia: Pillai's Trace = 0.0148; F (8, 329) = 8.01, p < 0.000) pointing towards the depression group as the source of the overall difference between profiles.
More clearly than the profile comparisons across subtests, the comparison of the SKT memory and attention subscores revealed the different impairment patterns between diagnostic groups MCI/mild dementia vs. depression. When comparing the three subsamples, the interaction 'diagnosis × subscore' reached significance (Pillai's Trace = 0.034; F (2, 546) = 2.55, p < 0.000). However, when comparing only MCI vs. dementia, the level of significance was missed more clearly for the SKT subscore profile than for the subtest profile (Pillai's Trace = 0.004 with F (1, 336) = 1.18, p = 0.277). This result demonstrates the similarity of the SKT subscore profiles between MCI and mild dementia. The remaining comparisons (MCI vs. depression and dementia vs. depression) again showed significant interaction effects, which can be interpreted in terms of different subtest compositions in MCI/mild dementia vs. depression (MCI vs. depression: Pillai's Trace = 0.042; F (1, 381) = 16.52, p < 0.000; dementia vs. depression: Pillai's Trace = 0.024; F (1, 375) = 9.18, p < 0.01).
Pearson correlation coefficients between SKT and depression scores ranged between r = −0.15 and 0.20 in the total sample and hardly exceeded r = 0.20 in the three subsamples (MCI: r = 0.02 to 0.20; DEM: r = −0.22 to 0.15; DEP: r = −0.03 to 0.17). Accordingly, introducing depression as a covariate into the analyses of variance did not fundamentally change the outcome. Regarding significance, seven out of eight comparisons remained significant, even though less pronounced. Notably, the differences in SKT subtest and subscore profiles between MCI vs. depression outlasted the correction for depression. When comparing these two groups, the interaction terms remained significant (diagnosis × subtest: Pillai's Trace = 0.046; F (8,351) = 2.130, p = 0.033; diagnosis × subscore: Pillai's Trace = 0.017; F (1,358) = 6.29, p = 0.013). Finally, Table 3 displays the results of the ROC analyses examining the ability of the SKT sum score and the subscores to correctly classify MCI, dementia and depression. All SKT scores were based on the SKT norming sample used for developing the regression-based norms [25].
Discussion
In the present analyses, the newly-normed SKT, a short cognitive performance test for assessing deficits of memory and attention, revealed different neuropsychological profiles for patients belonging to the MCI/mild dementia spectrum on the one hand, and patients suffering from depressive disorders on the other. In the MCI and dementia conditions, the deficit patterns displayed their peaks for the delayed memory recall of objects. Since amnestic MCI (isolated or in combination with other cognitive domains) is considered to be the most frequent MCI subtype [24,26] and impaired episodic memory is a prerequisite for a dementia diagnosis according to the ICD-10 criteria, this result is not really surprising. However, it can be taken as an indication of the construct validity of the SKT as a tool to support diagnosis in organic mental disorders. It may be expected that the assessment of patients with other forms of dementia, e.g., Lewy-Body, Frontal Lobe or Parkinson's, might have resulted in different test profiles. In the same vein, an exploratory investigation comparing SKT subtest patterns of patients diagnosed with Alzheimer's and Parkinson's dementia using the old test norms [34] indicated greater impairment of Parkinson patients in subtests assessing speed of information processing with subtest V (replacing blocks) reaching the level of statistical significance. Moreover, the slowing of speed of information processing, especially in tasks with a strong executive component, which could be observed in the depressed sample of the study has been described as a characteristic neuropsychological feature of depression [7,14,15].
To address a common misunderstanding, it must be pointed out that the SKT is not a test exclusively for the area of dementia. Originally, it was developed for usage with patients older than 17 years of age suffering from acute or chronic mental disorders irrespective of their aetiology. Therefore, there is ample experience with the SKT in the cross-sectional and longitudinal assessment of cognitive impairment resulting, e.g., from brain injury, substance abuse or anesthesia [35]. The misclassification of the SKT as a dementia test was surely supported by the fact that the SKT has been used as an outcome measure in more than 50 studies investigating the efficacy of various nootropic compounds, cognition enhancers or antidementia drugs, in the past years with a clear focus on the efficacy of Ginkgo biloba [35].
In line with this shift of test usage towards dementing disorders starting in the 1980s, all three test revisions of the SKT focused on older patients suffering from cognitive impairment. The first modification in 1989 aimed at making test materials more appealing [36]. The second revision suggested a finer classification of age norms beyond the age of 65 and included an option for separate assessment of memory and speed functions allowing for differential diagnostic considerations [37]. Finally, the new norming of 2015 [25] served the purpose of improving the sensitivity of the test for early recognition of dementia in persons aged 60 years or older. First data show the high sensitivity and specificity of the SKT for dementia being 0.83 and 0.84, respectively [38,39]. The results of the ROC analyses reported in the present study support these findings.
Of special interest in the present investigation is the finding that the analysis of the SKT subscores for memory and attention revealed statistically significant differences between MCI and depressed patients. Other working groups, e.g., Barth et al. (2005) [18], using the CERAD-NP test battery, did not find significant differences between MCI and depression in any of the CERAD tasks. In the same way, Zihl et al. (2010) [24], analyzing neuropsychological test data of MCI and cognitively impaired depressed patients with the CERAD-NP and an additional series of other psychometric instruments, did not find a single significant difference between the two diagnostic groups. Nevertheless, they identified a significant reduction in speed of information processing for their depressed patients when comparing the results to cognitively normal older controls. This may be taken as a further indication that processing speed is a core domain affected by depression, which is in full accordance with the present results. Accordingly, our ROC analyses for depression vs. controls revealed a higher discriminative power of the SKT speed subscore in comparison to the memory subscore. Furthermore, the fact that the differences in SKT subscores for memory and speed performance outlasted a correction for (self-rated) depression may cautiously be considered as a hint of reduced speed of information processing as a trait marker for depression. This interpretation is supported by findings that speed and executive test performance of successfully treated depressed patients improved, but did not normalize [15]. Finally, in a study by Dierckx et al. (2007) [19] a cued recall paradigm discriminated well between Alzheimer patients and depressed subjects, but considerably lost diagnostic accuracy for separating MCI from depression. The authors explain this finding by the heterogeneity among MCI patients and a diagnostic uncertainty induced by misdiagnosing MCI in the presence of affective symptoms as depression.
Differential diagnosis between MCI/dementia and depression is not only complicated by an overlap in cognitive and affective symptoms. Meanwhile, there is evidence that MCI/dementia and depression share common pathophysiological pathways (e.g., [40,41]). On the one hand, depression seems to play a role in the pathogenesis of Alzheimer's disease via stress and a glucocorticoid increase that may cause amyloid-beta production or hippocampal atrophy resulting in an elevated dementia risk in depressed subjects. On the other hand, neurodegenerative and cerebrovascular alterations in the brain are discussed as etiological factors of depression [42,43]. Thus, in the future it is desirable that the clinical and psychometric assessment of patients suffering from cognitive and/or affective symptoms should be supplemented by information available from biomarkers reflecting neuronal or vascular damage. This could allow for defining MCI [21] and depression subgroups bearing a higher risk for cognitive decline towards a dementia syndrome. The next step for our working group will be an analysis of SKT follow-up data that might be available for MCI and depressed patients participating in the present investigation to validate their diagnostic classifications.
A final remark refers to the international validity of the SKT, which up to about 1990 was mainly used in German-speaking countries. However, in the following years an increasing number of international studies were performed, e.g., in the United States, the UK, Greece, Russia, Chile, Mexico, Brazil or South Korea [44][45][46][47][48][49][50]. Some of these studies specifically aimed at validating the SKT for the respective target language or culture. To summarize a few findings, the transcultural transfer of the (mostly nonverbal) SKT test materials only required minor adjustments of some objects shown in subtest I (because they were less familiar in the target countries) or the adaptation of letters to be read in subtest VII (especially for countries using non-Latin letters). In many of these studies, the SKT kept the psychometric properties or factor structure comparable to the original German test version. However, the dependency of test results on education becomes critical especially with patients from developing countries with very few years of formal school education [47,49].
In 2019, the German standardization study of 2015, which established the new testing norms was replicated in three testing centers in the USA, Australia and Ireland with a somewhat smaller sample of altogether 285 cognitively unimpaired persons aged between 60 and 96 years [35,51]. As in the German study, the most important predictors of the SKT performance were age, age-squared, gender and intelligence. The explained variance was comparable to that found in the German standardization sample suggesting that the regression-based German SKT norms from 2015 are well matched by those found in 2019 for English speaking subjects. This equivalence may be taken as evidence for the cross-cultural stability of the SKT in German and English speaking countries of the Western world (see also [45]). It indicates that the SKT in its present form may be used without any further adaptations of the testing material in these regions. Taken together, the results of the present study confirm that the SKT can be considered as a neuropsychological test instrument validly assessing impairment in two cognitive domains, i.e., memory and attention (speed of information processing), which should always be addressed for a comprehensive diagnostic work-up within the spectrum of neurodevelopmental (ICD-11) or neurocognitive disorders (DSM-5).
Business Students ’ Views of Peer Assessment on Class Participation
The purpose of this project was to introduce peer and self assessment on tutorial class participation to a marketing unit at Curtin Sarawak. This assessment strategy was introduced with the desire to improve class participation and increase student involvement in assessment. At the end of the semester, a questionnaire was used to gather responses from a sample of 77 students about their opinions on the peer assessment practice. Students agreed that the practice promotes a sense of ownership, engagement and personal responsibility for the learning experience. At the same time, however, many experienced some stress in the assessment process and found it not easy to evaluate their peers. The study found that students do not reject the peer assessment strategy.
Introduction
Despite the universal advice against grading class participation from assessment and measurement scholars (Davis, 1993), class participation remains an important item of student evaluation in business courses, especially where case discussions are an integral part of the course. A study of core curriculum syllabi at Seattle University discovered that 93 percent of courses included class participation as a component of course grades (Bean & Peterson, 1998). At Curtin University of Technology, Sarawak Campus, more than half of the business programs build participation into course grading. Normally it constitutes a relatively small proportion of the course grade, ranging from 5 percent to 20 percent. The majority of business students studying at this campus are local Malaysians. Given that many of these students come from education systems where students are passive learners and are not encouraged to speak up or ask questions in class, it is a challenge for them to participate actively in classroom discussion. The personality of some also inhibits them from speaking up in class, leading them to feel stressed by this method of assessment.
Assigning a class participation mark is very complicated because of its subjective nature. Several evaluation tools have been published to assist teachers in assessing class participation (Bean & Peterson, 1998; Chapnick, 2009; Craven & Hogan, 2001; Maznevski, 1996; Melvin, 1988). The use of published scales may assist in the process, but assigning a class participation grade remains difficult to objectify. The equivocal nature of evaluating class participation makes it an ideal area in which to share evaluation with students. Multiple evaluators may increase the accuracy of class participation grading. Student involvement in assessment typically takes the form of peer or self assessment. As research suggests, peer and self-assessment has been increasingly used as an alternative method of engaging students in the development of their own learning. It encourages, for example, student autonomy and higher order thinking skills, while potential weaknesses can be minimized with anonymity, multiple assessors and moderation by tutors (Van Den Berg, 2006).
A quick email survey indicated that peer assessment on class participation is a rare practice in the School of Business at Curtin Sarawak. The use of peer assessment is confined to assessing students' oral presentations and contributions to group work, which is mainly conducted in management and marketing courses. Thus, the purpose of this project was to introduce peer and self assessment on tutorial class participation and to collate information on students' opinions of this assessment process.
Literature review
Class participation promotes active learning, critical thinking, development of listening and speaking skills needed for career success, and the ability to join a discipline's conversation (Bean & Peterson, 1998).When students see that their participation is being graded regularly and consistently, they adjust their study habits accordingly so as to be well prepared for active class participation.
To grade class participation fairly, the lecturer needs to create an environment that gives all students an opportunity to participate.According to Bean and Peterson (1998), the most common participatory classroom uses open or whole-class discussion, wherein the lecturer poses questions aimed at drawing all class members into conversation.Another is the "cold-calling" mode, where the lecturers poses a question and then calls on students at random to formulate their answers.Still another kind of participatory class employs collaborative learning, in which students work in small groups toward a consensus solution of problems designed by the lecturer and then report their solutions in a plenary session.
According to Topping (2009), peer assessment is an arrangement for learners to consider and specify the level, value, or quality of a product or performance of other equal-status learners.In simple terms, it is students grading the work or performance of their peers using relevant criteria (Falchikov, 2001).
The use of peer assessment in higher education has been advocated by many academics (for example Stefani, 1994;Boud, 1995;Topping, 1998Topping, & 2009;;Falchikov & Goldfinch, 2000;Sivan, 2000).The method has been tried out at different levels, across disciplines and with different types of assignments including writing, portfolios, oral presentations, test performance, and other skilled behaviors (Topping, 2009).The use of peer evaluation of class participation has also been previously reported (Bean & Peterson, 1998;Gopinath, 1999;Melvin, 1988;Ryan et al., 2007).
There is substantial evidence that peer assessment can result in improvements in the effectiveness and quality of learning, with gains for assessors, assessees, or both (Topping, 2009).Peer assessment involves students directly in learning, thus promoting a sense of autonomy and ownership of the assessment process which improves motivation.Peer assessments can lead teachers to scrutinize and clarify assessment objectives and purposes; criteria and grading scales.
On the other hand, peer assessment process can cause anxiety to both assessors and assessees.Social processes can influence and contaminate the reliability and validity of peer assessments.Peer assessments can be partly determined by friendship bonds, enmity, or other power processes, the popularity of individuals, perception of criticism as socially uncomfortable, or even collusion to submit average scores, leading to lack of differentiation (Topping, 2009).
The validity and reliability of peer evaluations are debatable. Over 70% of studies find reliability and validity adequate (Sadler & Good, 2006); a minority find them variable (Falchikov & Goldfinch, 2000; Topping, 1998). The literature on self- and peer-assessment shows that, in general, students tend to overrate themselves. A tendency for peer marks to bunch around the median is sometimes noted. Student acceptance varies from high to low. Contradictory findings can be explained in part by differences in contexts, the level of the course, the product or performance being evaluated, the contingencies associated with those outcomes, the clarity of judgment criteria, and the training and support provided (Topping, 2009).
Method
The study consisted of two cycles of action research involving business students enrolled in the Services Marketing unit over two semesters in 2009. Forty-two and thirty-five students enrolled in Semesters 1 and 2 respectively. Each semester had two tutorial groups. These students were in either their second or third year of studies at the Business School, Curtin Sarawak. Tutorial participation contributes ten percent to the final grade of the unit. Though this is a relatively small proportion of the unit grade, it is large enough to motivate students to attend and participate in weekly tutorial discussions. Each tutorial group met once per week for one and a half hours throughout a 12-week semester.
During the first session in both cycles of the action research, students were informed of the nature and process of peer assessment, and the rationale and expectations were clarified. This was part of the subject induction. Students were given the opportunity to voice their opinions and ask questions about the assessment. In the first cycle the criteria for assessment were provided by the lecturer, but in the second cycle the criteria were established by the students. As suggested by Topping (2009, p. 25), involving students in developing the criteria for assessment promotes a sense of ownership and decreases student anxiety. Following the lecturer's introduction of the methodology, students were asked to suggest criteria for assessing their fellow students. They started with brainstorming in small groups, followed by a presentation of the criteria accepted by each group. To arrive at an agreed set of criteria, a discussion was facilitated by the lecturer to examine the meaning of each criterion, its use and its relevance. Based on the agreed list of criteria, the lecturer developed an assessment rubric for class participation.
Students received the rubric (assessment form) in week 2 together with a complete student list. A short training session showing students how to do peer assessment was conducted. The problem of impressionism in assessing classroom participation can be substantially alleviated through a scoring rubric. Using such a rubric, points for class discussion were assigned at three different times in the twelve-week semester (weeks 4, 7 and 10). With regularly assigned points, students had the opportunity to evaluate and improve their performance, thus making the final class participation mark less arbitrary. Feedback and coaching were given where needed.
Each student had to assess everyone in the class, including themselves. In every tutorial session, students were asked to put their names on their desks for identification purposes. Each week there were pre-assigned readings, case studies or open-ended assignments given to the students. Throughout the semester, student participation was evaluated during whole-class discussions, small group presentations, question and answer sessions, and other in-class activities. In order to maintain confidentiality, the name of the assessor was not included in the assessment form. The individual mark for class participation (10%) was determined by taking the average individual score obtained from peer assessment, adding it to the lecturer's score, and then dividing by two to derive the final mark. Two class representatives were appointed to assist the lecturer in computing and compiling the final marks.
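As a small illustration, the final-mark computation described above can be sketched in Python (the variable names and the assumption that peer and lecturer scores share a common 0-10 scale are ours, not taken from the rubric):

# Sketch of the class participation mark computation described above.
# Assumes peer scores and the lecturer's score are already on the same
# 0-10 scale; the actual rubric scale is not specified here.
def participation_mark(peer_scores, lecturer_score):
    """Average the peer scores, add the lecturer's score, and divide by two."""
    peer_average = sum(peer_scores) / len(peer_scores)
    return (peer_average + lecturer_score) / 2

# Example: ten hypothetical peer ratings and a lecturer score of 7.5
print(participation_mark([8, 7, 9, 6, 8, 7, 8, 9, 7, 8], 7.5))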
At the end of the semester, student reactions to peer and self-assessment were solicited by means of anonymous questionnaires administered, in both cycles of research, before the students knew the assessment results of the unit. The questionnaire consisted of 4 closed questions, 2 open questions and 13 statements to which students were asked to respond by indicating their level of agreement. A 5-point category scale was used, with 5 = strongly agree, 4 = agree, 3 = neutral, 2 = disagree and 1 = strongly disagree. The questions were derived from other published studies (Brindley & Scoffield, 1998; Ryan et al., 2007) and based on the author's experience with issues of class participation assessment. Means were calculated for all statements, and percentages were used to display the responses to the closed questions.
Results
Table 1 presents the mean score of each statement rated by the students. In the sample as a whole, students appear to find the assessment exercise of benefit; their responses were positive on six statements, taking a score above 3.50 as positive. The study corroborates Brindley and Scoffield's study (1998): students expressed that the practice promotes a sense of ownership, engagement and personal responsibility for the learning experience (mean score 3.82). Moreover, they perceived an increase in personal motivation (3.81) as a result of their active participation in peer assessment. The results also reflected that students fully understood what was expected of them in doing the peer assessments (3.79) and that the scoring rubric given was helpful in doing the peer evaluation (3.68). This is not entirely unexpected because the lecturer had spent a large amount of time discussing the process with the students and preparing them for the assessment task.
On the other hand, the results show that the students did not find it easy to evaluate their peers. Figure 1 indicates that 70% of the students disclosed that they altered marks as the assessment progressed. Many students did not feel the need to participate more because of peer assessment: the statement "I participated more because I knew my peers were evaluating me" had a mean score of only 3.45.
In general, the mean scores for Semester 1 were higher and more positive than those for Semester 2, even though Semester 2 students were involved in developing the assessment criteria. It was natural for Semester 2 students to assign higher scores to the statements "I fully understand what was expected of me in doing the peer assessments" and "I found it easy to evaluate my peers on their class participation" due to their involvement in setting the assessment criteria. But their ratings on the other statements indicated that they were less in favor of the peer assessment exercise. The literature suggests that allowing students to be involved in the creation of the evaluation criteria may improve student understanding and acceptance of the assigned grades (Dochy et al., 1999). However, the result of this study reflected a slightly different view.
According to Topping et al. (2000), students experience a sense of socio-emotional discomfort in grading their peers. In this study, only two students reported no pressure in the peer evaluation exercise, while 71% experienced some pressure (see Figure 2). Figure 3 indicates that the main pressure appeared to stem from the assessment process as a whole (41.6%) and from class participation (32.5%). It was interesting to note that only 13% of the students felt that the pressure came from their peers and another 13% from tutors. These results support the findings of Brindley and Scoffield's study (1998).
Of the sixty written comments given, seven students reported that they got to know their peers better and paid more attention to noting the class participation of others. One student suggested that including students' photographs would help in assigning peer scores.
Discussion
Some literature advises that peer assessment of class participation should not be used for grading purposes because of the questionable reliability and validity of peer evaluations. The objective of this study was to explore students' opinions of their involvement in grading class participation. The study found that students do not reject the peer assessment strategy. The author concurs with Gopinath (1999) that the benefits of peer assessment extend beyond the question of the reliability of the grade. This issue can be minimized if students are provided with precise rating criteria and asked to rate on an interval scale against the different criteria, which capture the essence of class participation.
The traditional belief that "teacher knows best" and holds the reins of power in the assessment process needs to change, especially in an Asian learning culture. To develop student autonomy in learning and promote active learning, students have to get involved in the process of setting learning objectives and in the process of assessment. Thus, in this study, having peer assessment provide an input into the lecturer's assessment of class participation served the objective of student involvement. Student involvement in the assessment process also tends to increase the transparency of assessment and students' motivation in class. Peer assessment is, therefore, a valuable exercise in self-development and preparation for students' future careers. It certainly helps in building the Curtin Graduate Attributes of thinking skills, information skills, communication skills, learning how to learn, and professional skills among the students.
Peer assessment is not a set prescriptive process, but rather one that may take time to develop and may also change over time depending on the course content, class size, the curriculum, the university culture, and the students themselves. It is suggested that students need to undergo attitudinal changes towards their learning roles and need practice in more self-evaluative role behaviours if peer assessment is to become more acceptable and successful (Brindley and Scoffield, 1998). In this study, students who disliked peer evaluation believed the process was biased and could result in an unfair grade. This was commented on by eight students in their written remarks. Some felt that it was a tedious process that brought unnecessary pressure on students, as peer assessment was rarely practiced consistently in the Business School. One student indicated a preference for self-assessment instead of peer assessment.
It is recommended that self-assessment exercises be introduced across various types of assignments and business units at Curtin Sarawak, starting with first-year students. This serves to expose students to different assessment methods and to develop them into more autonomous learners who are less dependent on the tutor for all the answers. As these students progress in their course and gain greater experience in assessment and learning, peer assessment may become more acceptable and successful, and increase in value.
To conclude, this study shows that students see the value of peer assessment and that it improves the learning experience and satisfaction. Students felt connected to each other in class as they paid attention to each other's discussions and took greater ownership of their learning. Peer and self-assessment promote a partnership between student and lecturer that is empowering and equal. The outcomes of this study may be of interest to lecturers who wish to introduce self- and peer-assessment in higher education. This action research on peer assessment of classroom participation produces useful insights on the practice of peer assessment and sheds light on student attitudes to peer assessment.

Table 1. Students' View on Peer Assessment (n=77). Mean scores by statement (overall, Semester 1, Semester 2):
I feel intimidated by the whole process: 3.23, 3.29, 3.17
Assessment should be the sole responsibility of tutors: 3.14, 3.12, 3.17
I found it easy to evaluate my peers on their class participation: 3.08, 2.90, 3.29
I feel sufficiently capable to mark other students' participation level: (values missing in the source)
|
v3-fos-license
|
2021-08-01T13:13:34.543Z
|
2021-07-31T00:00:00.000
|
236535957
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://cmbl.biomedcentral.com/track/pdf/10.1186/s11658-021-00278-5",
"pdf_hash": "cbd7fd08ec20c92a197d2aed120d4e1cbb2eb115",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44347",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "7243e7635e8d71c96c87ccf78065f5214ee70846",
"year": 2021
}
|
pes2o/s2orc
|
β-acetoxyisovaleryl alkannin (AAN-II) from Alkanna tinctoria promotes the healing of pressure-induced venous ulcers in a rabbit model through the activation of TGF-β/Smad3 signaling
Alkannin-based pharmaceutical formulations for improving wound healing have been on the market for several years. However, detailed molecular mechanisms of their action have yet to be elucidated. Here, we investigated the potential roles of AAN-II in improving the healing of pressure-induced venous ulcers using a rabbit model generated by combining deep vein thrombosis with a local skin defect. The extent of healing was evaluated using hematoxylin and eosin (HE) or vimentin staining. Rabbit skin fibroblasts were cultured for AAN-II treatment or TGFB1-sgRNA lentivirus transfection. ELISA was used to evaluate the levels of various cytokines, including IL-1β, IL-4, IL-6, TNF-α, VEGF, bFGF, TGF-β and PDGF. The protein levels of TGF-β signaling components, including TGF-β, Smad7, phospho-Smad3 and total Smad3, were assayed via western blotting after TGF-β knockout or AAN-II treatment. The results show that, for this model, AAN-II facilitates ulcer healing by suppressing the development of inflammation and promoting fibroblast proliferation and secretion of proangiogenic factors. AAN-II enhances the activation of the TGF-β1-Smad3 signaling pathway during skin ulcer healing. In addition, the results demonstrate that AAN-II and TGF-β have synergistic effects on ulcer healing. Our findings indicate that AAN-II can promote healing of pressure-induced venous skin ulcers via activation of TGF-β-Smad3 signaling in fibroblast cells and provide evidence that could be used in the development of more effective treatments.
The incidence is 1.5-3 people per 1000, meaning 0.5-2 million people annually [1,2]. Changes in venous hemodynamics often cause an increase in venous pressure, contributing to some extent to the difficulty of managing the ulcer. Thus, VLUs are generally secondary to venous hypertension. They are known to arise due to a series of complex cellular and humoral events and potentially also some genetic factors [3].
Notably, venous ulcers often have recurrent features, implying that a better understanding of the underlying pathophysiology could improve treatment [4]. At present, there are many methods for making ulcer models, but there is no well-accepted one, which hinders the study of the underlying mechanisms, pathogenesis and response to drugs. There are at least 3 types of animal model: the mesenteric venule occlusion model, arterio-venous fistula model and large vein ligation model, in which venous hypertension is respectively induced by acute venular occlusion, placement of a chronic arteriovenous fistula, and ligation of several large veins [5]. For our study, we established a model using local skin defects based on the establishment of a deep vein thrombosis model in the lower limbs of rabbits.
Current therapeutic approaches include advanced wound dressings, antibiotics and surgery [2]. Some ongoing clinical trials are investigating systemic pharmacological agents as adjuncts to venous ulcer healing. Among these agents, herbal therapy with marked healing and anti-microbiological effects shows promise [6].
The root of Alkanna tinctoria contains compounds with antimicrobial, antiinflammatory and antileishmanial activities. Extracts from it have been used as a botanical drug for ulcers, inflammation and wounds since ancient times [7]. One such extract is alkannin, which contains four active compounds: β, β-dimethylacryl alkannin (AAN-I), acetoxyisovaleryl alkannin (AAN-II), acetyl alkannin (AN-III) and alkannin (AN-IV). It has antitumor effects based on multiple-target mechanisms, including crosstalk with an alkylating agent, DNA and protein; effects on ROS levels; and influence on multiple signal pathways [8]. More recent reports demonstrate that alkannin can suppress lung histopathological changes and relieve the lipopolysaccharide-induced inflammatory injury [9,10].
Of note, alkannin can also suppress the function of activated immune cells in psoriasis [28-30], and the four bioactive components have gained recognition as potential ingredients in dermal healing substances [11]. AAN-I has been reported to contribute to reepithelization of wounds through promotion of cell proliferation, migration and vessel formation [12,13]. AN-IV can inhibit UVB-induced apoptosis via regulation of HSP70 expression in human keratinocytes [11]. The IC50 values of the four alkannins were recently determined for human dermal cells and shown to be significantly different [14]. Notably, AAN-II shows a strong healing effect and can significantly suppress H2O2-induced cellular senescence, possibly through upregulation of the expression of collagen I and elastin in human dermal fibroblasts or keratinocytes [15]. These findings and advances on the effects of alkannin in promoting wound healing encouraged us to investigate the roles of AAN-II in venous ulcer healing.
Although alkannin-based pharmaceutical formulations for improving wound healing have been on the market for several years and their roles in wound-healing have been extensively demonstrated [16], the detailed molecular mechanisms have yet to be elucidated. Therefore, we investigated the role of AAN-II in improving the healing of pressure-induced venous ulcers using our rabbit model.
Pressure-induced venous rabbit model
We purchased 30 adult female New Zealand rabbits from the Shanghai Laboratory Animal Center of the Chinese Academy of Sciences and randomly divided them into three groups: control (no ulcer), ulcer without treatment, and ulcer with AAN-II treatment (acetoxyisovaleryl alkannin; CAS No. 69091-17-4; purity 99%; C23H26O8; MedChemExpress, USA). The treatment dosage was 20 mg/kg body weight.
The ulcer was established by causing a local skin defect based on the deep vein thrombosis model. The wound tissues were sampled for relevant tests on days 7 and 14. The physiological and behavioral characteristics of the animals in each group were recorded daily. The ulcer area was calculated as length × width × π/4. All animal studies were approved by the Institutional Animal Care and Use Committees of Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine (Approval No. 20,180,122 on March 1, 2018) and performed in adherence with the Basel Declaration and the institutional guidelines for the care and use of animals.
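For illustration, the area calculation can be written as follows (Python; the example measurements are hypothetical):

import math

def ulcer_area(length_mm, width_mm):
    """Ulcer area approximated as length x width x pi/4 (ellipse-style formula)."""
    return 0.25 * math.pi * length_mm * width_mm

# Hypothetical wound measuring 20 mm by 15 mm
print(ulcer_area(20.0, 15.0))  # area in mm^2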
Hematoxylin-eosin (HE) staining and immunohistochemistry (IHC)
HE staining of formalin-fixed paraffin-embedded (FFPE) tissue sections was performed using the standard protocol. After HE staining, the FFPE sections were histopathologically evaluated and assessed for collagen deposition.
A second batch of FFPE sections was used for IHC examination with a primary antibody against vimentin. After deparaffinizing, the sections were permeabilized in a citrate buffer solution, microwaved for 10 min, washed with phosphate-buffered saline (PBS), then put in 3% H2O2 for 15 min to block endogenous peroxidase activities. After washing with PBS, the sections were incubated with goat serum for 30 min, then overnight with the anti-vimentin primary antibody (1:200 at 4 °C). The subsequent steps were performed following the protocol for the secondary biotinylated antibody kit (Zhongshan Biotech, China). Histological images were taken with a digital-sight imaging system (Nikon Corporation, Japan).
Cell viability assay
Cell viability was measured using a Cell Counting Kit-8 (CCK-8) according to the manufacturer's protocols (Dojindo Laboratories, Japan) and the protocol from an earlier report [17]. In brief, rabbit skin fibroblast cells (purchased from Jining Biotech Corp., Shanghai) were cultured in high-glucose Dulbecco's modified Eagle medium (DMEM) with 10% fetal bovine serum (FBS). They were then seeded at 5,000 cells per well in 96-well plates and treated with or without AAN-II (1 µM). After treatment, 10 µl of CCK-8 solution was added to each well. After 1 h, the absorbance at 450 nm was measured using a microplate reader (Bio-Rad, USA).
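The conversion of absorbance readings to viability is not specified in the text; a common convention (assumed here, not taken from the paper) expresses viability relative to untreated control wells after blank subtraction:

import numpy as np

def relative_viability(od_treated, od_control, od_blank):
    """Percent viability relative to untreated control wells,
    using blank-subtracted OD450 readings (assumed convention)."""
    treated = np.mean(od_treated) - od_blank
    control = np.mean(od_control) - od_blank
    return 100.0 * treated / control

# Hypothetical OD450 readings from replicate wells
print(relative_viability([1.21, 1.18, 1.25], [0.98, 1.02, 1.00], 0.08))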
NGS-based transcriptome analysis
Total RNA from tissues with or without AAN-II treatment was extracted using Trizol Reagent (Invitrogen, USA) and cDNA was synthesized using a First-Strand cDNA Synthesis Kit according to the manufacturer's protocols (Roche Applied Science, USA). The cDNA was sequenced at 1 × 100 bp/single read using an Illumina HiSeq 3000 instrument (Illumina, USA). The obtained sequences were then aligned against the rabbit chromosome set of NCBI project 12,819, AAGW00000000 assembly, accessible at the public UCSC Genome Bioinformatics Site (http://genome.ucsc.edu/). Differentially expressed genes were identified using thresholds of absolute fold change > 1.5 and false discovery rate (FDR) < 0.05. IPA analysis was used to define the enriched annotational functions; an absolute value greater than 2 was considered significant.
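A minimal sketch of the differential-expression filter described above (Python/pandas; the column names and gene values are hypothetical, and "absolute fold change > 1.5" is interpreted here as |log2 fold change| > log2 1.5):

import numpy as np
import pandas as pd

def filter_degs(df, fc_col="fold_change", fdr_col="fdr"):
    """Keep genes with absolute fold change > 1.5 and FDR < 0.05."""
    log2fc = np.log2(df[fc_col])
    keep = (np.abs(log2fc) > np.log2(1.5)) & (df[fdr_col] < 0.05)
    return df[keep]

# Hypothetical table of genes with fold changes and FDR values
genes = pd.DataFrame({
    "gene": ["TGFB1", "SMAD3", "SMAD7", "ACTB"],
    "fold_change": [2.1, 1.8, 0.55, 1.05],
    "fdr": [0.001, 0.03, 0.02, 0.8],
})
print(filter_degs(genes))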
CRISPR-Cas9 and cell transfection
To effectively downregulate TGF-β expression, two sgRNA target sequences were synthesized (5′-CGT ACT TGT TTA CAC CCA TG-3′ and 5′-ACA AGT TGA CGG GAC AGA AG-3′) for the TGFB1 gene. A non-silencing sequence (CGC TTC CGC GGC CCG TTC AA) was used as a negative control. The synthesized constructs were cloned into a lenti-CAS9-sgRNA-EGFP vector at an XbaI site and then transfected into 293T cells using Lipofectamine 3000 according to the manufacturer's instructions (Invitrogen, USA). The lentivirus particles carrying TGFB1-sgRNA were collected for further use. Subsequently, rabbit skin fibroblast cells were seeded into 6-well plates and transfected with the lentivirus and polybrene. Cell extracts were collected for molecular tests at the indicated times.
Western blotting
Treated cells were lysed using RIPA lysis buffer containing 1 mM phenylmethylsulfonyl fluoride (PMSF). Protein extracts were quantified using a Bio-Rad protein assay (Bio-Rad Laboratories, USA). Electrophoresis of 20 µg protein samples was performed on a 10% SDS-polyacrylamide gel, followed by transfer to a 0.2 μm PVDF membrane using a Bio-Rad semi-dry instrument. After blocking with 5% BSA in TBST buffer for 1 h at room temperature, the membranes were incubated overnight at 4 °C with various primary antibodies, including TGF-β (1:1000), total or phospho-Smad3 (1:1000) and Smad7 (1:1000), from Santa Cruz Biotechnology. β-actin was the internal reference. After incubation with an anti-goat secondary antibody, the membranes were developed using an ECL western blot system (Pierce, USA) according to the manufacturer's instructions. Band intensities were quantified using QuantityOne software (Bio-Rad Laboratories, USA).
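Normalization of band intensities to the loading control can be sketched as follows (Python; the densitometry values are hypothetical):

def normalized_band(target_intensity, reference_intensity):
    """Express a target band as a ratio to the loading-control band."""
    return target_intensity / reference_intensity

# Hypothetical band intensities from densitometry output
tgfb_ratio = normalized_band(15400, 21000)      # TGF-beta / beta-actin
p_smad3_ratio = normalized_band(9800, 21000)    # phospho-Smad3 / beta-actin
print(tgfb_ratio, p_smad3_ratio)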
ELISA assay
The filtered supernatant and appropriate ELISA kits were used to determine the levels of various cytokines according to the manufacturer's instructions (R&D Systems, USA) and the protocol from an earlier report [18].
Statistical analysis
Data were analyzed using statistical analysis software (SPSS 19.0, USA) and statistical mapping was performed using GraphPad Prism 11.0. Quantitative variables were compared using one-way analysis of variance (ANOVA). Student's t-test was used for two-group comparisons. p < 0.05 was considered statistically significant.
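A minimal sketch of these comparisons using SciPy in place of SPSS (the group values are made up for illustration):

from scipy import stats

# Hypothetical IL-6 levels (pg/ml) in the three groups
control = [12.1, 10.8, 11.5, 12.4]
ulcer = [35.2, 38.9, 33.1, 36.4]
ulcer_aan2 = [22.4, 20.9, 24.1, 21.7]

# One-way ANOVA across the three groups
f_stat, p_anova = stats.f_oneway(control, ulcer, ulcer_aan2)

# Student's t-test for a two-group comparison
t_stat, p_ttest = stats.ttest_ind(ulcer, ulcer_aan2)

print(p_anova < 0.05, p_ttest < 0.05)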
AAN-II facilitates ulcer healing by suppressing the development of inflammation in this model
To determine whether AAN-II promotes the healing of pressure-induced venous ulcers in this rabbit model, we performed a histological analysis of ulcer wound morphology in the two ulcer groups. The results reveal that the re-epithelialization rate of the wounds was 38% in the AAN-II treatment group, but only 16% in the group without treatment (which also served as the positive control; similarly hereinafter). The healing of the wound-closure area was significantly faster in the AAN-II treatment group than in the group without treatment, in which the epithelium layer remained open and was covered by a large scab (Fig. 1A-C).
Cytokines are well-known regulators of the wound-healing process thanks to their promotion of angiogenesis and recruitment of inflammatory cells [19]. To investigate possible changes in chemokines during wound healing with AAN-II treatment, we analyzed the levels of cytokines, including IL-1β, IL-6, TNF-α and IL-4, in all three groups on days 7 and 14 (Fig. 1D-G). The results show that, compared to the control group, the ulcer group presented an increase in the levels of pro-inflammatory cytokines, including IL-1β, IL-6 and TNF-α, and a decrease in the level of the anti-inflammatory cytokine IL-4. AAN-II treatment significantly induced a decrease in pro-inflammatory cytokine levels and an increase in the anti-inflammatory cytokine level.
AAN-II promotes proliferation of fibroblasts and secretion of proangiogenic factors
A key step in healing is the transition from inflammation to cell proliferation [20]. Recent work has shown that fibroblasts play a role in the inflammation-to-proliferation transition and are critical in the deposition and remodeling of extracellular matrix components and in wound contraction [21]. In this study, the ulcer groups with and without AAN-II treatment were compared. The results show that the number of fibroblasts significantly increased after AAN-II treatment (Fig. 2A), indicating an obvious proliferation of fibroblasts during ulcer healing. The collagen fraction was significantly reduced on days 7 and 14 in the ulcer group compared to the control group, while the collagen fraction gradually improved in the group with AAN-II treatment compared with the group without treatment (Fig. 2B). Subsequently, to demonstrate the effects of AAN-II on fibroblast proliferation, fibroblasts were cultured in vitro and treated with AAN-II. The results from the CCK-8 assay show that after AAN-II treatment, cell viability significantly increased compared to untreated cells (p < 0.05; Fig. 2C). Moreover, the levels of several cytokines that promote the proliferation of fibroblasts, including VEGF, bFGF, TGF-β and PDGF, significantly increased in the AAN-II-treated cells compared with the untreated cells (p < 0.05; Fig. 2D). These results suggest that AAN-II could promote fibroblast proliferation during ulcer healing.
AAN-II enhances the activation of the TGF-β1/SMADs signaling pathway
To explore the molecular mechanism by which AAN-II promotes fibroblast proliferation, transcriptome sequencing was used to obtain the mRNA expression profiles with or without AAN-II treatment. Comparisons, bioinformatics analyses and validation experiments were then conducted. The results show that 212 mRNAs were differentially expressed between the treated and untreated groups. These dysregulated mRNAs mainly function in metabolic and repair processes and participate in 32 biological pathways, with the TGF-β1/SMADs signaling pathway ranked first (Fig. 3A, B).
Therefore, the protein levels of several TGF-β1/SMADs-related molecules, including TGF-β, Smad7, phospho-Smad3 and total Smad3, were determined in the purified primary fibroblasts from the control, ulcer and ulcer with AAN-II treatment groups. The levels of TGF-β and phospho-Smad3/Smad3 were significantly lower in the ulcer group than in the control group, but slightly higher in the AAN-II treatment group. The levels of Smad7 in the three groups showed the opposite change (Fig. 3C, D).
To further investigate whether the role of AAN-II is dependent on the presence of TGF-β, we knocked out TGF-β in fibroblasts with a CRISPR-Cas9 system and determined the protein levels of TGF-β, Smad7, phospho-Smad3 and total Smad3. When TGF-β was knocked out, AAN-II treatment could not significantly upregulate phospho-Smad3 or total Smad3, and the level of Smad7 decreased (Fig. 3E, F), suggesting a dependence of AAN-II on the activation of TGF-β/Smad3.

Fig. 3 AAN-II enhances activation of the TGF-β1/SMADs signaling pathways during ulcer healing. A The IPA analysis of the AAN-II treatment-related DEG interaction network. B Gene ontology analysis using a bioinformatics-based approach. C, D TGF-β1 and its downstream proteins p-Smad3, Smad3 and Smad7 were measured via western blotting in the different groups. Data are representative of 3 independent experiments, shown as ratios of these proteins to GAPDH and presented as means ± SEM. *p < 0.05 compared with the control group. #p < 0.05 compared with the ulcer group. E, F TGF-β1 and its downstream proteins p-Smad3, Smad3 and Smad7 were measured via western blotting in the different groups with or without TGF-β knockdown or AAN-II treatment. Data are representative of 3 independent experiments, shown as ratios of these proteins to GAPDH and presented as means ± SEM. *p < 0.05 compared with the TGF-β knockdown group.
AAN-II and TGF-β present synergistic effects on ulcer healing in this rabbit model
To investigate whether AAN-II and TGF-β can synergistically improve ulcer healing, the ulcers in the model animals were independently treated with AAN-II, TGF-β, or a combination of AAN-II and TGF-β. Pathohistochemical analysis showed that the combination of AAN-II and TGF-β significantly promoted the proliferation of fibroblasts (Fig. 4A), reduced the ulcer area (Fig. 4B) and increased the percentage of ulcer healing (Fig. 4C) compared with the effects of AAN-II or TGF-β alone. Furthermore, the relative levels of inflammatory factors, including IL-1β, IL-6 and TNF-α, were significantly lower and the IL-4 level was higher (Fig. 4D). These results indicate that AAN-II and TGF-β have synergistic effects on ulcer healing in this rabbit model.
Discussion
Histopathological analysis confirmed the successful establishment of our rabbit model of venous leg ulcers (VLUs). When compared, the ulcer group without treatment had higher proliferation rates of inflammatory cells and fibrous connective tissue, while the control group showed low levels of inflammatory cell infiltration and necrotic neutrophils. Our model provided a solid experimental platform for our subsequent experiments. The current important issues in the management of VLUs in the lower limbs are professional uncertainty and clinical variability [22]. The therapeutic pharmacology for VLUs mainly involves two medications: pentoxifylline and phlebotropic agents [23]. Traditional Chinese medicine is becoming more widely accepted in the treatment of chronic ulcers. For example, a meta-analysis showed that, compared with other treatments, Chinese herbal medicine ointments for pressure ulcers have a beneficial effect on the total effective rate [8]. Traditional Chinese medicine assumes that chronic skin ulcers always have "virtual" and "stasis" states because of their protracted healing. In previous studies, we observed wound healing after external use of Zizhu herbal ointment (a traditional Chinese herbal formulation from Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine) in patients with diabetic foot ulcers or VLUs in clinical practice [24-26]. Alkanna tinctoria is the main component of this formulation, and it has been identified as effective in the treatment of dermatitis, including psoriasis [27].
Alkannin is an important bioactive component of A. tinctoria and comprises four bioactive compounds: AAN-I, AAN-II, AN-III and AN-IV. We focused on AAN-II, finding that it facilitates ulcer healing by suppressing the development of inflammation. We demonstrated an AAN-II treatment-induced decrease in pro-inflammatory cytokine levels and an increase in anti-inflammatory cytokine levels. These phenomena could be related to inhibition by AAN-II of the inflammatory cascades induced by elevated venous pressure.
Healing is a multistep process involving hemostasis, proliferation, inflammation, immunological response and remodeling [19,28]. It is well recognized that fibroblasts are critical in supporting wound healing [1]. Therefore, we evaluated the proliferation of fibroblasts after AAN-II treatment and verified that AAN-II could promote this process during ulcer healing. We also identified the TGF-β/SMADs signaling pathway as the crucial node among the AAN-II-related networks using transcriptome sequencing combined with bioinformatics analyses. The effects of AAN-II were shown to be dependent on the activation of TGF-β/Smad3 signaling. This is the first demonstration of the association between AAN-II and TGF-β/Smad3 signaling. Developing novel therapeutic approaches requires a better understanding of the mechanisms that underlie wound healing [29]. Therefore, we further explored the therapeutic potential of targeting the cytokine TGF-β together with AAN-II as a novel approach for VLU treatment. Intriguingly, we found that they have synergistic effects on ulcer healing.
Conclusions
These results demonstrate the effects of AAN-II on inhibiting inflammation and promoting fibroblast proliferation in this model. Additionally, our in vivo results demonstrate that AAN-II and TGF-β had synergistic effects on ulcer healing. These findings might provide research evidence for a novel therapeutic approach to venous leg ulcers or venous ulcers in general. Clinical research is necessary to determine the feasibility and therapeutic benefit of AAN-II for ulcer healing.
|
v3-fos-license
|
2023-03-08T16:05:46.185Z
|
2023-03-06T00:00:00.000
|
257395368
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/frsen.2023.1073765/pdf",
"pdf_hash": "44b0e5cbfb92082b802cdd31e5bedc674eef35fe",
"pdf_src": "Frontier",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44348",
"s2fieldsofstudy": [
"Environmental Science",
"Mathematics"
],
"sha1": "1581c70197a75d41d97748fbe1a7fae9e59b082e",
"year": 2023
}
|
pes2o/s2orc
|
Evaluating the effective resolution of enhanced resolution SMAP brightness temperature image products
The MEaSUREs Calibrated Enhanced-Resolution Passive Microwave Daily Equal-Area Scalable Earth Grid 2.0 Brightness Temperature (CETB) Earth System Data Record (ESDR) includes conventional- and enhanced-resolution radiometer brightness temperature (T B ) images on standard, compatible grids from calibrated satellite radiometer measurements collected over a multi-decade period. Recently, the CETB team processed the first 4 years of enhanced resolution Soil Moisture Active Passive (SMAP) L-band (1.41 GHz) radiometer T B images. The CETB processing employs the radiometer form of the Scatterometer Image Reconstruction (rSIR) algorithm to create enhanced resolution images, which are posted on fine resolution grids. In this paper, we evaluate the effective resolution of the SMAP T B image products using coastline and island crossings. We similarly evaluate the effective resolution of the SMAP L1C_TB_E enhanced resolution product that is based on Backus-Gilbert processing. We present a comparison of the spatial resolution of the rSIR and L1C_TB_E enhanced resolution products with conventionally-processed (gridded) SMAP data. We find that the effective resolution of daily CETB rSIR SMAP T B images is slightly finer than that of L1C_TB_E and about 30% finer than conventionally processed data.
Introduction
The NASA MEaSUREs Calibrated Enhanced-Resolution Passive Microwave Daily Equal-Area Scalable Earth Grid 2.0 Brightness Temperature (CETB) Earth System Data Record (ESDR) is a single, consistently processed, multi-sensor ESDR of Earth-gridded microwave brightness temperature (T B ) images that span from 1978 to the present (Brodzik et al., 2018; Brodzik and Long, 2016). It is based on new fundamental climate data records (FCDRs) for passive microwave observations from a wide array of sensors (Berg et al., 2018). The CETB dataset includes both conventional- and enhanced-resolution T B images on standard map projections and is designed to serve the land surface and polar snow/ice research communities in studies of climate and climate change. Recently, T B image products from L-band Soil Moisture Active Passive (SMAP) radiometer (Entekhabi et al., 2010; Piepmeier et al., 2017) data were added to the CETB dataset (Long et al., 2019; Brodzik et al., 2021). Conventional-resolution CETB T B images are created using standard drop-in-the-bucket (DIB) techniques, also known as gridding (GRD). To create finer resolution images, reconstruction techniques are employed. The images are produced on compatible map projections and grid spacings (Brodzik et al., 2021). Previous papers have used simulation to compare the resolution enhancement capabilities of the radiometer form of the Scatterometer Image Reconstruction (rSIR) algorithm and the Backus-Gilbert (BG) approach (Backus and Gilbert, 1967; Backus and Gilbert, 1968), where it was found that rSIR provides improved performance compared to BG with significantly less computation (Long et al., 2019).
The CETB products combine multiple orbit passes into a twice-daily product, which increases the sampling density. For rSIR, the increased sampling density permits the algorithm to extract finer spatial information. In contrast, the enhanced resolution SMAP L1C_TB_E product (Chaubell, 2016; Chaubell et al., 2016; Chaubell et al., 2018) is created from individual 1/2 orbits using a version of the BG interpolation approach (Backus and Gilbert, 1967, 1968; Poe, 1990). To reiterate, one important difference between the two products is that multiple passes are combined in the rSIR processing to create hemisphere images, whereas only a single pass is used in BG processing of the swath-based L1C_TB_E product. Both rSIR and L1C_TB_E exhibit finer spatial resolution than conventional GRD processing, as defined by the 3 dB width of the pixel spatial response function (PSRF).
In this paper, actual SMAP data are used to measure and compare the effective spatial resolution of the rSIR and L1C_TB_E enhanced resolution products. The results are compared to the effective resolution of conventional gridded processing. The paper is organized as follows: after some brief background in Sec. II, a discussion of the measurement and pixel spatial measurement response functions is provided in Sec. III. Section IV presents estimates of the response functions. A discussion of posting versus effective resolution is given in Sec. V, followed by a summary conclusion in Sec. VI.
Background
The SMAP radiometer operates at L-band (1.41 GHz) with a 24 MHz bandwidth, and collects measurements of the horizontal (H), vertical (V), and 3rd and 4th Stokes parameter polarizations with a total radiometric uncertainty of 1.3 K (Piepmeier et al., 2017;Piepmeier et al., 2014). The SMAP spacecraft was launched in January 2015 and flies in a 98.1°inclination sun-synchronous polar orbit at 685 km altitude. SMAP collects overlapping T B measurements over a wide swath using an antenna rotating at 14.6 rpm. The nominal 3-dB elliptical footprint is 39 km by 47 km (Piepmeier et al., 2014;Piepmeier et al., 2017;Long et al., 2019).
Enhanced resolution SMAP T B products
CETB products are created by mapping individual T B measurements onto an Earth-based grid using standard Equal-Area Scalable Earth Grid 2.0 (EASE2) map projections (Brodzik et al., 2012;Brodzik et al., 2014). In the GRD conventional-resolution gridded CETB product, the center of each measurement location is mapped to a map-projected grid cell or pixel. All measurements within the specified time period whose centers fall within the bounds of a particular grid cell are averaged together . The unweighted average becomes the reported pixel T B value for that grid cell. Since measurement footprints can extend outside of the pixel, the effective resolution of GRD images is coarser than the pixel size. We call the spacing of the pixel centers the posting or the posting resolution, see Figure 1.
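A minimal sketch of this drop-in-the-bucket averaging on a simplified rectangular grid (Python/NumPy; measurement locations and values are made up, and the real CETB processing works on EASE2 projections rather than this toy grid):

import numpy as np

def grd_image(x, y, tb, x_edges, y_edges):
    """Unweighted average of all measurements whose centers fall in each grid cell."""
    total, _, _ = np.histogram2d(x, y, bins=[x_edges, y_edges], weights=tb)
    count, _, _ = np.histogram2d(x, y, bins=[x_edges, y_edges])
    with np.errstate(invalid="ignore"):
        return total / count  # NaN where a cell received no measurements

# Hypothetical measurement centers (km) and brightness temperatures (K)
x = np.array([5.0, 12.0, 14.0, 40.0, 41.0])
y = np.array([8.0, 30.0, 31.0, 50.0, 52.0])
tb = np.array([180.0, 250.0, 252.0, 240.0, 242.0])
edges = np.arange(0.0, 73.0, 36.0)  # 36 km cells
print(grd_image(x, y, tb, edges, edges))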
Finer spatial resolution CETB products are generated using reconstruction with the rSIR algorithm (Long and Daum, 1998; Long and Brodzik, 2016). The iterative rSIR algorithm employs regularization to trade off noise and resolution by limiting the number of iterations, thereby producing a partial reconstruction (Long et al., 2019). The rSIR products are posted on fine resolution grids with an effective resolution that is coarser than the posting resolution; i.e., they are oversampled. For SMAP, the CETB generates global cylindrical equal-area T B images using GRD at both 36 km and 25 km postings and rSIR-enhanced T B images on nested EASE2 grids at 3, 3.125, and 8 km postings. We note that the finest spatial scale that can be represented in a sampled image corresponds to a wavelength of twice the posting resolution, though the effective resolution can be coarser than this (McLinden et al., 2015). The different postings in the CETB enable users to readily analyze data from multiple sensors.
The SMAP L1C_TB_E product is also produced on standard EASE2 map projections, but only at a single posting resolution of 9 km (Chaubell, 2016; Chaubell et al., 2016). This product uses BG-based optimal interpolation to estimate T B over the swath, interpolated to the grid pixels based on the instrument T B measurements (Poe, 1990) on a per-orbit (single-pass) basis. The key difference between the CETB and L1C_TB_E products is how multiple passes are treated. The L1C_TB_E product is generated on a per-pass basis with one image product per pass, while the CETB product combines multiple passes into twice-daily images, i.e., two images per day. The CETB product enables somewhat better effective spatial resolution with limited impact on the temporal resolution and fewer individual files.
Pixel and measurement response functions
The SMAP radiometer collects measurements over an irregular grid. As described below, each measurement has a unique spatial measurement response function (MRF) that describes the contribution of each point on the surface to the measured value. The measurements, possibly from multiple orbit passes, are processed into a uniform pixel grid. The value reported for each grid element or pixel is a weighted sum of multiple measurements. The pixel spatial response function (PSRF) describes the contribution of each point on the surface to the reported pixel value, i.e., how much the brightness temperature at a particular spatial location contributes to the reported brightness temperature of the pixel. In effect, the PSRF is the impulse response of the measurement system for a particular pixel. The PSRF includes the image formation process as well as the effects of the sampling and the measurement MRFs that are combined into the reported pixel value. In contrast, the MRF is just the spatial response of a single measurement. Analysis of the PSRF defines the effective resolution of the image formation.
We note that in general, the extent of spatial response function of a pixel in a remote sensing image can be larger than its spacing (the posting resolution) so that the effective extent of the pixels overlap, as illustrated in Figure 1, i.e., the pixel size is greater than the posting resolution. This means that the effective resolution of the image is coarser than the posting resolution . When the posting resolution is finer than the effective resolution, the signal is sometimes termed oversampled as illustrated in Figure 1. While in principle in such cases the image can be resampled to a coarser posting resolution with limited loss of information, deliberate oversampling provides flexibility in resampling the data and is the approach taken by CETB when it reports images on map-standard pixel sizes (posting resolutions).
Radiometer spatial measurement response function
This section provides a brief summary of the derivation of the MRF of the SMAP radiometer sensor and the algorithms used for T B image construction from the measurements. The effective spatial resolution of the image products is determined by the MRF and by the image formation algorithm used. The MRF is determined by the antenna gain pattern, the scan geometry (notably the antenna scan angle), and the integration period. We note that for T B image reconstruction, the MRF is treated as non-zero only in the direction of the surface.
The MRF for a general microwave radiometer is derived in (Long et al., 2019). Microwave radiometers measure the thermal emission from natural surfaces (Ulaby and Long, 2014). In a typical satellite radiometer, an antenna is scanned over the scene of interest and the output power from the carefully calibrated receiver is measured as a function of scan position. The reported signal is a temporal average of the filtered received signal power. The observed power is related to the receiver gain and noise figure, antenna loss, physical temperature of the antenna, antenna pattern, and scene brightness temperature (Ulaby and Long, 2014).
Because the antenna is rotating and moving during the integration period, the effective antenna gain pattern G_s is a smeared version of the instantaneous antenna pattern. The observed brightness temperature measurement z can be expressed as

z = ∬ MRF(x, y) T_B(x, y) dx dy,

where MRF(x, y) is the measurement response function expressed in surface coordinates (x, y). It is the normalized effective antenna gain pattern,

MRF(x, y) = G_s(x, y) / G_b,

where G_b is the integrated gain,

G_b = ∬ G_s(x, y) dx dy.

In effect, the MRF describes to what extent the emissions from a particular location on the surface contribute to the observed T B value. A typical SMAP MRF has an elliptical, nearly Gaussian shape that is centered at the measurement location (Long et al., 2019). Due to the varying observation geometry (orbit, oblate Earth, and azimuth scanning), the MRF varies between measurements.
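To make the definitions concrete, the following sketch evaluates a discretized MRF and the corresponding measurement, approximating the smeared gain pattern G_s by an elliptical Gaussian with the nominal 39 km × 47 km footprint; this idealization is ours, whereas the operational MRF is computed from the actual SMAP antenna pattern and scan geometry:

import numpy as np

# Local surface grid (km) around the measurement center
x = np.arange(-60, 61, 1.0)
y = np.arange(-60, 61, 1.0)
X, Y = np.meshgrid(x, y)
dx = dy = 1.0

def gaussian_gain(X, Y, w3db_x=39.0, w3db_y=47.0):
    """Idealized smeared gain: elliptical Gaussian with given 3 dB widths."""
    sx = w3db_x / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    sy = w3db_y / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * ((X / sx) ** 2 + (Y / sy) ** 2))

Gs = gaussian_gain(X, Y)
Gb = np.sum(Gs) * dx * dy        # integrated gain
MRF = Gs / Gb                    # normalized response; integrates to 1

# Hypothetical brightness temperature scene: 130 K ocean left, 260 K land right
TB = np.where(X < 0.0, 130.0, 260.0)
z = np.sum(MRF * TB) * dx * dy   # measured value is an MRF-weighted surface average
print(z)                          # roughly 195 K for this half-and-half scene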
Sampling considerations
The SMAP radiometer is conically scanning. Integrated T B measurements are collected at fixed 17 ms intervals (Piepmeier et al., 2014), which yields an along-scan spacing of approximately 11 km. Due to the motion of the spacecraft between antenna rotations, the nominal along-track spacing is approximately 28 km. This yields a surface sampling density of approximately 11 km × 28 km, which according to the Nyquist criterion can unambiguously support wavenumbers (spatial frequencies) up to 1/22 km⁻¹ × 1/56 km⁻¹ when including data from a single pass. However, the MRF includes information from higher wavenumbers than this. This information can alias into lower wavenumbers (Skou, 1988; McLinden et al., 2015). Combining multiple passes increases the sampling density, which can support higher wavenumbers and avoid aliasing. The tradeoff of combining multiple passes is reduced temporal resolution.
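For reference, the supported wavenumbers follow directly from the single-pass sample spacings:

# Single-pass sample spacings (km) and the Nyquist wavenumbers they support
along_scan_km, along_track_km = 11.0, 28.0
print(1.0 / (2.0 * along_scan_km), 1.0 / (2.0 * along_track_km))  # 1/22 and 1/56 km^-1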
Image formation
The image formation process estimates the surface brightness temperature map T B (x, y) from the calibrated measurements z. This can be done on a swath-based grid (i.e., in swath coordinates) or on an Earth-based map projection grid , as done in CETB and L1C_TB_E image production. This paper considers only an Earth-based map projection grid.
The simplest image formation algorithm is DIB (GRD), where the measurements whose centers fall within a map grid element (pixel) are averaged into that pixel. The effective resolution of GRD imaging is coarser than the effective resolution of a measurement since individual measurements included in the pixel value extend outside of the pixel area and their centers are spread out within the pixel; thus, the effective resolution is coarser than the posting resolution. Various inverse distance-weighting averaging techniques have been used to improve on DIB. The weighting acts like a signal processing window (McLinden et al., 2015).
Reconstruction techniques can yield finer effective resolution so long as the spatial sampling requirements are met (Skou, 1988; Early and Long, 2001). In the reconstruction algorithms, the MRF for each measurement is used in estimating the surface T B on a fine-scale grid (Long et al., 2019). The rSIR algorithm has proven to be effective in generating high resolution T B images for SMAP (Long et al., 2019). The rSIR estimate approximates a maximum-entropy solution to an underdetermined equation and a least-squares solution to an overdetermined system. rSIR provides results superior to the BG method with significantly less computation. rSIR uses truncated iteration to enable a tradeoff between signal reconstruction accuracy and noise enhancement. Since reconstruction yields finer effective resolution, the image products are called 'enhanced resolution.' The enhancement at a particular location depends on the local input measurement density and the MRF, which can vary with each measurement. As discussed in (Long et al., 2019), in order to meet Nyquist requirements for the rSIR signal processing, the posting resolution in the images must be finer than the effective resolution by at least a factor of two.
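The following is only a schematic illustration of truncated-iteration reconstruction from MRF-weighted measurements; it uses a simple Landweber (gradient) update as a stand-in rather than the actual rSIR update of Long and Daum (1998), but it shows how limiting the iteration count trades reconstruction fidelity against noise enhancement:

import numpy as np

def truncated_reconstruction(A, z, n_iter=20, step=1.0):
    """Estimate fine-grid T_B from measurements z, where row i of A holds the
    discretized MRF weights of measurement i (each row sums to 1).
    Truncated Landweber iteration; fewer iterations give a smoother, less noisy image."""
    x = np.full(A.shape[1], z.mean())        # start from the mean brightness
    step = step / np.linalg.norm(A, 2) ** 2  # keep the gradient update stable
    for _ in range(n_iter):
        x = x + step * A.T @ (z - A @ x)     # gradient step on ||Ax - z||^2
    return x

# Tiny hypothetical example: 3 overlapping measurements of a 5-pixel transect
A = np.array([[0.5, 0.3, 0.2, 0.0, 0.0],
              [0.0, 0.2, 0.6, 0.2, 0.0],
              [0.0, 0.0, 0.2, 0.3, 0.5]])
z = np.array([150.0, 200.0, 250.0])
print(truncated_reconstruction(A, z))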
An alternate approach to reconstruction is optimal interpolation. The BG optimal interpolation approach was introduced to radiometer measurements by Poe (1990), and used for the L1C_TB_E product (Chaubell, 2016;Chaubell et al., 2016). This approach estimates the pixel value on a fine grid as the weighted sum of nearby measurements (Long and Daum, 1998) where the weights are determined from the MRF. Solving for the weights involves a matrix inversion that includes a subjectively selected weighting between the antenna pattern contribution and the noise correlation function. The result has finer resolution than GRD processing, but somewhat less than rSIR (Long and Daum, 1998), which the results in Sec. IV confirm.
Pixel spatial response function
As noted previously, the MRF describes the spatial characteristics of an individual measurement, i.e., how much the brightness temperature at each spatial location contributes to the measurement, while PSRF describes the spatial characteristics of reported pixel values, i.e., how much the brightness temperature at a particular spatial location contributes to the reported brightness temperature of the pixel value. Analysis of the PSRF defines the effective resolution of the image formation.
The PSRF can be computed using the MRFs of the individual measurements combined into a particular pixel. For linear image formation algorithms such as GRD, the PSRF is the linear sum of the MRFs of the measurements included in the pixel . Note, however, that the PSRF varies from pixel to pixel due to the differences in location of the measurement within the pixel area and variations of the measurement MRFs (Long et al., 2019). We further note that the variation in the MRFs between measurements precludes the use of classic deconvolution algorithms, which require a fixed response function. Typically, the PSRF is normalized to a peak value of 1.
While the pixel value is linearly related to the measurements in BG optimal interpolation, the weights used in the interpolation vary non-linearly with pixel and measurement location. This complicates estimation of the PSRF for algorithms that employ BG. Similarly, the non-linearity in the rSIR algorithm complicates computing the PSRF. Prior studies have relied on simulation to compute the PSRF using a simulated impulse function (Long et al., 2019). In this paper we use actual SMAP data to estimate the PSRF for both the L1C_TB_E and rSIR products.
Given the PSRF, the effective resolution of an image corresponds to the area over which the PSRF is greater than a particular threshold, typically −3 dB (Ulaby and Long, 2014; Long et al., 2019). We often express the resolution in terms of the square root of this area, which we call the "linear resolution" in this paper. For example, the ideal PSRF for a rectilinear image is a two-dimensional "rect" or "box-car" function that has a value of 1 over the pixel area and 0 elsewhere, see Figure 2. The area of this ideal PSRF for an image consisting of 36 km square pixels is 1296 km², which corresponds to a linear resolution of 36 km.
Since only a finite number of discrete measurements are possible, we must unavoidably assume that the signal and the PSRF are bandlimited such that they are consistent with the sample spacing (Long and Franz, 2016). A bandlimited version of this ideal boxcar PSRF is a two-dimensional sinc function, as seen in Figure 2. For ideal 36 km sampling, this bandlimited PSRF is the best achievable PSRF that is consistent with the sampling. By the Nyquist criterion, signals with frequency higher than 1/2 the sampling rate (posting) cannot be represented without aliasing.
A common way to quantify the effective resolution is the value of the area corresponding to where the PSRF is greater than 1/2, known as the 3 dB PSRF size (Ulaby and Long, 2014). The effective resolution (the 3 dB PSRF size) is larger than the pixel size, and thus is larger than the posting resolution. Note that if we choose a smaller PSRF size threshold, e.g., −10 dB instead of −3 dB, the area is even larger. When the posting resolution is finer than the effective resolution (i.e., the image is oversampled as illustrated in Figure 1), the image can, in principle, be resampled to a coarser posting resolution with limited loss of information (Meier and Stewart, 2020). However, deliberate oversampling provides flexibility in resampling the data, and is the approach taken by CETB when it reports images on map-standard pixel grids with fine posting resolution. The finer posting preserves as much information as possible.
One way to determine the effective resolution is based on first estimating the step response of the imaging process. By assuming the PSRF is symmetric, the PSRF can be derived from the observed step response, greatly simplifying the process of estimating the effective resolution. Recall that the step response is mathematically the convolution of the PSRF with a step function. The PSRF can thus be computed from the step response by deconvolution with a step function. In this case the deconvolution product represents a slice of the PSRF. The effective linear resolution is the width of the PSRF above the −3 dB threshold.
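A sketch of this procedure applied to a single transect (Python; the transect values are synthetic, and the derivative of the step response is taken as the PSRF slice under the symmetry assumption above):

import numpy as np

def linear_resolution_from_step(transect, spacing_km):
    """Estimate the effective linear resolution (3 dB width of the PSRF slice)
    from a land/ocean step response sampled along a transect."""
    psrf = np.abs(np.gradient(transect, spacing_km))  # derivative of the step response
    psrf = psrf / psrf.max()                          # normalize peak to 1
    above = psrf >= 0.5                               # -3 dB threshold
    return above.sum() * spacing_km                   # width of the main lobe (km)

# Hypothetical 3 km-posted transect: cold ocean (~130 K) rising to warm land (~260 K)
x = np.arange(0, 120, 3.0)
transect = 130.0 + 130.0 / (1.0 + np.exp(-(x - 60.0) / 8.0))  # smooth step
print(linear_resolution_from_step(transect, 3.0))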
Resolution estimation of actual data
In this section, we evaluate the effective linear resolution of SMAP image data from actual T B measurements using SMAP L1C_TB_E 1/2-orbit data and CETB daily images at both conventional and enhanced resolution, via estimation of the pixel step response. Our methodology for using a brightness temperature edge is similar to that of (Meier and Stewart, 2020). We note that polar CETB images are generated twice daily using a local time-of-day (ltod) criterion. At each pixel this combines measurements from different passes that occur within a short (4 h) ltod window for each of the two images (Long et al., 2019). Since the L1C_TB_E products are swath-based, to create daily images from L1C_TB_E products, overlapping swaths during a particular local time-of-day interval (i.e., morning or evening time periods) were separately averaged. This converts the single-pass L1C_TB_E data files into multi-pass images. Note that the combination is only within the same few-hour local time-of-day interval. Combining passes within the same local time of day only slightly degrades the temporal resolution, but also tends to reduce the noise level. The precise time intervals covered by the different image products are not quite the same, but are very close, resulting in similar images. Because the rSIR images are posted at 3 km spacing but are deliberately oversampled by at least a factor of two, we apply an ideal (brickwall) lowpass filter with a cutoff at 12 km.

FIGURE 3
Example evening CETB SMAP radiometer vertically-polarized (v pol) T B image processed with rSIR on an EASE2 map grid for day of year 091, 2015. Open ocean appears cold (low T B ) compared to land, glacial ice, and sea ice. The thick red box to the right of and below center outlines the study area.
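As an illustration of the brickwall lowpass filtering mentioned above, the following FFT-based sketch zeroes all spatial frequencies above the 1/(12 km) cutoff on a synthetic 3 km-posted image (the image content is made up):

import numpy as np

def brickwall_lowpass(image, posting_km, cutoff_km):
    """Zero out all spatial frequencies above 1/cutoff_km (ideal lowpass)."""
    F = np.fft.fft2(image)
    fy = np.fft.fftfreq(image.shape[0], d=posting_km)
    fx = np.fft.fftfreq(image.shape[1], d=posting_km)
    FX, FY = np.meshgrid(fx, fy)
    keep = np.sqrt(FX ** 2 + FY ** 2) <= 1.0 / cutoff_km
    return np.real(np.fft.ifft2(F * keep))

# Synthetic 3 km-posted T_B image: warm block ("island") in cold ocean
img = np.full((64, 64), 130.0)
img[24:40, 24:40] = 260.0
smoothed = brickwall_lowpass(img, posting_km=3.0, cutoff_km=12.0)
print(img[32, 32], round(smoothed[32, 32], 1))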
To compute the step response, we arbitrarily select a small 200 km by 200 km region centered at approximately 69N and 49E in the Arctic Ocean, see Figure 3. (Results are similar for other areas.) The transitions between radiometrically cold ocean and warm land provide sharp discontinuities that can be simply modeled. Ostrov Kolguyev (Kolguyev Island) is a nominally flat, tundra-covered island that is approximately 81 km in diameter with a maximum elevation of~120 m. The island is nearly circular. Since there is noise and variability in T B from pass to pass, average results over a 20-day time period are considered. Figure 4 shows individual CETB and L1C_TB_E subimages over the study period. The data time period is arbitrarily selected so that the image T B values vary only minimally, i.e., they are essentially constant over the time period with high contrast between land and ocean.
Two horizontal transects of the study region are considered in separate cases. One crosses the island and the coast, while the second crosses a patch of sea ice and the coast, see Figure 4. Due to the dynamics of the sea ice that is further from the shore, only the near-coast region is considered. For simplicity, we model the surface brightness temperature as essentially constant with different values over land and water. Figure 4 compares 10-day averaged daily GRD, rSIR, and L1C_TB_E images of the study region. In these images, the cooler (darker) areas are open ocean. Land areas have high temperatures, with sea-ice-covered areas exhibiting a somewhat lower T B . The GRD images are blocky, while the high-resolution images exhibit finer resolution and more accurately match the coastline. These images were created by averaging 10 days of daily T B images in order to minimize noise-like effects due to (1) T B measurement noise and (2) the effects of the variation in measurement locations within each pass and from pass to pass. The derived PSRF and linear resolution thus represent temporal averages. The derived PSRFs are representative of the single-pass PSRFs.
Examining Figure 4 we observe that the ocean and land values are reasonably modeled by different constants, with a transition zone at the coastline. It is evident that the GRD images are much blockier than the L1C_TB_E and rSIR images. This is due to the finer grid resolution of the enhanced-resolution images and their better effective resolution. Due to the coarse spatial quantization of the GRD image, the island looks somewhat offset downward, whereas the L1C_TB_E and rSIR images better correspond to the superimposed coastline map. Figure 5 plots the image T B value along the two study transects. Note that rSIR T B values have sharper transitions from land to ocean than the GRD images, and that the GRD image underestimates the island T B . The GRD values also have smoother transitions than L1C_TB_E and rSIR and overestimate T B in the proliv Pomorskiy strait separating the island and coastline south (right) of the island. The rSIR curves appear to exhibit a small under- and over-shoot near the transitions compared to the L1C_TB_E images.
Lacking high-resolution true T B maps, it is difficult to precisely analyze the accuracy and resolution of the images. However, we can employ signal processing considerations to infer the expected behavior of the values and hence the effective resolution. For analyzing the expected data behavior along these transects we introduce a simple step model for the underlying T B . Noting that the T B over land near the coast for the coastline case is essentially constant, varying by no more than a few K, we model the land as a constant. Similarly, the ocean T B is modeled as a constant. This provides a simple step function model for T B for the coastline. The island-crossing case is similarly modeled but includes a rect corresponding to the island. The modeled T B is plotted in Figure 5 for comparison with the observed and reconstructed values. The modeled T B is filtered with a 36 km Gaussian response filter, shown in blue, for comparison. The latter represents an idealized result, i.e., what can be achieved from the model assuming a Gaussian MSRF.
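The following sketch builds the step and rect surface models and applies a Gaussian filter; the surface constants and the conversion of the quoted "36 km" width to a Gaussian sigma (via the FWHM relation) are assumptions made only for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.arange(-150, 150, 3.0)            # km along the transect, 3 km posting
tb_ocean, tb_land = 160.0, 250.0         # illustrative surface constants (K)

# Coastline case: a single step; island case: a rect of ~81 km width
tb_coast = np.where(x >= 0, tb_land, tb_ocean)
tb_island = np.where(np.abs(x) <= 81.0 / 2, tb_land, tb_ocean)

sigma_km = 36.0 / (2 * np.sqrt(2 * np.log(2)))   # FWHM -> sigma (assumed interpretation)
sigma_samples = sigma_km / 3.0                   # convert to sample units
tb_coast_ideal = gaussian_filter1d(tb_coast, sigma_samples)
tb_island_ideal = gaussian_filter1d(tb_island, sigma_samples)
```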
FIGURE 4
Average of daily SMAP vpol T B images over the study area (see Figure 3) spanning days of year 91-100 with a coastline (Wessel and Smith, 2015) overlay. (A) 36 km GRD. (B) 9 km L1C_TB_E. (C) 3 km rSIR. Note the apparent offset of the island in the GRD, which results from the coarse pixels. The thick horizontal lines show the data transect locations where data is extracted from the image for analysis. The black line is the "island-crossing" case while the red line is the "coastline-crossing" case.
FIGURE 5
Plots of T B along the two analysis case transect lines shown in Figure 4 for the (A) coastline-crossing and (B) island-crossing cases. Daily values over the study period are shown as thin lines. The curves from the average images are shown as thick lines. The discrete step and convolved Gaussian step models are also shown. The x-axis is centered on the coastline or island center for the particular case.
Examining Figure 5 we confirm that the L1C_TB_E and rSIR images have sharper transitions than the GRD images and that the GRD image underestimates the island T B . The GRD images, which have longer ocean-side transitions than L1C_TB_E and rSIR, underestimate the island T B and overestimate T B in the proliv Pomorskiy strait separating the island and coastline south (right) of the island. The ripple artifacts in the rSIR T B transition from ocean to land in both examples are the result of the implicit low-pass filtering in the reconstruction. The pass-to-pass variability in the T B observations is approximately the same for all cases in most locations, suggesting that there is not a significant noise penalty when employing rSIR reconstruction or L1C_TB_E optimal interpolation for SMAP. Insight can be gained by examining the spectra of the signals. Figure 6 presents the wavenumber spectra of the key signals in Figure 5. The spectra were computed by zero padding the data. For simplicity, only the Fourier transforms of the average curves are shown. The spectra of the modeled signal are shown in blue. Peaking at 0 wavenumber, they taper off at higher wavenumbers. The filtered model signal, shown in dark blue, represents the best signal that can be recovered. The GRD signal closely follows the ideal until it reaches the 1/72 km −1 cutoff frequency permitted by the grid, beyond which it cannot represent the signal. L1C_TB_E and rSIR follow the ideal signal out to about 1/36 km −1 , then track each other out to the 1/18 km −1 cutoff for the L1C_TB_E sampling. The rSIR continues out to the 1/12 km −1 cutoff. Details of the high wavenumber response differ between the coastline-crossing and island-crossing cases, but the same conclusions apply. Deconvolution of the step response is accomplished in the frequency domain by dividing the step-response spectrum by the spectrum of the modeled step function, with care for how zeros and near-zeros in the modeled step function are handled in the inverse operation. The ideal GRD PSRF (blue dashed line) is a rect that cuts off at 1/36 km −1 . The estimated GRD PSRF spectrum closely matches the ideal. The rSIR and L1C_TB_E PSRF spectra match the ideal in the low frequency region, but also contain additional information at higher wavenumbers, which gradually rolls off. This additional spectral content provides the finer resolution of rSIR compared to the GRD result.
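A short sketch of the guarded frequency-domain deconvolution is given below; the Tikhonov-style regularization, zero-pad length, and threshold fraction are assumptions standing in for whatever specific zero-handling the authors used.

```python
import numpy as np

# Estimate the PSRF spectrum by dividing the observed step-response spectrum by the
# spectrum of the modeled step, while guarding against zeros/near-zeros in the denominator.
def psrf_spectrum(step_response, modeled_step, pad_to=1024, eps_frac=1e-2):
    r = np.fft.fft(step_response, n=pad_to)   # zero-padded observed transect spectrum
    s = np.fft.fft(modeled_step, n=pad_to)    # zero-padded modeled step spectrum
    eps = eps_frac * np.max(np.abs(s))
    # Guarded division: R * conj(S) / (|S|^2 + eps^2) instead of a bare R / S
    return r * np.conj(s) / (np.abs(s) ** 2 + eps ** 2)
```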
Finally, the estimated one-dimensional PSRFs are computed as the inverse Fourier transform of the PSRF spectra in Figure 6 as shown in Figure 7. Table 1 shows the linear resolution for each case, computed as the width of the PSRF at the −3 dB point. For comparison, the linear resolutions using both −2 dB and −10 dB thresholds are shown. In all cases the resolution of rSIR is better than the observed GRD resolution.
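A sketch of this last step, measuring the width of the 1-D PSRF above a dB threshold, follows; it simply takes the outermost samples above the threshold, so a real implementation would also guard against sidelobes exceeding the threshold.

```python
import numpy as np

# Effective linear resolution as the width of the 1-D PSRF above a dB threshold.
def psrf_width_km(psrf_spec, spacing_km=3.0, threshold_db=-3.0):
    psrf = np.real(np.fft.ifft(psrf_spec))
    psrf = np.fft.fftshift(psrf) / np.max(np.abs(psrf))   # center and normalize peak to 1
    level = 10.0 ** (threshold_db / 10.0)                 # e.g., -3 dB -> ~0.5
    above = np.where(psrf >= level)[0]
    return (above[-1] - above[0]) * spacing_km            # width of the region above threshold
```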
A key observation is that the effective resolution, as defined by the 3-dB width of the derived PSRFs, is very similar for both analysis cases. As expected, the observed GRD PSRF results are coarser than the ideal GRD PSRF due to the extension of the SMAP MRF outside of the pixel area. rSIR closely follows the ideal GRD and provides a significant improvement over the actual 36 km grid. rSIR is better than L1C_TB_E for the island, but slightly worse than L1C_TB_E for the coastline crossing. L1C_TB_E also shows improvement over GRD for the coastline-crossing case, but is slightly worse than GRD for the island-crossing case. The L1C_TB_E PSRF matches the Gaussian-filtered model over the main lobe with small shoulders on the sides of the main lobe in the coastline case. The rSIR resolution represents a linear resolution improvement of nearly 30% from the observed GRD resolution, with a slight improvement over the idealized model resolution. We conclude that rSIR provides finer effective resolution than GRD products, with a resolution improvement of nearly 30%. The resolution enhancement of L1C_TB_E can be similar in some cases, but not all. rSIR provides more consistent effective resolution improvement than L1C_TB_E for the studied cases.
Discussion
Regardless of the posting resolution (the image pixel spacing), the effective resolution of the reconstructed T B image is defined by the PSRF. To avoid aliasing, the posting resolution must be smaller (finer) than the effective resolution. We note that as long as this requirement is met, the posting resolution can be arbitrarily set. Thus the pixel size can be chosen to match a standard map projection such as the EASE2 system (Brodzik et al., 2012; Brodzik and Long, 2016).
There are advantages of a finer posting resolution. For example, since the effective resolution can vary over the image due to the measurement geometry, the PSRF is not spatially constant, and to ensure uniform pixel sizes, the image may be over-sampled in some areas. Fine posting ensures all information is preserved and that the Nyquist sampling criterion is met. Furthermore, finer posting provides optimum (in the bandlimited sense) interpolation of the effective information in the image. This interpolation can be better than bi-linear or bi-cubic schemes often used for interpolation in many applications. We also note that fine posting resolution is required by the reconstruction signal processing to properly represent the sample locations and measurement response functions. On the other hand, oversampled images produce larger files and there is potential confusion among users in understanding the effective resolution and adjacent pixel correlation.
When creating the original CETB dataset, Long and Brodzik wanted to ensure that, while the different frequency channels have different resolutions and thus require different grid resolutions, the grid resolutions are easily related to each other (i.e., by powers of 2) to simplify comparison and use of the data. Hence, grid sizes were chosen such that, based on careful simulation, the RMS error in the reconstructed T B images was minimized subject to choosing from a small set of possible sizes. This analysis is one reason that particular channels are on particular posting grids while their effective resolutions may be coarser: the finer grid provides better error reduction in the reconstruction.
As previously noted, the ideal PSRF is 1 over the pixel and 0 elsewhere, i.e., a small boxcar function. However, since we are representing the surface T B on a discrete grid, we must assume that the signal is bandlimited so that the samples can represent the signal without aliasing. Thus, the bandlimited ideal PSRF is a low-pass filtered rect function, which is a two-dimensional sinc function (Figure 2), though in practice the real PSRF has a wider main lobe and smaller side lobes. Because the PSRF is non-zero outside of the pixel area, signal from outside of the pixel area "leaks" into the observed pixel value. For example, consider a PSRF that is −10 dB at adjacent pixels. If there is an open ocean pixel where the ocean T B is 160 K adjacent to a land pixel where T B is 250 K, the PSRF permits the land T B signal to contribute approximately 9 K to the observed ocean value, essentially raising the observed value to 169 K from its ideal value of 160 K.
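The leakage arithmetic in this example can be checked with a couple of lines (numbers taken from the text; the calculation is only the simple two-pixel approximation described above).

```python
# -10 dB relative response at the adjacent pixel -> factor of 0.1
psrf_gain_adjacent = 10 ** (-10 / 10)
tb_ocean, tb_land = 160.0, 250.0            # K, from the example in the text
leak = psrf_gain_adjacent * (tb_land - tb_ocean)   # ~9 K of land signal leaking in
observed_ocean = tb_ocean + leak                   # ~169 K versus the ideal 160 K
```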
As evident in Figure 5, sharp transitions in the surface T B are under-estimated in all the products. The high resolution products better localize the edge transitions, but may have fluctuations (over- and under-shoot) near the edges, a result of the Gibbs phenomenon. The fluctuations can be minimized by filtering or smoothing the data at the expense of the effective spatial resolution, but are not entirely eliminated even for the low resolution GRD data. These error values result in errors in geophysical values inferred from the estimated T B . The error tolerance is dependent on the application of the estimated geophysical values and may vary by user and application. Fine resolution requires tolerance to fluctuations near sharp edges.
The fact that the PSRF is non-zero outside of the pixel area also means that nearby pixels are statistically correlated with each other-they are not independent even in the ideal case. The correlation is even stronger when the effective resolution is coarser than the posting resolution. This effect may need to be considered when doing statistical analysis of adjacent pixels.
Conclusion
This paper considers the effective resolution of conventional- and enhanced-resolution SMAP T B image products available from the NASA-sponsored CETB ESDR project (Brodzik et al., 2021) and the SMAP project L1C_TB_E product (Chaubell et al., 2018). These products include conventionally processed (GRD) gridded images and rSIR and BG optimal interpolation enhanced-resolution images. To evaluate and compare the resolutions of the two products, the step function response is derived from coastline and island transects in SMAP T B images. From these, the effective resolution is determined by computing the average PSRF. As expected, the effective resolution is coarser than the pixel posting (spacing) in all cases. From Table 1, the effective 3-dB resolution of conventionally processed (GRD) data, which is posted on a 36 km grid, is found to be approximately 45.9 km, while the effective resolution of the rSIR daily enhanced-resolution images is found to be 29.8 km, which is nearly a 30% improvement. The resolution improvement of L1C_TB_E can be nearly as high at times but is less consistent. The results verify the improvement in resolution possible for daily SMAP T B images using the rSIR algorithm.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.
|
v3-fos-license
|
2021-02-28T06:16:47.893Z
|
2021-02-23T00:00:00.000
|
232064965
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.1016/j.healthplace.2021.102535",
"pdf_hash": "f98bd4d43741ddbae2e05e0f32b9b5646d72ac8b",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44350",
"s2fieldsofstudy": [
"Sociology",
"Geography"
],
"sha1": "bfa6c0065931bd4e9225cd0c1b12e0ffb07a8358",
"year": 2021
}
|
pes2o/s2orc
|
Exposure to unhealthy product advertising: Spatial proximity analysis to schools and socio-economic inequalities in daily exposure measured using Scottish children's individual-level GPS data
This study aimed to understand socio-spatial inequalities in the placement of unhealthy commodity advertisements at transportation stops within the Central Belt of Scotland and to measure advertisement exposure using children's individual-level mobility data. We found that children who resided within more deprived areas had greater contact with the transport network and also greater exposure to unhealthy food and drink product advertising, compared to those living in less deprived areas. Individual-level mobility data provide evidence that city- or country-wide restrictions to advertising on the transport network might be required to reduce inequalities in children's exposure to unhealthy commodity advertising.
Introduction
Children's exposure to unhealthy commodity marketing is a global priority for policy action due to its status as a risk factor for the development of non-communicable diseases (NCDs) (World Health Organization, 2016). The literature on the commercial determinants of health identifies a range of unhealthy commodity industries, chiefly those that produce and market alcohol, tobacco and foods high in fat, salt and sugar (HFSS) (Kickbusch et al., 2016), but increasingly other health-harming industries such as gambling (Goyder et al., 2020). Advertising drives harmful consumption of alcohol (Jernigan, 2006), tobacco (Blecher, 2008) and foods high in fat, salt and sugar (HFSS) (Harris et al., 2009;Smith et al., 2019). Research suggests that children are frequently exposed to unhealthy commodity advertising (Kelly et al., 2008), and that advertisements employ techniques to which children are vulnerable (Boyland et al., 2012). Children's vulnerability to marketing communications is recognised in the United Kingdom (UK) Committees of Advertising Practice (CAP) code, which prohibits advertisers from directly encouraging children to buy any product (Advertising Standards Authority, 2007).
There is robust evidence to show that NCDs are patterned by social deprivation (Di Cesare et al., 2013), and understanding the social patterning of exposure to unhealthy commodity advertisements may be vital to informing policies to reduce health inequalities. A wealth of international evidence demonstrates that unhealthy commodity industries target socially deprived communities (Barbeau et al., 2005; Kwate, 2007; Day and Pearce, 2011). Areas around schools and other institutions that serve children and families, such as libraries and recreation facilities, may be of relevance to health policy if they are targeted areas (Hillier et al., 2009). Evidence from Australia and New Zealand suggests that areas around schools contain disproportionately frequent advertisements for HFSS food (Vandevijvere et al., 2018; Kelly et al., 2008), a phenomenon that may be more pronounced around schools with higher levels of socioeconomic deprivation (D'silva, 2017; Day and Pearce, 2011). However, evidence from Scotland suggests that deprived areas may not contain disproportionate advertising for HFSS food and drink (Robertson et al., 2017). Governments have taken steps to protect under-16s from exposure to advertising of HFSS products, for example the UK Advertising Standards Authority (ASA) guidelines ban advertising within 100m of schools (Advertising Standards Authority, 2007, 2018). Additionally, some advertisers voluntarily do not advertise within 200m of schools (Greater London Authority, 2019). Compliance with these guidelines and whether the protected areas surrounding schools should be extended has not been assessed. Transport facilities represent a key venue of advertising, and therefore a potential target for legislation. Conventionally, mapping and monitoring aspects of the outdoor environment has relied upon physical audits performed by fieldworkers [e.g. (Sainsbury et al., 2017)]. An emerging alternative approach involves the use of publicly available datasets of panoramic photographs of outdoor locations (Bader et al., 2017). The present study involves the use of images from Google Street View as raw data for the creation of a large-scale dataset capturing advertisements on bus stops throughout the Central Belt of Scotland.
There are methodological differences in approaches to measuring exposure to unhealthy commodity advertising. The most frequently used approaches use static entities, such as administrative units or predefined circular, network or polygon buffers placed around fixed points, such as schools, with the aim of quantifying and describing the advertising environment there (Day and Pearce, 2011;Huang et al., 2020). These approaches have many weaknesses but there are two particularly important problems. First, living in a specific location or attending a school/work-place does not equate with exposure to all the environmental attributes there. Second, those who share a residential or workplace location do not necessarily experience equal exposure to the environment there; they move around. (Perchoux et al., 2013). More recently, the availability of precise location technologies, such as Global Positioning System (GPS) devices has increased and the number of studies using them to provide more accurate measures of exposure is growing. GPS data have been used to measure individual-level exposure to, for example, air pollution (Sinharay et al., 2018) and tobacco outlets (Caryl et al., 2019). This approach has not been used to explore inequalities in exposure to unhealthy commodity advertising at an individual-level. Comparing this new method with a more conventional area-level static boundary measure of exposure is important, not only to help researchers assess the utility of the new methods, but also because misunderstanding of exposure by researchers could lead to ineffective policymaking (Sadler and Gilliland, 2015).
Aims and objectives
The first aim of this study was to understand if unhealthy commodity advertisements are socially and spatially patterned, in terms of being located within pre-specified geographical distances from individuals and places, and, if such patterning was evident, whether specific types of products were more or less likely to be advertised in disadvantaged areas and near schools. The second aim was to explore individual-level socio-spatial patterning of advertisement exposure for Scottish children aged 10-11 years.
The specific research objectives were to:
1. Categorise the content of advertisements at bus stop locations across a large and varied geographical area.
2. Explore associations between the socio-spatial distribution of bus stop advertisements using area-based socioeconomic information.
3. Test for associations between specific categories of unhealthy commodity advertisements in the local area surrounding schools.
4. Calculate children's 'real' exposure to bus stop advertising using individual mobility data of Scottish children.
Methods
Our study design had three main parts:
i) To create a dataset that captured the advertising content of all bus stops within the study area.
ii) To measure the proximity of each bus stop, coded with advertising content, to schools within the same study area.
iii) To join the bus stop audit, coded with advertising content, to GPS tracks of school children, to show the adverts that children encountered as they travelled.
Study area and bus stop locations
The study area included the Central Belt of Scotland (1934 km 2 ), incorporating the administrative boundaries of Scotland's two most populated cities, Glasgow and Edinburgh. By including two major cities and the urban/rural hinterlands between, we included a varied advertising landscape to assess that was also manageable in terms of both time and cost of auditing. The cities of Glasgow and Edinburgh contain areas which are amongst the most and least deprived areas in Scotland (Scottish Government, 2012).
Advertising at bus stops was selected because these are usually in an outdoor environment that can be virtually audited using tools such as Google Street View, as opposed to advertising within rail, subway or tram stations, which largely are indoor/covered and inaccessible for virtual audit. Bus stops also cover a wide ranging geographical area, compared to rail stations, providing a wider geographical area for our study. We extracted bus stop locations from the UK's Ordnance Survey Points of Interest dataset (code: 0732) (Ordnance Survey, 2019).
Advertising rating questionnaire
A 15-item coding frame was created to categorise the main product being advertised within a visible bus stop advertisement. The key product categories to be captured by the coding frame were identified deductively based on both their prominence in literature on the commercial determinants of health (Kickbusch et al., 2016; Stuckler et al., 2012; Goyder et al., 2020) and their relevance to current policy priorities emerging from consultation with a group of representatives from health charities and public health agencies in Scotland. These categories include HFSS food and soft drinks; alcohol; nicotine products; and gambling, as well as subcategories such as confectionary and energy drinks, which were created to understand advertising of specific types of product that may be relevant to policymaking, for example potential restrictions on the sale of energy drinks to young people (Scottish Government, 2019).
Having established the relevant product categories deductively, the final coding frame/questionnaire (Supplementary Table 1) was developed inductively and iteratively based on the research team's exploratory scoping of the content of advertising on bus stops. Due to the nature of the research, rigorous, objective measurement of the nutritional content of food and non-alcoholic drink products was not possible, but sub-categories were developed that enabled the research team to be sure that close to all of the coded advertisements would contain products that would be deemed suitable for restriction in policies such as Transport for London's restrictions on the advertising of unhealthy food (Transport for London, 2019). Based on exploratory scoping of the content of advertisements, four subcategories of food and drink were deemed inappropriate for inclusion within grouped 'unhealthy' variables: fruit and vegetables; fruit juice or smoothie; caffeinated products; and water.
If an advertisement did not include one of the specified categories, the auditors were asked to select 'other', and a free text description was documented for a third of the sample of these to provide a summary of what these advertisements were. Where advertisements were illegible due to, for example, poor image quality these were selected as 'unable to distinguish'.
Virtual street audit
The audit was performed using the Computer Assisted Neighbourhood Visual Assessment System (CANVAS) Google Street View auditing software (https://beh.columbia.edu/street-view/). This software allows rigorous "virtual audits" that are much more rapid and cost-effective than on-location fieldwork, enabling coding of the large study area. This approach has been validated for interrater validity and concurrent validity in other contexts, such as neighbourhood audits (Bader et al., 2015;Mooney et al., 2014). Validity assessment specific to this context is described in the subsequent sub-section. A total of 10,305 bus stops were located within the study area, geocoded and imported into CANVAS. The most recent image captured by Google Street View was audited, the date of images ranged between 2008 and 2020, the majority of images audited were captured in 2019 (n:5007 (48.6% of all bus stops)), 2018 (n:2105 (20.4%)), 2016 (n:625 (6.1%)) and 2017 (n:399 (3.9%)). Three Survey Assistants were recruited as auditors and each was given a selection of randomly allocated bus stops to audit using CANVAS.
Inter-rater reliability (IRR)
The auditors had no previous experience of using CANVAS but had previously participated in academic field-work studies, such as administering questionnaires. Auditors took part in a 1-hour face-to-face training session where they were informed of the study purpose and the advertising rating questionnaire. A series (n = 20) of images of bus stop advertisements were coded together as a group to guide them through the auditing process. Auditors then independently completed a practice audit of 60 bus stops for another UK city, Liverpool, to become accustomed to the software and rating questionnaire.
Area-level deprivation
Each bus stop was assigned the deprivation rank of the datazone within which it was located. Datazones are small areal units used in the production of official statistics in Scotland. They contain populations of between 500 and 1000 household residents (Scottish Government, 2006). Deprivation rank was assigned using the Income Domain of the 2016 Scottish Index of Multiple Deprivation (SIMD) (Scottish Government, 2012). The Income Domain measures low income as indicated by the receipt of government benefits and was chosen over the full SIMD as that includes an element of geographical and facility accessibility which may have biased our results. The datazone income ranks were grouped using a binary deprivation variable (least deprived/most deprived) in which the three most deprived quintiles were grouped into the least deprived category for the Central Belt of Scotland area.
Distance to schools
The locations of all schools within the Central Belt of Scotland were extracted from Ordnance Survey points of interest classification (code: 2031). Each school location was plotted within ArcMap 10.6. The Scottish Road and Path network was obtained from the Ordnance Survey MasterMap Integrated Transport Network Layer. Using the Network Analyst extension with ArcMap 10.6, 100-meter (m), 200m and 800m road and path network buffers were created surrounding each school location, and a dichotomised (yes/no) variable was created for each bus stop to identify whether it was located within any of the distance buffers. The 100m network buffer was selected based on the UK Advertising Standards Authority (ASA) guidelines to protect under-16s from exposure to advertising of HFSS products within 100m of schools (Advertising Standards Authority, 2007, 2018). A 200m buffer was chosen due to some advertisers (for example, McDonalds) voluntarily not advertising within 200m of schools (Greater London Authority, 2019). A larger 800m buffer was used to assess advertising across a wider geographical area, as the Scottish Government have requested that the Advertising Standards Authority (ASA) prohibit advertising of HFSS products within 800m of locations frequently accessed by children (such as schools and leisure centres) (Scottish Government, 2018).
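The study built these buffers over the road and path network in ArcMap; the sketch below uses simple straight-line (Euclidean) buffers as a stand-in, purely to illustrate the dichotomised yes/no flag per bus stop and buffer size. Coordinates and locations are hypothetical and assumed to be in a projected, metre-based CRS.

```python
from shapely.geometry import Point

schools = [Point(325000, 673000), Point(326500, 674200)]    # hypothetical school locations
bus_stops = [Point(325050, 673010), Point(327900, 675500)]  # hypothetical bus stop locations

def within_school_buffer(stop, schools, radius_m):
    # Euclidean proxy for the network buffer: is any school within radius_m of the stop?
    return any(stop.distance(s) <= radius_m for s in schools)

# Dichotomised yes/no flag for each bus stop at each buffer distance
flags = {r: [within_school_buffer(b, schools, r) for b in bus_stops]
         for r in (100, 200, 800)}
```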
SPACES
We used data from participants in the 'Studying Physical Activity in Children's Environments across Scotland' (SPACES) study (Mccrorie et al., 2017), who were recruited from the Growing Up in Scotland (GUS) study, a nationally representative longitudinal cohort study originating in 2005. From a possible 2402 children who participated in GUS sweep 8 interviews, 2162 consented to be approached by SPACES researchers, of which 51% (n = 1096) consented to take part. Participants were provided with an accelerometer (ActiGraph GT3X+) and a GPS (Qstarz-STARZ BT-Q1000XT; Qstarz International, Taiwan) and asked to wear them over eight consecutive days between May 2015 and May 2016 when the participants were 10-11 years old. SPACES inclusion criteria required at least four weekdays of accelerometer data and one day of weekend data, resulting in a subset of 774 participants. Of these, 229 participants (54% female) resided in the Central Belt study area and met our inclusion criteria of providing at least 1 h of GPS data.
Area-level deprivation
A measure of area-level socioeconomic deprivation for the datazone containing each participant's home address was assigned to each child using the Income Domain of the SIMD. Due to under-representation of children from the most deprived areas we created a binary deprivation variable (least deprived/most deprived) in which the three most income deprived quintiles were grouped into the least deprived category (30% of participants).
Bus stop advertisement category
The number and proportion of all advertisements by category were described for all 15 main categories (Table 1). Due to some categories having a small number of advertisements, and to provide relevant outcomes for policy makers, categories were aggregated. The following categories were used in the subsequent analyses:
- Unhealthy food and/or drink, including sugar-sweetened beverages, fast food, confectionary, crisps and savoury snacks, cakes, pastries, puddings and sweet biscuits, and ice-cream/frozen desserts.
- Unhealthy food, including fast food, confectionary, crisps and savoury snacks, cakes, pastries, puddings and sweet biscuits, and ice-cream/frozen desserts.
- Sugar-sweetened beverages.
Free-text descriptions of 467 (of 1764) advertisements classified as 'other' were also summarised into overall categories and supplied as Supplementary Table 2.
The number of advertisements by area-level deprivation (least deprived/most deprived) was described and a Chi-Square test performed to test whether there was a relationship between these variables. Logistic regression was performed to explore whether advertisements were more or less likely to be located within areas based on deprivation; each advertising category was modelled individually.
Proximity to schools
A binary logistic regression model was used to predict the odds of a bus stop advertisement containing a specific product category, for example unhealthy food, being located within a 100m, 200m or 800m buffer of all schools. Individual product categories were modelled discretely.
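A minimal sketch of one such model is given below; the data frame, column names, and placeholder values are hypothetical, and the odds ratios are simply the exponentiated logistic coefficients.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# One advertisement per row; placeholder indicator columns for one category and one buffer.
rng = np.random.default_rng(1)
ads = pd.DataFrame({
    "is_unhealthy_food": rng.binomial(1, 0.2, 500),
    "within_100m_school": rng.binomial(1, 0.1, 500),
})

X = sm.add_constant(ads["is_unhealthy_food"])
model = sm.Logit(ads["within_100m_school"], X).fit(disp=0)
odds_ratios = np.exp(model.params)       # OR for the category being inside the buffer
conf_int = np.exp(model.conf_int())      # 95% CI on the odds-ratio scale
```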
Individual level exposure of children to bus stop advertisements
The straight-line distance from each GPS location to every bus stop location was measured using the sf package in R (Pebesma, 2018). The nearest bus stop to each GPS location was retained along with information about the advertisements at that stop. Using a novel methodology (Caryl et al., 2019), GPS locations were classed as 'exposed' when the distance to the nearest bus stop containing an advertisement was <10m. The 10m threshold was used because this is the distance a child walking at 1 m/s (3.6 kph) would travel between each GPS location. Participants were asked to wear GPS devices during waking hours, leading to variation in daily wear time. To account for this, we standardised rates of exposure by modelling counts of exposed GPS locations for each participant with total wear time (i.e. total GPS locations) as an offset. Exposure rates of each participant to each category of advertisement were compared between the binary income deprivation levels with negative binomial generalised linear models to account for overdispersion.
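The analyses were carried out in R; the sketch below is a rough Python analogue of the two steps (nearest-stop distance thresholding, then a negative binomial model with a wear-time offset), using simulated, hypothetical inputs rather than the study data.

```python
import numpy as np
from scipy.spatial import cKDTree
import statsmodels.api as sm

def exposed_fix_count(gps_xy, stop_xy, threshold_m=10.0):
    """Count GPS fixes within threshold_m of the nearest advertising bus stop."""
    dist, _ = cKDTree(stop_xy).query(gps_xy)
    return int(np.sum(dist < threshold_m))

# One exposure count and one total-fix (wear time) value per child; deprivation as 0/1.
rng = np.random.default_rng(0)
n_children = 229
total_fixes = rng.integers(3_600, 40_000, n_children)                  # wear-time proxy
most_deprived = rng.integers(0, 2, n_children)
counts = rng.poisson(0.002 * total_fixes * (1 + 0.3 * most_deprived))  # placeholder counts

X = sm.add_constant(most_deprived.astype(float))
model = sm.GLM(counts, X, family=sm.families.NegativeBinomial(),
               offset=np.log(total_fixes)).fit()
rate_ratio = np.exp(model.params[1])   # exposure-rate ratio, most vs. least deprived
```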
To determine which response variables would have a sufficient sample size to model, an alpha = 0.05 with a 50% probability (of failing to detect a difference of a small effect (0.5 x standard deviation)) required 16 individuals per group to be exposed to specific advertising categories (Croarkin et al., 2006), therefore individual-level exposure was not performed for gambling and e-cigarette advertisements.
Comparison of our sample with the national level demographic distributions indicate slight under-representation of children from the two most deprived quintiles and over-representation of the least deprived quintiles. However, after applying individual-level cross-sectional weights that were generated for all GUS respondents in sweep 8 (Mccrorie et al., 2017), our sample could be considered nationally representative.
In addition to socioeconomic status, we also included control variables for children's sex; the season in which they were tracked (winter: October-March); and whether their residence was in an urban or rural setting, following Mccrorie et al. (2020). For the latter, we used the Scottish Government's six-category classification system, which considers both population size of the settlement and remoteness/accessibility (based on drive time to the nearest settlement with a population of 10,000 people or more) (Scottish Government, 2006). Settlements are defined as a group of high-density postcodes (i.e. more than 2.1 residential addresses per hectare, or population per hectare greater than five) whose combined population rounds to 500 people or more (National Records Of Scotland, 2016). They are separated by low density postcodes. To ensure sufficient sample size within groups, we dichotomised the six-category classification system into two categories (urban, rural), each comprising three of the original classes.
Models were fully adjusted for income deprivation, urbanity, sex and season. Results are presented as exponentiated coefficients (transformed back to response scale as models were negative binomial using a log link function) and effects are presented for income deprivation, urbanity, sex, and season (Reference categories: Income deprivation = least deprived; Urbanicity = urban; Sex = male; Season = winter) by advertisement category. All Individual-level exposure analyses were performed in R-4.0.0.
Bus stops
All 10,305 bus stop locations were audited, of which 9701 (94.1%) actually contained a visible bus stop. Of the 9701 bus stops, 7856 (80.9%) did not contain an advertisement, 532 (5.3%) had one visible advertisement, 1294 (13.7%) two advertisements, and 19 (0.2%) had three advertisements. A total of 1845 bus stops had one or more advertisements and all of these were subsequently categorised.
Advertisement categories
1845 bus stops were audited, totalling 3123 advertisements, as some bus stop locations had more than one advertisement. Fast food products totalled 15.3% of all advertisements (n = 478), confectionary 6.8% (n = 211) and alcohol 4.0% (n = 124) (full list of advertisements: Table 1). Over half of the advertisements (n = 1,764, 56.5%) were 'other' and 427 of these were described using free-text and are summarised in Supplementary Table 2. Table 2 presents the grouped advertisement categories by area-level socio-economic deprivation, highlighting that, in terms of advertisement location and type of advertisement, there did not appear to be socioeconomic patterning. A similar pattern was displayed in the results of the logistic regression models (Supplementary Table 3).
Advertisement proximity to schools
The likelihoods of each advertisement category being displayed within a 100m, 200m and 800m network buffer around schools are presented in Table 3. The results indicate that it is very unlikely that unhealthy products were advertised within the school environment. This was consistent from a 100m-800m network distance of schools. A similar pattern was found for e-cigarette advertising, where there were no advertisements within 100m or 200m, and advertisements were very unlikely to be within 800m of a school (OR: 0.33, 95% CI 0.13 to 0.86). However, there was an increased likelihood of gambling advertising within 100m of schools.
There was an increased likelihood of 'other' products being advertised around schools (100m: OR: 3.20, 95% CI 2.00 to 5.11), as might be expected when 'unhealthy' products are less likely to be advertised there; the advertising space must still be used.
Children's exposure to advertisements and advertising types by socioeconomic deprivation
Full outputs from models comparing exposure to advertisement categories across binary income deprivation levels, urbanicity, sex and season are shown in Supplementary Table 4, while summary outputs are shown in Fig. 1. These indicate that children living in the most deprived areas encountered bus stops (with or without advertisements) significantly more frequently than those in the least deprived areas (coefficient (coef): 1.45, 95% CI 1.09 to 1.95). They also experienced significantly greater exposure to unhealthy foods (coef: 1.18, 95% CI 1.06 to 1.31), unhealthy foods and drink (coef: 1.18, 95% CI 1.0 to 1.31), and 'other' advertisements (coef: 1.62, 95% CI 1.07 to 2.46). Children living in rural areas were exposed to fewer advertisements regardless of type (coef: 0.44, 95% CI 0.24 to 0.81) than children residing in urban areas. Children living in urban areas had greater exposure to unhealthy food (coef: 1.29, 95% CI 1.04 to 1.31) and unhealthy food and drink (coef: 1.29, 95% CI 1.04 to 1.31) advertising than those living in rural areas. Fig. 1 also indicates that, while some models did not reach statistical significance (due to small sample sizes), there was evidence of further socioeconomic patterning (Sullivan and Feinn, 2012), for example in alcohol and sugar-sweetened beverage advertising for children from the most deprived areas.
Fig. 1. Effect size (i.e. mean difference) and 95% CIs between exposure to advertisement categories for children by area-level income deprivation, urbanity, sex, and season (Reference categories: Income deprivation = least deprived; Urbanicity = urban; Sex = male; Season = winter). Note: Models are fully adjusted for deprivation, urbanity, sex and season. Where the 95% CI for a coefficient includes one, there is no difference in exposure between income deprivation levels, urbanity, sex, and season. Where the 95% CIs fall above one, it indicates that children in, for example, the most deprived areas experienced greater exposure, compared to the least deprived. Statistical significance: ** p < 0.01; * p < 0.05.
Discussion
The primary aim of this study was to understand whether there is socio-spatial inequality in the distribution of unhealthy commodity advertisements. In terms of advertisement location and type of advertisement, we did not find an association with the area based deprivation measures. We also found that unhealthy commodity advertisements were unlikely to be located around schools, which was consistent from a 100m-800m network distance surrounding schools. Together, these results indicate no bias towards more deprived areas or schools in the locations of unhealthy commodity advertisements on bus stops in the Central Belt of Scotland.
Our secondary aim was to measure 'real' exposure to unhealthy commodity advertisements using individual mobility data of children aged 10/11 years old. Here, we found that children who resided within more deprived areas had greater contact with the transport network and also evidence for socio-economic inequalities in exposure to advertisements. Children from more deprived areas were more likely to be exposed to unhealthy food and unhealthy food and drink product advertising compared to those living in less deprived areas.
A recent study in New Zealand, using an area-based design, assessed unhealthy commodity advertisements at transport stops within a 500m walking distance of schools and found no association between unhealthy commodity advertisements and proximity to schools, nor any socio-spatial patterning; contrary to the authors' hypothesis, advertising increased as distance from school increased (Huang et al., 2020). In Austria, a 950m walking buffer surrounding schools was applied and child-oriented snacks were found not to be more frequently advertised there (Missbach et al., 2017). A recent systematic review found adherence to voluntary codes of practice for online and on-television advertising prior to 2013 to be high in the UK, but this was not universal in all countries (Galbraith-Emami and Lobstein, 2013), and a North American study showed city-level variation in area-specific advertisements (Hillier et al., 2009). Given the global nature of the unhealthy commodity advertising and NCD crisis, future research using the methodology applied here for cities in a range of countries could further strengthen the body of evidence on unhealthy commodity advertising exposure.
We found that children from more deprived areas had greater exposure to bus stops and the transport network. In combination with our observation of no association between school proximity and unhealthy commodity advertisement, this suggests that transport-network-wide restrictions of unhealthy commodity advertisements, rather than school-based spatial restrictions, may be effective to target inequalities in exposure for children. A previous study has found that transportation stops and roads were an important component of where 10-11-year-old children living in Scotland spent their time (Olsen et al., 2019). However, that study did not explore differences by socioeconomic status, so this analysis provides important additional insights. Further evidence for socioeconomic inequalities in exposure to unhealthy commodity advertising stems from Scotland (Robertson et al., 2017) and also globally, for example in Sweden (Fagerberg et al., 2019), Australia (Settle et al., 2014), North America (Lowery and Sloane, 2014) and the UK (Thomas et al., 2019). Scottish data for 2017 show that a higher proportion of children from more deprived areas used a service bus to travel to school than those from less deprived areas (most deprived quintile: 9%; least deprived quintile: 5%); a similar but larger difference was found by net household income (up to £15,000: 12.2%; over £40,000: 2.9%) (Scottish Government, 2018). As well as socioeconomic inequalities, racial inequalities in advertising outside schools in North America have been highlighted, where Hispanic schools had significantly more food and beverage advertisements outside than other schools (Herrera and Pasch, 2018).
Marketing targeted at children has been labelled an unfair exploitation of children's inherent vulnerabilities (Kunkel et al., 2004). Children's cognitive development limits their ability to differentiate between marketing messages and reality (Ludvigsen and Scott, 2009), and makes them particularly susceptible to persuasion from advertising (Rozendaal et al., 2010). Further, evidence suggests that unhealthy commodity advertising has cumulative effects on children, with attitudes, choices and consumption behaviours correlating with frequency of exposure to marketing messages (Scully et al., 2012;Gordon et al., 2011). Given children's heightened vulnerability to marketing messages, it has been suggested that unhealthy commodity advertising which targets children breaches their rights to appropriate information, protected by the United Nations Convention for the Rights of the Child (Smith et al., 2019;United Nations, 1989). From this perspective, restricting unhealthy commodity advertising in settings frequented by children would represent a priority for public policy. Transportation networks represent a key component of the built environment within which marketing could be restricted, particularly when the cumulative effects of children's exposure to promotion of unhealthy commodities are considered.
Since 2019, Transport for London (TfL) has prohibited advertising for HFSS foods on bus stops, taxis and aspects of their transport network, covering 32 London boroughs and representing 30 million daily journeys (Transport for London, 2018). Advertising for alcohol is prohibited on public transport, bus stops and stations in Ireland (Alcohol Action Ireland, 2019), and on bus shelters and other local authority property in New York City (City Of New York, 2019). Based on our findings, we would recommend similar city-level or national policy restrictions were implemented on transport networks in Scotland and elsewhere.
Our findings hold significant policy importance. They highlight that when measuring inequality in exposure to unhealthy commodity advertisements, the sole use of area-level measures of socio-economic situation may be insufficient.
Strengths and limitations
Our study has a number of strengths. We conducted a virtual audit covering a substantial (1934 km 2 ) geographical area containing 10,305 bus stops. We were able to conduct a novel analysis using individuallevel mobility data of children collected using precise GPS devices which could be linked to our advertising audit. This allowed us to compare area-or individual-based measures of exposure for unhealthy commodity advertising, a key strength of this study design. The methods used here can be applied elsewhere to provide evidence of the spatial nature of unhealthy commodity advertising in different contexts and provide evidence of relevance to policy.
We noted several limitations to our study. We did not use a nutrient profiling model or similar to categorise products as healthy/unhealthy, just visual identification of a product's likely categorisation. We mitigated this by keeping more controversial categories (yoghurts, coffee) out of the combined 'unhealthy foods' category, as in other studies (Huang et al., 2020; Sainsbury et al., 2017).
As we collected data using Google Street View auditing software, we were only able to collect information about advertising at bus stops; we omitted other outdoor advertising (such as billboards, monoliths, buses, taxis, repurposed phone boxes), broadcast advertising, print advertising and the rapidly-growing world of online advertising/marketing. Google Street View data have temporal variation between images; this introduces a randomness to the sample that may smooth out temporary anomalies and seasonal variations in advertising practices. However, this should also be noted as a limitation of the study design, as the outcome may exhibit spatial autocorrelation because we could not control in what season street imagery was captured, and geographical areas may have been audited on the same day. The images audited from Google Street View were captured across a significant time period: the majority were captured between 2017 and 2020 (73.1% of the sample), and the earliest during 2008. This can be viewed as both a strength and a limitation, as it creates random exposure measurement error and bias towards the null; therefore the true exposure may be larger than our results suggest. The individual-level data of children's movements were collected during 2015 and 2016, prior to the collection of the majority of the advertising images. As the children were aged 10-11 years and drawn from a nationally representative cohort, it is unlikely that mobility patterns for this age group have changed significantly.
Study context in relation to the outbreak of 2019 novel coronavirus disease (COVID-19)
The global COVID-19 pandemic may have two outcomes relevant to this research. Firstly, increased public discourse around resource limitations within national health providers/services may increase political and public acceptance of policy approaches to the prevention of obesity and other NCDs. Emerging evidence suggests that obesity and other underlying NCDs worsen the severity of COVID-19 (Simonnet et al., 2020; Ryan et al., 2020; Dietz and Santos-Burgoa, 2020). Further, the UK Prime Minister, who had previously sought to reframe policies such as the Soft Drinks Industry Levy as 'sin taxes' (Iacobucci, 2019), reportedly now favours policy interventions to tackle obesity (Tunnidge, 2020). Restrictions on movement of people enforced by governments worldwide have led to fewer people on public transport and in public settings (Google LLC, 2020); therefore exposure, at least temporarily, will likely have been reduced. However, this may increase advertising exposure for children in other settings, such as online, in print and on television, which is worthy of investigation.
Conclusions
We found no evidence for placement of unhealthy commodity advertisements in more deprived areas or around schools in the Central Belt of Scotland. This suggests that school-based restriction boundaries alone, in addition to current ASA restrictions that have likely been effective in reducing advertising in these locations, would be ineffective in reducing children's exposure to unhealthy commodity advertising. However, our novel application of individual-level mobility data provides evidence that city- or country-wide restrictions to advertising on the transport network might be required to reduce inequalities in exposure to unhealthy commodity advertising for children. This research, using different advertising exposure measures, is important because misunderstanding of exposure by researchers could lead to ineffective policymaking.
Declaration of competing interest
The authors declare that there are no conflicts of interest.
|
v3-fos-license
|
2020-12-03T09:04:28.104Z
|
2020-09-24T00:00:00.000
|
229528189
|
{
"extfieldsofstudy": [
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.29329/epasr.2020.270.20",
"pdf_hash": "a8a820355de3b62c32d41bbab005589a30be7f60",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44352",
"s2fieldsofstudy": [
"Education"
],
"sha1": "11a8799c6c04901cb57515560ea565d715d6bb73",
"year": 2020
}
|
pes2o/s2orc
|
Investigation of the Relationship Between Secondary School Students' Physical Activity Attitudes and School Life Satisfaction
In this study, it is aimed to investigate the relationship between the attitudes of students studying in secondary school to physical activity and their school life satisfaction. The relationship between middle school students' attitudes towards physical activity and school life satisfaction was tested by the structural equation model. A total of 299 students who study at secondary schools affiliated to Çanakkale central schools, 53.2% (n = 159) of whom are male and 46.8% (n = 140) of female, are the research group. Attitude scale to physical activity and school life satisfaction scales were used as data collection tool in secondary school students. It has been observed that socialization and Self-Trust dimensions, which are the sub-dimensions of attitude to physical activity, affect school satisfaction positively and significantly, and are statistically significant. As a result, physical activity environments affect students 'feelings of socialization and self-confidence positively, which is thought to affect students' school life satisfaction positively. To gain the habit of doing physical activities in schools, it can be suggested to increase the physical education lesson hours and to direct them to extra sports activities.
Introduction
An individual's perceived level of life satisfaction affects many areas of life positively or negatively. It has been stated that individuals with high life satisfaction have positive professional and interpersonal relationships, and that these individuals are more resistant to diseases and live longer than those with low life satisfaction (Lyubomirsky, King & Diener, 2005). The criteria that individuals set in their lives determine their perceived level of life satisfaction (Pavot & Diener, 1993).
Although life satisfaction is considered a sub-dimension of happiness (Dilmaç & Ekşi, 2008), it is evaluated together with many concepts such as psychological well-being, quality of life, and subjective well-being (Dost, 2007). Diener, Emmons, Larsen, and Griffin (1985) defined life satisfaction as pleasure, feeling happy, and being well in various respects. Many factors affect people's life satisfaction either positively or negatively, and these factors differ somewhat in adolescence compared with adulthood, because adolescence is a period in which many physical, psychological, and social changes occur (Byrne, Davenport & Mazanov, 2007; Moksnes, Byrne, Mazanov & Espnes, 2010). Factors affecting life satisfaction for students include school, family, friends, and the living environment (Huebner, Laughlin, Ash and Gilman, 1998). It can be stated that positive experiences at school increase the quality of a student's life and increase learning by raising school motivation. Research has related life satisfaction to economic situation (Shek, 2005), academic success (Jovanović & Jerković, 2011), family relationships (Hampden-Thompson & Galindo, 2017), cultural difference (Kaya, Çensiz & Aynas, 2019; Liu, Tian & Gilman, 2005), disability status (Eroğlu & Acet, 2017), ethnic structure (Leung, Pe-Pua & Karnilowicz, 2006), level of happiness in physical education classes (Uğraş & Güllü, 2020) and self-efficacy (Erol, 2017). In recent years, school life satisfaction has been the subject of much research both abroad (Arciuli, Emerson & Llewellyn, 2019; Danielsen, Samdal, Hetland & Wold, 2009; Geagea, MacCallum, Vernon, & Barber, 2017; Hampden-Thompson & Galindo, 2017) and domestically (Arındağ & Seydooğulları, 2018; Baş & Yurdabakan, 2017; Kermen, Tosun & Doğan, 2016; Şahin, 2018), because students' satisfaction with school life concerns many subjects such as academic success, peer relations, and belonging to the school. Physical activity is one of the lifestyle behaviors associated with life satisfaction (Penedo & Dahn, 2005). Research has shown that physical activity in adolescents helps prevent weight gain (Simon et al., 2008), reduces stress (De Moor, Beem, Stubbe, Boomsma & De Geus, 2006), and has positive sociological and psychological effects (Åberg et al., 2009; Santino et al., 2019; Vella, Cliff, Magee & Okely, 2014; Ussher, Owen, Cook & Whincup, 2007; Kleszczewska, Dzielska, Salonna & Mazur, 2018). The World Health Organization has emphasized the importance of physical activity for both mental and physical health (WHO, 2015). Despite these benefits, which include prevention of obesity and decreased risks of cardiovascular disease, a decline in physical activity levels is observed. It can be said that this decline is caused by factors such as academic concerns, environmental effects, and the family's attitude towards physical activity. Physical education lessons and extracurricular activities can be regarded as the most suitable environment for spreading the habit of physical activity throughout life. The goals and achievements of physical education lesson programs carry considerable weight in giving students the habit of doing physical activity (Güllü, Arslan, Görgüt & Uğraş, 2011; Uğraş & Aral, 2018). Considering that physical activity has many benefits for students, it can be said that attitude towards physical activity can positively affect school satisfaction.
This study aims to investigate the relationship between middle school students' attitudes towards physical activity and their school life satisfaction.
Method
In this study, a relational screening (correlational survey) design was used to examine the relationship between middle school students' attitudes towards physical activity and school life satisfaction. This design was chosen because it can give an indication of the cause-effect relationship between attitude towards physical activity and school life satisfaction (Fraenkel, Wallen & Hyun, 2012). The relationship between the two constructs was tested with a structural equation model (SEM), a powerful analysis method for testing theorized relationships among variables.
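The path model itself was estimated in AMOS 23 (see Analysis of Data). Purely as an illustration of the same idea, the sketch below specifies a comparable regression-style SEM in Python with the third-party semopy package; the package, the synthetic data, and the column names (love, willingness, benefit, socialization, self_confidence, school_satisfaction) are assumptions made for this example and are not part of the original study.

```python
import numpy as np
import pandas as pd
from semopy import Model  # third-party SEM package, assumed to be installed

rng = np.random.default_rng(42)
n = 299  # same sample size as the study; the scores themselves are synthetic
df = pd.DataFrame({
    "love": rng.normal(3.5, 0.8, n),
    "willingness": rng.normal(3.4, 0.9, n),
    "benefit": rng.normal(3.8, 0.7, n),
    "socialization": rng.normal(3.6, 0.8, n),
    "self_confidence": rng.normal(3.5, 0.8, n),
})
# Outcome generated so that socialization and self-confidence carry most of the signal
df["school_satisfaction"] = (0.3 * df["socialization"] + 0.25 * df["self_confidence"]
                             + rng.normal(0.0, 0.6, n))

# Structural part: school satisfaction regressed on the five attitude sub-dimensions
desc = "school_satisfaction ~ love + willingness + benefit + socialization + self_confidence"
model = Model(desc)
model.fit(df)
print(model.inspect())  # path estimates, standard errors and p-values
```

In practice the observed composite scores from the two questionnaires would replace the synthetic DataFrame.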
Study Group
Students attending secondary schools in the Çanakkale city centre during the 2019-2020 academic year participated in the study. A total of 299 students took part; 53.2% (n = 159) were male and 46.8% (n = 140) were female. By grade level, 23.4% were in 5th grade (n = 70), 20.1% in 6th grade (n = 60), 24.1% in 7th grade (n = 72) and 32.4% in 8th grade (n = 133).
Data Collection Tools
Physical Activity Attitude Scale for Secondary School Students: The scale, developed by Yıldızer, Bilgin, Korur, Yüksel and Demirhan (2019), consists of 25 items in five dimensions: "Love" (5 items), "Willingness" (7 items), "Benefit" (4 items), "Socialization" (5 items) and "Self-Confidence" (4 items). Confirmatory factor analysis (CFA) was performed to assess the construct validity of the scale. Item factor loadings ranged from .44 to .79, and two items were removed as a result of the first CFA. After this procedure, the CFA yielded χ2/sd = 1.818, GFI = .886, CFI = .908, IFI = .909, TLI = .895 and RMSEA = .052, values that fall within acceptable reference ranges (Kline, 2016; Tabachnick & Fidell, 2007). For internal reliability, Cronbach's alpha was .805 for the willingness dimension, .767 for love, .796 for benefit, .700 for socialization and .710 for self-confidence.
School Satisfaction Scale: The scale, developed by Randolph, Kangas, and Ruokamo (2009) and adapted into Turkish by Telef (2014), consists of one dimension and 6 items. Item factor loadings in the CFA ranged from .64 to .81. The CFA yielded χ2/sd = 3.518, GFI = .968, CFI = .976, IFI = .977, TLI = .954 and RMSEA = .082, values that fall within acceptable reference ranges (Kline, 2016; Tabachnick & Fidell, 2007). The Cronbach's alpha value of the scale was .870, indicating that the scale is reliable.
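Both scales report Cronbach's alpha as the internal-consistency index, computed as alpha = (k/(k-1)) * (1 - sum of item variances / variance of the total score), where k is the number of items. A minimal sketch, with a small hypothetical response matrix standing in for the real questionnaire data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) array of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical example: 5 respondents answering the 6-item school satisfaction scale
responses = np.array([
    [4, 5, 4, 4, 5, 4],
    [3, 3, 4, 3, 3, 3],
    [5, 5, 5, 4, 5, 5],
    [2, 3, 2, 3, 2, 2],
    [4, 4, 3, 4, 4, 4],
])
print(round(cronbach_alpha(responses), 3))
```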
Analysis of data
SPSS 23 and AMOS 23 were used to analyze the data. Before testing whether the data were suitable for structural equation modeling, 24 questionnaires with missing, erroneous, or extreme values were excluded, and the analysis continued with a total of 299 cases. Skewness and kurtosis values were examined to check the normality assumption. After confirming that the CFA results and Cronbach's alpha reliability coefficients met the requirements for structural equation modeling (SEM), the analyses were performed. SPSS 23 was used for descriptive statistics and Pearson correlation analysis, while AMOS 23 was used for the CFA and SEM analyses. χ2/sd, IFI, CFI, TLI, NFI, and RMSEA values were examined to evaluate the SEM model.
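As a rough sketch of the screening and correlation steps described above (normality checked via skewness and kurtosis, then Pearson correlation between the predictor and predicted variables), the snippet below uses synthetic scores in place of the actual SPSS data; the column names are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical composite scores for the 299 retained participants
attitude = rng.normal(3.6, 0.7, 299)
satisfaction = 0.2 * attitude + rng.normal(3.0, 0.8, 299)
df = pd.DataFrame({"physical_activity_attitude": attitude,
                   "school_satisfaction": satisfaction})

# Normality screening via skewness and kurtosis (values near 0 are desirable)
for col in df.columns:
    print(col, "skew:", round(df[col].skew(), 2), "kurtosis:", round(df[col].kurtosis(), 2))

# Pearson correlation between the predictor and the predicted variable
r, p = stats.pearsonr(df["physical_activity_attitude"], df["school_satisfaction"])
print(f"r = {r:.3f}, p = {p:.4f}")
```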
Results
Correlation analysis was conducted to test the relationship between school life satisfaction, the predicted variable of the research, and attitude towards physical activity, the predictor variable. The correlations between the sub-dimensions of attitude towards physical activity and school satisfaction are presented in Table 1. When Table 1 is examined, a positive, low-level significant relationship is seen between the participants' "School Satisfaction" scores and their "Affinity" (r = .174, p < .01), "Benefit" (r = .186, p < .01), "Socialization" (r = .191, p < .01) and "Self-Trust" (r = .202, p < .01) scores. No significant relationship was found between Willingness and School Satisfaction. The path analysis relating the sub-dimensions of attitude towards physical activity to school life satisfaction is shown in Figure 1. According to these results, the model reached acceptable reference values and was confirmed (Kline, 2016; Tabachnick & Fidell, 2007). According to Table 2, the Socialization dimension of attitude towards physical activity affected school satisfaction positively and statistically significantly (β = .316, p < .05). Self-Trust, another sub-dimension of attitude towards physical activity, likewise affected school satisfaction positively and significantly (β = .242, p < .05).
The Affinity, Willingness, and Benefit sub-dimensions of attitude towards physical activity did not have a statistically significant effect on School Satisfaction.
Discussion, Conclusion and Recommendations
This study investigated the relationship between secondary school students' attitudes towards physical activity and their school life satisfaction. Among students attending schools in the Çanakkale city centre, the Socialization and Self-Trust dimensions of physical activity attitude had a positive effect on school life satisfaction, whereas the Affinity, Willingness, and Benefit dimensions had no significant effect.
Physical activity is known to be associated with physical and psychosocial well-being in children (Hancox, Milne & Poulton, 2004; Kriemler et al., 2010; Santino et al., 2019; Vella, Cliff, Magee & Okely, 2014; Ussher, Owen, Cook & Whincup, 2007; Kleszczewska, Dzielska, Salonna & Mazur, 2018). However, young people's physical activity levels are not at the desired level (Troiano et al., 2008). Some countries, such as Scotland, have implemented strategic plans to increase the physical activity levels of children and young people (Inchley, Kirby & Currie, 2011). In this study, the physical activity attitude sub-dimension scores of the secondary school students can be said to be at a good level, and this appears to affect their school life satisfaction positively.
Studies show that family relationships are related to young people's physical activity levels and that physical activity affects these relationships positively (Beets, Cardinal & Alderman, 2010; Pugliese & Tinsley, 2007). Students with highly positive attitudes towards physical activity are therefore likely to have family support. Considering the important role of peer and parent support in increasing physical activity levels (Laird et al., 2016), such support probably also contributes positively to life satisfaction in children and adolescents. In schools, the habit of physical activity is built through physical education lessons and extracurricular sports activities. Extracurricular sports activities support students' social development as well as their physical development (Fredricks & Eccles, 2006), giving students the opportunity to socialize and to spend time in an enjoyable environment with their peers and teachers. Sports environments strengthen peer relations through teamwork, togetherness, and solidarity, and sport is known to be effective in socializing individuals (Hardin & Greer, 2009). Physical education lessons, where students have the opportunity to be physically active, are seen as entertaining by students (Namlı, Temel & Güllü, 2017; Temel & Güllü, 2016). Pehlevan and Bal (2018) concluded that participation in sports in secondary school supports peer relationships and the development of social relationships. The fact that adolescents have fun, spend quality time, and socialize in social and sports activities may therefore have increased the students' school life satisfaction. Belton, Prior, Wickel, and Woods (2017) concluded that extracurricular physical activity programmes had a positive effect on the life satisfaction of students attending disadvantaged schools, and another study found that adolescents who do sports are happier than those who do not (Snyder, Martinez, Bay, Parsons, Sauers & McLeod, 2010). These studies suggest that physical activity environments affect adolescents' socialization and, indirectly, their life satisfaction. According to the present results, Self-Trust, one of the sub-dimensions of physical activity attitude, also positively affected school satisfaction. Sports environments are among the settings where adolescents can experience feelings of achievement and acceptance, so the self-confidence that adolescents gain through physical activity may positively affect their school life satisfaction. In this study, school life satisfaction increased as secondary school students' self-confidence scores on the physical activity attitude scale increased. Research has shown that adolescents who do sports have higher self-confidence than those who do not (Aykora, 2019; Gündoğdu, 2019; Özbek, Yoncalık & Alıcak, 2017; Terlemez, 2019; Yarımkaya, Akandere & Baştuğ, 2014). It can therefore be stated that physical activity increases students' self-confidence, which in turn positively affects their school life satisfaction.
In conclusion, secondary school students' physical activity attitudes positively affect their school life satisfaction: physical activity environments foster socialization and self-confidence, which in turn raise school life satisfaction. Considering that, according to PISA (2015) data, students in Turkey ranked last among OECD countries in life satisfaction, directing students towards physical activity comes to the fore. To help students acquire the habit of physical activity at school, physical education lesson hours could be increased and students could be directed towards additional sports activities. Activities in which the benefits of physical activity are communicated to families could also be organized.
|
v3-fos-license
|
2018-04-03T03:35:11.735Z
|
2015-02-07T00:00:00.000
|
6784089
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://bmcobes.biomedcentral.com/track/pdf/10.1186/s40608-014-0030-4",
"pdf_hash": "c4f003bfb6716c615150c066d19fba2afa92abf2",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44353",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "c4f003bfb6716c615150c066d19fba2afa92abf2",
"year": 2015
}
|
pes2o/s2orc
|
Prevalence of overweight and obesity among primary school children in a developing country: NW-CHILD longitudinal data of 6–9-yr-old children in South Africa
Background Widespread trends of increasing child obesity are reported in developing countries. This longitudinal NW-CHILD study investigated changes in overweight and obesity over a three year period among 574 children between the ages 6 and 9 (282 boys, 292 girls; 407 black, 143 white) in South Africa (SA), taking into consideration sex, race and school type. Stratified random sampling was used to identify 20 schools, across 5 school SES levels (quintiles), in 4 educational districts of the North West Province of SA. Standard anthropometric techniques and international age adjusted BMI cut-off points for children were used to determine overweight and obesity, 3-years apart. Mixed models were used to analyse the effects of sex, race and socio-economic status (SES) of the school. Results Overall obesity increased over 3-years by 4% from 12.5% at baseline to 16.7% during follow-up. Obesity increased significantly in both white (4.2%) and black (2.0%) children, although overall prevalence in the final year was double (27.3%) in white children compared to black children (13.3%). Prevalence in obesity increased more in boys (3.2%) compared to girls (2.4%), although girls showed a higher overall prevalence (18.5%). SES effects were significant where children in schools associated with higher SES, had the highest rate of increase and the highest prevalence of obesity. A significant change towards an unhealthy BMI was found in 9.2% of the group over the 3-year period, although a small percentage (3.0%) also transitioned towards a healthier BMI. Conclusions Overall obesity prevalence rose significantly from 6–9-years. Obesity, compared to overweight, increased more during this period. Prevalence and rate of increase differed markedly in different sexes, race and SES, masking the extent of the problem. Shifting towards an unhealthy BMI was more common than obtaining a healthier BMI over the 3-year period. It also demonstrated the difficulty of breaking the cycle of obesity, once it had started. Early prevention strategies are needed based on the trends established in this study, with special attention to white children living in high SES regions, and black children in economic transition.
Background
The Global Burden of Disease Study [1] describes obesity as a global health challenge showing widespread increasing trends over the past decades, with no national success stories of decreasing trends over the past 33 years. The prevalence of obesity in children rose worldwide by 47.1% between 1980 and 2013 [1]. Overweight and obesity, which were previously considered problems afflicting mainly the affluent, are now markedly on the increase in low and middle income countries, particularly in urban areas [2][3][4]. Globalisation, improving economic conditions and changing dietary habits in developing countries are purported as responsible for the rapid increase in obesity [3]. This increase is associated with a lack of supportive policies in sectors such as health, agriculture, transport, urban planning, environment, food processing, distribution, marketing and education. Presently, it is estimated that more than 30 million overweight children live in developing countries and 10 million in developed countries [3]. Available estimates for the period between the 1980s and 1990s show that the prevalence of overweight and obesity in children increased by a magnitude of two to five times in developed countries (e.g. from 11% to over 30% in boys in Canada), and up to almost four times in developing countries (e.g. from 4% to 14% in Brazil) [5]. Globally, increasing prevalences for developing countries are reported from 1980 to 2013 in children and adolescents showing changes from 8.1% to 12.9% in boys and from 8.4% to 13.4% in girls [1]. In 2010, 43 million children (35 million in developing countries) were estimated to be overweight and obese, with 92 million at risk of overweight [6].
The fastest growth rates of obesity among pre-school children are found in Africa where the numbers of overweight and obese children in 2010 were more than double those reported in 1990 [6]. Results of a systematic review [4] further substantiate reports of overweight/ obesity transition among school-aged children in Sub-Saharan Africa. Statistics furthermore indicate that South Africa (SA) has amongst the highest child obesity rates in Africa [2,4]; the prevalence of obesity among South African children is comparable to that found in developed countries more than a decade ago [7].
A review summarizing obesity among South African children, from birth to the age of 19 years, indicates low overweight and obesity rates before 1999, with more recent studies showing a mean prevalence of just over 15% for overweight and obesity combined [8]. This review reported that this prevalence does not give a true reflection of the problem, because overweight and obesity differ markedly between age groups, boys and girls, ethnic groups, and geographical areas. Findings also showed a significant increase in overweight and obesity from 1999 to 2004, based on reported prevalences from cross-sectional studies [7].
Only a few cross-sectional studies and trend analyses have reported on obesity prevalence in adults and children in SA [2,9]. Rates of overweight and obesity among children between 13-19 years (grades 8-11) from the first National Youth Risk Behaviour Survey in 2002 in SA showed overweight prevalence of 6.9% for boys and 24.5% for girls, and obesity prevalence of 2.2% for boys and 5.3% for girls respectively [2]. The researchers remark that it is difficult to determine whether the rates observed in their study represent an increase in prevalence, although their data indicate an expanding 'epidemic' of obesity and related chronic diseases [2]. They concluded that, consistent with data from other countries in transition, it is highly likely that overweight and obesity prevalence rates today are higher than those found 10 to 20 years ago. Temporal trends in obesity among children and adults in SA have been reported, based on a 2005 nationally representative sample of 1-9-year-old children and a sample of 16-35-year-old adolescents and adults, compared with study populations from the SA National Food Consumption Survey (NFCS) 1999; the data were re-analysed according to the WHO 2006 and 2007 reference values [9]. Taking into account the limitations of a comparison between the 1999 and 2005 national data, a significant decrease was nevertheless seen nationally in overweight based on BMI (from 17.1% to 14%: 10% overweight and 4% obese). An overall overweight and obesity prevalence of 10.3% is also reported among 7-9-year-old children [9].
Time trend analyses of obesity prevalence, based on representative and national surveys, have also been performed in developed countries including Portugal [10], Japan [11], Slovenia [12], the UK [13], the USA [14,15] and Canada [16]. Some report rising obesity prevalence among child populations [10,11], although trends of levelling-off and stabilizing prevalences are also reported (e.g., in the USA [14] and in the UK [13]). Furthermore, researchers [15] conclude that although nationally representative data on prevalence rates among children in the USA were not significantly different from 2009 to 2010, more severe forms of obesity have increased over the last 14 years. In Slovenia, twenty-year trends indicate that the odds for obesity (odds ratio 3.7) are growing at higher rates than for overweight (odds ratio 1.7) per year, especially among boys [12]. Trends indicating higher obesity increases in boys are also reported elsewhere [1,11,13,16]. Age-adjusted BMI increased in Japanese children over a 25-year period; 6-14-year-old boys showed an increase of 0.32 kg/m2 per 10 years, and girls 0.24 kg/m2 per 10 years [11]. In Canada, during the period from 1986 to 1996, overweight increased among boys from 11% to 33%, and from 13% to 27% among girls, while obesity increased from 2% to 10% in boys and from 2% to 9% in girls [16]. Time trends in the UK (1995-2010), based on the Health Survey of England, indicated marked increases in prevalence among boys aged 2-15 years from 11% to 17%, while girls showed increases from 12% to 15%.
Overweight and obesity are highlighted as a major public health issue in SA [17], and there is a clear need for accurate estimates of the prevalence and severity of obesity. Continued surveillance of nutritional status, as an important component of a national strategy to prevent and control both malnutrition and chronic diseases, is recommended [2]. Cross-sectional studies [7,[17][18][19][20] and results of trend analyses [2,9] are available for SA children. However, there are only a few studies reporting longitudinally on the prevalence of obesity in pre-pubertal South African children. This leaves a gap in the knowledge regarding the rate of growth of this health problem. Such studies can provide accurate estimates of the rate of change in obesity prevalence among pre-pubertal children. They can also provide improved and more accurate knowledge about weight changes within the same individuals, thus providing direction for future research. The association between the adiposity rebound, which is reported to occur around the age of six, and obesity in later years further highlights that the period between 6 and 9 years may be a critical period for obesity prevention [21]. Recent findings also indicate that obesity intervention is most successful during the pre-pubertal period [22]. This study was designed to address this knowledge gap by obtaining more information about the current rate of change in overweight and obesity in pre-pubertal children in the age range 6-9 years in the North West Province (NWP) of SA, taking into consideration sex, race and school SES quintiles.
Study design, setting and population
The research formed part of the NW-CHILD (Child-Health-Integrated-Learning and Development) longitudinal study. Measurements were made at 3-intervals between 6 and 12 years (grade 1, grade 4, grade 7). This study was conducted in 1 of the 9 provinces in SA, the North-West Province (NWP) where approximately 8.2% of the national population lives. The NWP is characterised by high poverty levels especially in rural areas, unequal distribution of income between different population groups, and unemployment [23]. Income per capita in the NWP is seventh from nine provinces and it is estimated that 72.9% of children living in the NWP suffer from poverty [24].
Sample size
The total group measured at baseline in 2010, when the participants were in grade 1, consisted of 816 learners (419 boys and 397 girls; 567 black, 218 white, 20 mixed ancestry and 11 Indian children), with a mean age of 6.78 years. Three years later, in 2013, 574 children with a mean age of 9.87 (±0.38) years were available for the first interval-point measurements, representing an attrition rate of 30.1% of the original sample. Boys (n = 282, 49.12%) and girls (n = 292, 50.87%) were equally distributed in the group. Twenty-seven mixed ancestry and Indian children were part of the group but were omitted from the racial comparison because of the small numbers. More black (n = 407) than white (n = 143) children were part of the group, while the number of children in the different school quintiles ranged between 96 and 130. The distribution of the learners across the school quintiles was as follows: Quintile (Q) 1 (n = 120), Quintile 2 (n = 96), Quintile 3 (n = 130), Quintile 4 (n = 108) and Quintile 5 (n = 120). Possible bias resulting from subjects lost to follow-up was analysed using independent t-tests, comparing lost subjects with those who remained in the study in 2013 on baseline height (p = 0.553, d = 0.04), mass (p = 0.03, d = 0.16), BMI (p = 0.008, d = 0.19) and fat percentage (p = 0.223, d = 0.09). No evidence of bias could be found, based on the small Cohen's d-values [25].
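The attrition check above compares lost and retained participants with independent t-tests and Cohen's d (pooled standard deviation). A minimal sketch of that computation, using synthetic baseline BMI values rather than the study data:

```python
import numpy as np
from scipy import stats

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d based on a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Hypothetical baseline BMI values: 574 retained vs. 242 lost participants
retained = np.random.default_rng(0).normal(16.1, 2.0, 574)
lost = np.random.default_rng(1).normal(16.5, 2.2, 242)

t, p = stats.ttest_ind(retained, lost, equal_var=True)
print(f"t = {t:.2f}, p = {p:.3f}, d = {cohens_d(retained, lost):.2f}")
```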
Sampling method
The participants were selected by means of a stratified random sample. Stratification was done by school district, gender and school quintile (Q) in collaboration with the Statistical Consultation Service of the North-West University (NWU). To determine the sample, a list of schools in the NWP was obtained from the Department of Basic Education. From the list of schools in the NWP, which are grouped in 8 education districts, each representing 12-22 regions with approximately 20 schools (minimum 12, maximum 47) per region, stratified random sampling was used to select regions and schools with regard to population density and school status (Quintile 1, i.e. schools from very poor economic sectors to Quintile 5, i.e. schools from very good economic sectors). The quintile status of a school is determined by the National Treasury, according to the National Poverty Table, obtained from the National Census data which include income, dependant ratios and levels of literacy. This poverty classification is used by the Department of Basic Education in each province to classify schools in different quintiles. Quintile 1 and 2 schools are the poorest schools and are released from paying any school fees [23]. Throughout the paper Q1-3 schools will by definition represent schools from low SES, while Q4-5 schools will represent high SES.
Anthropometry
The anthropometric measurements included the following: height (cm), body mass (kg), skinfolds (sub-scapular and triceps, mm) and waist circumference (cm). These variables were measured by trained postgraduate students in Human Movement Sciences. All measurements were done in accordance with the protocol of the International Society for the Advancement of Kinanthropometry [26]. Height was measured barefoot to the nearest 0.1 cm by means of a Harpenden portable stadiometer (Holtain Limited, U.K.). Body mass was measured with an electronic scale (BF 511, Omron) to the nearest 0.1 kg. From the height and body mass measurements the body mass index (BMI) was calculated for each participant as body mass (kg) divided by height (m) squared. The triceps and sub-scapular skinfolds were measured with a pair of Harpenden skinfold callipers, and each skinfold was measured twice to obtain an average of the two measurements [27]. Cut-off values for the sum of the triceps and subscapular skinfolds for 6 to 7-year-old overweight boys are 16-17, for overweight girls 19-22, for 6 to 7-year-old obese boys 20-24 and for obese girls 27-28. Cut-off values for 9 to 10-year-old overweight boys are 23-24, for overweight girls 29-32, for 9 to 10-year-old obese boys 34-33 and for obese girls 41-43 [27]. These skinfold measurements were selected because they show the highest relation with the overall percentage of fat in children's bodies [28]. Intra-rater reliability was determined by intra-class correlation coefficients, which showed good reliability for the subscapular (.994) and the triceps (.995) skinfolds.
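For reference, BMI as used here is body mass in kilograms divided by the square of height in metres. The helper below computes it from the centimetre height measurements described above and classifies the result against externally supplied age- and sex-specific cut-offs (such as the Cole et al. cut-offs discussed in the next paragraph); the numeric cut-offs in the example are placeholders, not the published values.

```python
def bmi(mass_kg: float, height_cm: float) -> float:
    """Body mass index from mass in kilograms and height in centimetres."""
    height_m = height_cm / 100.0
    return mass_kg / (height_m ** 2)

def classify(bmi_value: float, overweight_cutoff: float, obese_cutoff: float) -> str:
    """Classify a BMI value against age- and sex-specific cut-offs supplied by the caller.
    The cut-offs must come from a published reference (e.g. Cole et al., 2000);
    the values used in the example call below are placeholders only."""
    if bmi_value >= obese_cutoff:
        return "obese"
    if bmi_value >= overweight_cutoff:
        return "overweight"
    return "normal/underweight"

# Hypothetical 7-year-old: 28 kg, 125 cm, classified with placeholder cut-offs
value = bmi(28.0, 125.0)
print(round(value, 1), classify(value, overweight_cutoff=17.9, obese_cutoff=20.5))
```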
The prevalence of overweight (OW) and obesity (OB) was determined using the international age-adjusted cut-off points provided by Cole et al. (2000) [29]. Children are considered at risk of overweight or obesity if their BMI falls between the 85th and 95th percentile for age and gender, respectively. The international cut-off values are calculated by sex for ages 2 to 18 years and correspond to a body mass index of 25 kg/m2 (overweight) and 30 kg/m2 (obesity) at the age of 18. These cut-off values for girls are: 6 years: 17
Ethical clearance and administrative clearance
Ethical approval for the study was obtained from the Ethics Committee of the NWU (No. 00070 09 A1). Permission was also obtained from the Department of Basic Education of the NWP and the principals from the selected schools. Informed consent had to be provided for each child by their parents or legal guardian, before they were allowed to participate in the study.
Statistical analysis
Data were analysed descriptively by means and percentages. Linear mixed models in SPSS (version 22) were used, with school as subject and an unstructured covariance matrix, to determine the main effects of race, gender and SES as well as all interaction effects, using the 2010 baseline measurements as covariates. Frequency tables were used to determine the prevalence of overweight and obesity by group, sex, race and school quintile. The significance of differences in BMI categories and of shifts over time in this relationship (p < 0.05) was determined by 2-way summary tables. Pearson chi-square analysis determined the statistical significance of differences in BMI status and relationships over time, while Cramér's V was used to establish effect sizes, with the following values used as an estimation of practical significance: Cramér's V = 0.1 (small), 0.3 (moderate), 0.5 (large) [25].
Results
Table 1 provides the statistics of obesity prevalence in 2010 and 2013 for the group as a whole and according to sex, white and black children, and the different school quintiles. During follow-up in 2013, the group had a combined overweight (9.4%) and obesity (7.3%) prevalence of 16.7%, compared to 12.7% in 2010 (8.2% OW; 4.5% OB). Prevalence during follow-up was 14.9% and 18.5% for boys and girls respectively, compared to 10.6% and 14.7% at baseline. White children displayed a prevalence of 27.3% compared to 13.3% in black children during follow-up, up from 20.3% in white children and 10.3% in black children at baseline. The change in prevalence was much lower in Q1-Q3 schools, which enrolled only black learners, than in Q4-Q5 schools, which enrolled black and white learners (Q1: 9.2%-10.0%; Q2: 8.3%-8.5%; Q3: 3.9%-7.7%; Q4: 19.4%-25.9%; Q5: 23.3%-31.7%).
The combined OW/OB prevalence of 16.7% during follow-up represented an increase of 4.0% over the 3-year period, from 12.7% at baseline. Overweight increased by 1.2% from 8.2% to 9.4%, whereas obesity increased more, by 2.8%, from 4.5% to 7.3%. Obesity prevalence also increased more in boys (3.2%), from 3.9% to 7.1%, than in girls (2.4%), who changed from 5.2% to 7.5%. Obesity increased significantly in both black (1.96%) and white participants (4.2%). The highest rates of increase in obesity were found in Q3 (1.5% to 3.9%), Q4 (8.3% to 14.8%) and Q5 (5.8% to 10.0%) schools. Table 2 displays percentage shifts in BMI categories over the 3-year period. Overall, 87.8% of the group stayed within the BMI classification in which they were placed at baseline. A significant upward transition of 9.2% (Cramér's V = .530) was, however, found in the group, where 53 participants moved from a normal weight to an overweight or obese classification based on BMI. A reverse trend was also observed, but in a much lower percentage of the group (n = 17, 3.0%). Similar, but larger upward compared to downward, shifts were observed in both sexes (Table 2), while more girls showed decreasing tendencies. In white children, a significant shift (Cramér's V = .531) towards overweight and obesity took place in 14.7% (n = 21) of the group, compared to 4.9% (n = 7) shifting from overweight and obese to normal weight. Among black children, a significant upward shift towards an unhealthy BMI occurred in 7.4% of children (n = 30), compared to a reverse tendency of 2.5% in 10 children (Cramér's V = .531). This indicates that for every 3 children that moved into an overweight or obese category, one transferred back to a healthier classification. The difference between increasing and decreasing tendencies in the different school quintiles was much bigger in Q3-Q5 schools (5.3%, 11.1% and 12.5% respectively) than in Q1 and Q2 schools (0.8% and 1.0%), where the percentages of participants showing upward or downward transitions were more or less the same. A clear upward transition to more overweight or obese categories is, however, evident from Table 2, showing that children attending school quintiles associated with higher SES are more prone to become overweight or obese than children attending school quintiles associated with lower SES. Tables 3 and 4 provide descriptive data of the group (N = 574) during the baseline and follow-up measurements, according to gender, race and school quintile, by means of mixed models. Descriptive values of height, body mass, BMI and fat percentage are displayed, as well as the significance of main effects for follow-up measurements. Linear mixed models on baseline measurements (2010) indicated a quintile*sex interaction for height, where boys in Q4 schools were practically significantly taller than girls (d = 0.82). A race*quintile interaction for BMI also indicated that black children in Q4 schools had significantly higher BMI values than white children (d = 0.5). For baseline fat percentage, both quintile and race were statistically significant, while for baseline mass only quintile had a significant effect. In the follow-up, baseline measurements of 2010 (height, mass, BMI and fat percentage respectively) were also included as covariates to determine the effects of race, sex and quintile in the linear model. Interaction effects that showed significance in these analyses were quintile*race for BMI and mass*quintile.
The main effects of SES (school types expressed as quintiles) were significant for all variables, while sex and race were only significant with regard to fat percentage. Quintile 1 to 3 schools included only black children, but the interaction between race and quintile was evident in Q4 schools, where the BMI and mass of black children were significantly higher in 2013 compared to those of white children (BMI 19.8 vs 17.6, d = 0.61, and mass 38.3 kg vs 33.9 kg, d = 0.53).
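The effect sizes quoted for the BMI-category shift analysis above are Cramér's V values, V = sqrt(chi2 / (n * (min(r, c) - 1))) for an r x c contingency table. A small sketch of how such a value is obtained with scipy, using a hypothetical shift table rather than the actual Table 2 counts:

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table: np.ndarray) -> float:
    """Cramér's V effect size for an r x c contingency table of counts."""
    chi2, p, dof, expected = chi2_contingency(table)
    n = table.sum()
    r, c = table.shape
    return np.sqrt(chi2 / (n * (min(r, c) - 1)))

# Hypothetical counts: rows = baseline BMI category, columns = follow-up category
table = np.array([
    [450, 40, 13],   # normal at baseline -> normal / overweight / obese at follow-up
    [12, 30, 10],    # overweight at baseline
    [5, 4, 10],      # obese at baseline
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.4f}, V = {cramers_v(table):.2f}")
```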
Discussion
The aim of this study was to determine the rate of increase in the prevalence of overweight and obesity over a 3-year period in pre-pubertal South African children. This is the first study in SA, a developing country, to provide prevalence statistics of childhood obesity obtained by follow-up measurements in pre-pubertal children aged 6 to 9 years. 12.7% of the group were OW or OB in 2010, compared to 16.7% of the same group in 2013. The rate of increase in the group was 4.0% over the 3-year period. Increases in the group were similar in boys and girls, while white children had much higher increases than black children as a group, and higher SES (which included white and black children) also contributed to higher prevalences and rates of increase. A different picture of the extent of the problem emerged when interactions of race and SES were considered than when the participants were analysed as a group. White children had a prevalence increase of 7.0% (20.3% to 27.3%), which was double that of black children, where the increase was 3.0% (10.3% to 13.3%). However, in Q4 and Q5 schools black children showed higher increases in combined OW/OB than white children (8.2% vs 5.2% in Q4, 9.4% vs 7.7% in Q5, Table 1), which was also much higher than the increase for black children when analysed as a group (3%) or for Q1-Q3 schools, in which only black children were enrolled. Overall, Q4 and Q5 schools, which represent children from more affluent families and environments, showed much bigger increases in prevalence than Q1 to Q3 schools, which can be ascribed to improved living conditions. The main effects of SES in 2013 were significant for all variables, while sex and race were only significant for fat percentage. The quintile*race interaction effect for BMI showed significantly higher BMI values among black children compared to white children in Q4 schools. The higher BMI of black children compared to white children in higher SES can in part be ascribed to the economic transition of black families in South Africa, although a 17-year longitudinal study of racial differences in 5-14-year-old American children [30] also confirmed contrasting patterns of increase in BMI between white and black children. Annual increases in BMI in that study varied from 0.60 kg/m2 per year in white girls to 0.78 kg/m2 per year in black girls, and yearly increases in BMI before age 18 were 25% to 55% higher in black compared to white girls. Our results, however, confirm the conclusions of Rossouw et al. [8], indicating that the prevalence of childhood obesity in SA does not give a true reflection of the problem, as overweight and obesity differ markedly between age groups, boys and girls, ethnic groups and geographical areas. Evidence of an overweight/obesity transition in school-aged children in Sub-Saharan Africa is substantiated by research [4], while the fastest growth rates of obesity among pre-school children are also found in Africa, where the number of overweight and obese children in 2010 was more than double that found in 1990 [6].
In order to offer a perspective on the rate of increase of 4.0% in our group over a 3-year period, we compared this percentage increase with estimated increases reported by other studies. There is, however, a lack of longitudinal studies to enable direct comparisons. The studies used are thus not necessarily based on the same age groups or time periods, and findings obtained by trend analysis were also incorporated. Researchers studied 450 obesity surveys from 144 countries to quantify the worldwide prevalence of OW and OB among pre-school children in Africa [6], and reported the estimated prevalence of childhood OW and OB in Africa in 2010 as 8.5%, which they expect to reach 12.7% in 2020, indicating a predicted increase of 4.2% over a ten-year period. A longitudinal study of 306 black children from low-income families in Jamaica reported that overweight increased by 6% (3.5% to 9.5%) from 7-8 years to 11-12 years, while tracking of BMI was also high during follow-up [31]. An increase is also reported in OW and OB in first-grade children in Chile [32], which is also a developing country, from 6.5% and 7.8% in boys and girls respectively in 1987 to 17% and 18.6% in 2000, an increase of 12% over a 13-year period. Prevalence for developing countries is reported to have changed from 1980 to 2013 in child and adolescent boys and girls, from 8.1% to 12.9% in boys and from 8.4% to 13.4% in girls, indicating an estimated increase of 5% over this period [1].
Although not directly comparable to other studies, the rate of increase of 4.0% that was found among our 6-9-year-old group of SA children thus displayed a more rapid increase over a shorter period of time, providing evidence of an expanding epidemic in the pre-pubertal children in our study. In Japan [11], the largest odds ratio was also observed in the 6-8-year-old children, in whom the prevalence of obesity more than doubled from 4.2% (1976-1980) to 9.7% (1996-2000). The increase in our group was, however, influenced by SES and the interaction between SES and race, indicating that white children as a group, and children from higher SES (which included white and black children), showed the most rapid increases. Statistics reported on this prevalence among white children in other SA studies [2,7,33] are also consistent with the higher prevalence found among white children. Differences reported in another SA study between ethnic groups also indicated that the results may be confounded by differences in SES [7]. However, the higher prevalence among white children compared to black children as a group still differs from the statistics reported in developed countries such as the USA, Canada and Norway [34].
The rate of increase in overweight showed a more modest upward trend than obesity prevalence, which increased nearly twofold during the 3-year period. A 20-year trend analysis in Slovenia also reported a higher increase in obesity compared to overweight [12]. A trend analysis of 1-9-year-old South African children furthermore found that overweight decreased over time, while obesity increased [9]. The prevalence of obesity and severe obesity was studied over a period of 14 years in the USA in children aged 2 to 19 years [15]. From 2011 to 2012, 17.3% of the children were obese, 5.9% met criteria for class 2 obesity (BMI ≥ 120% of the 95th percentile) and 2.1% met criteria for class 3 obesity (BMI ≥ 140% of the 95th percentile). The researchers concluded that these rates were not significantly different from 2009 to 2010, but that more severe forms of obesity had increased over the previous 14 years. Although the percentages of class 2 and 3 obesity were not determined in our study, obesity was the most severe weight category assessed, and it increased more during the 3-year period than OW, which seems to follow the same pattern as described in the USA, a developed country.
The prevalence of obesity increased more in boys than in girls over the 3-year period, although girls still had a higher prevalence of obesity during follow-up. In addition, more girls moved back to a healthier BMI compared to boys over the 3-year period. This trend of a higher increase in boys is consistent with other studies worldwide. In Canadian children, an increase is reported for OW (11% to 33% in boys, 13% to 27% in girls) and OB (2% to 10% in boys and 2% to 9% in girls) between 1981 and 1996 [16]. In 6-14-year-old Japanese children, age-adjusted BMI increased by 0.32 kg/m2 per 10 years in boys and 0.24 kg/m2 per 10 years in girls over 25 years, as derived from a national nutrition survey [11]. Time trends in the UK (1995-2010), based on the Health Survey of England (HSE), indicate an increase in prevalence among boys aged 2-15 years from 11% to 17%, while prevalence in girls increased from 12% to 15%. Trend analysis in the USA [14] of two large representative federal health surveys and data systems shows a four-fold increase in obesity prevalence among 6-17-year-old male children (5.5% to 21.6%) and a three-fold increase among female children (5.8% to 17.7%) between 1976 and 2008. The average annual rate of increase in obesity prevalence in that study was furthermore 4.5% for male children and 3.8% for females.
Obesity prevalence increased significantly in both white (4.2%) and black children (2.0%) over the 3-year period, although white children displayed a much bigger increase and had almost double the overall prevalence (27.3%) of black children (13.3%) during follow-up. The combined OW/OB prevalence increase in white children was also bigger (20.3% to 27.3%) over the 3-year period than among black children (10.3% to 13.3%), although not in Q4 and Q5 schools, where the increases among black children were 8.2% in Q4 (26.5%-34.7%) and 10.3% in Q5 (20.7%-31.0%), compared with 5.6% (13.3%-18.9%) in Q4 and 7.7% (24.5%-32.2%) in Q5 among white children (Table 1). It further seems that South African children from higher SES have the highest prevalence and rates of increase in OW and OB. The white children were all enrolled in the Q4 and Q5 schools that were part of the study; these school quintiles represent more affluent schools, families and environments and also showed the highest combined prevalence. This differs from the findings of studies conducted in other countries such as the USA and UK, which indicate the highest rates of obesity and severe obesity among children from minority groups or those underserved by the health care system [34,35]. Those studies were, however, conducted in developed countries, while SA is considered a developing country in transition with high socio-economic disparities. It can thus be deduced that higher SES is currently associated with higher increases in overall prevalence in predominantly pre-pubertal white children but also among black children in economic transition (Q4: 19.4%-25.9%; Q5: 23.3%-31.7%), as the increase in Q1-Q3 schools (Q1: 9.1%-10%; Q2: 8.3%-8.5%; Q3: 3.9%-7.7%), based mainly on statistics of black children, was much smaller over the same period. Q1-Q3 schools enrol children from areas with high levels of food insecurity [24,36,37], thus levels of underweight might be high in these schools. Black children in Q4 schools had significantly higher BMI and mass during follow-up compared to white children in these schools, and their combined OW/OB prevalence increases were also higher. From this, the conclusion can be drawn that this is quite probably the result of the westernization and urbanization of more affluent black families. The high prevalence found among white children might also still be a result of the post-apartheid era, which exposed these children to circumstances equal to those in developed countries, such as sedentary lifestyles. Although many interrelated behaviour patterns can be contributing factors, decreased physical activity levels among black girls, and higher food security, which can contribute to higher availability and intake of processed foods, can be offered as reasons for these major differences between black children in low and high SES schools. Cultural beliefs regarding ideal body mass might furthermore be a possible contributing factor to the black-white differences that were found [36]. Spending money at school tuck shops on unhealthy foods might also play a role in the increased prevalence found in children attending Q4 and Q5 schools [38]. The combined overweight and obesity prevalence established among 6-9-year-old children by the first South African National Health and Nutrition Examination Survey (SANHANES-1, 2013), compared with the NFCS-2005, was 11.8% (OW 8.4%, OB 3.4%, mean BMI 16.2) versus 10.3% (OW 7.8%, OB 2.24%, mean BMI 16.0) respectively [39].
This survey, however, has shortcomings in the sense that it essentially refers to African and coloured children residing in SA. When compared to our prevalence of 10.3% established in 2010 for 6-year-old black children, and 13.3% for 9-year-old black children in 2013, a similar prevalence is confirmed in this ethnic group (11.8%). A longitudinal study of 306 black children from low-income families in Jamaica also reported low prevalences, with an increase from 3.5% at baseline at 7-8 years to 9.5% at 11-12 years [31].
Lastly, our results established that although a considerable percentage of the group transitioned over the 3-year period to an unhealthier BMI classification, a small percentage also moved to a healthier BMI. Boys and girls showed similar transition tendencies, while white children and children in schools representing higher SES (which included black children in Q4 and Q5 schools) showed larger shifts towards more unhealthy BMIs in comparison to children in lower SES school types. Decreasing tendencies were also observed in BMI levels, although to a much smaller extent than the increasing tendencies, resulting in a significant increase in combined OW/OB between 6 and 9 years. Although our changes in BMI were observed over a shorter follow-up period of 3 years, they agree with a longitudinal study on tracking of BMI in Chinese children aged between 6 and 13 years at baseline, which reported that over a 6-year period (1991-1997) BMI remained unchanged in 40% of the group, while 30% moved to a lower or higher quintile, and that overweight children were 2.8 times as likely as other children to become overweight adolescents [40]. That study, however, included a high percentage of underweight children, and the researchers found that a smaller proportion of Chinese children in a rapidly changing society continue to be overweight than is reported in higher-income countries.
Our study had limitations that need to be taken into consideration. It was not a nationally representative study, but was based on regional data from only 1 of the 9 provinces in South Africa. Research incorporating prevalence estimates from all the provinces of SA is thus recommended. The strong points of the study are, however, the stratified and longitudinal design, and the fact that the findings are based on actual measurements and not self-reported height and weight data. This is also an ongoing study, with follow-up measurements due in 2016, which will provide an even more accurate picture of this growing problem among children over a period of 6 years as they move from early childhood into adolescence.
Conclusion
These results confirm that pre-pubertal children living in SA, a developing country, are not excluded from the rising epidemic of childhood obesity, with clear tracking tendencies. These young children are especially vulnerable to the side-effects associated with obesity, such as adverse health risks and developmental shortcomings, because of their young age and consequently earlier exposure to unhealthy lifestyles and chronic conditions [20,36]. White learners attending schools in higher socio-economic areas (Quintiles 4 and 5) showed double the increase in prevalence compared with black children in lower SES, although OW and especially OB were also prevalent among black learners, who mostly attended schools representing lower socio-economic circumstances. Black children who attended school types associated with higher SES showed high OW/OB prevalence and clear signs of economic transition, which impacted negatively on their body composition. This trend among black families with increasing economic opportunities is a definite concern that might require a shift in how public health nutrition and medical resources are allocated in the future. The results of this study can thus help health professionals, policy makers and experts in the field of child development to plan future preventative strategies for these children, who are undergoing vast changes in diet and physical activity behaviour; such strategies may include clinical management or public health intervention programmes aimed at altering body composition. Awareness and educational campaigns that raise concern among parents regarding the future health problems that children might encounter due to unhealthy weight status at a young age are also important, as are culturally appropriate campaigns and intervention strategies that would be effective for each group. Future research in this area, including national epidemiological and tracking studies and intervention studies, is recommended to obtain a better understanding of this rising global health challenge among transitional populations. Children in developing countries are confronted with additional challenges to their wellbeing, and are clearly not excluded from the dangers of lifestyle changes.
|
v3-fos-license
|
2018-04-03T00:00:36.906Z
|
2013-06-25T00:00:00.000
|
3205357
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "BRONZE",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/bies.201300037",
"pdf_hash": "347d5ee40fb39863af3ea98108ad707e50deff5b",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44356",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "347d5ee40fb39863af3ea98108ad707e50deff5b",
"year": 2013
}
|
pes2o/s2orc
|
Genome reduction as the dominant mode of evolution
A common belief is that evolution generally proceeds towards greater complexity at both the organismal and the genomic level, numerous examples of reductive evolution of parasites and symbionts notwithstanding. However, recent evolutionary reconstructions challenge this notion. Two notable examples are the reconstruction of the complex archaeal ancestor and the intron-rich ancestor of eukaryotes. In both cases, evolution in most of the lineages was apparently dominated by extensive loss of genes and introns, respectively. These and many other cases of reductive evolution are consistent with a general model composed of two distinct evolutionary phases: the short, explosive, innovation phase that leads to an abrupt increase in genome complexity, followed by a much longer reductive phase, which encompasses either a neutral ratchet of genetic material loss or adaptive genome streamlining. Quantitatively, the evolution of genomes appears to be dominated by reduction and simplification, punctuated by episodes of complexification.
Introduction: Complexity can either increase or decrease during the evolution of various life forms
The textbook depiction of the evolution of life on earth is that of an ascent toward a steadily increasing organismal complexity: from primitive protocells to prokaryotic cells to the eukaryotic cell to multicellular organisms to animals to humans, the crowning achievement of the entire history of life. On general grounds, this "progressivist" view of evolution has been repeatedly challenged, in particular in the eloquent writings of Gould [1]. Gould argued that the average complexity of life forms has barely increased over the course of the history of life, even as the upper bound of complexity was being pushed upwards, perhaps for purely stochastic reasons, under a "drunkard's walk" model of evolution.
It has been well known for decades that the evolution of numerous parasitic and symbiotic organisms entails simplification rather than complexification. In particular, bacteria that evolve from free-living forms to obligate intracellular parasites can lose up to 95% of their gene repertoires without compromising the ancestral set of highly conserved genes involved in core cellular functions [2,3]. The mitochondria, the ubiquitous energy-transforming organelles of eukaryotes, and the chloroplasts, the organelles responsible for the eukaryotic photosynthesis, are the ultimate realizations of bacterial reductive evolution [4,5]. However, such reductive evolution, its paramount importance for eukaryotes notwithstanding, was considered to represent a highly specialized trend in the history of life.
From a more general standpoint, there are effectively irrefutable arguments for a genuine increase in complexity during evolution. Indeed, the successive emergence of higher grades of complexity throughout the history of life is impossible to ignore. Thus, unicellular eukaryotes that, regardless of the exact dating, evolved more than a billion years after the prokaryotes, obviously attained a new level of complexity, and multicellular eukaryotic forms, appearing even later, by far exceeded the complexity of the unicellular ones [5][6][7][8]. Arguably, the most compelling is the argument from the origin of cellular life itself: before the first cells
Organismal complexity is hard to define but genomic complexity is much more tractable
Complexity is one of those all-important characteristics of any system that seems to be easily grasped intuitively ("we know it when we see it") but is notoriously difficult to capture in a single, quantitative and constructive definition [9,10]. The approach that comes the closest to meeting these criteria might involve the quantity known as Kolmogorov complexity (also known as algorithmic entropy), which is defined as the length of the shortest possible description of a system (often represented as a string of symbols) [11]. However, Kolmogorov complexity is generally incomputable, and the concept is particularly difficult to apply to biological systems because of the non-trivial connection between the "description" (the genome) and the system itself (organismal phenotype). A useful practical approach to quantify the complexity of a system is to count the number of distinct parts of which it consists, and this is how organismal complexity is usually addressed by those that attempt to analyze it in a (semi) quantitative manner [12,13]. Recently, McShea and Brandon [13] formulated the "First Law of Biology", or the "Zero Force Law of Evolution", according to which unconstrained evolution leads to a monotonic increase in the average organismal complexity, due purely to the increase of entropy with time that is mandated by the second law of thermodynamics for any closed system. However, the utility of equating complexity with entropy is dubious at best, as becomes particularly clear when one attempts to define genomic complexity. Indeed, using sequence entropy (Shannon information) as a measure of genomic complexity is obviously disingenuous given that, under this approach, the most complex sequence is a truly random one that, almost by definition, is devoid of any biological information. Hence, attempts have been made to derive a measure of biological complexity of a genome by equating it with the number of sites that are subject to evolutionary constraints, i.e. evolve under purifying selection [8,14,15]. Although this definition of genomic complexity certainly is over-simplified, it shows intuitively reasonable trends, i.e. a general tendency to increase with organismal complexity [8]. Moreover, introducing the additional definition of biological information density, that is per-site complexity, one can, at least in principle, describe distinct trends in genome evolution such as a trend toward high information density that is common in prokaryotes and the contrasting trend toward high complexity at low density that is typical in multicellular organisms [8]. At a coarse-grain level, biological complexity of a genome can be redefined as the number of genes that are conserved at a defined evolutionary distance.
Unlike the number of sites that are subject to selection, the conserved genes are rather easy to count, so this quantity became the basis for many reconstructions of genome evolution [16,17].
The relationship between genomic complexity and complexity at the various levels of the phenotype, from molecular to organismal, is far from straightforward, as already became clear in the pre-genomic era [18]. Comparative genomics reinforced this point in the most convincing manner by demonstrating the lack of a simple link between genomic and organismal complexities [19]. Suffice it to note that the largest bacterial genomes encompass almost as many genes as some "obviously" complex animals, such as flies, and more than many fungi. One of the implications of these comparisons is that there could be other measures of genomic complexity that might complement the number of conserved genes and perhaps provide a better proxy for organismal complexity. For example, in eukaryotes, a candidate for such a quantity could be the intron density, which reflects the potential for alternative splicing [20].
Genomic complexity is far easier to quantify than phenotypic complexity (even if the latter is easier to recognize intuitively). Indeed, the remarkable progress of genome sequencing, combined with the development of computational methods for advanced comparative genomics, provides for increasingly reliable reconstruction of ancestral genomes, which transforms the study of the evolution of complexity from a speculative exercise into an evidence-based research direction. Here, we examine the results of such reconstructions and make an argument that reductive evolution resulting in genome simplification is the quantitatively dominant mode of evolution.
Genome reduction pervades evolution
A reconstruction of genome evolution requires that the genes from the analyzed set of genomes are clustered into orthologous sets that are then used to extract patterns of gene presence-absence in the analyzed species. The patterns are superimposed on the evolutionary tree of these species, and the gene compositions of the ancestral forms, as well as the gene losses and gains along the tree branches, are reconstructed using either maximum parsimony (MP) or maximum likelihood (ML) methods (see Box 1) [21][22][23][24]. The ML methods yield much more robust reconstructions than the MP methods but also require more data. Similar methods can be applied to reconstruct the evolution of other features for which orthologous relationships can be established, e.g. intron positions in eukaryotic genes.
Certainly, we are far from being able to obtain comprehensive evolutionary reconstructions for all or even most life forms. Nevertheless, reconstructed evolutionary scenarios are accumulating, some of them covering wide phylogenetic spans, and many of these reconstructions point to genome reduction as a major evolutionary trend ( Table 1). The most dramatic but also the most obvious are the evolutionary scenarios for intracellular parasitic and symbiotic bacteria that have evolved from numerous groups of free-living ancestors. A typical example is the reductive evolution of the species of the intracellular parasites Rickettsia from the ancestral "Mother of Rickettsia" [25,26]. Reductive evolution of endosymbionts can yield bacteria with tiny genomes consisting of 150-200 genes and lacking some essential genes such as those encoding several aminoacyl-tRNA synthetases, which is suggestive of an ongoing transition to an organelle state [3]. Indeed, the ultimate cases of reductive evolution involve the mitochondria and chloroplasts that have lost nearly all ancestral genes (e.g. 13 out of the several thousand genes in the ancestral alpha-proteobacterial genome are retained in animal mitochondria) or literally all genes in the case of hydrogenosomes and mitosomes [27]. Certainly, in this case, the evolutionary scenario appears as ultimate reduction "from the point of view" of the symbiont; the complexity of the emerging chimeric organism drastically increases, both at the genomic and at the phenotypic level, and it has been argued that such complexification would not have been attainable if not for the endosymbiosis [5,28]. Furthermore, hundreds of genes, in the case of the mitochondrion, and even thousands in the case of the chloroplast, were not lost but rather transferred from the endosymbiont genome to the nuclear genome of the host [29][30][31].
Deep genome reduction, with the smallest sequenced genome of only 2.9 Mb, is also observed in Microsporidia, the eukaryotic intracellular parasites that appear to be highly derived fungi [32]. The most extreme genome reduction among eukaryotes is observed in nucleomorphs which are remnants of algal endosymbionts present in cryptophytes and chlorarachniophytes and retain only a few hundred genes [33].
Beyond parasites and symbionts, reductive evolution was observed in several groups of organisms that evolved a commensal life style. One of the best-characterized cases involves the Lactobacillales, a group of Gram-positive bacteria that is extremely common in a variety of animal- and plant-associated habitats. A maximum parsimony reconstruction revealed substantial gene loss, from approximately 3,000 genes in the common ancestor of Bacilli to approximately 1,300-1,800 genes in various Lactobacilli species [34,35]. The genes apparently have been lost in a stepwise manner, with substantial loss associated with each internal branch of the tree and most but not all of the individual species. These losses were only to a small extent offset by inferred gain of new genes.
Certainly, the evolution of the genomes of parasites, symbionts and commensals is not a one-way path of reduction. On the contrary, the reduction ratchet is constrained by the advantages of retaining certain metabolic pathways that complement the host metabolism [36,37]. Notably, mathematical modeling of the evolution of the insect endosymbiont Buchnera aphidicola showed that metabolic requirements could determine not only the end point of genomic reduction but to some extent also the order of the gene deletion [38]. Moreover, the reductive trend is countered by proliferation of genes involved in parasite-host interaction such as, for example, ankyrin repeat proteins that act as secreted virulence factors [39,40]. Quantitatively, however, in most parasites and symbionts, these processes make a relatively minor contribution compared to the massive genome reduction.
An evolutionary reconstruction for Cyanobacteria, an expansive bacterial phylum that consists mostly of free-living forms and includes some of the most complex prokaryotes, produced mixed results, with several lineages characterized by genome expansion [41]. Nevertheless, even in these organisms, evolution of one of the two major branches was dominated by extensive gene loss, and several lineages were mostly losing genes in the other major branch as well.
Conceivably, the most compelling evidence of the dominance of genome reduction and simplification was obtained through the reconstruction of the genomic evolution of archaea, which are almost exclusively free-living organisms [17,42]. The latest ML reconstruction, based on a comparative analysis of 120 archaeal genomes, traced between 1,400 and 1,800 gene families to the last common ancestor of the extant archaea [42]. Given the fractions of conserved and lineage-specific genes in modern archaeal genomes, this translates into approximately 2,500 genes in the ancestral genome, which is a larger genome than most of the extant archaea possess (Fig. 1). The reconstructed pattern of gene loss and gain in archaea is non-trivial: there seems to have been some net gene gain at the base of each of the major archaeal branches that was almost invariably followed by substantial gene loss; as discussed below, this could be a general pattern of genome evolution. The notable exceptions are Halobacteria and Methanosarcinales, the two archaeal lineages in which evolution was strongly impacted by horizontal gene transfer from bacteria [43,44] that offset the gene loss and led to genome expansion (Fig. 1). Although less reliable than the genome-wide ML reconstructions, attempts at reconstructing the ancestral state of specific functional systems seem to imply an even more striking complexity of archaeal ancestors. For example, comparative analysis of the cell division machineries indicates that the common ancestor of the extant archaea might have possessed all three varieties of the division systems found in modern forms [45].
Box 1. Reconstruction of ancestral genomes: Maximum parsimony and maximum likelihood approaches
Dollo Parsimony. Only one gain per character is allowed; the pattern of losses that is sufficient to produce the observed presence-absence pattern with the minimum number of losses is selected [86,87].
Weighted Parsimony. The relative gain-to-loss weight is set prior to reconstruction; the pattern of losses and gains with the minimum weighted score that is sufficient to produce the observed presence-absence pattern is selected [88][89][90].
Maximum Likelihood. Gain and loss probabilities per unit of time (possibly different for different tree branches) are the parameters; the presence-absence pattern and tree branch lengths are observed; the set of parameters and the gain-loss pattern maximizing the likelihood of the observed presence-absence pattern is selected [21][22][23][24].
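For readers who prefer to see the logic of Box 1 spelled out, below is a minimal Python sketch of Dollo parsimony for a single gene family. The toy tree, the presence-absence pattern, and the function name are illustrative assumptions and are not taken from any of the reconstructions cited above; real analyses operate on thousands of gene families and much larger trees.

```python
# Minimal Dollo parsimony sketch for one gene family (illustrative; the toy tree
# and presence/absence data are assumptions, not data from the cited studies).

def dollo_events(tree, present_leaves):
    """tree: dict node -> list of children (leaves have empty lists).
    present_leaves: set of leaf names where the gene is observed.
    Returns (gain_node, loss_nodes) under Dollo parsimony: a single gain at the
    last common ancestor of the present leaves, plus the minimum set of losses."""
    def has_present(node):                      # does any descendant leaf carry the gene?
        if not tree[node]:
            return node in present_leaves
        return any(has_present(c) for c in tree[node])

    def leaves_under(node):                     # all leaf names below a node
        if not tree[node]:
            return {node}
        return set().union(*(leaves_under(c) for c in tree[node]))

    # Place the single gain at the deepest node whose subtree still holds all present leaves.
    root = next(n for n in tree if all(n not in cs for cs in tree.values()))
    gain, changed = root, True
    while changed:
        changed = False
        for c in tree[gain]:
            if present_leaves <= leaves_under(c):
                gain, changed = c, True
                break

    # Each maximal gene-free subtree inside the gain subtree costs exactly one loss.
    losses = []
    def collect_losses(node):
        for c in tree[node]:
            if not has_present(c):
                losses.append(c)
            else:
                collect_losses(c)
    collect_losses(gain)
    return gain, losses

# Toy example: ((A,B)N1,(C,D)N2)Root; the gene is seen only in leaves A and C.
tree = {"Root": ["N1", "N2"], "N1": ["A", "B"], "N2": ["C", "D"],
        "A": [], "B": [], "C": [], "D": []}
print(dollo_events(tree, {"A", "C"}))   # gain at Root, losses on branches to B and D
```

Weighted parsimony and maximum likelihood replace the "count the losses" step with a weighted score or a probabilistic model of gain and loss along branches, respectively, but the input, a presence-absence pattern mapped onto a species tree, is the same.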
Reconstructions of the evolution of eukaryotic genomes yielded increasingly gene-rich ancestors as the number of diverse genomes available for comparative analysis grew. At least until recently, the available collection of eukaryotic genomes remained insufficient for reliable ML reconstruction. However, maximum parsimony reconstruction traced between 4,000 and 5,000 genes to the last eukaryotic common ancestor (LECA) [46,47]. An even simpler analysis identified over 4,000 genes that are shared between Naegleria gruberi, the first free-living excavate (one of the supergroups of unicellular eukaryotes, which also includes parasitic forms such as Trichomonas and Giardia) for which the genome was sequenced, and at least one other supergroup of eukaryotes, suggesting that these genes were inherited from the LECA [48,49]. Such estimates are highly conservative as they disregard parallel gene loss in different major lineages, an important phenomenon in the evolution of eukaryotes. Indeed, even animals and plants, the eukaryotic kingdoms that seem to be the least prone to gene loss, have lost about 20% of the putative ancestral genes identified in the unicellular Naegleria. Collectively, these findings imply that the genome of the LECA was at least as complex as the genomes of typical extant free-living unicellular eukaryotes [50]. Even more striking conclusions were reached by the reconstruction of the evolution of the eukaryotic protein domain repertoire that involved a comparison of 114 genomes [51]. The results of this reconstruction indicate that most of the major eukaryotic lineages have experienced a net loss of domains that have been traced to the LECA. A substantial increase in protein complexity appears to be associated only with the onset of the evolution of the two kingdoms of multicellular eukaryotic organisms, plants and animals.
Remarkably congruent results have been obtained in reconstructions of the gain and loss of introns in eukaryotic genes. In this case, the availability of thousands of intron positions provides for the use of powerful ML methods. The reconstructions consistently indicate that ancestral eukaryotes, including the LECA and the founders of each supergroup, were intron-rich forms, with intron densities higher than those in the genes of most extant eukaryotes and probably only slightly lower than those in the modern organisms with the most complex gene structures, such as mammals [20,52,53]. Remarkably, intron-rich ancestors were reconstructed even for those major groups of eukaryotes that currently consist entirely of intron-poor forms, such as the alveolates, which apparently evolved via differential, lineage-specific, extensive intron loss [54]. All in all, intron loss clearly dominated the evolution of eukaryotic genes, with episodes of substantial gain linked only with the emergence of some major groups, especially animals [20,53], in full agreement with the results of the evolutionary reconstruction for the eukaryotic domain repertoire [51]. As previously pointed out by Brinkmann and Philippe [55], simplification could be "an equal partner to complexification" in the evolution of eukaryotes. The latest reconstruction results suggest that simplification could be even "more equal" than complexification.
Both neutral and adaptive routes lead to genome reduction
Genome reduction in different life forms seems to have occurred via two distinct routes: (i) the neutral gene loss ratchet and (ii) adaptive genome streamlining [8,56]. Typically, the reductive evolution of intracellular pathogens does not seem to be adaptive inasmuch as the gene loss does not appear to occur in parallel with other trends suggestive of streamlining, such as shrinking of intergenic regions or intense selection on protein-coding sequences manifest in a low Ka/Ks ratio. On the contrary, the intracellular bacteria appear to rapidly evolve under weak selection [3,57]. The lack of correlation between different genomic features that are generally viewed as hallmarks of adaptive genome streamlining (i.e. selection for rapid replication), along with the presence of numerous pseudogenes that seem to persist for relatively long time spans and similarly persistent mobile elements [3,[58][59][60], implies that in these organisms genomic reduction stems from neutral, ratchet-like loss of genes that are non-essential for intracellular bacteria. This route of evolution conceivably was enabled by the virtual sequestration of intracellular parasites and symbionts from HGT and by the ensuing reduction of the effective population size [61][62][63]. This apparently non-selective mode of gene loss is compatible with the small effective population size of parasites and symbionts, which results in an increased evolutionary role of genetic drift and the infeasibility of strong selection [64,65]. On a long-term evolutionary scale, these organisms are likely to be headed for extinction due to the diminished evolutionary flexibility that reduces their chance of survival in case of environmental change [66]. Coming back to the definitions introduced above, in the evolution of parasites and symbionts, the decrease in the biological complexity of genomes occurs in parallel with the decrease in information density.
However, bona fide adaptive genome streamlining appears to be a reality of evolution as well. Features of such streamlining are detectable in the genomes of highly successful free-living organisms such as the cyanobacterium Prochlorococcus sp. [67,68] and the alpha-proteobacterium Candidatus Pelagibacter ubique, apparently the most abundant cellular life forms on Earth [56,69,70]. These bacteria possess highly compact genomes and evolve under strong purifying selection, suggesting that in these cases the loss of non-essential genes, mobile elements and intergenic regions is indeed driven by powerful selection for rapid genome replication and minimization of the resources required for growth. Genome evolution of these highly successful life forms involves a drop in the overall complexity but an increase in information density. Of course, all the pressure of genome streamlining notwithstanding, the lifestyle of these free-living, autotrophic organisms imposes non-negotiable constraints on the extent of gene loss because they have to maintain complete, even if minimally diversified, metabolic networks. Additionally, an important factor in the evolution of these organisms that dwell in microbial communities could be the "Black Queen effect", whereby selection operates at the community level so that otherwise essential genes can be lost as long as the respective metabolites or other commodities are provided by some community members [56,71].
Reconstructions of genome evolution in both prokaryotes and eukaryotes indicate that the loss of genes and introns typically occurs roughly proportionally to time, thus conforming with a form of genomic molecular clock [53,[72][73][74][75]. In contrast, the gain of genes and introns appears to be sporadic and mostly associated with major evolutionary innovations, such as in particular the origin of animals and plants. Thus, it has been concluded that gene loss is mostly neutral, within the constraints imposed by gene-specific purifying selection, whereas gene gain is controlled by positive selection [75]. The former conclusion seems to be robust whereas the latter is dubious as gene gain in transitional epochs could be more plausibly attributed to genetic drift enabled by the population bottlenecks that are characteristic of these turbulent periods of evolution [8,65,76].
In cases of both neutral and adaptive genome reduction, the process appears to involve specialization contingent on environmental predictability, whereas the bursts of innovation open up multiple new niches for exploration by evolving organisms.
A biphasic model of evolution
The findings that in many if not most lineages evolution is dominated by gene (and more generally, DNA) loss that occurs in a roughly clock-like manner whereas gene gain occurs in bursts associated with the emergence of major new groups of organisms imply a biphasic model of evolution (Fig. 2). Under this model, the evolutionary process in general can be partitioned into two phases of unequal duration: (i) genomic complexification at faster than exponential rate that is associated with stages of major innovation and involves extensive gene duplication, gene gain from various sources, in particular horizontal gene transfer including that from endosymbionts, and other genomic embellishments such as eukaryotic introns, and (ii) genomic simplification associated with the gradual loss of genes and genetic material in general, typically at the rate of exponential decay. The succession of the two phases appears to be a recurrent pattern that defines the entire course of the evolution of life. The first, innovative phase of evolution is temporally brief, engenders dramatic genomic and phenotypic perturbations, and is linked to population bottlenecks. The second, reductive phase that represents "evolution as usual" is protracted in time, is facilitated by the deletion bias that seems to be a general feature of genome evolution [77][78][79], and is associated either with a continuously small effective population size, as in parasites and symbionts with decaying genomes, or with evolutionary success and increasing effective population size as in free-living organisms undergoing genome streamlining [56,57,64]. Clearly, the reductive phase of evolution is not limited to the loss of genes that were acquired in a preceding burst of innovation. An excellent case in point is the evolution of eukaryotes, where the explosive phase of eukaryogenesis yielded duplications of a substantial number of genes. Many of these gene duplicates diversified and persisted throughout the course of eukaryote evolution whereas numerous other genes were lost in multiple lineages [46,47,51]. Interestingly, detailed reconstruction of the independent processes of reductive evolution in several parasitic bacteria appears to reveal a "domino effect" that, on a much smaller evolutionary scale, causes punctuation in reductive evolution itself [80]. It appears that the gradual, stochastic course of gene death is punctuated by occasional bursts when a gene belonging to a functional module or pathway is eliminated, rendering useless the remaining genes in the same module or pathway.
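As a purely illustrative toy model of this biphasic pattern, the short simulation below alternates rare multiplicative bursts of gene gain with clock-like, per-gene loss. All rates are arbitrary choices, and the starting gene count only loosely echoes the roughly 2,500 genes inferred for the archaeal ancestor above; none of these numbers are estimates derived from the cited reconstructions.

```python
import random

# Toy biphasic genome-size trajectory: rare bursts of gene gain followed by
# protracted, clock-like (roughly exponential) gene loss. All rates are illustrative.
random.seed(1)

genes = 2500          # starting gene count (order of the inferred archaeal ancestor)
loss_rate = 0.003     # per-gene probability of loss per time step
burst_prob = 0.01     # probability that a step is an innovation burst
burst_factor = 1.3    # multiplicative gene gain during a burst

history = []
for t in range(1500):
    if random.random() < burst_prob:
        genes = int(genes * burst_factor)                                # brief complexification
    else:
        genes -= sum(random.random() < loss_rate for _ in range(genes))  # gradual decay
    history.append(genes)

print("final gene count:", history[-1])
print("steps with net loss:", sum(1 for a, b in zip(history, history[1:]) if b < a))
# Most time steps show net loss; the overall curve is a sawtooth of rare upward
# jumps separated by long stretches of roughly exponential decline.
```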
Certainly, the biphasic model of evolution depicted in Fig. 2 is not all-encompassing as continuous, long-term increase in genome complexity (but not necessarily biological information density) is observed in various lineages, our own history (that is, evolution of vertebrates) being an excellent case in point. Nevertheless, to the best of our present understanding informed by the reconstructions of genome evolution, extensive loss of genetic material punctuated by bursts of gain is the prevailing mode of evolution.
The biphasic model of evolution presented here expands on the previously developed scenario of compressed cladogenesis [81][82][83]. It also conceptually reverberates with Gould's and Eldredge's punctuated equilibrium model [84], where the periods of "stasis" actually represent relatively slow genome dynamics that in many if not most lines of descent is dominated by the loss of genetic material.
Conclusions and outlook
The results of evolutionary reconstructions for highly diverse organisms and through a wide range of phylogenetic depths indicate that contrary to widespread and perhaps intuitively plausible opinion, genome reduction is a dominant mode of evolution that is more common than genome complexification, at least with respect to the time allotted to these two evolutionary regimes. In other words, many if not most major evolving lineages appear to spend much more time in the reductive mode than in the complexification mode. The two regimes seem to differ also qualitatively in that genome reduction seems to occur more or less gradually, in a roughly clock-like manner, whereas genome complexification appears to occur in bursts accompanying evolutionary transitions. Genome reduction apparently occurs in two distinct and distinguishable manners, i.e. either via a neutral ratchet of genetic material loss or by adaptive genome streamlining.
Despite the diversity of the available case stories of reductive evolution, the current material is obviously insufficient for an accurate estimation of the relative contributions of genome reduction and complexification to the evolution of different groups of organisms. To derive such estimates, evolutionary reconstructions on dense collections of genomes from numerous taxa are required. Even more detailed analysis, including careful mapping of loss and gain of genetic material to specific stages of evolution, is necessary to refute or validate the model of punctuated genome evolution outlined here. On a more abstract plane, a major goal for future work is the development of a rigorous theory to explain biphasic evolution within a population dynamics framework.
|
v3-fos-license
|
2020-05-28T09:13:50.916Z
|
2020-05-22T00:00:00.000
|
219471309
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1088/1367-2630/ab95df",
"pdf_hash": "87c9070d725dc6b1a3c184a80fa328627a356daf",
"pdf_src": "Arxiv",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44357",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "8d0c8f404ead15d1d409dc40422240401d7e3d66",
"year": 2020
}
|
pes2o/s2orc
|
Zitterbewegung-mediated RKKY coupling in topological insulator thin films
The dynamics of itinerant electrons in topological insulator (TI) thin films is investigated using a multi-band decomposition approach. We show that the electron trajectory in the 2D film is anisotropic and confined within a characteristic region. Remarkably, the confinement and anisotropy of the electron trajectory are associated with the topological phase transition of the TI system, which can be controlled by tuning the film thickness and/or applying an in-plane magnetic field. Moreover, persistent electron wavepacket oscillation can be achieved in the TI thin film system at the phase transition point, which may assist in the experimental detection of the jitter motion (Zitterbewegung). The implications of the microscopic picture of electron motion in explaining other transport-related effects, e.g., electron-mediated RKKY coupling in the TI thin film system, are also discussed.
In general, the ZB frequency scales with the energy gap and can be reduced in systems with narrow energy gaps, e.g., narrow gap semiconductors [32] and topological insulators [7]. At the same time, the oscillation of a wavepacket usually decays over time, which results from the interference between oscillations of different momentum-dependent frequencies [2,8]. Therefore, for the ZB effect to be observed, it is crucial to prolong or even indefinitely sustain the oscillatory motion. There have been some proposals to achieve persistent ZB motion, for example, by using semiconductor nanowires [2], or time-dependent systems [33,34]. In principle, we can also design a system in which the ZB oscillation frequency is independent of electron momentum. In this way, we can avoid the interference effect and render the ZB motion persistent and robust against damping.
In this work, we show that such persistent ZB motion can be realized in topological insulator (TI) thin films [35][36][37][38]. TI thin films differ from the more commonly studied semi-infinite TI slabs in that they have both a top and bottom surface, each of which can host surface states. The surface states on the two surfaces are coupled to each other due to the finite thickness of the film. In such thin films, the energy gap in the surface states can be controlled by applying an in-plane magnetic field [35] or tuning the thickness of the film [36][37][38]. Topological phase transitions can thus be induced by closing the gap. We show that at the transition point, there exists a momentum-independent oscillation frequency, which can give rise to persistent ZB oscillations of electron wavepackets. Furthermore, we find that the motion of the electron in the x-y plane is anisotropic with respect to the injection direction and confined to a certain region of the TI film. The anisotropy of the electron motion due to the ZB effect has consequences for transport-related properties of the thin film system. Here, we focus on the inter-layer interaction between two localized magnetic centers by means of the Ruderman-Kittel-Kasuya-Yosida (RKKY) mechanism [39][40][41][42]. The RKKY interaction has been extensively investigated in various systems such as superconductors [43][44][45], topological insulators [46][47][48][49][50][51], Weyl and Dirac semimetals [52][53][54][55][56][57], graphene [58][59][60][61], carbon nanotubes [62,63], semiconductor quantum wires [64,65], and tunneling junctions [66]. The RKKY interaction is mediated by the itinerant electrons. Intuitively, one would then expect an enhancement of the RKKY interaction when the magnetic centers lie along a preferred direction of electron motion, and a corresponding suppression of the RKKY interaction when the electrons are prohibited from moving between the two centers. We find that, indeed, the anisotropy of the RKKY coupling is in line with that of the electron motion. We show that maximum RKKY coupling occurs when the separation between the two magnetic centers is perpendicular to the line connecting the Dirac points.
This manuscript is organized as follows. In section II, we present the model Hamiltonian and derive the dynamics of both plane-wave and wavepacket electrons. We discuss the confinement of the electron trajectory and the regime conditions for persistent ZB oscillation. In section III, the RKKY coupling is calculated in both the weak and strong hybridization limits, and its correlation with the electron motion is also discussed. Finally, section IV contains a summary of our main conclusions.
II. ELECTRON DYNAMICS
We first consider a TI thin film subject to an in-plane magnetic field. For simplicity, we assume that the magnetic field is applied along the x-direction, so that the corresponding gauge field is A_B = −ŷBz. As the thickness d of the thin film is comparable to the surface state decay length, the two surfaces are hybridized. The effective Hamiltonian of the system is then given by Eq. (1) [35], where H_D(k) = v_f(z × σ)·k is the Dirac Hamiltonian describing the topological surface state, in which v_f is the Fermi velocity, σ the vector of Pauli spin matrices, and z the unit vector perpendicular to the film (see Fig. 1). ∆ is the hybridization parameter describing the coupling between the top and bottom surfaces, and τ the vector of the Pauli matrices in pseudo-spin space that represents the electron occupancy at the top and bottom surfaces. For simplicity, we set ℏ = 1, and introduce the characteristic momenta corresponding to the hybridization energy, k_∆ = ∆/v_f, and to the magnetic field, k_B = (eBd/2c)ŷ. The eigenenergies of the system are then given by Eq. (2), in which s, τ = ± represent the real spin and pseudo-spin indexes, respectively, and we define k_u = √(k_∆² + k_y²), k_v = √(k_B² + k_x²), Θ = arctan(k_y/k_∆), and Φ = arctan(k_B/k_x). The bandstructure of the TI film is depicted in Fig. 1. An energy gap opens when k_∆ > k_B. Otherwise, the bandstructure is gapless, with the formation of two Dirac cones separated by 2q_0 = 2√(k_B² − k_∆²) along the direction perpendicular to the magnetic field.
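As a concrete illustration of the band structure described above, the following is a minimal numerical sketch that assumes one commonly used low-energy form of the hybridized thin-film Hamiltonian, H(k) = v_f(k_y σ_x − k_x σ_y) ⊗ τ_z + v_f k_B σ_x ⊗ τ_0 + ∆ σ_0 ⊗ τ_x; the basis conventions, signs, and parameter values here are assumptions rather than the exact Eq. (1) of this work. The sketch diagonalizes the 4×4 matrix on a momentum grid and checks how the direct gap behaves around k_∆ = k_B.

```python
import numpy as np

# Pauli matrices for spin (sigma) and surface pseudo-spin (tau).
s0 = np.eye(2, dtype=complex); sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]]); sz = np.diag([1.0, -1.0]).astype(complex)

vf = 1.0   # Fermi velocity (units with hbar = 1)

def hamiltonian(kx, ky, k_delta, k_B):
    """Assumed 4-band thin-film model: opposite-helicity Dirac cones on the two
    surfaces (tau_z), inter-surface hybridization vf*k_delta (tau_x), and an
    in-plane-field term vf*k_B*sigma_x. Conventions are an assumption."""
    h_dirac = vf * (ky * np.kron(sx, sz) - kx * np.kron(sy, sz))
    h_field = vf * k_B * np.kron(sx, s0)
    h_hyb   = vf * k_delta * np.kron(s0, sx)
    return h_dirac + h_field + h_hyb

def bulk_gap(k_delta, k_B, kmax=3.0, n=161):
    """Minimum direct gap between the two middle bands over a k-grid."""
    ks = np.linspace(-kmax, kmax, n)
    gap = np.inf
    for kx in ks:
        for ky in ks:
            e = np.linalg.eigvalsh(hamiltonian(kx, ky, k_delta, k_B))
            gap = min(gap, e[2] - e[1])
    return gap

for k_delta in (0.5, 1.0, 1.5):
    print(f"k_delta/k_B = {k_delta:.1f}  ->  min gap ~ {bulk_gap(k_delta, 1.0):.3f}")
# Expected trend: a tiny, grid-limited gap for k_delta <= k_B (gapless phase)
# and a sizeable gap once k_delta exceeds k_B.
```

In this assumed model, the two gapless Dirac points sit at k = (0, ±√(k_B² − k_∆²)) for k_∆ < k_B and merge at the origin when k_∆ = k_B, which is the behaviour the scan above is meant to reproduce.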
In particular, at the transition value k_∆ = k_B, the two Dirac cones merge to form a single cone. The corresponding eigenstates |ψ_sτ⟩ are given by four-vectors with normalization factors N_sτ. To study the ZB in this multi-band system, we derive the time evolution of the position operator, which is described in the Heisenberg picture as r̂(t) = e^{iH_0 t} r̂(0) e^{−iH_0 t}, and which at t = 0 is formally represented by r̂(0) = i∇_k. The time-dependent position operator comprises a non-oscillatory part that describes the translational motion related to the intraband interference, and an oscillatory part that is associated with the ZB motion [2,8,67,68] and related to the interband interference. Our interest lies in the latter, which is given by Eq. (6) [67], in which Ω_ij = E_i − E_j, with i = (s, τ), and iQ̂_i∇_kQ̂_j are the frequency and amplitude of the oscillation, respectively. In the above, we have introduced the projection operators Q̂_sτ = |ψ_sτ⟩⟨ψ_sτ|, so that the Hamiltonian (1) can be decomposed as H_0 = Σ_sτ E_sτ Q̂_sτ. We can further express the projection operators in terms of two involution operators R̂ and T̂ satisfying R̂² = T̂² = 1 and [R̂, T̂] = 0. The explicit forms of these operators are given in Appendix A. Due to the electron-hole symmetry of the eigenenergies given in Eq. (2), there are only four distinct beat frequencies corresponding to the differences between the energies of interfering eigenstates. These frequencies are given by Eq. (8), where k_± is given by Eq. (2).
A. Bound trajectory
Having derived the position operator in Eq. (6), we now trace out the electron trajectory in the system. In general, a free electron can travel in a region as large as the area of the system defined by its physical boundaries, e.g., edges or interfaces. However, we show that in the TI thin film system, the electron trajectory is bound within an area determined by the initial state (spin and momentum) of the electron and the energy gap of the system. Consider an electron injected into the top surface of the TI film with its initial spin in the spin-up direction and momentum k, represented by the plane wave |ψ_0(k)⟩ = e^{ik·r}|φ_0⟩. The position of the electron in the x-y plane at time t can be calculated from Eq. (6) and is given explicitly by Eq. (9). Corresponding expressions for other combinations of injected spin orientation and injection surfaces can be obtained from symmetry arguments. Written in terms of k_∆ and k_B, the Hamiltonian of Eq. (1) takes the form of Eq. (11). Eq. (11) is invariant upon a simultaneous τ reflection about the τ_x axis and in-plane spatial inversion, i.e., τ_z → −τ_z, x → −x, y → −y. This implies that the x and y displacements of electrons injected into the top and bottom surfaces have the same magnitudes but opposite signs. Eq. (11) is also invariant upon a simultaneous spin reflection about σ_x (σ_{y,z} → −σ_{y,z}) and reflection along the y axis (x → −x, y → y). This implies that spin-up and spin-down electrons injected into a given surface (top/bottom) have the same x displacements, and y displacements of the same magnitude but opposite signs. The motion in the x-y plane of an electron injected into the top surface with initial spin in the +z direction is depicted in Fig. 2(a) and (b) for different ratios of k_∆/k_B. Taking the initial position of the electron to be the origin, it can be shown that x(t) ≥ 0 for k_∆/k_B < 1, i.e., the electron is always confined in the +x half of the x-y plane. On the other hand, when k_∆/k_B > 1, the trajectory of the injected electron encompasses the origin, as shown in Fig. 2(b).
It can be seen that the electron oscillation comprises both transverse and longitudinal modes. This is a manifestation of the four-band system illustrated in Fig. 1(b), where the quantum dynamics involves not just the evolution of the spin, but also the pseudo-spin degree of freedom, which in our case represents the surface index (top and bottom surfaces). The electron trajectories in Fig. 2 exhibit oscillations in both the transverse (y) and longitudinal (x) directions. Now, in the conventional ZB picture, an electron injected along the x-direction would undergo oscillations in the transverse y-direction, due to the electron spin precession and spin-momentum locking. In this simple picture, the longitudinal oscillations do not seem to play a role. To explain the emergence of the longitudinal oscillations, we need to consider the pseudo-spin (τ_z) degree of freedom. This can be ascribed to the precession of the pseudo-spin, which represents the back and forth tunneling between surfaces. From Eq. (1), this pseudo-spin dynamics is coupled to the longitudinal motion. Indeed, as shown in Fig. 2(d), the electron lies in the positive x-half when it is on the top surface, and moves to the negative x-half after tunneling to the bottom surface. Thus, the back and forth tunneling between the surfaces mediated by the hybridization ∆ translates into an oscillation of the electron motion in the longitudinal x-direction. In the thick TI film limit where the top and bottom surfaces are decoupled, i.e., ∆ = 0 in the Hamiltonian of Eq. (1), the motion of the electron is simply given by r(t) = [(z × k)/(2k²)][1 − cos(2kv_f t)]. Surprisingly, a spin-up electron initially injected along the x-direction will only move in the y-direction, i.e., its trajectory is confined to a line perpendicular to the injection direction. This can be explained by considering the electron velocity, given by v = ∂_k H = v_f(z × σ). The electron spin precesses about the momentum-dependent effective field and, owing to spin-momentum locking, the resulting velocity for this initial state is always perpendicular to the momentum.
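As a numerical cross-check of this picture, the sketch below evaluates the interband (oscillatory) part of the position expectation value directly from the eigen-decomposition, r_ZB(t) = Σ_{i≠j} c_i* c_j ⟨ψ_i|v|ψ_j⟩ (e^{i(E_i−E_j)t} − 1)/[i(E_i − E_j)], for a plane-wave electron injected on the top surface with spin up. It uses the same assumed Hamiltonian as the band-structure sketch above, and the initial spinor and parameter values are likewise illustrative assumptions rather than the exact model of Eqs. (1) and (9).

```python
import numpy as np

# Pauli matrices; kron ordering is (spin) x (surface pseudo-spin).
s0, sx = np.eye(2, dtype=complex), np.array([[0, 1], [1, 0]], complex)
sy, sz = np.array([[0, -1j], [1j, 0]]), np.diag([1.0, -1.0]).astype(complex)
vf = 1.0

def H(kx, ky, k_delta, k_B):
    # Same assumed thin-film Hamiltonian as in the band-structure sketch.
    return (vf * (ky * np.kron(sx, sz) - kx * np.kron(sy, sz))
            + vf * k_B * np.kron(sx, s0) + vf * k_delta * np.kron(s0, sx))

def zb_trajectory(kx, ky, k_delta, k_B, times):
    """Oscillatory (interband) part of <r(t)> for a plane-wave electron injected
    on the top surface with spin up; the uniform drift from intraband terms is
    dropped, mirroring the decomposition described in the text."""
    E, V = np.linalg.eigh(H(kx, ky, k_delta, k_B))
    vx, vy = -vf * np.kron(sy, sz), vf * np.kron(sx, sz)   # dH/dkx, dH/dky
    phi0 = np.kron([1, 0], [1, 0]).astype(complex)         # |spin up> x |top surface>
    c = V.conj().T @ phi0                                  # band amplitudes
    r = np.zeros((len(times), 2))
    for i in range(4):
        for j in range(4):
            dE = E[i] - E[j]
            if abs(dE) < 1e-12:
                continue                                   # skip intraband (drift) terms
            for a, v_op in enumerate((vx, vy)):
                m = np.conj(c[i]) * c[j] * (V[:, i].conj() @ v_op @ V[:, j])
                r[:, a] += np.real(m * (np.exp(1j * dE * times) - 1) / (1j * dE))
    return r

times = np.linspace(0.0, 40.0, 2000)
r = zb_trajectory(kx=1.0, ky=0.0, k_delta=0.5, k_B=1.0, times=times)
print("x in [%.3f, %.3f], y in [%.3f, %.3f]"
      % (r[:, 0].min(), r[:, 0].max(), r[:, 1].min(), r[:, 1].max()))
# The interband part stays bounded and generally shows both longitudinal (x)
# and transverse (y) oscillations, in line with the qualitative picture above.
```

Because only interband (i ≠ j) terms are kept, the result is bounded by construction; the intraband terms, which give the uniform drift, are exactly the part the text sets aside.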
B. Wavepacket dynamics
In the previous section, we have considered the trajectory of a single electron. We now consider the more practical case of an electron wavepacket, which is a superposition of different momentum states. In general, the beat frequencies given in Eq. (8) depend on the momentum. Thus, when evaluating the expectation value of the position operator for a wavepacket, the resulting interference of oscillations with different momentum-dependent frequencies would, in general, lead to a decay of the ZB over time. In order to sustain the ZB motion, we need to realize a scenario where at least one beat frequency is momentum-independent. We will show that such a scenario can be achieved by an appropriate choice of parameters, namely the hybridization energy and the in-plane magnetic field.
Suppose that the electron is injected in the y-direction, i.e., k_x = 0, k_y = k, and the hybridization and magnetic field are tuned so that k_∆ = k_B. In this case, the beat frequencies of Eq. (8) reduce to the form of Eq. (12), in which we recall that k_u = √(k_∆² + k_y²). We can see that, besides the three momentum-dependent frequencies, there is one frequency, w_2, that is independent of momentum. At large time scales, we would expect the oscillations associated with the other three frequencies to decay away due to interference, while the oscillations associated with the k-independent frequency w_2 would persist. This is one of the main results of this paper.
To quantitatively verify the above intuitive picture of persistent ZB motion, we consider an electron wavepacket with initial spin state |φ_0⟩ and a Gaussian momentum distribution a(k) ∝ e^{−(k−k_0)²/(2δk²)} that represents the spread of the electron state in momentum space, where k_0 and δk are the initial momentum and line-width, respectively. The expectation value of the position operator of Eq. (6) for this state is given by Eq. (14), where the integration is taken over momentum space. As a consequence of the wavepacket spread in k-space, the ZB will generally decay over time. In order to analytically describe the damping process, we consider the narrow wavepacket limit, i.e., δk/k_0 ≪ 1, so that the integration over the Gaussian function in Eq. (14) can be approximated by the expansion of Eq. (15), up to O(δk⁴). In the latter, the first term is the initial ZB oscillation with momentum k_0, and the second term represents the deviation of the ZB around the packet center. Substituting the position operator of Eq. (6) yields the corresponding expression for each component a = x, y. The first term in this expression describes oscillations with constant amplitude that are in phase with the initial oscillation. The next two terms have time-dependent amplitudes that are linear and quadratic in time, respectively. Rearranging Eq. (15), the ZB of a wavepacket can be expressed as Eq. (17), where the decay times T_ij are defined in terms of the beat frequencies given by Eq. (8). At short t, the first term in Eq. (17) can be formally written as r_Z(t) ≈ Σ_ij r_ij(k_0, t) e^{−t²/T_ij²}, which expresses the exponential decay of the ZB (see Fig. 3).
From Eq. (12), the decay times are obtained as in Eq. (19), where θ(x) is the Heaviside step function, T_2 corresponds to the combinations of i and j for which |Ω_ij| = w_2, and T_d to the other combinations of i and j. As can be seen, when one of the beat frequencies, i.e., w_2, becomes independent of momentum at the resonance k_∆ = k_B, the associated decay time T_2 in Eq. (19) goes to infinity. This implies that the ZB related to this mode will be persistent. In this case, a steady-state transverse oscillation survives; in the limit of large hybridization, k_∆ ≫ k_0, the persistent oscillation reduces to y(t) ≈ A_ZB cos(ω_ZB t), with A_ZB = 1/(2k_∆) and ω_ZB = 2v_f k_∆ = 2∆ being, respectively, the amplitude and frequency. This persistent oscillation is depicted by the orange line in Fig. 3(b). Surprisingly, both the amplitude and frequency of the persistent mode do not depend on the initial momentum and width of the injected wavepacket and are instead determined by a single parameter, the hybridization energy. Following Eq. (19), the ZB has a sharp transition from a transient to a persistent mode at k_∆ = k_B, at which the bulk gap closes (Fig. 1) and the TI film undergoes a topological phase transition [35]. We can hence refer to the persistent oscillation as a topological mode of electron oscillation.
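The role of the momentum-independent frequency can be illustrated with a toy average that does not require the full expressions above: a Gaussian weight in k is used to average a single oscillatory component cos(w(k)t), once with a dispersive frequency and once with the frequency held fixed at the k-independent value 2v_f k_∆ quoted above. The Gaussian width, the dispersive model w(k) = 2v_f√(k² + k_∆²), and all numerical values are illustrative assumptions, not the actual beat frequencies of Eq. (12).

```python
import numpy as np

vf, k_delta = 1.0, 1.0
k0, dk = 2.0, 0.2                      # wavepacket center and width (illustrative)
ks = np.linspace(k0 - 5 * dk, k0 + 5 * dk, 801)
weights = np.exp(-(ks - k0) ** 2 / dk ** 2)
weights /= weights.sum()               # |a(k)|^2, normalized on the grid

times = np.linspace(0.0, 60.0, 1200)

def averaged_zb(freqs):
    """Average cos(w(k) t) over the Gaussian momentum distribution."""
    return (weights[:, None] * np.cos(np.outer(freqs, times))).sum(axis=0)

w_dispersive = 2 * vf * np.sqrt(ks ** 2 + k_delta ** 2)   # k-dependent toy frequency
w_flat = np.full_like(ks, 2 * vf * k_delta)               # momentum-independent mode

for label, w in (("dispersive", w_dispersive), ("flat", w_flat)):
    late = np.abs(averaged_zb(w))[len(times) // 2:]
    print(f"{label:10s}: max |averaged oscillation| at late times = {late.max():.3f}")
# The k-dependent frequencies dephase and the averaged oscillation decays,
# whereas the momentum-independent mode keeps oscillating with full amplitude.
```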
III. ELECTRON-MEDIATED RKKY INTERACTION
In the previous section, we have shown that the electron trajectory is confined and may be highly anisotropic (see, e.g., Fig. 2(a)). This has consequences for the transport-related properties of the system, such as the electron-mediated RKKY interaction. The confinement of the electron trajectory implies that the electrons are not able to mediate information, e.g., angular momentum, between magnetic moments separated by a distance that exceeds the confinement region. In order to verify this effect, we consider two magnetic centers S_i (i = 1, 2) located at R_i. The electron-mediated exchange interaction between the magnetic centers is modeled by an exchange term with coupling constant J. The exchange interaction can be considered as a perturbation to the Hamiltonian of Eq. (1). For simplicity, we assume that R_1 = (0, 0) and R_2 = R(cos φ_R, sin φ_R). We show that the RKKY coupling between the two magnetic centers depends not only on the distance R, but also on the direction φ_R between them.
In the framework of second-order perturbation theory, the effective interaction between two magnetic impurities is expressed in terms of the real-space Green's function [39-42, 46, 64, 66]; here ε⁺ = ε + i0⁺, Tr stands for the trace over the spin degree of freedom, and the expanded spin operator in spin and pseudo-spin spaces is defined as σ̂ = τ_0 ⊗ σ, in which τ_0 is the identity matrix of rank 2. The Green's function in real space is given by the Fourier transformation of its momentum-space counterpart, where G(q, ε⁺) = [ε⁺ − H_0(q)]⁻¹ is the Green's function in momentum space and A_BZ is the area of the first Brillouin zone. Let us first consider the weak hybridization limit, i.e., k_∆ ≪ k_B. In this limit, the system is gapless and the two Dirac points are separated by q_0 ≈ k_B. The analytical expression of the RKKY coupling can then be obtained as Eq. (25), together with the corresponding range functions (see Appendix B for more details); in these expressions, û = R̂ × ẑ, with R̂ = R/R being the unit vector along R. The RKKY coupling in Eq. (25) consists of three terms: the Heisenberg exchange, the spin-frustrated, and the Dzyaloshinsky-Moriya interaction terms. As shown above, the RKKY coupling not only exhibits the usual R⁻² distance dependence [46] found in a semi-infinite thick TI slab with only a single surface, but also has an additional direction dependence due to the cos(2q_0 · R) factor, which is absent in the semi-infinite thick slab. This directional dependence stems from the contribution of the surface states on both surfaces of the film in mediating the effective exchange coupling, and from the fact that the corresponding Dirac cones are separated in momentum space. In the case where the two magnetic impurities are separated along the x-direction, i.e., along the in-plane magnetic field direction, k_B · R = 0 and the RKKY coupling reaches its maximum. This can be explained by considering the process of indirect exchange coupling between the two magnetic moments via the itinerant electrons. When an electron is in close proximity to the first magnetic moment, its spin angular momentum is coupled to that of the magnetic moment. If there is a finite electron overlap with the second magnetic moment, then its spin angular momentum is also coupled to the second moment. In this way, an effective exchange coupling arises between the two magnetic moments. The strength of the effective coupling depends on the rate and probability of electron overlap between one magnetic moment and the other. In other words, if the second magnetic moment is located at a position with little electron overlap with the first magnetic moment, then the coupling between the moments will be weak. Conversely, if the second magnetic moment is at a position where the electron has a high probability of overlap, the coupling will be enhanced. In our case, when the magnetic field is applied along the x-direction, the electron motion has a tendency of being confined along the same x-direction [see Fig. 2(a)]. This means that a second magnetic moment placed along the x-direction with respect to the first moment will have a high probability of being coupled by an intermediary electron, thus inducing stronger RKKY coupling.
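The directional dependence described above can be checked with a few lines of code that evaluate just the cos(2q_0·R)/R² modulation of the Heisenberg-like term in the weak-hybridization limit; the overall prefactor is set to one, q_0 is taken along ŷ (perpendicular to the field) as in the text, and the numerical values of q_0 and R are illustrative assumptions.

```python
import numpy as np

q0 = 0.8          # half-separation of the Dirac points, along +y (illustrative)
R = 5.0           # fixed impurity separation distance (illustrative units)

def heisenberg_envelope(phi_R):
    """Directional modulation ~ cos(2 q0 . R) / R^2 of the Heisenberg-like RKKY
    term in the weak-hybridization (gapless) limit, as described in the text.
    phi_R is the angle of R measured from the x axis (the field direction)."""
    q0_dot_R = q0 * R * np.sin(phi_R)          # q0 is parallel to the y axis
    return np.cos(2 * q0_dot_R) / R ** 2

for phi in np.linspace(0.0, np.pi / 2, 7):
    print(f"phi_R = {np.degrees(phi):5.1f} deg :  envelope = {heisenberg_envelope(phi):+.4f}")
# The envelope takes its largest (positive) value at phi_R = 0, i.e. for R along x
# (perpendicular to the line joining the Dirac points), and oscillates in sign
# as the separation direction rotates toward the y axis.
```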
Although Fig. 2 explicitly shows only the results for a spin-up electron injected into the top surface, the results of the symmetry analysis following Eq. (11) imply that the electron trajectory will still be confined along the x direction for spins of other orientations injected into both the top and the bottom surfaces.
To quantify the correlation between the RKKY coupling and the electron trajectory, we analyze the preferred direction of the electron motion. As the electron position oscillates over time as described in Eq. (9), we consider its average value r̄(k) = lim_{T→∞} (1/T)∫_0^T r(t) dt, which is given explicitly by Eq. (29). In the limit of weak hybridization energy, k_∆ ≪ k_B, the above reduces to Eq. (30), the time-averaged position of an electron with momentum k. Averaging this quantity over momentum space up to the Fermi wave-vector, we find that the electron will preferably move in the direction perpendicular to the direction separating the two Dirac cones. Therefore, when R is parallel to r̄, and thus perpendicular to k_B, the RKKY coupling strength will be maximal. This is in line with the prediction based on the electron trajectory, as discussed above.
We note that, in the above, the preferred motion direction was obtained based on the position of a plane-wave electron. Here, we show that the preferred direction is the same if we consider the electron wavepacket treatment. As the Gaussian function in the wavepacket picture is time-independent, it does not alter the position value after time-averaging. From Eq. (14), the average position of an electron wavepacket initially centered at k_0 is simply derived as r̄_pk(k_0) = Σ_k |a(k − k_0)|² r̄(k) ≈ r̄(k_0) + δr̄(k_0), where the deviation δr̄(k_0) = (δk²/4) ∇²_k r̄(k_0) follows from Eq. (15) for a narrow wavepacket, with r̄(k) given in Eq. (29). In the weak hybridization limit, applying Eq. (30), we find that the deviation δr̄(k_0) = 0, which means that the preferred direction of motion of a wavepacket coincides with that of a plane-wave electron. This result thus suggests that, in future works, one may use the wavepacket treatment to understand properties of the RKKY coupling, in addition to the conventional plane Bloch-wave approaches [39][40][41].
On the other hand, in the strong hybridization limit, k_∆ ≫ k_B, the RKKY coupling takes a different analytical form, with correspondingly modified range functions (details are shown in Appendix B). In this case, the surface states are gapped [see Fig. 1(d)] and the Dirac cones vanish. In this limit, the RKKY coupling becomes isotropic, i.e., it is independent of the angle between the magnetic centers. This result is consistent with the calculated electron trajectory in the gapped scenario, where the trajectory is almost isotropic in the 2D plane [see Fig. 2(c)]. This can be further verified by considering the time-averaged electron position as outlined above, which goes to zero upon averaging over momentum space, so that there is no preferred direction of the electron motion in the 2D plane in this case. We remark here that, in the insulating phase, the TI film has been shown to have a diamagnetic response to an in-plane magnetic field [35]. As a consequence, the magnetic moments in the TI film may acquire an additional magnetic response and the steady-state magnetization may change accordingly. However, the magnetic susceptibility is extremely small, i.e., on the order of 10⁻⁸ [35], which is several orders of magnitude smaller than even the small diamagnetic susceptibility of typical metals. The effect of the induced magnetization on the bandstructure of the TI film can thus be neglected. Since the RKKY coupling is derived from the bandstructure of the TI film, it will not be susceptible to this diamagnetic response.
In addition, we note that if the Fermi level lies within the gap in the insulating phase, the RKKY mechanism is no longer valid, as it relies on itinerant electrons. Instead, the indirect exchange coupling is then described by the van Vleck mechanism, as discussed in previous works [69][70][71]. In our work, we assume that the Fermi level is finite, i.e., within the conduction band, and ignore the van Vleck coupling for simplicity.
IV. CONCLUSION
In this paper, we investigated the anomalous motion of electrons in topological insulator thin films. First, we showed that, due to the hybridization of the surface states with opposite helicities, a spin-polarized electron will undergo oscillatory motion within a confined region. Furthermore, the oscillation is anisotropic, with the preferred direction being perpendicular to the line separating the two Dirac points, a finding that can be ascribed to the anisotropy of the Fermi circle. As a consequence, the direction and distance dependence of the RKKY interaction mediated by itinerant electrons between two magnetic impurities in thin TI films has a strong correlation with the electron motion. Interestingly, it was found that the RKKY coupling is maximized when two impurities at a fixed distance are positioned perpendicular to the separation direction of the two Dirac points. This finding is consistent with the preferred direction of the confined electron motion.
|
v3-fos-license
|
2019-05-14T14:03:57.431Z
|
2003-03-01T00:00:00.000
|
152342654
|
{
"extfieldsofstudy": [
"Sociology"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://journal.spera.asn.au/index.php/AIJRE/article/download/490/570",
"pdf_hash": "cfeb339aa7b2034c500cbb54cd5e0e92826f4dee",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44358",
"s2fieldsofstudy": [
"Sociology"
],
"sha1": "151636e27d65eb74efb4e685f5cce22526b6dc33",
"year": 2003
}
|
pes2o/s2orc
|
Whose School? Which Community?
In this paper, we take up the theme, 'The School as a Centre in the Community', in light of a research project that we conducted in a remote community in South Australia in 2001. In this project, 'Engaging Students In Education Through Community Empowerment', we set out to explore with Aboriginal parents, Aboriginal students, teachers and representatives of the various agencies operating in the area how groups within the community understood the issues of early exiting Aboriginal students.
Introduction
In this paper, we take up the theme, 'The School as a Centre in the Community', in light of a research project that we conducted in a remote community in South Australia in 2001. In this project, 'Engaging Students In Education Through Community Empowerment', we set out to explore with Aboriginal parents, Aboriginal students, teachers and representatives of the various agencies operating in the area how groups within the community understood the issues of early exiting Aboriginal students.
Among the stated aims of the project were: to identify current strengths and concerns regarding the provision of meaningful, culturally inclusive schooling; to map the current knowledge/power relations among various education and support service providers and members of the Indigenous community; and, in the second stage, to develop, on the basis of these consultations and in collaboration with key community and education groups, a community-based education project to improve the literacy, numeracy or technological skills of non-attending adolescent students.
What emerged from the consultations with these diverse groups was that the ideal of 'school as community' was problematic especially for Aboriginal families in this community.
In this paper, we interrogate the conversations we had with key representatives of the different community groups. In particular, we consider how 'school as community' did (not) work to address the needs of young Aboriginal students. We endeavour to critically analyse the diverse perspectives offered to us with the aim, not of 'laying blame', but of exploring where and how cross-cultural communication - a key construct in building 'communities' - continues to fail in the face of diverse bodies of knowledge and inequitable power relations. What does it take to be 'heard'? What is it to listen beyond the 'comfort zone'? Finally, we discuss a number of recommendations that emerged from this project which aim to build better relations and better school communities.
Review of policy and research literature
Establishing strong relationships between schools and the families/communities they serve has long been advocated as a necessary component of the education of the nation's children. School/family ties have been cited in many and various articles, policies and research reports as critical to the education of children. In the case of Aboriginal students, this has been extended to culture as the critical element for the successful education of Aboriginal young people.
Aboriginal peoples around Australia have historically experienced education as assimilation.
Resistance to an assimilationist education system has generated the inclusion in policy documents of references to the need to teach Aboriginal students from a cultural perspective. The desirability of a 'culturally inclusive curriculum' for Indigenous students, and of Aboriginal parents being able to intervene in the education of their children, has been consistently cited in major reviews, including the most recent National Review of Aboriginal and Torres Strait Islander Education (MYCEETYA, 1995), which drew on and developed previous reviews including the Aboriginal Education Policy (1989). The National Strategy for the Education of Aboriginal and Torres Strait Islander Peoples (MYCEETYA, 1996-2000) acknowledges the long-term goal of Indigenous peoples for self-determination in education. This report recognises the need for Indigenous involvement in education at a local, district, regional, state and territory or national level. A major priority of this report was to 'establish effective arrangements for the participation of Aboriginal and Torres Strait Islander peoples in educational decision-making' (MYCEETYA, 1995:11). Amongst the strategies suggested in this report for the participation, engagement and retention in education of Aboriginal students are:
• that schools establish partnerships with Indigenous communities, and in particular with Aboriginal Student Support Parent Association (ASSPA) committees, to target participation of Aboriginal and Torres Strait Islander children in schools.
• that schools develop and implement programs which recognise home language background and use culturally appropriate instruction and assessment methods where Standard Australian English is not fully understood by Aboriginal and Torres Strait Islander students, because English is their second or third language or dialect.
In What Works. The Work Programme, a document which evolved out of the Indigenous Education Strategic Initiatives Programme's Strategic Results Projects (IESIP SRPs), the authors include, among the fundamentals of a good education for Indigenous students, references to respect for students and their cultures.
Cultural dispossession is a terrible thing. It can reduce people to shadows, a state of near invisibility. In the situation of Indigenous students, the case is clear. Aspects of their cultures must be recognised, supported and integrated in the processes of education and training, not just for their own success, but for the general quality of Australian preschools, schools and training institutions.
Of some significance is Boughton's (1997) study that analysed the complexity of the relationship between education and self-determination. Other reports cited in Boughton (2000) indicated that Indigenous students and education workers recognised the connection between education and control as a pedagogical as well as a political issue. This linking of self-determination in education with the content and delivery of curriculum matters and the improved learning outcomes for students within an Indigenous cultural environment has been discussed and documented by Durnan and Boughton (1999) (cited in Boughton, 2000) and by Herbert et al. (2000) and Bourke (2000).
These recommendations, particularly the recommendation supporting the recognition of home language in programme design, have particular implications for rural Indigenous communities and schools. In many schools in remote regions around Australia, English is a second and in some cases a third language. In these cases, including the school in which we did our research, an Indigenous language is spoken at home, in the community, in the schoolyard: everywhere, in fact, except in the classroom. This further distances the school and the formal curriculum it is attempting to teach from Indigenous communities.
In spite of the rhetoric in policy documents, schools are structured so that there is little opportunity for parents to intervene in the formal education of their children except at a very peripheral level, in organizing sport or accompanying teachers on excursions, hearing students read, working on teacher-constructed spelling programmes, working in the school canteen, etc. Even participating in these activities places particular demands on Aboriginal parents, who may feel that they have to 'act white' (act 'white' or be marked 'black' is how one parent put it in a recent interview). This can involve dressing in a certain way, adopting particular manners of speech (having good English language skills or changing speech patterns from Aboriginal English to Standard Australian English, for instance), being prepared to be one of a very small group of Aboriginal people in a predominantly 'white' workplace, and so on. There is generally very little interaction between Aboriginal parents and the teachers of their children. From the teacher perspective, it seems that Aboriginal parents are reluctant to participate in school affairs, a view derived from a discourse of deficit (Herbert et al., 2000). From the perspective of the parents, the world beyond the school fence can be seen as 'alien' territory. The Aboriginal Education Workers (AEWs) are often used as 'cultural translators' who act as intermediaries between Aboriginal parents and teachers, relieving the school of the need to construct culturally appropriate communicative structures. Kirkness and Bernard (1991, cited in Herbert, 2000:11) have incorporated the cultural crossover that many children experience when moving from home to school and school to home into the language of 'coming to: going to' educational institutions: language which suggests students enter sites where education is already structured. Students fit in - or don't.
The power relationships of schooling -whose knowledge and whose power?
Increasingly, the ways in which knowledge/power relations are socially and culturally constructed are recognised in a wide range of educational literature. That particular kinds of knowledge are deemed to be more valuable, worthy, useful or valid over other kinds, and that only these are carried and thus endorsed by formal curriculum and pedagogies, is a way of understanding how relations of power become institutionalised within the schooling process through curriculum content and pedagogical practices. Those whose cultural, economic, community, social and symbolic forms of knowledge remain outside the mainstream, that is those whose knowledge is not 'carried' in/through formal curriculum, are frequently positioned as subordinate and understood to be 'disadvantaged'. But it is the practice/experience of being positioned outside the dominant structures (an exercise of power) that creates the disadvantage - not the alternative forms of cultural, economic, social and symbolic knowledge themselves.
In interviews we did with the community who were part of this project, parents expressed the kinds of frustrations with schooling for Aboriginal children that are being expressed around the country: that there is no consistent teaching of mother tongue in the school, that cultural perspectives are not taught across the school and across subjects, that Indigenous knowledge is not acknowledged or accessed through schooling and that the skills and knowledge the children and young people bring to school are not acknowledged or utilised as a basis on which to build wider understandings and skills development. There are voices that are heard in schools and heard very clearly and there are voices that are part of a 'silenced dialogue' (Delpit, 1993:121). These differential power relations make it difficult for the school to become integrated into the community and for the various community groups that need to and want to access schools to become a part of the school.
A number of writers have discussed these power relations. Bourdieu, for instance, discusses the power of the school in terms of 'cultural capital'. Delpit refers to a 'culture of power' to frame her thinking about the power relations of schooling. Cultural capital theorists such as Bourdieu see schools as reproducing, constructing and valuing certain kinds of knowledge. This knowledge becomes social capital. The curriculum and assessment procedures, for instance, in terms of Bourdieu's theorising, incorporate and construct social capital which then becomes 'symbolic' capital. Symbolic capital is a necessary condition for entrance to employment and further education and links into the preferred knowledges of the economically, socially and politically powerful (Thomson, 2002:4). Delpit's (1993:122) 'culture of power' reflects the rules of those who have power and includes the teacher, who has power over students, the publisher of textbooks who has power to direct the thinking of both teachers and students, and the power of the system and individuals within the system to determine 'normalcy' and 'intelligence' - both highly contested concepts. Ultimately, given the relationship between educational levels and access to work, these power relations can have a long-term impact on life chances.
Both Aboriginal parents and Aboriginal students can have a contradictory relationship with the schooling system. That access to the skills and knowledge of the hegemonic curriculum will gain admittance to work and/or further education is generally recognised by Aboriginal parents and, more often than not, by their children. However, a dilemma is created if the cost of acquiring the knowledge of one culture, the 'culture of power', means having to abandon the ways of being and the ways of knowing of their Indigenous culture. This potential contradiction is recognised in the MYCEETYA Report (1995:4), the National Strategy for the Education of Aboriginal and Torres Strait Islander Peoples, which states that Indigenous Australians require an education which enables them to achieve their cultural and academic potential in Indigenous terms as well as in mainstream academic and technological skills (Herbert et al., 2000:4). This suggests an education focus on the economic needs of Western (capitalist) societies, as well as the need for predominantly mono-culturalism.
Bourdieu's theorising throws light on this relationship between economic and cultural power.
Accumulated cultural power begins in early childhood when children learn the 'right' way to dress, the 'right' way to speak, particularly in responses to adults, and so on. An incident which occurred in the school we were researching illustrates this. A girl student, about year 3, came into the library and asked the Librarian very shyly "Can I have a book?" The Librarian replied "No. May I have a book please, Ms F ..." The researchers had heard this child speaking in her own language outside the library only a few minutes before, in very powerful ways. However, her power was completely diminished and even eliminated as soon as she entered the Library because she did not know the 'codes', that is the rules of the game. These 'codes' and rules are generally referred to in the literature as 'social capital'.
'Social capital' is defined by the Centre for Research and Learning in Regional Australia as a key component in managing change (Kilpatrick and Abbott-Chapman, 2002). However, the acquisition of social capital depends very largely, according to the theorising of Bourdieu (1977; Bourdieu and Wacquant, 1991), on the congruence of individual and institutional cultural capital. The hegemony of particular cultural capital (knowledges, language, shared values, beliefs etc.) as the most desirable social capital gives symbolic power to particular socially and economically constructed groups. Much of this symbolic power is acquired through the education system by those who come into the system with the kind of knowledge and values that are valued by the school. Those whose values/belief systems are not consistent with those of the school will have to battle against the system, to use parents' terminology (Munns, 1998:178), or be failed by the system of education they are attempting to access.
Teachers involved in this project were aware of the dilemmas inherent in the question of 'social capital'. For example, one teacher commented: We as school teachers sort of expect, with children coming from English-speaking background or non-Aboriginal backgrounds, that they've got a lot of skills before they come to school, a lot of school skills. Whereas a lot of these students have other
There is recognition here that such 'codes' need to be taught. However, incorporated in the teaching of these 'codes' can be an ideology of obedience, of deference for anyone in authority, recognition of some knowledges as superior to others etc. For Aboriginal children, obedience to white authority may not be one of the survival or cultural skills the child has learnt through interactions with family and community. The National Review of Aboriginal and Torres Strait Islander Education (1995) suggests that learning at school is not a culturally neutral activity. If, as Bourdieu suggests, a function of schooling is to legitimise the dominant culture, then children coming to school from families who have, and pass on to their children, the required cultural capital benefit from schooling. Such cultural capital may include a top-down model of instruction, which fosters respect for authority, the knowledge of experts, discipline and good work habits.
Leaving it at the gate - a necessary part of the cultural crossover
For Aboriginal children to succeed at school, ways of being in the Aboriginal community may have to be "left at the gate". This creates a definitive break with community knowledge to take up the 'official' knowledge of schooling. The movement from home to school and from school to home again can have particular meanings for Aboriginal students and can illustrate the separation of school and Aboriginal community. All students bring particular things to school, i.e. family language, cultural ways of doing things, particular ways of thinking about the world and how it is constructed and where they fit into it, what happened last night or this morning. They also take home from school a variety of information and meanings. These can reinforce or contradict community or family knowledge and meanings. The many sets of skills, knowledge and meanings may converge or they may come together in a partial way or they may not come together at all. Where there is little or no convergence at all there may be resistance to socialisation into the milieu of the school. Human agency manifesting itself as resistance is recognised in The Coolangatta Statement (1993).
Aboriginal people recognise that education, whether it is rural or urban sited, is a potential source of collective empowerment. However, education structured as schooling also has the potential to deny Indigenous people their heritage.
The project: some findings for discussion
A primary aim of this project was to listen actively to all key stakeholders in the educational process,
particularly to Indigenous parents and Elders of the Aboriginal community, and to teachers and administrators at the Area School. Additionally, a number of services are resident and/or active, or have recently been active in the area, including FAVS, Centrelink, ATSIC funded services, CDEP as well as the Crime Prevention Unit of the Attorney General's Office and the Aboriginal Services Division of the Department of Human Services. The research team consulted with these agencies, with members of the Indigenous community, with a small group of teachers at the local Area school, and with other service providers in the community.
Our aim was to draw upon the expertise of these diverse groups, to acknowledge their very different cultural perspectives and to try to find the commonalities as well as the differences in order to promote a more holistic approach to addressing the problem: how might key people in the community work together to improve the educational experiences of Aboriginal students? What starting points for changing unproductive relations, processes and programs (as evidenced by the high exit rates) can be designed together so that Indigenous youth can experience education as both personally meaningful and culturally satisfying?
As part of the consultative process that was central to Stage One of this project, we sought to listen closely to different groups' responses to three key questions. Each of the groups who participated in these consultations was asked to speak about: a) what they thought worked to keep Aboriginal children and young people involved in school (i.e., What helps Aboriginal kids learn? What's good about school? What's keeping the kids at school?); b) why so many Aboriginal young people do not engage with or participate in educational experiences, i.e., concerns about current practices; and c) ideas for improving the educational opportunities for Indigenous children and young people within the local community. Elsewhere (Sanderson & Allard, in press) we discuss the methodological issues that emerged for us as we endeavoured to listen actively.
The research process was initiated in early 2001. In June, after two consultative trips to the region, we circulated an 'Interim Report' on the preliminary findings and our analysis, and returned again to gain feedback from all participants before completing the Final Report on the project in August 2001.
For the purpose of this paper, and in order to explore the ways that knowledge/power relations are played out in cross cultural communications, and the ways in which a sense of 'community' can operate as an exclusionary rather than a connective process, we will focus on two issues that emerged in our discussions with key participants in the project: a) the 'issue' of 'small classes'; and b) the question of how and where Aboriginal parents might participate in their children's education.
Each of these issues seems to exemplify key differences between the ways in which Aboriginal parents understood schooling practices and the ways that teachers understood these. Firstly, that of 'small groups'. Making sense of these depended on who we spoke to. For example, the following is a discussion that took place with four Aboriginal mothers in response to the question 'So what do you think the school needs to do to help kids, to give them a spurt on?' I think they need to assess them early, like give them a test on their ability, so that they can read and write at a very early age. They don't do that.
They do assess them, but they don't follow through, which is not fair, and they're always ... It might sound like we're criticizing the school but all we want is learning.
We want the support we need.
We have been putting this across to meetings and that - [...] We would prefer them all in the mainstream. We should have got [names another mother] in here too because she's one of the mothers who has her child in special class and that child has been there so long, every year, and she is getting pretty sick and tired of it. Why is she still in that class? I: How old is she? About 9 or 10. So they should be taking her out of there and putting her into the mainstream classes now. [Interview, Aboriginal mothers, March, 2001] Alternately, and in response to the question, 'What's working well for Aboriginal students?' one of the teachers involved in setting up this program said: One of the things that seem to be going really well as far as getting our kids to school and the kids to interact with each other and they really enjoy that, that sort of small group of staff and, yeah, a sense of sort of belonging I suppose and a sense of ownership, having that class instead ... The idea of the small classes, they're basically - initially it was sort of special education classes but a lot of them, the children in them, the only thing lacking is their attendance since early years and that's why they are so far behind. So they sort of do more intensive literacy, numeracy ... And last year, four or five of our students who had been in a [small] class in the 6 to 9 (age group) actually went back into mainstream. So they had sort of caught up, you know, a fair bit in that time with the intensive thing ... [...] I looked at some statistics only a couple of weeks ago on children in [small classes]. I made a statement in the school report that I felt that the small classes had certainly had an impact on the attendance and the principal said 'Well, I'll need statistics' so I looked up five or so children. And one who'd had 90 unexplained absences for the year before had gone down to 14 [absences]. And there were five children who had a very similar pattern. There was one child I looked at who'd - hers was not as good but there were lots of family sort of issues going on with that particular child so ... (Interview with teacher, March, 2001).
Two different conversations concerning the same topic seem to be happening here. How do we 'make sense' of these very different narratives? How do we 'read' these different interpretations of what is (not) working for Aboriginal children as regards 'small classes'?
The very different views concerning 'small classes' presented by the Aboriginal mothers compared to that of the teacher is suggestive to us of a lack of cross-cultural communication. The main purpose of the 'small classes' according to the teacher seemed to be to give the Aboriginal children a 'sense of belonging' and of 'identification' in order for them to feel comfortable enough to want to attend school. Keeping the Aboriginal children together in small groups, rather than 'spreading' them across mainstream classes, was a means of helping them 'adjust' to schooling. Intensive work on literacy and numeracy was part of these classes but not the main reason for their existence. That the small classes
and
• Implement culturally sensitive teaching methodologies, which are based upon Aboriginal and Torres Strait Islander students' preferred ways of learning as well as explicitly teaching them strategies from mainstream schooling.
recognise the socialising intent of compulsory education. In this historical context, State controlled schools were expected to transmit the beliefs and values of mainstream culture. It was recognised that legislation making education compulsory gave the State, through schooling, 'access not just to children, but also to working class families through the schooling of their children.' skills but not necessarily [those] related to school ... So they start off sort of behind the eight ball but they've got a lot of other things to offer. And even just things like being able to understand what classroom rules are and things like that, it will take them longer to adapt to classroom situations ...
putting them in special classes which we do not want our children in the special class. I don't know why they have the special class in the beginning. They thin them out. They don't put them in mainstream classes. I: Right. Why do they have the 'special classes'? Special classes like for those children to catch up, but the special classes offers them more activities rather than giving them curriculum work - all the English and Maths and all that kind of stuff - education work. [...] A special class for me, like when I was going to school in [names regional city], they were the children that had a disability problem, not children that, you know, that's what you call a 'special class'. But today, they're just putting children in special classes just so they can - their education - it's too low. The kids are only going that far - if they can do what they want. They have more free time rather than getting down to the serious business of education. [...] They've got two big girls in the class and all the little ones sitting in it. Year 8s and 9s and primary school kids. Grade 3 and 4.
of being just one or two or three students in another class. [The small classes] are their classes so they've got a sense of identification. A lot of these students have other skills but not necessarily related to school. Like some of them have never seen, you know, maybe haven't got any books and things at home. So they start off sort of behind the eight ball but they've got a lot of other things to offer. Even just things like being able to understand what classroom rules are and things like that, you know, that will often take them longer to adapt to classroom situations ... we do sort of lots of small group work with the Aboriginal students.
|
v3-fos-license
|
2023-05-31T15:11:50.122Z
|
2023-05-29T00:00:00.000
|
258979041
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.scielo.br/j/rgo/a/GKyzfp7WxQbwX9GL7RjqyxP/?format=pdf&lang=en",
"pdf_hash": "4c9b6182a96faf28e3235215f5be3d42bccee0b4",
"pdf_src": "Dynamic",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44359",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "55e1d1af0de909eca93725311f8b15ac3cc53448",
"year": 2023
}
|
pes2o/s2orc
|
Microbiological analysis of tongue dorsum coating in patients hospitalized in ICU
ABSTRACT Objective: Assess quantitatively and qualitatively the tongue coating microbiota in ICU patients. Methods: Analytical observational study; a convenience sample comprising 65 patients was included for medical report analysis and collection of general data, tongue coating assessment through visual inspection and microbiological sample collection for further laboratory analysis. The collection was performed by a single examiner using a sterile swab introduced into and rubbed against the posterior portion of the tongue close to the oropharynx. Results: Most patients (60%) belonged to the female sex, at a mean age of 74.2 years. The main reasons for hospitalization were lung issues (26.2%); the prevailing associated comorbidities were diabetes (43.1%) and high blood pressure (66.2%). The mean length of stay in the ICU was one day. All patients presented tongue dorsum coating. Candida albicans (37%), Streptococcus parasanguinis (26.1%) and Streptococcus mitis (32.6%) were found in 1/3 of the lingual extension. Streptococcus mitis (p=0.0265) was the most prevalent species. Conclusion: There was no significance between the amount of coating and the number of observed species, although all assessed patients presented coating. The most prevalent microorganisms were Candida albicans, Streptococcus parasanguinis and Streptococcus mitis.
INTRODUCTION
Lack of oral care remains a great challenge, as well as its implementation in the assistance routine available for patients in intensive care units (ICU), mainly when it comes to actions focusing on biofilm cleaning and disorganization [1,2].
Oral biofilm formation - with emphasis on teeth, tongue dorsum (coating) and artificial respirator tube (ventilator) - associated with mechanical ventilation in critical patients is closely related to lack of care, cleaning frequency and time in hospital [3,4]. Tongue coating is considered a microbial reservoir of gram-positive and gram-negative bacteria, fungi and viruses. Thus, the efficient investigation of this biofilm allows greater microbial knowledge, a fact that favors the proper use of medication, and helps preventing opportunistic diseases and hospital infections [5][6][7][8].
It is important to emphasize that oral microbial changes take place from 48 to 72 hours after the patient is referred to the ICU, and this favors the emergence of diseases such as nosocomial pneumonia (acquired after hospitalization), pneumonia associated with mechanical ventilation (PAV) and opportunistic diseases, mainly the ones of fungal origin [9,10]. Oral medium promotion and adjustment, as well as the necessary care in hospital daily routines, are essential actions to prevent systemic diseases [11,12].
Accurate systemic investigations carried out by a multidisciplinary team lead to correct diagnostics and treatment plans. Tongue coating microbiological analysis can be an important strategy, but it is not performed as routine in the ICU, be it due to lack of knowledge by hospital teams or to the cost of performing such an investigation for each hospitalized patient since admission [4,13,14].
The aim of the current study was to investigate the oral microbiota in tongue dorsum coating of ICU patients and verify whether there is an association between microorganisms (type and number) and the amount of tongue coating.
All awake patients (not under any type of sedation) and the legal guardians of patients under sedation were informed about the study and signed the Informed Consent Form (ICF) when they agreed to participate in the study.
Collected general data and microbiological analysis results were ethically preserved in order not to cause any sort of embarrassment. Individual information of each patient was provided to the medical team in charge of the study, in case it was necessary.
Experimental design
Observational analytical study about the microbiota in tongue dorsum coating of ICU patients in a reference hospital for the care of systemically compromised and high systemic complexity patients in Brasília City (DF - Brazil).
Sample features
The study included a convenience sample of 65 ICU patients undergoing high-complexity treatment. The coating of the dorsum of the tongue was collected by the calibrated researcher.
Patients diagnosed with COVID were also evaluated.
It is important to emphasize that in the evaluated ICU there are no routine and effective oral hygiene measures.
Because the study was carried out in the ICU, a convenience sample was adopted, since the patients have several associated medical complexities and specific hospitalization conditions, which did not favor the organization of standardized patient groups.
Inclusion and exclusion criteria
Inclusion criteria were ICU patients up to 24-96 hours after admission; awake patients (without any sort of sedation) who agreed to participate in the study and signed the Informed Consent Form (ICF); and legal guardians of sedated ICU patients, or of patients who needed ventilatory support (tracheostomy and mechanical ventilation), who were instructed about the coating collection, agreed to participate and signed the ICF for such a purpose. Exclusion criteria were patients who presented a severe systemic condition that did not make the clinical evaluation possible, and patients submitted to oral hygiene before evaluation and microbiological collection.
Clinical procedures
Lingual coating collection was carried out by a single previously calibrated examiner, in order not to impair the necessary-care routine and the ICU's logistics, from July 2020 to March 2021. The calibration of the tongue coating sampling was previously performed with 15 patients admitted to the ICU - patients who were not part of the study sample. Patients' general data were collected from their medical reports and through clinical evaluations. The following data were collected: sex, hospitalization reason, presence of associated comorbidities; visual inspection of the presence and extension (amount) of tongue dorsum coating [5,6]: 0 - absence/subclinical; 1 - 1/3 of lingual extension; 2 - 2/3 of lingual extension; 3 - whole lingual extension; age and hospitalization time (1 day = 24 hours).
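To make the recorded variables concrete, here is a minimal sketch (not part of the published protocol) of how the collected data and the 0-3 coating score could be structured; the field names and the example record are assumptions for illustration only.

```python
# Hypothetical data structure mirroring the variables listed above.
from dataclasses import dataclass
from typing import List

COATING_SCORE = {
    0: "absence/subclinical",
    1: "1/3 of lingual extension",
    2: "2/3 of lingual extension",
    3: "whole lingual extension",
}

@dataclass
class PatientRecord:
    sex: str                      # "F" or "M"
    age: int
    hospitalization_reason: str
    comorbidities: List[str]      # e.g. ["diabetes", "high blood pressure"]
    days_in_icu: float            # 1 day = 24 hours
    coating_score: int            # key into COATING_SCORE

    def coating_description(self) -> str:
        return COATING_SCORE[self.coating_score]

# Example (made-up) record
p = PatientRecord("F", 74, "lung issues", ["high blood pressure"], 1.0, 1)
print(p.coating_description())   # "1/3 of lingual extension"
```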
Methods that respect biosafety and the infection control methods of the hospital were adopted for coating collection. A sterile swab (Stuart medium - 4 ml, Absorve) - provided by the specialized hospital - was introduced into the posterior portion of the tongue (vallate papilla region), close to the oropharynx, through friction, and further stored in a test tube filled with reagent solution - performed by a single examiner. Each collection was stored in a protection container (Styrofoam box), which was taken to a specialized laboratory within 1 hour after preparation, at most. The collection, storage and shipping methods for microbiological analysis were performed correctly, contributing to the non-loss of the samples.
In the specific mass spectrometry cultivation method, microorganisms are placed on a plate that contains a polymer matrix. The plate is irradiated with a laser that vaporizes the sample, ionizing the molecules, which are then accelerated toward a detector. Depending on the molecule, the time of arrival will be different (time of flight). The data obtained through graphs representing these readings are compared to an algorithmic database that contains a large number of species of clinical relevance - including aerobic microorganisms, anaerobes, mycobacteria, yeasts and filamentous fungi. The procedure is very quick and results are obtained in minutes.
The value of applying mass spectrometry using the MALDI-TOF technique to clinical microbiology is undeniable. It is a simple, fast and highly reliable tool that replaces conventional phenotypic methods for bacterial and fungal identification in the clinical laboratory routine, minimizing the time to perform fundamental diagnoses and optimizing antimicrobial therapy.
The automated antibiogram is a system that guarantees excellence in routine microbial identification and antimicrobial susceptibility testing (ATS) in microorganisms isolated from clinical samples.
Through this examination, it is possible to observe which antibiotics the bacteria found in the analyzed material are sensitive or resistant to, that is: the antibiogram will allow the identification of the most appropriate antibiotic for the treatment of the infection presented by the patient. This analysis was performed in a specialized private laboratory with the support of funding from the development study - a condition of greater logistical ease and support for the results in less time.
Statistical analysis
All data were collected from both the medical reports of patients and the tongue coating microbiological analyses. They were then organized in a spreadsheet (Excel software) for further statistical analysis.
Data were analyzed through the non-parametric Chi-square and Fisher's exact tests at a 5% significance level. The dependent variable was the amount of tongue coating and the independent ones were sex, chronic disease and type of microorganism found in the coating.
Descriptive analysis of all data was carried out. Subsequently, the prevalence of each bacterial species in the total sample and in the groups presenting the smallest and largest amounts of coating was calculated. Chi-square and Fisher's exact tests were used to analyze the association between the presence of species and the amount of tongue coating. The amount of coating was also compared between groups with and without each bacterial species through the Mann-Whitney test. The Kruskal-Wallis test was applied to compare the amount of coating based on the number of observed bacterial species. All analyses were carried out in the R statistical software, at a 5% significance level (R Core Team (2021). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria).
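The analyses were run in R; purely as a rough illustration, the sketch below reproduces the same family of tests (chi-square, Fisher's exact, Mann-Whitney and Kruskal-Wallis) in Python on a hypothetical data layout, since the original scripts and data are not available.

```python
# Illustrative sketch (not the authors' original code): association tests between
# tongue-coating extent and presence of a bacterial species. Column names and the
# small example data frame are hypothetical.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "coating_score": [1, 1, 2, 1, 3, 2, 1, 1],   # 1-3 extent codes
    "s_mitis":       [1, 0, 0, 1, 0, 0, 1, 0],   # 0/1 presence flag
    "n_species":     [2, 1, 1, 3, 0, 1, 2, 1],   # number of species found
})

# Chi-square / Fisher's exact: coating extent (1/3 vs more) x species presence.
table = pd.crosstab(df["coating_score"] > 1, df["s_mitis"])
chi2, p_chi2, _, _ = stats.chi2_contingency(table)
_, p_fisher = stats.fisher_exact(table)           # exact test for small counts

# Mann-Whitney: coating amount in patients with vs without the species.
with_sp = df.loc[df["s_mitis"] == 1, "coating_score"]
without_sp = df.loc[df["s_mitis"] == 0, "coating_score"]
_, p_mw = stats.mannwhitneyu(with_sp, without_sp, alternative="two-sided")

# Kruskal-Wallis: coating amount across groups defined by the number of species.
groups = [g["coating_score"].values for _, g in df.groupby("n_species")]
_, p_kw = stats.kruskal(*groups)

print(p_chi2, p_fisher, p_mw, p_kw)  # compare each against the 5% significance level
```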
RESULTS
The sample comprised 65 patients with a mean age of 74.2 years (minimum 43 years, maximum 97 years), 60% of whom were of the female sex (table 1). Mean hospitalization time was one day, and it could vary from less than one day up to four days. The main reasons for hospitalization were associated with lung issues. All patients had tongue dorsum coating, which prevailed in 1/3 of the lingual extension (70.8%), followed by 2/3 of the lingual extension (24.6%) and the whole lingual extension (4.6%).
There was no significant association between the amount of coating and the number of observed bacterial species (p>0.05) (table 2). The prevalence of patients with at least one species reached 83.1%. The prevalence of patients with two species or more was 44.6%. The prevalence of three or four species was 6.2%.
Table 3 shows the prevalence of bacterial species in the total sample, based on the amount of coating. In total, 15 different bacterial species were found in the coating (figure 3), and the most common ones were Candida albicans (38.5% of patients), Streptococcus parasanguinis (26.2% of patients) and Streptococcus mitis (23.1% of patients). The following species were also found in more than 5% of patients: Streptococcus vestibularis, Staphylococcus aureus, Klebsiella pneumoniae, Candida tropicalis and Streptococcus salivarius. The species Streptococcus mitis was only found in patients recording the lowest amount of coating (1/3 of the lingual extension); it was observed in 32.6% of them (p<0.05). There was no significant difference in the amount of coating due to the number of observed species (p>0.05) (table 4).
The amount of coating was significantly lower in patients with Streptococcus mitis than in the ones lacking this bacterium (p<0.05) (table 5). It is important to reinforce the fact that this species was more common in patients recording the lowest amount of coating (p<0.05).
DISCUSSION
Hospitalization is a huge issue for critical hospitalized patients who suffer from some sort of systemic disorganization or from changes in the control of existing comorbidities. It is essential to point out that most patients in intensive care units are elderly people with some type of systemic complexity who need full support in order to help treatment and recovery [15,16], as shown in the current study. The difficulty in having a standardized oral hygiene routine in the ICU, with emphasis on the tongue dorsum, associated with hospitalization time, favors coating accumulation, which is seen as a complex bacterial niche closely associated with hospital infections [5,6,9,11,13].
After 24- to 72-hour hospitalization in the ICU, one finds oral microbiological change due to the prevalence of gram-negative bacteria and of bacteria associated with systemic conditions featured by respiratory-profile infections. Therefore, it is necessary to intensify oral healthcare right at the patient's admission or, mainly, when they are referred to the ICU [17][18][19][20].
The presence of associated comorbidities is another relevant factor for critical and elderly patients, since any sort of disorganization can lead to new systemic losses and impair their recovery process [16][17][18][19].
As for the current study, the total sample comprised elderly patients; it emphasized the main systemic issues associated with diabetes and high blood pressure, which are the most prevalent conditions in this age group and which need interdisciplinary and personal assistance, mainly in the ICU [6,9,18].
Since the study was carried out within the most critical period of the pandemic (COVID-19), access to intensive therapy units and to critical patients was limited and biosafety strategies were strict [21]. However, it is very difficult to measure the oral hygiene of infected patients; lack of proper cleaning was evidenced by coating accumulation on the tongue dorsum (2 patients).
The proper microbiological investigation of the oral biofilm, mainly in ICU patients, is not yet part of hospital routine. It is only requested when there is any doubt about the diagnosis or the need for a specific investigation [4,13,19]. Thus, this microbiological investigation strategy can contribute to better pharmacological management and treatment strategies.
Biofilm complexity, as well as the performance of conducts aimed at being less harmful to intensive care unit patients, is a valuable path for clinical and research activities [7][8][9][10]19,22]. It is so because only few studies have emphasized such an oral association with hospital infections, mainly in Brazil.
The possibility of using techniques and resources with high diagnostic power, at short response time, allows more accuracy and evidence in the microbiological results of the investigated patients. The use of specific culture through spectrometry and the evaluation of antimicrobial sensitivity are effective in the general analyses of bacterial species, and can be investments in hospitals, as well as used with the support of a specialized laboratory [23][24][25].
The routine performance of tongue coating microbiological analysis can help change the provided care, innovate accurate diagnosis and support the implementation of individual hygiene routines focused on excellent care provided to ICU patients - a condition not observed in the current study, because all assessed patients had tongue coating [4,9].
The presence of opportunistic microorganisms, such as Candida albicans - which was found in most of the assessed patients [26] - Candida tropicalis and Candida glabrata, occurs in ICU patients due to low immunity and to the use of medication with high power to modify the human body's defense strategies.
Many microorganisms found in patients in the present study belong to the oral ecosystem, such as Streptococcus parasanguinis, Streptococcus vestibularis, Streptococcus salivarius, Streptococcus gordonii and Streptococcus cristatus [2]. They must be there, since they allow greater balance and favor biofilm formation [27].
Nosocomial pneumonia is the most common hospital infection; it is acquired after hospitalization and is associated with mechanical ventilation (PAV), being mainly related to bronchoaspiration of gram-negative microorganisms found in the biofilm and in tongue coating. Therefore, actions focused on adjusting the oral medium and disorganizing the biofilm, such as using chlorhexidine 0.12% as routine during hospitalization, can help improve the care provided to ICU patients and decrease mortality rates [3,11,12,22,28,30].
The complexity of tongue coating is featured by the variety of microorganisms in it, with emphasis on the need for specific investigations, as approached in this specific study, to help improve the oral care provided to ICU patients. It can avoid infection conditions, opportunistic diseases and the worsening of the systemic conditions of these patients [6,7,19].
Deficiency in hygiene contributes to the accumulation of biofilm and of the microbial reservoir associated with nosocomial infections. It is important to emphasize that there are professionals who are unprepared to perform this activity, requiring effective educational actions and constant training in the hospital [4,6,9,22,29].
Based on the herein observed microbiological overview of the oral cavity, it is important to emphasize the need to implement more investigative measures regarding tongue coating microbiology, as well as to implement guidelines/orientation concerning activities aimed at the correct cleaning of critical patients' tongue dorsum.
This specific study presented some limitations, such as the moment of the pandemic in which it was carried out, the difficulty of access to patients admitted to the ICU for a larger sample, the acceptance to participate in the study (because some thought that the microbiological analysis could be a more invasive procedure) and the length of stay in the ICU, considering that the most important changes in oral microbiology are observed with prolonged hospitalizations.
CONCLUSION
All assessed patients presented tongue coating on the dorsum, but its extension and location varied. However, there was no significance between the amount of coating and the number of observed species. The most prevalent microorganisms found in the tongue coating of the assessed patients were Candida albicans, Streptococcus parasanguinis and Streptococcus mitis. This last species was found in 1/3 of the lingual extension.
Collaborators
AF Miranda and ALF Arruda: methodological organization, data collection, interpretation of statistical analyses and manuscript writing. DC Peruzzo: methodological organization, interpretation of statistical analyses and manuscript writing.
Source of Funding
FAPDF (Fundação de Apoio à Pesquisa do Distrito Federal), Brasília - process n. 193.001504/2017.
Table 1 .
Descriptive analysis of features recorded for participants in the sample (n=65).
Table 2 .
Prevalence of the number of bacterial species in the microbiological analysis of participant's tongue coating (n=65).
Table 3 .
Prevalence of bacterial species detected in the microbiological analysis of patients' tongue coating (n=65).
Table 4 .
Amount of tongue coating due to the number of bacterial species (n=65).
|
v3-fos-license
|
2019-04-02T13:05:43.385Z
|
2017-03-16T00:00:00.000
|
90276727
|
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1515/tjb-2016-0140",
"pdf_hash": "7f9c32a86aaf671f7520bc9e389640b348bd51dd",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44360",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "323be13d96e3d0d23a6c6cad1710d0d1ccdce908",
"year": 2017
}
|
pes2o/s2orc
|
Concentrations of circulating adiponectin and adipocyte-fatty acid binding protein in patients with three-vessel coronary artery disease: a comparison with coronary lesion complexity as characterized by syntax score
Objective: In this study, we investigated the correlation between coronary lesion complexity, as characterized by syntax score (SS), and circulating adiponectin and adipocyte-fatty acid binding protein (A-FABP4) concentrations in the presence of stable coronary artery disease affecting three coronary vessels (three-vessel stable CAD). Methods: The study groups consisted of 41 control subjects (28 males and 13 females, non-CAD group) and 115 affected subjects (79 males and 36 females, three-vessel stable CAD group). We divided the three-vessel stable CAD group into tertiles according to SS and estimated circulating concentrations of adiponectin and A-FABP4. Results: We did not find any correlation between coronary lesion complexity and either adiponectin and/or A-FABP4. We found the A-FABP4 concentrations of the non-CAD group to be lower than those of the groups with three-vessel stable CAD (p < 0.001). Adiponectin concentrations were lower in DM subjects (p < 0.05 for each group), whereas A-FABP4 concentrations were found to be higher (p < 0.05 for each group) compared with non-DM subjects in intra-group comparisons. Conclusion: Adiponectin is not a suitable parameter for demonstrating the existence of CAD or predicting coronary lesion complexity. A-FABP4 is more useful for proving the presence of CAD, but A-FABP4 concentrations are not correlated with the severity of CAD.
Introduction
Coronary artery disease (CAD) is a progressive inflammatory disease, with atherosclerosis playing a role in its etiology. Cardiovascular diseases (CVDs), including CAD, are the most prevalent diseases worldwide and the leading causes of mortality in developing countries and in Turkey [1]. The prevalence of CAD is reported to be between 4.4% and 10% in Turkey [2]. Accumulating evidence supports a critical role for inflammation in the pathogenesis of CAD and other manifestations of atherosclerosis [3]. Systemic blood markers of inflammation such as white blood cell count and C-reactive protein have emerged as conventional and powerful predictors of coronary events [4]. The search for more convenient and specific parameters related to CAD continues, and recently, increasing interest has been developing in the role of adipokines in the triggering of this systemic inflammatory response.
Adipose tissue has been traditionally considered a fat-storage organ, but it is now known that adipose tissue is a complex and highly active metabolic and endocrine organ [5]. It expresses and secretes a variety of metabolites, hormones, and cytokines, including adipokines. These adipokines can target distant organs and have major effects on body weight, energy storage, insulin sensitivity, glucose regulation, and the inflammatory response. Evidence also supports the notion that the adipose tissue of different body compartments has different adipokine secretion profiles [6]. Among the fat storage compartments in the body, visceral fat has been found to be an important source of proinflammatory adipokines [7]. In recent years, there has been a growing interest in the potential role of adipokines in contributing to the inflammatory processes involved in the development of CAD [8].
A-FABP4 is a cytoplasmic protein that binds with saturated and unsaturated fatty acids to control the distribution of fatty acids in various inflammatory response and metabolic pathways [15]. It is one of the most abundant proteins in adipocytes and macrophages [16,17]. Recent studies showed that A-FABP4 plays an essential regulatory role in energy metabolism and inflammation [18].
Clinical studies indicate that serum and plasma concentrations of adiponectin are lower and those of A-FABP4 are higher in individuals with CAD [19,20]. The relationships between these parameters and the severity of CAD have been investigated previously. Different scoring systems have been used to measure the degree of severity of CAD [20,21]. Syntax score (SS) is currently a widely used scoring system. In the literature, we observed that there are insufficient numbers of studies examining the relationship between adiponectin and A-FABP4 concentrations and SS [22]. The purpose of this study was primarily to evaluate the correlation, if any, between the SS, which represents the coronary lesion complexity in three-vessel stable CAD groups, and adiponectin and A-FABP4 concentrations. SS is used to estimate the extent and severity of CAD through an assessment of the number of angiographically detected coronary lesions, their functional effects, locations, and complexity. The following variables are taken into consideration for SS estimates: coronary dominance; location at bifurcation, trifurcation, or ostial lesions; tortuosity; calcifications; the content of the thrombus; presence of diffuse disease; and elongated lesions [22]. Patients with three-vessel stable CAD can be divided into tertiles: a low score, defined as ≤ 22 (low SS group); an intermediate score, defined as 23-32 (intermediate SS group); and a high score, comprising individuals with scores ≥ 33 (high SS group).
The association of diabetes mellitus (DM) with CAD is a well-known phenomenon. Moreover, it has been reported that there is a relationship between insulin sensitivity and adipokine concentrations [19]. Consequently, we aimed to evaluate serum adiponectin and A-FABP4 concentrations in CAD patients and their interaction with the presence of DM. Furthermore, we investigated whether there were correlations between adiponectin and/or A-FABP4 concentrations and SS values, along with gender, age, body mass index (BMI), current smoking, blood pressure, triglyceride (TG), total cholesterol (TC), low density lipoprotein-cholesterol (LDL-C), high density lipoprotein-cholesterol (HDL-C), fasting blood glucose (glucose), creatinine (Cr) and high sensitive C-reactive protein (hsCRP).
Systematic search strategy
In our cross-sectional study, the 3-sCAD group consisted of 115 subjects (79 males, 36 females) who were evaluated at the Department of Cardiology, Koşuyolu Hospital, as meeting the following definition of symptoms: no episodes of angina at rest but angiographically demonstrated organic stenosis of > 50% in three of the main coronary arteries. The non-CAD group consisted of a total of 41 subjects (28 males, 13 females), none of whom had any cardiac disorder or coronary disease. The study was approved by the Ethics Committee of Haseki Hospital. We carried out all procedures according to institutional ethical standards and obtained written consent of the volunteers.
Excluding criteria
None of the cases in this study had acute coronary disease or acute myocardial infarction (MI). In addition, we excluded patients who had undergone percutaneous coronary intervention (PCI)/coronary artery by-pass grafting (CABG) or had evidence of hemodynamically significant valvular heart disease, surgery, or trauma within the previous month, known cardiomyopathy, known cancer, advanced liver and/or kidney diseases, infection and inflammatory diseases.
Syntax score analysis
The SS was calculated using dedicated software (SYNTAX Score Calculator Version 2.11; Cardialysis Clinical Trial Management-Core Laboratories Company, Rotterdam, The Netherlands) that integrates the following characteristics: first, the number of lesions, with their specific weighting factors based on the amount of myocardium distal to the lesion according to the score of Leaman et al. [23], and second, the morphologic features of each single lesion, as previously reported [24]. All angiographic variables pertinent to the SS calculation were computed by two experienced cardiologists who were blinded to the baseline clinical characteristics, procedural data, clinical outcomes, and previously calculated SS. From the diagnostic angiogram, each coronary lesion producing ≥ 50% diameter stenosis in vessels ≥ 1.5 mm is scored separately and added together to provide the overall SS. The scores of the three-vessel stable CAD group were then divided into tertiles, as follows: a low score defined as ≤ 22 (low SS group), an intermediate score as 23-32 (intermediate SS group) and a high score as ≥ 33 (high SS group) [25].
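As a concrete illustration of the tertile grouping just described, the short sketch below classifies hypothetical SYNTAX scores using the thresholds taken from the text (≤ 22, 23-32, ≥ 33); the function names and example values are ours, not part of the original analysis.

```python
# Minimal sketch of the SS tertile grouping; only the thresholds come from the text.
from typing import List, Dict

def syntax_tertile(ss: float) -> str:
    """Classify a SYNTAX score into the low / intermediate / high groups used here."""
    if ss <= 22:
        return "low SS"
    elif ss <= 32:
        return "intermediate SS"
    else:
        return "high SS"

def group_patients(scores: List[float]) -> Dict[str, int]:
    """Count how many patients fall into each SS tertile."""
    counts = {"low SS": 0, "intermediate SS": 0, "high SS": 0}
    for ss in scores:
        counts[syntax_tertile(ss)] += 1
    return counts

# Example with made-up scores
print(group_patients([14.5, 23.0, 28.5, 35.0, 41.0]))
# {'low SS': 1, 'intermediate SS': 2, 'high SS': 2}
```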
Patient characteristics and angiographic findings of all groups
Information regarding the genders, ages, smoking habits of the patients, as well as the medicines they use, was collected through standardized face-to-face interviews performed by a single physician. Smoking status was classified as positive if the patient was currently a smoker. Anthropometrical measurements, including height and body weight, were performed according to standardized procedures, and the BMI was calculated as weight divided by height squared (kg/m 2 ). The presence of DM in all groups was determined as defined by the World Health Organization study group. After the blood pressure of all subjects was measured with an automatic sphygmomanometer (using a cuff size fitted to the upper arm perimeter), right arm blood pressure was measured twice, and the average value was calculated for the analysis. Measurements were performed after a minimum of 5-min rest. Medical history was collected through a standardized questionnaire [26]. Selective coronary angiography of patients was performed by two experienced cardiologists using the standard Judkins technique via the right femoral approach, and the angiographically determined coronary artery lesions were identified.
Blood sampling and measurement of analytes
For the patients' serum adiponectin, A-FABP4 and other routine biochemical measurements, serum and plasma samples were drawn into vacutainers containing gel separator and K2EDTA (Becton, Dickinson and Company, NJ, USA), respectively, after a period of 12 h of fasting. Within 30 min after being obtained, the samples were centrifuged (1000 × g, 10 min). The eluted sera and plasmas were aliquoted into portions and preserved until the day of analysis under laboratory conditions at −70°C. All the biochemical examinations except hsCRP were performed by spectrophotometric methods in an AU2700 biochemical auto-analyzer (Beckman Coulter, Inc., USA), while hsCRP was analyzed by the immunoturbidimetric method in the same auto-analyzer; HbA1c analysis was performed using the ARKRAY ADAM A1c HA-8180V Automatic Glycohemoglobin Analyzer (ARKRAY Co. Ltd., Inc. Japan), which exploits the principles of ion exchange high performance liquid chromatography for its analyses. Adiponectin and A-FABP4 were analyzed using an ELX-50 micro-plate washer and an ELX-800 ELISA absorbance micro-plate reader (BioTek U.S, Winooski, VT, USA), according to the enzyme linked immunosorbent assay (ELISA) methodology. The analytic sensitivity, intra-assay CV, and inter-assay CV for adiponectin were 0.026 μg/mL, 4.9%, and 6.5%, respectively. The analytic sensitivity, intra-assay CV, and inter-assay CV for A-FABP4 were 0.05 ng/mL, 2.5%, and 3.9%, respectively.
Statistical analysis
The findings from our study were evaluated using the SPSS (Statistical Package for Social Sciences) 21 package program (IBM, New York, NY, USA). The mean, standard deviation (SD), median, minimum (min) and maximum (max) values were calculated as descriptive statistics. The quantitative data were independent and conformed to normal distribution characteristics, as verified by the Kolmogorov-Smirnov test, and the groups were homogeneous according to the Levene test. Thus, one-way ANOVA was applied to investigate the differences between the groups in terms of the measured parameters, with a 95% confidence interval. The probability value cut-off for significance (p) was set at < 0.05. The Tukey HSD and Student's t tests were applied for one-to-one group comparisons with a 95% confidence interval, and the p-value for significance was set at 0.017 for the three groups using the Bonferroni correction. Pearson correlation analysis was applied to evaluate the relationship between the parameters. The power of this study was calculated using the PASS 12 package program (NCSS, UT, USA).
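The study itself used SPSS; purely as an illustration of the pipeline described above (one-way ANOVA, Tukey HSD pairwise comparisons and Pearson correlation), a hedged Python sketch with hypothetical data might look like this.

```python
# Illustrative sketch only: group labels and the small example arrays are hypothetical.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical adiponectin concentrations (ug/mL) for the four groups.
non_cad = np.array([8.1, 9.4, 7.7, 10.2, 8.8])
low_ss  = np.array([7.9, 8.5, 9.1, 7.2, 8.0])
mid_ss  = np.array([8.3, 7.6, 9.0, 8.8, 7.4])
high_ss = np.array([7.5, 8.9, 8.2, 7.8, 9.3])

# One-way ANOVA across the four groups.
f_stat, p_anova = stats.f_oneway(non_cad, low_ss, mid_ss, high_ss)

# Tukey HSD for one-to-one group comparisons.
values = np.concatenate([non_cad, low_ss, mid_ss, high_ss])
labels = (["non-CAD"] * 5 + ["low SS"] * 5 +
          ["intermediate SS"] * 5 + ["high SS"] * 5)
tukey = pairwise_tukeyhsd(values, labels, alpha=0.05)

# Pearson correlation, e.g. adiponectin vs. SYNTAX score within the CAD patients.
ss_scores   = np.array([12, 18, 25, 30, 36, 40, 21, 27, 33, 15])
adiponectin = np.array([8.0, 7.5, 8.2, 7.9, 8.4, 7.1, 8.8, 7.6, 8.1, 7.3])
r, p_corr = stats.pearsonr(ss_scores, adiponectin)

print(f"ANOVA p={p_anova:.3f}")
print(tukey.summary())
print(f"Pearson r={r:.2f}, p={p_corr:.3f}")
```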
Descriptive analysis
First, the power of this study, which contains four groups, was calculated to be 1. The desired power was set at 80%, and the type I error (p) was set at 0.05. The information for each group and the descriptive statistics of the measured parameters are given in Table 1. In both groups, sex and age had no significant effect on the measured parameters (p = 0.144 and p = 0.985, respectively). Moreover, among all groups, there were no differences that could be attributed to BMI (p = 0.782), current smoking (p = 0.976), hypertension (p = 0.848), DM (p = 0.951) or drug use.
When we evaluated all groups as to serum concentrations of adiponectin and A-FABP4, we did not find any significant difference among the groups for adiponectin (p > 0.05) (Figure 1A). On the other hand, it was found that the non-CAD group had statistically lower concentrations of A-FABP4 compared to the CAD groups (p < 0.001) (Figure 1B).
In terms of the presence of DM, while adiponectin concentrations in all groups were lower in DM subjects when compared to N-DM subjects (p < 0.05 for each group) (Figure 2A), A-FABP4 levels were found to be higher (p < 0.05 for each group) (Figure 2B). While there was no significant difference between the adiponectin concentrations of the non-CAD group with DM compared to the three-vessel stable CAD group with DM (p = 0.622), a difference was found between the A-FABP4 concentrations of the non-CAD group with DM and all of the three-vessel stable CAD groups (p < 0.001).
Group comparison
For all groups, within each group, the adiponectin (p < 0.001 for each group) and A-FABP4 (p < 0.001 for each group) concentrations of male patients were observed to be lower compared to the levels of the women. While there were no differences among all the groups regarding serum TC, LDL-C, HDL-C, glucose, HbA1c and Cr concentrations (p < 0.001 for each of them), the serum TG concentrations of the non-CAD and low SS groups were found to be lower (p = 0.001) than those of the other groups. Another observation was that the hsCRP concentration of the non-CAD group was lower than those of the three-vessel stable CAD groups (p < 0.001). The results for adiponectin, A-FABP4 and other biochemical parameters of all of the patient groups and their statistical assessments are given in Table 2.
The intra-group and inter-group adiponectin and A-FABP4 concentrations of all the groups, including comparisons made according to presence or absence of DM, are presented in Table 3.
ARBs, Angiotensin receptor blockers; BMI, body mass index; CAD, coronary artery disease; LAD, left anterior descending artery; LCX, left circumflex artery; Non-CAD, there is no CAD; SS, syntax score (Low SS, syntax score ≤ 22; Intermediate SS, syntax score = 23-32; High SS, syntax score ≥ 33); LMCA, left main coronary artery; RCA, right coronary artery. All values are expressed as n (the number of persons) and percent (%) or mean ± standard deviation. p < 0.05 is set as the significance level.
Discussion
We investigated the relationship between adiponectin and A-FABP4 concentrations and the severity of CAD. To that end, the three-vessel stable CAD group was divided into tertiles: a low score defined as ≤ 22 (low SS group), an intermediate score as 23-32 (intermediate SS group) and a high score as ≥ 33 (high SS group). The SS is an angiographic scoring system based on coronary anatomy and lesion characteristics, such as presence of total occlusion, bifurcation or trifurcation, angle and involvement of branch vessels, calcification, lesion length, ostial location, tortuosity and presence of thrombus. The SS not only quantifies lesion complexity but also predicts early and late outcomes after PCI in patients with multivessel disease [22]. In previous studies, the SS was generally used to predict early and late development of major adverse cardiac events (MACE; defined 'as a composite of death, MI or any repeat revascularization' in patients undergoing either PCI or CABG for multivessel involvement) [27]. At the end of our study, a correlation between the patient groups' SSs and adiponectin and A-FABP4 concentrations could not be found. Our results are similar to those of some of the studies that have been conducted previously, in which no association was observed between adiponectin concentrations and coronary heart disease (CHD) events. However, our results differ from those of other previous studies that reported that high adiponectin concentrations in the circulation may be associated with an increased risk of CHD recurrence and all-cause/CVD mortality [19] and that CAD patients might have a lower concentration of adiponectin [28].
Our data have once again demonstrated that A-FABP4 concentrations of the groups with CAD are significantly different from the A-FABP4 concentrations of a control group [29], but the situation is different for adiponectin. Adiponectin concentrations of the groups with CAD are not significantly different from those of the control group. In some experimental studies, it was demonstrated that adiponectin displays insulin-sensitizing, anti-inflammatory, anti-thrombotic, anti-atherogenic and cardioprotective
properties. The concentration of adiponectin is decreased in patients with DM and CAD. On the other hand, other CAD-related studies are inconsistent with the aforementioned observations, and some previous meta-analysis studies failed to demonstrate this effect [26]. In fact, some studies reported, surprisingly, that high adiponectin concentrations are associated with increased risk of recurrent cardiovascular events [30] and mortality in patients with myocardial infarction [31] and heart failure [32]. Therefore, it may be difficult to use adiponectin for individual patients to predict risk of cardiovascular disease or mortality.
In terms of the effect of DM, the adiponectin concentrations of DM subjects in all groups were lower than those of the N-DM subjects. In the inter-group comparisons, no difference was detected between the non-CAD group with DM and any of the three-vessel stable CAD groups with DM. This result suggests that low adiponectin concentrations could be caused by the presence of DM rather than CAD. Similarly, most previous studies found lower adiponectin concentrations in DM patients [33]. For all of the groups, when the intra-group A-FABP4 concentrations were compared according to the presence or absence of DM, the A-FABP4 concentrations of the DM subjects were higher than those of the N-DM patients. In the inter-group comparison, the A-FABP4 concentrations of all of the three-vessel stable CAD groups with DM were higher than those of the non-CAD group with DM. According to this result, it was concluded that the increase in A-FABP4 concentration was influenced not only by DM but also, simultaneously, by CAD. In previous studies, A-FABP4 concentrations were found to be higher in multi-vessel CAD patients than in non-CAD and/or one-vessel CAD patients, and higher in DM patients than in non-DM patients [34,35]. Moreover, a negative correlation was determined between the adiponectin and A-FABP4 concentrations in the non-CAD group with DM, in accordance with our results showing that, within the non-CAD group, the adiponectin levels of the DM subjects were lower and the A-FABP4 concentrations higher than those of the N-DM patients.
In our study, we did not detect an association between adiponectin or A-FABP4 concentrations and the coronary lesion complexity of the three-vessel stable CAD groups. However, unlike adiponectin, the mean A-FABP4 concentrations of all patient groups were higher than the control group's mean A-FABP4 concentration. In other studies, A-FABP4 concentrations were reported to be higher in multi-vessel CAD patients and closely related to the development of atherosclerosis in humans. The adiponectin and A-FABP4 concentrations of the male subjects were lower than those of the female subjects; similar results have also been obtained in earlier studies [11,33]. HsCRP has been reported to be an acute phase reactant and inflammation marker associated with cardiovascular (CV) risk [36]. In this study, the hsCRP level showed a negative correlation with the adiponectin concentration and a positive correlation with the A-FABP4 concentration in all study groups. This result supports the roles of adiponectin and A-FABP4 in inflammation and is consistent with results obtained in other studies [1,37]. BMI was positively correlated with A-FABP4 concentrations and negatively correlated with adiponectin concentrations. In addition, there was a negative correlation between adiponectin and TGs in the three-vessel stable CAD groups, consistent with the results of previous studies [38][39][40]. Moreover, hsCRP, BMI, glucose and LDL-C were determined to be independent risk factors in terms of coronary lesion complexity.
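For readers who want a concrete picture of this kind of analysis, the Python sketch below computes pairwise Spearman correlations and a logistic regression screening candidate independent risk factors. It is an illustrative assumption rather than the study's actual analysis code: the synthetic data, column names and the binary "high lesion complexity" outcome are all hypothetical stand-ins.

```python
# Illustrative sketch of the kind of correlation and risk-factor screening
# described above; the synthetic data below stand in for real patient records.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120  # hypothetical number of patients
df = pd.DataFrame({
    "hsCRP": rng.gamma(2.0, 2.0, n),
    "adiponectin": rng.gamma(3.0, 3.0, n),
    "A_FABP4": rng.gamma(2.5, 8.0, n),
    "BMI": rng.normal(28, 4, n),
    "glucose": rng.normal(110, 25, n),
    "LDL_C": rng.normal(120, 30, n),
    "high_complexity": rng.integers(0, 2, n),  # stand-in for high SS vs not
})

# Pairwise Spearman correlations (rank-based, robust to skewed biomarker values)
for x, y in [("hsCRP", "adiponectin"), ("hsCRP", "A_FABP4"),
             ("BMI", "A_FABP4"), ("BMI", "adiponectin")]:
    rho, p = spearmanr(df[x], df[y])
    print(f"{x} vs {y}: rho = {rho:.2f}, p = {p:.4f}")

# Logistic regression screening candidate independent risk factors for a
# binary indicator of high coronary lesion complexity.
X = sm.add_constant(df[["hsCRP", "BMI", "glucose", "LDL_C"]])
print(sm.Logit(df["high_complexity"], X).fit(disp=0).summary())
```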
In some studies, it was reported that the adiponectin concentration is particularly affected by gender, age, BMI, glucose, TC, LDL-C, DM and use of drugs such as statins, whereas the A-FABP4 concentration is affected by gender, age, obesity, high blood pressure, DM and use of drugs such as aspirin, statins and antihypertensives. The characteristics of the patients involved in this study, including age, gender, BMI, fasting blood glucose, TC, LDL-C, use of drugs, and presence of DM and HT, were similar among all groups.
We think that the contradictory results among the studies of adiponectin concentrations in multi-vessel CAD may originate from adiponectin having different molecular structures and/or from the uncertainties related to the active form of adiponectin in the circulation. Indeed, previous studies reported that HMW adiponectin may be the active form of this protein.
On the other hand, a different study proposed that HMW adiponectin represented a precursor pool that can be activated, with the cleaved form, i.e. LMW adiponectin, being responsible for the effect on AMP kinase activity. In summary, biological activities among these isoforms are controversial. Moreover, in one study, patients with ACS were reported to have significantly lower plasma levels of adiponectin than those with stable CAD. According to this result, the absence of ACS patients in our group could also be another factor.
In contrast to the adiponectin results, all the studies, including ours, have demonstrated similar results with respect to A-FABP4 concentrations. We think that the agreement among these studies may reflect the fact that A-FABP4's structure is more homogeneous than adiponectin's.
Thus, we concluded that there was no correlation between the coronary lesion complexity in three-vessel stable CAD subjects and the adiponectin and A-FABP4 concentrations. In addition, we determined that the adiponectin concentrations correlate with DM, but not with three-vessel stable CAD. However, we found that A-FABP4 concentrations correlated with both DM and three-vessel stable CAD and were much higher in patients having both diseases, that is, both three-vessel stable CAD and DM. The correlation of adiponectin and A-FABP4 with hsCRP shows that these two analytes are related to inflammation.
Regarding the limitations of our study, it neither contained large patient groups nor was carried out as a multi-center study. Additionally, we think that a prospectively designed study should include ACS subjects in addition to the stable CAD subjects; in such a study, the correlation between the SS, adiponectin and A-FABP4 and their predictive value in terms of MACE could be examined. Finally, it is especially important that future adiponectin-related studies are designed in a way that allows one to evaluate the effects of the various molecular forms of adiponectin.
In conclusion, for the three-vessel stable CAD subjects, no correlation could be found between the coronary lesion complexity and the adiponectin and A-FABP4 concentrations. In addition, adiponectin was only correlated with DM, not with stable CAD. Also, A-FABP4 was concluded to be correlated with both DM and stable CAD and to be at a higher concentration in three-vessel stable CAD subjects with DM. More studies should be performed to accumulate additional data on this subject.
|
v3-fos-license
|
2019-04-15T13:08:28.539Z
|
2016-12-25T00:00:00.000
|
55460587
|
{
"extfieldsofstudy": [
"Engineering"
],
"oa_license": "CCBYSA",
"oa_status": "HYBRID",
"oa_url": "http://www.insightsociety.org/ojaseit/index.php/ijaseit/article/download/1371/900",
"pdf_hash": "8e8040146b5f6d3c864d0815976b630c98f6986c",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44361",
"s2fieldsofstudy": [
"Computer Science",
"Business"
],
"sha1": "b3aee9b65c9be4c292f9bca43e5265284c26d536",
"year": 2016
}
|
pes2o/s2orc
|
An Empirical Study of Information Security Management Success Factors
— Information security management (ISM) is a continuous, structured and systematic security approach to managing and protecting the organisation's information from being compromised by irresponsible parties. To ensure the information remains secure, many organisations have implemented ISM by establishing and reviewing information security (IS) policy, processes, procedures, and organisational structures. Despite these efforts, security threats, incidents, vulnerabilities, and risks are still plaguing many organisations. Lack of awareness of ISM effectiveness, due to a low understanding of the success factors, is one of the major causes of this phenomenon. This study aimed to address this subject by first identifying the ISM key factors from the existing literature and then confirming the factors and discovering other related factors from the practitioners' perspective. The study used a qualitative method, adopting semi-structured interviews involving nine practitioners. The data were analysed using the content analysis technique. Through the analysis, the study validated several ISM factors and their elements that contribute to the success of ISM. The findings provide practitioners with a better understanding of ISM key factors and could guide them in implementing proper ISM.
I. INTRODUCTION
In the era of globalisation, protection of information is critical in order to ensure business continuity [1]. Addressing security breaches has become a challenge to organisations [2]. Information Security (IS) is a concept that is related to protecting information in order to preserve the value it has for organisations and individuals [3], [4]. Information's confidentiality, integrity, availability, authenticity, accountability, and reliability are ensured through IS [5], [6], [7], [8], [9], [10]. Organisations that are lacking in IS are usually prone to a large number of security breaches and incidents [11]. Recognising this, many organisations have put substantial effort into managing and handling the security of their information. They have implemented Information Security Management (ISM) initiatives by reviewing IS processes, policies, procedures, controls and organisational structures. ISM is a comprehensive approach that involves the implementation of activities and controls to protect the organisation's information assets from any intrusion [7], [12], [13], [14]. In spite of these efforts, organisations are still exposed to information security threats, incidents, vulnerabilities and risks [6], [8], [15]. One of the contributing reasons is ineffective current ISM practices [16]. Organisations often emphasise the technical aspects without appropriate consideration of the non-technical aspects when implementing ISM [17], [18]. They often embark on these initiatives without knowing the key factors that affect their success [19]. Based on the above facts, there is a need to identify the key factors that contribute to the success of ISM. This paper aims to address this issue by identifying and collating the key factors from theoretical and empirical perspectives. The identified factors can be used as guidance for organisations in improving their ISM practices. This paper is organised as follows. The next section presents the ISM factors and elements that were gathered from the literature and the methodology used to collect and analyse the theoretical and empirical data. Section III presents the findings of the analysis. Finally, section IV concludes the paper by summarising the findings and outlining future work.
II. MATERIALS AND METHODS
ISM is an ongoing process that involves planning, implementing, monitoring and improving IS activities [8], [9], [20]. In order to ensure the information is well maintained and the organisation's mission, vision and goals can be achieved, the organisation should have an effective ISM.
The main process in ISM is risk management, which consists of risk assessment and risk treatment activities [9]. The purpose of risk management is to identify, analyse and evaluate IS risks, as well as to implement actions to modify and control the risks [24], [25], [26]. Besides the risk management process, business continuity management (BCM) also contributes to the success of ISM [8], [22]. The goal of BCM is to ensure the continuity of the organisation's business operations during or after adverse situations [27], [28], [29], [30]. BCM requires a comprehensive business continuity plan, which is derived from business impact analysis and risk assessment [5]. The business continuity plan determines the processes, procedures, resources, roles and responsibilities involved. To ensure the BCM is effective and valid during adverse situations, the organisation shall exercise and test the business continuity plan [5], [8], [31], [32].
ISM technical operation activities are carried out by the ISM team. The team is accountable for implementing ISM processes and controls by following the steps written in the ISM procedures. Thus, the procedures should be clear, complete and communicated to the ISM team [5], [8], [32]. The knowledge, commitment and technical skills of the ISM team are highly required in implementing IS processes, procedures and controls [5], [9], [10], [19].
A. Research Questions Formulation
The study focused on answering the following questions:
i. What are the factors that contribute to the success of ISM?
ii. What are the specific elements for each of these factors?
The questions acted as the basis for data collection during theoretical and empirical studies.
1) Theoretical Study
This study was initiated by analysing the existing literature. This theoretical study reviewed published and unpublished documents in multiple online databases. The findings of the study have been elaborated in [36].
2) Empirical Study
This study aimed to verify the factors that were derived from the theoretical study as well as to discover other relevant factors. It used semi-structured interviews: a series of individual and focus group interviews with experienced ISM practitioners was conducted.
i. Sampling
The sampling was based on the ability of informants to answer the research questions. Thus, a purposive sampling method was adopted. For the individual interviews, five ISM practitioners from five different agencies who had been actively involved in ISM were invited to participate in the study. The profiles of the five participants are shown in Table 2.
Meanwhile, the participants for the focus group interview comprised a head of ICT department, an ISM coordinator, an ISM implementer and an ISM auditor. All participants possessed at least five years' experience in ISM. Table 3 outlines the participants' profiles.
ii. Instruments
Interview questions were used as the instruments for the individual and focus group interviews. The questions were derived from the findings of the theoretical study and were broken into two parts, A and B. Part A covers the ISM implementation in the participants' organisations as well as the participants' experience in implementing ISM. The questions in part B revolve around twelve ISM success factors: Top Management, ISM Team, IS Audit Team, Employees, Third Parties, IS Policy, IS Procedures, Competency Development & Awareness, Resource Planning, Risk Management, Business Continuity Management and IS Audit. Table 4 summarises and describes the twelve factors that were included in the interview questions.
iii. Protocol
For the individual interviews, the participants' consent was obtained before conducting the sessions. Appointments were made in advance to set the date and time of the interviews, and the participants were provided with a brief description of the interview objectives. After obtaining the participants' agreement, formal invitations were sent to them. The interviews were conducted between February 2016 and May 2016. The participants were interviewed individually at their workplaces, which took an average of 90 minutes per person. Each session was recorded using a tape recorder and documented with field notes.
Likewise, the participants' agreement was also obtained before conducting the focus group session. Two weeks before the session, an invitation letter containing information about the objectives, date, time and venue was sent to the participants. The focus group session was conducted on 14 May 2016 at 10.00 am; it was recorded using a video recorder, an audio tape recorder, and field notes, and took almost three hours.
C. Data Analysis
The data gathered from the theoretical and empirical studies were transcribed and analysed using content analysis. Content analysis is a qualitative research technique that has been widely used to analyse written, oral or visual communication messages [39]. The analysis involved identifying the frequent elements in the data. The elements were then categorised into several logical groups of factors using inductive and deductive reasoning techniques. The deductive reasoning involved taking the factors and elements identified in the theoretical study and confirming or disproving them by comparison with the data from the empirical study. The inductive reasoning recognised new emergent data from the empirical study and then abstracted the data into new factors or grouped it into the existing factors.
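As a rough illustration of the deductive tallying step described above, the Python sketch below counts how many distinct participants supported each theoretically derived factor-element pair, producing agreement counts of the form used later in Table 5 (e.g. 3/9). The coded excerpts and participant labels are invented for illustration and are not the study's real coding data.

```python
# Minimal sketch of the deductive frequency-counting step of the content
# analysis. Coded excerpts below are invented examples, not real study data.
from collections import defaultdict

N_PARTICIPANTS = 9  # five individual interviewees plus four focus group members

# Each coded interview excerpt is reduced to (participant, factor, element).
coded_segments = [
    ("INF1", "Top Management", "commitment"),
    ("INF2", "Top Management", "leadership"),
    ("INF2", "IS Policy", "reviewed regularly"),
    ("FG1",  "IS Policy", "communicated"),
    ("INF5", "Coordinator Team", "communication skills"),  # emergent (inductive) code
]

# Deductive step: tally distinct participants supporting each factor/element pair.
support = defaultdict(set)
for participant, factor, element in coded_segments:
    support[(factor, element)].add(participant)

for (factor, element), participants in sorted(support.items()):
    print(f"{factor} / {element}: {len(participants)}/{N_PARTICIPANTS}")

# Inductive step (manual in practice): codes that do not match any
# theoretically derived element are reviewed and either abstracted into new
# factors (e.g. Coordinator Team) or merged into existing ones.
```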
Table 4. The twelve ISM success factors included in the interview questions and their descriptions:
Top Management: To verify whether top management should have full commitment and strong leadership in order to achieve ISM outcomes.
ISM Team: To confirm the team must have wide IS knowledge and be updated with the current security issues, as well as be skilful and committed to implementing IS processes and activities.
IS Audit Team: To substantiate whether the auditors should possess the required knowledge of the people and processes to be audited; technical skills for identifying problems, obtaining information and reporting the audit results; and full commitment to ensure the effectiveness and completion of the auditing process.
Employees: To affirm whether the awareness, motivation, and compliance of the employees impact the ISM success.
Third Parties: To confirm whether the awareness and compliance of the third parties affect the ISM success.
IS Policy: To confirm whether the policy must be comprehensive, covering the requirements and controls prescribed by the ISM standards; clear in describing IS objectives and the responsibilities of the parties involved; communicated to the employees and stakeholders; and regularly reviewed to ensure it remains relevant to current needs.
IS Procedures: To identify the required characteristics of good quality procedures.
Competency Development & Awareness: To validate whether the competency development and awareness programmes are important to develop the competency of the ISM team and employees.
Resource Planning: To confirm whether it is important to include a resource planning process to support and carry out ISM activities. Resource planning comprises human and financial resources.
Risk Management: To substantiate whether risk management, which consists of risk assessment and risk treatment, is a key to the success of ISM.
Business Continuity Management: To verify whether the Business Continuity Management plan and testing contribute to the success of ISM.
IS Audit: To affirm whether it is important to monitor, measure and evaluate the compliance of IS processes, controls, and activities in order to ensure the effectiveness of ISM. The main tasks relating to IS audit are the audit programme and audit findings & reporting.
III. RESULTS AND DISCUSSION
The results of data analysis are presented in the following paragraphs. To support the results, a number of interview excerpts are provided. The elements pertaining to the respective factors are shown in bold.
A. People
People refers to the individuals or teams who are directly involved in planning, implementing, monitoring and improving the ISM processes. The six factors identified in the people aspect are Top Management, Coordinator Team, ISM Team, IS Audit Team, Employees and Third Parties.
1) Top Management
The success of ISM in the organisation is strongly associated with the knowledge, leadership, and commitment of its top management. Top management should have a clear understanding of ISM governance, objectives, and issues. Top management is accountable for ensuring the policy, procedures, processes, and controls are established, implemented and complied with by the entire organisation and the external parties. In addition, top management is also responsible for monitoring and reviewing the effectiveness of ISM as well as providing adequate resources to support ISM processes. Below are some of the comments from the participants: •
2) ISM Team
The ISM team consists of designated staff involved in most IS activities. The knowledge, skills, commitment, willingness and cooperation of the ISM team are desirable in carrying out the ISM processes. The team must always be updated with the current security issues and should possess broad IS knowledge. Moreover, the team must be skilful, cooperative, and committed to their work tasks, and must always be willing to accept newly assigned tasks.
A number of comments from the participants are presented below: •
3) Coordinator Team
The coordinator team plays a major role in coordinating ISM activities. Major ISM documents and activities are managed by the team. The team acts as a liaison between top management, ISM team, IS audit team and employees. The team is responsible for organising the training and awareness programmes, managing the resources, harmonising ISM documents and presenting the progress of ISM to the top management. Thus, the team must own ISM knowledge, give a commitment in coordinating ISM activities and have good communication skills when communicating with other parties.
The statement is supported by the following participant's comment: • "The coordinator team is the owner of major ISM documents. They harmonise the documents and present the progress of ISM to the top management. They also coordinate ISM activities. Therefore knowledge is very important as the team must be familiar with the whole processes of ISM. Their commitment is required to conduct ISM activities such as training and awareness programmes. In order to deliver information, the team should be able to communicate effectively with all level of staff in the organisation. " -INF5
4) IS Audit Team
The IS audit team is accountable to ensure IS controls, processes, procedures, and activities are executed correctly. The team should have appropriate knowledge on the people, processes, and procedures that need to be audited. Moreover, auditing skills, communication skills, commitment and cooperation within team members are required throughout the auditing process.
The comments below express the perception of IS audit team: • "IS audit team need to be familiar with ISM objectives, designated ISM personnel, and ISM processes and procedures before implementing the auditing process. Auditing skills, commitment and cooperation among team members are essential to guarantee the effectiveness of the auditing process." -INF2 • "The IS audit team contributes to the success of ISM.
The compliance with IS policy and procedures can be monitored through auditing." -INF3 • "The team's commitment is necessary to complete the auditing task in the prescribed time.
5) Employees
The organisation's employees should have awareness of the IS policy, controls, threats, and risks. The employees have to comply with the IS policy, rules, and laws in order to reduce security incidents. The motivation of the employees enhances the success of ISM implementation.
The statement is supported by the following participants' comments: •
6) Third Parties
Third parties refers to individuals or companies that provide services to organisations on a contract basis for a particular period of time. To ensure the organisation's information remains secure, the third parties must be aware of and comply with the security policy, laws, and contract.
The statement is supported by the following participants' comments: • "Awareness is not only important to the employees, but also to the third parties. Third parties' awareness contributes to the success of ISM. Third parties must be aware on IS policy and comply with the policy." -FG 1 • "Organisation receives services from third parties.
Therefore, third parties have to conform to the contract and the policy. They need to sign a nondisclosure agreement. If they violate the policy or contract, the organisation must take action against them." -INF 5 • "Third parties are affecting the success of ISM. They must comply with the organisation's security controls and laws. " -INF 4
B. Organisation
Organisation aspect refers to the strategic and technical documents that must be established and followed during the ISM processes. Two factors identified in organisation aspect are IS policy and IS procedures.
7) IS Policy
The IS policy is a strategic document that consists of objectives, directions, and rules that must be established and followed by all employees and third parties. The policy must be clear in defining IS objectives and the roles and responsibilities of the employees and third parties. It must be comprehensive, covering the requirements and controls set by the ISM standards, and must align with the organisation's mission and vision. The IS policy shall be reviewed regularly to ensure it is relevant to present needs and must be communicated to the employees, stakeholders and third parties.
A number of comments from the participants are presented below: • "The scope of IS policy should be broad which cover all IS requirements and the parties involved in the organisation. In addition, the policy should be reviewed regularly. It is not a static document. The policy must be communicated to the entire organisation through multiple channels such as organisation's website or pamphlets. "-INF5 • "IS policy is a strategic document and must be established before performing any IS activities. The policy is important to the success of ISM. The goals and objectives of the policy must be clear and understandable. The policy should be reviewed at least once a year and be communicated to entire employees, third parties and stakeholders." -FG1 • "A comprehensive security policy covers all security aspects. The periodic review must be done to make sure the policy is up to date. Most importantly, the policy must be communicated to everyone." -INF 2 • "In developing IS policy, each component in the policy must be identified thoroughly. It includes the control and responsibilities of the delegated personnel and employees. Based on international standards, the policy should also be revealed to the entire organisation. " -FG4
8) IS Procedures
IS procedures are the operating guidelines that contain a series of actions explaining how to perform IS processes. The procedures are directly derived from the IS policy. To ensure the implementation of ISM is executed appropriately and correctly, the procedures must be clear and must completely describe the steps to accomplish IS processes or activities. The procedures should be reviewed periodically or when the environment changes and must be communicated among IS team members.
Some of the comments from the participants are presented below: • "Recently, there are many IS procedures have been developed in the organisation, for example 'password change procedure'. All steps in the procedure need to be correctly followed. Thus, the procedure must be clear and complete to enable users to follow the prescribed steps." -FG3 • "The clarity and completeness of the procedure can be seen from the steps written in the procedure. It is more understandable if the procedure is complete and explaining in detail the steps to be taken. The objective, roles, responsibilities should be included in the procedure. " -FG4 • "The clarity of procedures is similar to the clarity of IS policy. However, the procedures must be more specific. The procedures need to be frequently reviewed and communicated to the team members as the members turn in and out of the organisation." -
9) Resource Planning
Resource planning is essential to support and perform ISM processes. Resource planning consists of financial resources and human resources. Financial resources comprise the cost of buying new assets and maintaining existing assets, the cost of manpower and the cost to perform IS activities. Meanwhile, human resources refer to the teams or individuals to be engaged in ISM activities.
The statements are supported by the following participants' comments: • "The more manpower is allocated, the faster tasks can be completed.
10) Competency Development and Awareness
The competency and awareness of the ISM team, IS audit team, employees and third parties can be gained through training and awareness programmes. The purpose of the training programmes is to ensure that the people have the knowledge and skills to handle each task. Meanwhile, the purpose of the awareness programmes is to ensure the people are aware of the IS policy, threats and risks, as well as their roles and responsibilities.
A number of comments from the participants are presented below: •
11) Risk Management
Risk management is the key process in ISM. Risk management is a process of measuring and analysing the risk levels and taking appropriate actions to control the risks. Two major components in risk management are risk assessment and risk treatment. Risk assessment involves sub-activities such as establishing the risk acceptance criteria, identifying assets and threats, determining the impacts and probability of risk occurrence and determining the risk levels. The risk treatment involves the activity of implementing the protection strategies based on the risk assessment results.
The statements are supported by the following participants' comments: • "Risk management is an important process in ISM.
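To make the risk assessment sub-activities outlined above more concrete, the following Python sketch scores each risk as impact × probability and compares it to an acceptance criterion to decide which risks need treatment. The 1-5 scales, the threshold and the example assets are assumptions for illustration only and are not taken from the study.

```python
# Illustrative sketch of a simple risk assessment of the kind outlined above.
# The 1-5 scales, the acceptance threshold and the example entries are assumed.

ACCEPTANCE_THRESHOLD = 8  # risks scoring above this level require treatment

# Each entry: (asset, threat, impact 1-5, probability 1-5)
risk_register = [
    ("customer database", "unauthorised access", 5, 3),
    ("web server",        "denial of service",   3, 2),
    ("staff laptops",     "theft",               4, 2),
]

for asset, threat, impact, probability in risk_register:
    level = impact * probability  # simple qualitative risk level
    decision = "treat (apply controls)" if level > ACCEPTANCE_THRESHOLD else "accept"
    print(f"{asset} / {threat}: level {level} -> {decision}")
```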
12) IS Audit
IS audit is one of the requirements in ISM standards. Through the IS audit process, compliance with IS policy, procedures, processes, controls and activities can be monitored, measured and evaluated. The components of the audit process are the audit programme, which consists of audit planning, audit execution, and auditor training; audit findings and reporting; and the follow-up audit, which checks the corrective and preventive actions that have been taken. Below are some of the comments from the participants: • "IS audit is one of the requirements in ISM standards.
13) Business Continuity Management
Business continuity management ensures the organisation's businesses operate smoothly during and after the unintended events. When the unintended events occur, business continuity plan that outlines the resources, processes, procedures, and responsibilities should be activated. Organisation shall carry out periodic tests on the business continuity plan to ensure its validity and effectiveness. Below are some comments from the participants: • "The important thing in Business Continuity Management is the business continuity plan. The organisation should determines IS requirements and must be embedded in the business continuity plan. The plan outlines the processes, procedures, resources and responsibilities for controlling incidents or disasters." -INF4 • "Business continuity plan and simulations are closely related to each other. Business continuity plan should be developed, documented and approved by the top management. The plan must be tested to observe its effectiveness. "-FG3 • "Organisations whose adopt ISM standard must implement business continuity management. The purpose of business continuity management is to ensure the sustainability of organisation's operations during and after the unintended events. Business continuity plan should be activated when the unintended events occur. " -FG1 Table 5 lists the significant ISM success factors together with their corresponding elements that were found in the theoretical and empirical data. The factors and elements that were gathered in the theoretical or agreed in the empirical data are marked with '√'. The factors and elements that were not supported by theoretical or empirical data are marked with 'x'. The numbers in the brackets represent the number of participants who agreed or supported the existence of the data. For example, 3/9 means three out of nine participants agreed on the factor and element. The factors and elements were categorised into three aspects, which are People, Organisation and Process.
The empirical study has confirmed that most factors found in the theoretical study are relevant to the success of ISM. There are several new factors and elements added in people, organisation, and process aspect. The new elements added in people aspect are the knowledge of top management, cooperation and willingness of ISM team, and cooperation and communication skills of the audit team. In addition, people aspect includes one new factor namely coordinator team. The elements under the coordinator team are knowledge, commitment and communication skills.
In terms of organisation aspect, reviewed procedures are the new element considered in IS procedures factor. Meanwhile, in the process aspect, follow-up audit is the new element of IS audit factor.
The findings indicate that IS policy, competency development and awareness, and risk management are the factors most agreed upon by the participants. At the same time, the majority agreed that the leadership and commitment of top management; the knowledge, skills and commitment of the ISM team; and the knowledge of the IS audit team are essential for ISM initiatives. In addition, resource planning and business continuity management were also highlighted by the participants. On the other hand, the knowledge, commitment and communication skills of the coordinator team, as well as the cooperation of the IS audit team, were less supported in the empirical study.
|
v3-fos-license
|
2020-11-12T09:10:13.451Z
|
2020-11-01T00:00:00.000
|
226304105
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1422-0067/21/21/8377/pdf",
"pdf_hash": "05cc5009ae6c0c9e70b6fc25104d7c353bea89c4",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44364",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"sha1": "665d05ddb5ce64052ac0ac0db8eb61eb0c30145d",
"year": 2020
}
|
pes2o/s2orc
|
Exogenous Oestrogen Impacts Cell Fate Decision in the Developing Gonads: A Potential Cause of Declining Human Reproductive Health
The increasing incidence of testicular dysgenesis syndrome-related conditions and overall decline in human fertility has been linked to the prevalence of oestrogenic endocrine disrupting chemicals (EDCs) in the environment. Ectopic activation of oestrogen signalling by EDCs in the gonad can impact testis and ovary function and development. Oestrogen is the critical driver of ovarian differentiation in non-mammalian vertebrates, and in its absence a testis will form. In contrast, oestrogen is not required for mammalian ovarian differentiation, but it is essential for its maintenance, illustrating it is necessary for reinforcing ovarian fate. Interestingly, exposure of the bi-potential gonad to exogenous oestrogen can cause XY sex reversal in marsupials and this is mediated by the cytoplasmic retention of the testis-determining factor SOX9 (sex-determining region Y box transcription factor 9). Oestrogen can similarly suppress SOX9 and activate ovarian genes in both humans and mice, demonstrating it plays an essential role in all mammals in mediating gonad somatic cell fate. Here, we review the molecular control of gonad differentiation and explore the mechanisms through which exogenous oestrogen can influence somatic cell fate to disrupt gonad development and function. Understanding these mechanisms is essential for defining the effects of oestrogenic EDCs on the developing gonads and ultimately their impacts on human reproductive health.
Introduction
Gonadal sex determination is the process through which the bi-potential gonad differentiates into either an ovary or testis. This leads to the development of corresponding female or male secondary sex characteristics and has profound effects on the subsequent physiology and behaviour of the organism. The bi-potential gonad is comprised of the machinery required to follow one of two fates-ovary or testis-and this is under the control of well-defined molecular pathways [1,2]. The somatic cells of the gonad are integral for influencing the overall fate of the gonad such that the differentiation of these cells is the critical first step in the development of the reproductive tract. Mouse models have demonstrated that these somatic cells display plasticity, where loss or gain of key gonadal genes can drive granulosa (ovary) or Sertoli (testis) cell differentiation, independent of chromosomal sex [3][4][5][6][7][8][9][10]. Interestingly, oestrogen is able to influence these pathways in XY mammalian Sertoli cells to promote granulosa-like cell fate [11,12]. Even brief disruptions to testicular signalling pathways can impact Sertoli cell patterning, disrupting the development and function of the testis. This is of particular concern given our increasing exposures to endocrine disrupting chemicals (EDCs) that can interact with native oestrogen receptors and the decline in human reproductive health over recent decades [13].
The Impact of Oestrogenic Endocrine Disrupting Chemicals on Reproductive Health
Over the last 50 years, reproductive health has rapidly declined as a result of both increasing infertility and occurrence of reproductive birth defects. In males, a 50% decrease in sperm counts has been observed [14], alongside increasing rates of testicular cancer [15] and abnormalities in the development of the reproductive tract known as differences of sexual development (DSDs) [16,17]. DSDs are some of the most common birth defects in humans, affecting gonadal and anatomic sex development and occurring in up to 1:200 live births [18]. Testicular dysgenesis syndrome (TDS) comprises some of these conditions, including hypospadias, cryptorchidism, testicular cancer, and poor semen quality [19]. TDS is thought to arise from disruptions to the development and functioning of the testis during early fetal life, leading to compromised differentiation of the reproductive tract [20]. Hypospadias is one of the most frequently occurring birth defects in males, affecting 1:125 live male births in Australia [21]; however, only 30% of hypospadias cases can be attributed to genetic factors [22], suggesting a substantial environmental component is involved in the development of this condition. Furthermore, the increasing prevalence of TDS-related conditions has occurred too rapidly to be caused by genetic mutation alone and instead has been linked to our continued exposure to endocrine disrupting chemicals (EDCs) [23][24][25][26][27][28].
EDCs are defined as "an exogenous substance or mixture that alters function(s) of the endocrine system and consequently causes adverse health effects in an intact organism, or its progeny, or (sub) populations" [29]. EDCs can target specific hormonal pathways by interacting directly with receptors; for instance, some EDCs are capable of binding to native oestrogen receptors (ERs) to trigger the ectopic activation of oestrogen-responsive signalling pathways [30]. These oestrogenic EDCs are some of the most pervasive in our environment and include compounds, such as bisphenol A (BPA; a plasticiser), 17α-ethynylestradiol (a component of the contraceptive pill), oestrogenic phthalates (DEHP, DBP, DBP [31][32][33]; plasticisers and present in cosmetics), and genistein (a phytoestrogen naturally occurring in soy and subterranean clover; Figure 1). Aberrant activation of oestrogen signalling is detrimental to development as the correct levels of oestrogen are imperative for sexual differentiation of both the male and female reproductive tract. Several studies have demonstrated that reproductive development requires a delicate balance of androgens and oestrogens [34][35][36]; furthermore, the embryonic mammalian gonad expresses oestrogen receptors throughout development [37][38][39] and is therefore a direct target of oestrogenic chemicals. However, the predominant oestrogen receptor subtype appears to differ between mammalian species [40].
The correct patterning of the gonad is crucial for establishing the cells that produce androgens and oestrogens and contribute to the differentiation of the urogenital tract. Studies in humans and mice have demonstrated that EDCs can interfere with gonad function and subsequently the differentiation of the male reproductive tract ( Figure 1). In mice, exposure to the oestrogenic endocrine disruptor diethylstilbestrol (DES) in utero leads to increased rates of hypospadias and reduced anogenital distance [41,42], a marker of androgen output during development [43,44]. Such results have also been confirmed in vitro, where DES causes reduced testosterone output in mouse and rat gonad cultures [45] and BPA impairs testosterone production in human fetal testis culture [45,46]. Reduced synthesis of testosterone indicates impaired testis function, which can lead to disruption of the overall patterning and differentiation of the male reproductive tract.
Associations between EDCs and TDS in humans are more difficult to elucidate given the lack of controlled conditions, but many studies have demonstrated a link between exposure to EDCs, such as genistein and BPA, in utero and the development of hypospadias and TDS-related conditions [25,28,[47][48][49]. The detrimental effects of EDCs are also of concern in adulthood, where a high level of BPA in urine is associated with reduced sperm counts and motility [50,51] and elevated exogenous oestrogen levels during adulthood negatively affect testis function in humans [52,53]. Together, these data demonstrate the ability of EDCs to target the testis in mammals, causing decreased testis function and subsequent disruption of male reproductive tract differentiation and fertility ( Figure 1).
Figure 1.
Exposure to oestrogenic endocrine disrupting chemicals, such as genistein (present in soy and subterranean clover), bisphenol A (a plasticiser), ethynylestradiol (a synthetic oestrogen present in the contraceptive pill), or phthalates (plasticisers, present in some cosmetics), can disrupt gonad development and function, leading to negative reproductive outcomes. Exposure of the bi-potential gonad as it undergoes differentiation into either an ovary or testis can disrupt somatic cell specification, an important step in establishing gonad fate. Exposure of the differentiated gonad can impact steroidogenesis, gametogenesis and the ongoing maintenance of somatic cell fate, disrupting the continuing function of the gonad. Either period of exposure can contribute to the development of premature ovarian insufficiency (POI), polycystic ovary syndrome (PCOS), or cause delayed pubertal timing in females. In males, such exposures have been linked to testicular dysgenesis syndrome, comprising hypospadias, cryptorchidism, decreased sperm counts, and testicular cancer.
Females can similarly be impacted by excess oestrogen signalling, as the early development of the female urogenital tract occurs in the absence of any hormones [54] such that exposure to oestrogen at this time is also ectopic, leading to the development of conditions associated with ovarian dysgenesis syndrome [55]. The best characterised case of exogenous oestrogen signalling impacting female reproductive development is the daughters of DES treated women. DES was prescribed to pregnant women between 1938 and 1975 to prevent miscarriage or premature birth [56,57]. 5-10 million women were prescribed DES in the U.S. alone [58], and the drug was also widely used throughout Europe, Australia and the UK. Not only was DES ineffective in preventing miscarriage, but it caused an increased incidence of reproductive tract cancers, infertility and recurrent miscarriage in the daughters of women exposed to DES [59,60]. Thus, it is clear that exogenous oestrogen signalling is also detrimental to female reproductive health.
Aberrant oestrogen signalling during critical periods of development and even in adulthood can impact the function of the ovary. The age of onset of puberty in girls has decreased in the U.S., Denmark, India, and China [61][62][63][64] and is thought to similarly be linked to increasing oestrogenic EDC exposure. Sex steroids play a crucial role in pubertal timing, and disruption of this timing-such as via exposure to EDCs-can have long-term reproductive consequences [65].
Exposure to oestrogenic EDCs has also been linked to conditions caused by compromised ovarian function and depletion of ovarian reserve (Figure 1), such as polycystic ovary syndrome (PCOS) [66] and primary ovarian insufficiency (POI) [67,68]. PCOS affects between 15-20% of women of reproductive age and is the most commonly occurring endocrine disorder in women [66,69]. PCOS is characterised by hyperandrogenism, ovulatory dysfunction, and polycystic ovaries, alongside an increased risk of diabetes and cardiovascular disease [69]. POI is less widespread, with a global prevalence estimated to be 3.7%-a rate that has increased in recent years [70]-and is defined as cessation of menstruation prior to the typical age of menopause, contributing not only to fertility difficulties, but also an increased likelihood of cardiovascular disease, osteoporosis and depression [71,72]. Both PCOS and POI are characterised by a loss of oestrogen signalling and hyperandrogenism. Indeed, continued oestrogen signalling is essential for the maintenance of the ovary in mammals [73]; however, studies in rodents and cell lines have suggested that aberrant oestrogen signalling either during development or in adulthood can cause a reduction in the oestrogenic output of the ovary [67].
Numerous studies have demonstrated the ability of BPA and genistein to reduce the steroidogenic output of the ovary and impact folliculogenesis, raising concern about the harm of these chemicals on ovarian function [67,74]. High BPA blood levels are associated with PCOS in women [75] and exposure to BPA during development leads to formation of PCOS-like phenotypes during adult life in rats [76] and mice [77]. Similarly, exposure to either genistein or a mixture of oestrogenic and anti-androgenic EDCs can cause a reduction in follicular reserve and POI-like phenotypes in rats [78,79]. It is hypothesised that the development of these phenotypes is due to reduced oestrogen output and compromised ovarian function; indeed, follicles or granulosa cells cultured in the presence of BPA show a decrease in oestrogen production [80,81] and exposure to genistein decreases expression of critical steroidogenic pathways in human granulosa cells [82]. Overall, these results suggest a disruption to the key pathways involved in maintaining oestrogenic output and therefore ovarian identity.
The impact of EDCs on reproductive health is concerning, particularly their ability to affect the development of the gonad and urogenital tract during fetal life, contributing to the rise in prevalence of DSDs. Their impact on reproductive health after birth and into adulthood is of further concern, where exposure to EDCs has been linked to premature puberty, PCOS and POI in females, and reduced sperm counts in men, together contributing to an overall decline in fertility. These issues primarily stem from the ability of oestrogenic chemicals to target the testis and ovary. The gonad harbours and nurtures the germ cells that will go on to form sperm and oocytes, and eventually the next generation. Additionally, the gonad synthesises the majority of sex hormones in males and females, which are essential for directing sexual differentiation and maintaining reproductive function. Examining the effect of oestrogen on development and maintenance of gonad fate (ovary or testis) and the molecular pathways that drive this process is critical to understanding how EDCs may target this system.
The Function of Oestrogen in the Mammalian Gonad
Oestrogen has a critical role in mediating ovarian differentiation in non-mammalian vertebrates, regardless of the sex determining mechanism. An increase in oestrogen-through changes to endogenous or exogenous oestrogen levels-can consistently promote male-to-female sex reversal, demonstrating the plasticity of gonadal sex and ability of oestrogen to promote ovarian fate [83][84][85][86]. In contrast, the role of oestrogen in early gonadal development and its ability to promote differentiation is less clear in mammals.
Exposure to exogenous oestrogen prior to gonad differentiation can cause sex reversal in two marsupials, the opossum [87] and tammar wallaby [88], despite their clear genetic sex determination system. This suggests that, similar to non-mammalian vertebrates, oestrogen can override the genetic predisposition of the gonad to become a testis. At present, the effect of exogenous oestrogen on other mammalian species is less clear, but there is a known role for the hormone in maintaining ovarian fate. Oestrogen is also essential for ovarian differentiation in goats [89], sheep [90,91], and cows [92], where aromatase promotes the synthesis of oestrogen from testosterone in the fetal ovary.
In mice, the presence of oestrogen is not essential to induce the bi-potential gonad to actively differentiate into an ovary, but it is still necessary for the maintenance of somatic cells. Mice deficient for Cyp19 (encodes aromatase) undergo normal early ovarian differentiation, illustrating that oestrogen is not required for initial development [3]. However, shortly after birth, the germ cells of these mice are lost and the gonad shows testis-like morphology, where the somatic cells change fate from granulosa (ovarian) to Sertoli (testis). Administration of oestrogen rescues this phenotype, demonstrating that the hormone can trigger cell fate change and is necessary for ongoing maintenance and function of granulosa cells [73]. Further demonstrating this requirement of oestrogen for ovarian maintenance, mice lacking oestrogen receptor α (ERαKO) have normal ovarian development until adulthood, when the ovary does not successfully complete folliculogenesis [93]. These mice still have some oestrogen signalling as they express ERβ, but these findings demonstrate the requirement of ERα for normal ovarian function.
In general, the role of oestrogen in early eutherian gonad development is downplayed given the presence of a strong genetic sex determination (GSD) system and the fact that the process of sex determination occurs in utero, where there could be exposure to maternal oestrogens. Given this, it has been assumed that oestrogen would have no impact on gonad differentiation and that the developing gonad would be resistant to the influence of any maternal oestrogens [94]. Despite this, ERs are widely expressed in the indifferent gonad of all mammals [37][38][39], making them susceptible to exposure to endocrine disruptors that can interact with ERs. Indeed, oestrogenic EDCs can cross the placenta and increase the typical levels of oestrogen in the uterine environment [95], bypassing any resistance provided by the placenta. Furthermore, the link between increasing oestrogenic EDCs and infertility and DSDs suggests that the gonad is a target of exogenous oestrogen.
While the precise role for oestrogen in directing early ovarian differentiation in mammals appears to be variable across species, it plays a highly conserved role in ovarian and granulosa cell maintenance. To further understand the function of oestrogen in regulating somatic cell fate, it is essential to understand the core pathways critical for mammalian gonad differentiation and examine where oestrogen can potentially influence this system.
Molecular Control of Gonad Differentiation
Gonad development begins with the initial emergence of the bi-potential gonad, an indifferent structure that can form either an ovary or testis [2,96]. At embryonic (E) day 10.5 in mice (equivalent to the 6th week of gestation in humans), the bi-potential gonad emerges on the mesonephros, a process under the control of Wt1, Sf1, Cbx2, Lhx9, and Emx2 [97]. Within the indifferent gonad are the supporting somatic cells, which can form either a testis-specific (Sertoli) or ovary-specific (granulosa) cell.
Testis Development
Sertoli cells are the first cell type to differentiate in the male gonad and are considered the orchestrators of subsequent testis development [98]. Following the formation of a testis, Sertoli cells are involved in supporting steroidogenesis, spermatogenesis and maintenance of testis identity [99]. A minimum number of Sertoli cells is required for the development of a testis to continue [100], and because of this essential threshold of Sertoli cell number, the recruitment of Sertoli cells is an important process to ensure that testis development occurs correctly. Sertoli cell determination is marked by the temporally controlled expression of the Y chromosome gene sex-determining region Y (Sry) at E11.5 [101]. Sry is the molecular switch required for testis formation [102], and both the correct timing [103] and level [104] of its expression are necessary for testis development to occur. Indeed, the initial Sertoli cell recruitment and subsequent maintenance of the required Sertoli cell number is supported by expression of key testis factors downstream of Sry.
Once levels of Sry reach a critical threshold at E11.5, SRY-box transcription factor 9 (Sox9) transcription is initiated (Figure 2). Prior to this at E10.5, SOX9 is present in the cytoplasm of XX and XY indifferent gonad somatic cells [105]. Upon expression of Sry in XY mice embryos, SOX9 translocates to the nucleus; however, in the absence of Sry, the cytoplasmic pool of SOX9 dissipates [105]. SOX9 shows the same localisation pattern in humans [106] and this sex-specific regulation of SOX9 is the key trigger for testis differentiation in both species. Indeed, ectopic expression of Sox9 in the indifferent XX mouse gonad is able to trigger testis differentiation [5,6], and the absence of Sox9 in XY mice leads to formation of an ovary [7,8]. Sox9 activity is sufficient to trigger all downstream testis development, even in the absence of Sry [107]. Furthermore, heterozygous mutations for SOX9 in humans can lead to XY sex reversal [108,109]. Consequently, SOX9 is considered a critical testis-determining gene and major emphasis has been placed on understanding its regulation and downstream role as a transcription factor.
Figure 2. In XY mouse gonads, expression of sex-determining region Y (Sry) reaches a peak at embryonic day (E) 11.5 and triggers the nuclear translocation of SOX9, where it promotes expression of prostaglandin D synthase (Ptgds), fibroblast growth factor 9 (Fgf9), anti-Müllerian hormone (Amh), and itself, together contributing to the differentiation of a testis. In XX gonads, in the absence of Sry, SOX9 remains cytoplasmic. R-spondin 1 (Rspo1) and Wnt family member 4 (Wnt4) are expressed specifically from E12.5, and β-catenin is stabilised in the nucleus, while the cytoplasmic pool of SOX9 disappears. The activity of these ovary-specific genes triggers expression of other genes, forkhead box L2 (FoxL2), follistatin (Fst), and bone morphogenetic protein 2 (Bmp2), to promote ovarian differentiation. SOX9 further promotes testis development in males by inhibiting β-catenin and FOXL2 to ensure ovarian development is suppressed. Conversely, β-catenin and FOXL2 inhibit SOX9 to promote ovarian differentiation. WNT4 and FGF9 also exhibit antagonism.
The necessity of SOX9 to direct testis development relies on its ability to initiate transcription of downstream targets that further support testis formation and function. These downstream targets include fibroblast growth factor 9 (FGF9), prostaglandin D synthase (PTGDS), and anti-Müllerian hormone (AMH; Figure 2). FGF9 is a secreted signalling molecule, and, during embryonic mouse development, Fgf9 shows a sex-specific pattern of expression [110] before becoming restricted to XY gonads [111]. Fgf9 forms a feed-forward positive loop with Sox9 and suppresses the ovarian gene Wnt4 [112] to promote testis formation. Fgf9 null mice exhibit XY sex reversal in some, but not all, genetic backgrounds [113] and it has been hypothesised that this sex reversal is due to reduced proliferation rate and differentiation of pre-Sertoli cells [111]. These results demonstrate the role for FGF9 in recruiting Sertoli cells to the threshold required for formation of a testis, the failure of which results in sex reversal in mice [100]. Interestingly, a mutation in FGFR2 (which encodes the FGF9 receptor) has been reported in an XY gonadal dysgenesis patient, suggesting that FGF9 signalling is also important for human testis development [114].
Similar to FGF9, Ptgds forms a feed-forward loop with SOX9 [115,116]. Ptgds produces PGD2, a paracrine factor secreted by Sertoli cells that promotes their differentiation and maintenance. PGD2 has also been implicated in the ability of XY somatic cells to recruit XX somatic cells to express Sox9 when cultured together in vitro [115]. This demonstrates that, like FGF9, PGD2 is required for maintaining the threshold of Sertoli cells required for testis development. Ptgds is expressed in a male-specific manner in embryonic mouse gonads from E11.5 to E14.5 [116,117], and loss of Ptgds in XY mice leads to reduced Sox9 transcription and delayed testis cord formation [118]. Interestingly, culture of XX gonads in the presence of PGD2 can induce testicular cord formation and expression of testis-specific genes [117], further illustrating it has a strong testis-promoting function.
Sox9 also initiates expression of Amh and works with steroidogenic factor 1 (Sf1) to maintain production of the hormone in Sertoli cells [119,120]. AMH is responsible for the regression of the Müllerian ducts, a structure that, when present, is a key characteristic of female development [121]. Transgenic female mice chronically expressing Amh develop abnormally, with complete absence of a uterus or oviducts and disrupted ovarian function [122]. Amh is therefore critical for establishing normal sexual differentiation and promoting male development. Together, the expression of SOX9, FGF9, PTGDS, and AMH work to establish the specification and proliferation of Sertoli cells, contributing to the initial differentiation of the testis and ultimately a functioning male reproductive system.
Ovarian Development
In XX gonads, ovary-specific genes are expressed following the disappearance of cytoplasmic SOX9 [105]. This includes R-spondin 1 (Rspo1) and the Wnt/β-catenin pathway, which become specific to granulosa cells at E12.5 [123,124]. RSPO1 has more recently been considered to be the critical female-determining gene. The requirement for RSPO1 in ovarian determination was initially discovered by linking human RSPO1 mutations to XX gonadal dysgenesis [124]. Similarly, Rspo1 null mutant XX mice exhibit masculinisation of the gonad and some expression of Sox9 [10]. Rspo1 can stabilise β-catenin (encoded by Ctnnb1) [125], leading to activation of the Wnt4/β-catenin pathway that is essential to drive ovarian differentiation in early development [10] (Figure 2). β-catenin has similar ovary-promoting effects and, when stabilised, can enter the nucleus and act on target genes by increasing expression of Lef1 in a female-specific pattern [10]. Ectopic stabilisation of β-catenin in XY gonads can cause male-to-female sex reversal in mice [4], demonstrating it can promote ovarian differentiation in the presence of SOX9.
WNT/β-catenin signalling activates numerous downstream targets that are essential for ovarian development; in particular, increased β-catenin activity can induce expression of FoxL2 [126]. FoxL2 is expressed in XX gonads from E12.5 and is necessary for the specification and maintenance of granulosa cell fate [127]. Loss of FoxL2 has no impact on the early development of the ovary, suggesting it is not the critical ovary-determining gene; however, its ablation in adult mouse ovaries leads to transdifferentiation of granulosa cells to a Sertoli cell phenotype and upregulation of Sox9 [9], demonstrating it has a strong antagonistic relationship with Sox9 and is required for maintaining granulosa cell fate. Furthermore, overactivation of β-catenin in mice testes during development leads to increased expression of FoxL2 and drives transformation of Sertoli cells to granulosa-like cells [128], while ectopic expression of FoxL2 in embryonic mouse testes represses Sertoli cell differentiation and causes partial male-to-female sex reversal [129].
FOXL2 appears to have a role in ovarian maintenance in humans, as mutations in the gene cause premature ovarian insufficiency [130]. This role of FOXL2 in granulosa cell maintenance is similar to that of oestrogen [73]. Interestingly, the absence of FOXL2 in goats causes XX sex reversal [131], suggesting there exists a more critical role for the gene in ovarian determination in some mammals. Oestrogen is also required for the early differentiation of the ovary in goats [89], further suggesting a relationship between FOXL2 and oestrogen in mammals. FOXL2 is important for ERβ signalling in mouse ovary [132], and it has been established that ERs have a close relationship with other forkhead box transcription factors, as well [133,134]. In particular, ER transcriptional activity in breast cancer is dependent on its binding to forkhead box A1 (FOXA1) [135]. Thus, it is likely there exists a similar interaction between ERs and FOXL2. Together, these ovarian genes establish the identity of granulosa cells and their continued maintenance, working to suppress the male developmental pathway, while promoting ovarian differentiation and function.
Antagonism between Pro-Testis and Pro-Ovarian Factors Drives Sex Determination
Numerous pro-ovary and pro-testis factors in the gonad determination pathway exhibit opposing effects (Figure 2). This pathway antagonism has led to the establishment of a 'push-and-pull' model, wherein the somatic cells of the gonad are plastic in nature and their fate is dependent on the level of pro-ovary or pro-testis factors. Indeed, the ability of oestrogen to impact somatic cell fate relies on this plasticity and takes advantage of the push and pull between gonad developmental pathways.
Wnt4 has an antagonistic relationship with the testis-specific gene Fgf9 and this negative feedback is thought to be an integral mechanism in establishing either an ovary or testis [112]. However, loss of Fgf9 does not always cause sex reversal [113], and overexpression of Wnt4 (and therefore suppression of Fgf9) in XY embryonic mouse gonads affects the formation of testis vasculature and steroidogenesis but ultimately does not cause sex reversal [136]. The absence of Wnt4 does not significantly change Sox9 expression, suggesting that, when present, Wnt4 is not suppressing the male pathway [10,112,123,137]. In contrast, loss of Rspo1 does lead to upregulation of Sox9, suggesting the expression of Rspo1 and its downstream action on Ctnnb1 and Wnt4 is critical for suppression of the male pathway. Similarly, FoxL2 ablation in adult ovaries allows for upregulation of Sox9 in the somatic cells [9], demonstrating an antagonistic relationship between these factors. β-catenin, which lies downstream of Rspo1, is suppressed by SRY in vitro in NTERA-2 clone D1 (NT2/D1) cells, a surrogate human Sertoli cell line [138]. SOX9 can similarly inhibit β-catenin in chondrocytes [139], but this has not been demonstrated in Sertoli cells. Conversely, β-catenin can also suppress transcription of Sox9 in embryonic mouse gonads [4] and decrease the abundance of both SOX9 and AMH in NT2/D1 cells and embryonic mouse gonads [140]. Overall, this antagonistic relationship between SOX9 and β-catenin presents as a key regulator of gonad differentiation.
More recently, mitogen-activated protein kinase (MAPK) pathways have been revealed to have a role in sex determination as mediators of the antagonistic relationship between SOX9 and β-catenin [141]. MAPK cascades are three-tiered, involving initial activation of a MAP kinase kinase kinase (MAP3K) by extracellular stimuli; activated MAP3Ks phosphorylate MAP kinase kinases (MAP2Ks), which in turn activate MAP kinases (MAPKs). The three classical MAPK pathways are extracellular signal-regulated protein kinases (ERK), c-Jun N-terminal kinases (JNK) and p38 MAP kinases. Two pathways, MAP3K4 and MAP3K1, have an interesting role in promoting or suppressing SOX9 or β-catenin, ultimately impacting the fate of the gonad [141].
MAP3K4 is responsible for a cascade of signalling leading to the initial expression of Sry in mouse gonads and mice deficient for Map3k4 exhibit male-to-female sex reversal as a result of a decrease in Sry transcription [142]. Growth arrest and DNA damage-inducible protein γ (GADD45γ) is a binding factor of MAP3K4 [143] and facilitates the regulation of Sry transcription by the subsequent phosphorylation of p38 and GATA binding protein 4 (GATA4), allowing GATA4 and FOG2 to bind to the Sry promoter to upregulate its transcription [144,145]. Thus, the correct activation of MAP3K4 is required for the establishment of the testis pathway. In contrast, the loss of Map3k1 in the mouse has little impact on testis development [146], suggesting it is not required for testis determination.
It is unknown what impact loss of MAP3K4 has on testis development in humans, as mutations are likely embryonic lethal [141]; however, in human testis-derived cells, MAP3K4 can rescue the suppression of SOX9 caused by gain-of-function mutations in MAP3K1 [147], demonstrating it can promote the testis developmental pathway. The gain-of-function mutations in MAP3K1 that result in suppression of SOX9 account for 13-20% of human gonadal dysgenesis cases [141]. These mutations lead to increased phosphorylation of p38 and ERK1/2 and increased binding of Ras homolog family member A (RHOA), Rho-associated coiled coil containing protein kinase (ROCK), FRAT regulator of Wnt signalling pathway 1 (FRAT1), and MAP3K4, as well as decreased binding of Rac family small GTPase 1 (RAC1) to MAP3K1. Together, these changes cause stabilisation of β-catenin and decreased expression of SOX9; thus, the activation of MAP3K1 can promote a shift to ovarian development [147][148][149]. This model demonstrates the complex role of MAP3K signalling and related factors in sex determination [141,147] (Figure 3).
Research into the core pathways involved in mammalian gonad development has demonstrated that there are distinct genetic pathways required for the determination of either an ovary or testis. The expression and activity of these pathways is under the control of numerous factors, including the MAP3K1 and MAP3K4 cascades. While in normal circumstances these factors work in concert to reinforce the pre-existing gonad fate, extracellular changes, such as increased oestrogen signalling, can interfere with their activity. The antagonism between testis and ovary factors further reinforces the switch in somatic cell fate and altogether demonstrates that the fate of somatic cells in the gonad is plastic and that they can be influenced to form either a Sertoli or granulosa cell.
Targets of Oestrogen in the Gonad
Oestrogen signalling has critical roles in both male and female reproductive development. Oestrogen can promote a tilt in somatic cell fate from testis to ovary in many vertebrate species, even in the presence of genetic sex determination mechanisms [150,151]. Mammalian gonad development follows a robust genetic program and the initial determination of the ovary occurs in the absence of oestrogen; however, oestrogen is essential for the maintenance of granulosa cell fate and can have impacts on male reproduction when aberrant oestrogen signalling occurs, demonstrating the plasticity of these somatic cells. Thus, it is likely oestrogen has a conserved role in mammals in directing somatic cell fate away from a Sertoli cell and towards that of a granulosa cell.
Oestrogens are steroid hormones that require the binding of intracellular ERs to exert their widespread effects on cell function. Three types of ERs exist, the nuclear acting ERα (ESR1) and ERβ (ESR2), and the membrane bound G protein coupled receptor (GPER). ERα is the primary ER and can signal via numerous kinase pathways and transcriptional targets [152]. There are two distinct types of oestrogen signalling: genomic and non-genomic. Genomic oestrogen signalling is considered the classical pathway and involves either the direct binding of ligand-activated ERs to oestrogen response elements (EREs) in target DNA sequences [152], or the binding to transcription factors to form a complex that can then bind to DNA [153]. Non-genomic signalling involves ligand binding to plasma membrane-bound ERs that can rapidly activate kinase signalling, such as the MAPK pathway [154].
Non-Genomic Targets of Oestrogen in the Gonad
The non-genomic action of oestrogen has been well studied and both ERα and GPER have been implicated in the activation of numerous kinases [155]. There is a breadth of pathways that can be controlled by non-genomic oestrogen signalling, but given that activation of ERK1/2 is able to promote ovarian fate by stabilising β-catenin [147], it presents as a potential target of oestrogen to suppress the male developmental program in this system. ERK1/2 is present in Sertoli cells, where it has a role in proliferation, among many other signalling pathways [156]. ERK1/2 can be activated by oestrogen in a non-genomic manner in breast cancer, bone, and neural cells [157][158][159][160]. Brief oestrogen treatment can also rapidly activate ERK1/2 in NT2/D1 cells to promote the cytoplasmic retention of SOX9 [161], demonstrating oestrogen can mediate SOX9 on both a non-genomic and genomic level. These results suggest oestrogen activates ERK1/2 in Sertoli cells to promote ovarian fate through stabilisation of β-catenin and inhibition of SOX9 (Figure 3). ERK1/2 is highly conserved [162]; thus, activation of ERK1/2 may be an ancestral mechanism through which oestrogen can direct somatic cell fate in vertebrates. Indeed, in the tammar wallaby, exposure of the developing gonad to oestrogen leads to increased expression of MAP3K1 [163], which lies upstream of ERK1/2 and is a critical regulator of the gonad developmental programs. Mice lacking membrane-bound oestrogen receptors are protected from the impacts of exogenous oestrogens, such as DES [164], demonstrating this rapid response to oestrogen via membrane-bound ERs is likely the major way through which oestrogen impacts gonad development.
Oestrogen can similarly regulate the ovarian factor β-catenin through non-genomic mechanisms. In neurons [165], human colon cancer cells, and breast cancer cells [166], short term oestrogen treatment leads to the direct association of ERα with β-catenin to promote the activation of β-catenin. Furthermore, oestrogen treatment can dissociate β-catenin from the inhibitor glycogen synthase kinase 3β (GSK3β), eventually leading to decreased activity of GSK3β through activation of AKT serine/threonine kinase (AKT) signalling [167]. This suggests oestrogen can target GSK3β to reduce its inhibitory action on β-catenin. AKT signalling can also lead to direct activation of β-catenin via phosphorylation at serine (Ser)552 (Figure 3), increasing its transcriptional activity [168]. Oestrogen treatment rapidly activates AKT in breast cancer cells [169] and neurons [170] through the transmembrane oestrogen receptor GPER [171]; thus, it is possible AKT may also be activated in Sertoli cells exposed to oestrogen.
Protein kinase A (PKA) also promotes transcriptional activity of β-catenin via phosphorylation at Ser552, as well as Ser675 [172]. PKA activity is dependent on the levels of cyclic adenosine monophosphate (cAMP) [173] and can be induced following brief exposure to oestrogen in breast cancer and uterine cells [174]. PKA further promotes the activity of ERα via phosphorylation [175,176], suggesting it has a unique relationship in mediating ERα activity. p21 (RAC1) activated kinase 1 (PAK1) can also phosphorylate β-catenin at Ser675 (Figure 3) in colon cancer cells [177] and can be activated by oestrogen in breast cancer cells [178], while its transcription is also oestrogen responsive [179].
Altogether, ERK1/2, AKT, PKA, and PAK1 present as potential targets through which oestrogen may promote ovarian fate in Sertoli cells (Figure 3); however, it is difficult to predict how these kinases may respond in a different cell type and what impacts their activation would have on other aspects of the cell. The finding that oestrogen can rapidly activate ERK1/2 to suppress SOX9 [161] demonstrates how essential it is to assess the effects of oestrogen on non-genomic targets, as this type of signalling often establishes the changes required for genomic signalling to occur. Furthermore, these signalling pathways are critical for spermatogenesis and have been linked to male infertility [180], further supporting a link between the impacts of exogenous oestrogen on non-genomic pathways and declining male reproductive health.
Genomic Targets of Oestrogen in the Gonad
Oestrogen can directly inhibit transcription of SOX9 in the red-eared slider turtle (Trachemys scripta) [181], chicken [182], and the broad-snouted caiman (Caiman latirostris) [183]. In mammals, the best example of the ability of oestrogen to impact gonad somatic cell fate on a genomic level comes from research in marsupials. In the tammar wallaby, oestrogen exposure of XY embryonic gonads for 5 days does not decrease transcription of SOX9; however, it does lead to the cytoplasmic retention of SOX9 protein [11,12] (Figure 4). This suppression of SOX9 activity causes sex reversal and transdifferentiation of Sertoli cells to granulosa-like cells. These granulosa-like cells exhibit upregulation of ovarian markers FOXL2 and WNT4 and reduced expression of SRY and AMH [11,12]. Exogenous oestrogen similarly affects SOX9 subcellular localisation in human testis-derived NT2/D1 cells, leading to suppression of SOX9 target genes FGF9, PTGDS, and AMH and activation of WNT4 and FOXL2 [11] (Figure 4). These results demonstrate that oestrogen can influence the key gonadal factors involved in determining somatic cell fate of the human gonad. The cytoplasmic retention of SOX9 by oestrogen presents as a mechanism through which oestrogenic EDCs can impact Sertoli cells and testis development and function. In humans, the requirement for SOX9 nuclear localisation to drive testis differentiation is well established, and mutations affecting SOX9 import are associated with DSDs [184]. This mechanism may contribute to infertility in adult males with elevated oestrogen levels [52]. These findings are important for understanding how disruption to ovarian steroidogenesis may impact granulosa cell fate and ovarian maintenance. A loss of oestrogen signalling, such as in POI and PCOS, could lead to an increase in SOX9 activity and disruption of granulosa cell fate.
There is further evidence to suggest oestrogen can impact the transcriptional profile of gonad somatic cells in mice. In adult mouse ovaries, Sox9 transcription can be suppressed by the combined action of activated ERα and FoxL2 on the SOX9 enhancer TESCO, and this is an important step in maintaining granulosa cell fate [9]. FOXL2 can also directly activate Esr2 (ERβ) transcription to suppress Sox9 transcription and promote granulosa cell fate in adult mouse ovaries [132].
The expression of some downstream targets of SOX9 is oestrogen responsive: FGF9 and its receptor FGFR1 have oestrogen response elements [185,186] and their transcription can be directly targeted by oestrogen, while AMH undergoes differential regulation in response to oestrogen depending on cell type. In mature granulosa cells, ERα activation upregulates Amh [187] and its expression is essential for folliculogenesis in mice and humans [188,189]. In contrast, exposure of male rats to oestrogenic endocrine disruptors causes a decrease in Amh mRNA levels [190,191], alongside disruption in testis function. This effect may be due to suppression of Sox9; however, these results demonstrate that Amh expression is a good indicator of disruptions to testis development. Another downstream target of Sox9, Ptgds, can similarly be inhibited by increased oestrogen signalling in mouse Leydig cells [192] and hypothalamus [193]. Together, these data demonstrate that oestrogen can target key testis pathway genes; however, inhibition of SOX9 presents as the most detrimental to testis development, given that SOX9 is the orchestrator of expression of the essential testis genes.
In contrast, there is less evidence to demonstrate that oestrogen can promote expression of ovarian factors. As mentioned above, FoxL2 works in conjunction with oestrogen receptors to inhibit Sox9 expression in the adult mouse and its expression is significantly increased following oestrogen treatment in wallaby and NT2/D1 cells. FoxL2 KO mice show a decrease in expression of aromatase [194], further suggesting a link between oestrogen signalling and FoxL2 expression. Long term oestrogen treatment can increase Ctnnb1 transcription in mouse prostate [195] and uterus [196], and can reduce the transcriptional activity of AXIN1 (a member of the β-catenin degradation complex) in breast cancer cells, overall suggesting oestrogen can promote stabilisation of β-catenin [197]. Wnt4 is activated in rat neurons following oestrogen exposure [198] but this has not been examined in gonads. There is little evidence that exogenous oestrogen can activate RSPO1 or FST expression in humans and mouse and these genes did not respond to oestrogen treatment in the tammar wallaby [11,12]. However, it is possible β-catenin activation by oestrogen could lead to their upregulation in humans and mouse. Overall, it is highly likely some of these genes are responsive to oestrogen, as their continued expression is required to maintain granulosa cell fate and therefore to support the production of oestrogen.
Conclusions
Defining the mechanisms through which oestrogenic EDCs impact the gonads is essential for understanding the aetiology of DSDs and how these chemicals can impact reproductive development. The rapid decline in human reproductive health has been unequivocally linked to increasing exposure to oestrogenic chemicals in our environment. Here, we have described the known pathways through which gonadal fate decisions are made and the many ways these pathways can be impacted by exposure to oestrogenic chemicals. It is now clear that exogenous oestrogen can target both non-genomic and genomic pathways in the somatic cells of the gonad to affect cell fate decisions and their long-term maintenance. In particular, oestrogen impacts the somatic cells through alterations to MAPK signalling and the subcellular localisation of SOX9, leading to suppression of testis genes and activation of ovarian genes. These effects ultimately disrupt both the development and function of the gonad. Clearly any EDC that alters oestrogen signalling will profoundly impact gonad development and function.
Conflicts of Interest:
The authors declare no conflict of interest.
|
v3-fos-license
|
2023-06-02T15:18:04.774Z
|
2022-01-29T00:00:00.000
|
259014282
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://radjapublika.com/index.php/MORFAI/article/download/721/606",
"pdf_hash": "34201187ad1cc091d34ac6a4e006aac3db8efbf5",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44365",
"s2fieldsofstudy": [
"Business"
],
"sha1": "0e24be8c38d0a86917123b2d9fa71650b28e3a35",
"year": 2022
}
|
pes2o/s2orc
|
THE RELATIONSHIP BETWEEN WORK FATIGUE AND JOB SATISFACTION WITH THE WORK PRODUCTIVITY OF NURSES IN EFARINA HOSPITAL INPATIENT WARDS, 2016
Fatigue is a problem that needs attention. All types of work, both formal and informal, cause work fatigue. Work fatigue reduces performance and increases work errors, and decreasing performance means decreasing work productivity. Job satisfaction is also an important target in human resource management because it directly or indirectly affects the work productivity of employees in an organization or company. This study was an analytic survey with a cross-sectional design to analyze the relationship between work fatigue and job satisfaction with the work productivity of nurses. Work fatigue, job satisfaction, and work productivity were each measured with a questionnaire: work fatigue with the KAUPK2, job satisfaction with a job satisfaction questionnaire, and work productivity with a work productivity questionnaire. The results showed that 28 nurses (59.6%) were tired; of these, 24 (51.1%) had work productivity that was not suitable and 4 (7.7%) had suitable productivity. There were 19 nurses (40.4%) who were not tired; of these, 8 (17.0%) had work productivity that was not suitable and 11 (23.4%) had suitable productivity. There were 25 nurses (53.2%) who were dissatisfied; all 25 (53.2%) had work productivity that was not suitable and none (0%) had suitable productivity. There were 22 nurses (46.8%) who were satisfied; of these, 7 (14.9%) had work productivity that was not suitable and 15 (31.9%) had suitable productivity. The relationships between work fatigue and work productivity and between job satisfaction and work productivity were both significant (p < 0.05). Further research is recommended on the factors other than fatigue and satisfaction that affect work productivity.
INTRODUCTION
Republic of Indonesia Law No. 13 of 2003 concerning Manpower, article 68 paragraph 1, states that every worker or laborer has the right to protection of occupational safety and health, morals and decency, and treatment in accordance with human dignity and values as well as religious values. This protection aims to ensure that the workforce is in the best possible harmony, meaning that health and productivity can be maintained as high as possible; to achieve this, there needs to be a favorable balance between workload factors, additional burdens due to the work environment, and work capacity.
According to Lumenta, one of the important efforts made in development in the health sector is the provision of health services. The most important part of the overall health service is nursing care, and nurses are the largest workforce compared to other workers in hospitals. This large number will be meaningless if there is no effort to improve the quality of nurses' professionalism (Ambar, 2006). According to Setyawati (2003), the human factors that greatly influence labor productivity are sleep problems, biological needs, and work fatigue. It is even stated that the decline in labor productivity in the field is largely caused by work fatigue. Work fatigue is a pattern that arises in a situation that generally occurs in everyone who is no longer able to carry out activities (Sedarmayanti, 2009).
Fatigue is a problem that needs attention. All types of work, both formal and informal, cause work fatigue. Work fatigue reduces performance and increases work errors, and decreasing performance means decreasing work productivity. If the productivity of a worker is disrupted due to physical and psychological fatigue, the result will be felt by the company in the form of a decrease in company productivity (Ambar, 2006). Job satisfaction is quite an interesting and important issue, because it has proven to have great benefits for both individual and industrial interests. For individuals, research on the causes and sources of job satisfaction allows for efforts to increase their happiness in life. For industry, research on job satisfaction is carried out as part of efforts to increase production and reduce costs through improving the attitudes and behavior of employees (Sutrisno, 2009).
1.1.Formulation of the problem
Based on the background above, the research problem can be formulated as follows: the relationship between work fatigue and job satisfaction and the work productivity of nurses in the inpatient rooms of Efarina Berastagi Hospital in 2016 is not yet known.
2.1.Types of research
This research used an analytic survey method with a cross-sectional design (Soekidjo, 2005) to analyze the correlation between risk factors and effect factors, namely work fatigue and job satisfaction with the work productivity of nurses.
2.2.Location and Time of Research
The research location was the inpatient rooms of Efarina Berastagi Hospital, and the research was conducted from July to September 2016.
This location was chosen because the same research had never been carried out there and because of the facilities and support available for conducting the research.
2.3.Population
Efarina Berastagi Hospital has 88 permanent nurses and 96 apprentice nurses on duty in the Inpatient Room. The research population was all permanent nurses who worked in the inpatient rooms of Efarina Berastagi Hospital, namely 88 people.
2.4.Sample
To determine the number of samples when the population is smaller than 10,000, the sample size can be calculated using the Taro Yamane formula as described in Notoadmojo (2005).
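For readers unfamiliar with it, the Taro Yamane formula gives a sample size n = N / (1 + N * e^2) for a population N and a tolerated margin of error e. The short Python sketch below is illustrative only: the paper does not state the margin of error used, so e = 0.10 is an assumption (with N = 88 it happens to reproduce the 47 respondents reported in the results).

# Taro Yamane sample-size formula: n = N / (1 + N * e**2).
# N = 88 permanent nurses (from the text); e = 0.10 is an assumed margin of error.
def yamane_sample_size(population: int, margin_of_error: float) -> int:
    return round(population / (1 + population * margin_of_error ** 2))

print(yamane_sample_size(88, 0.10))  # -> 47 with the assumed e = 0.10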
2.5.Data analysis
Data analysis in this study included:
1. Univariate analysis, which describes the independent and dependent variables in the form of a frequency distribution.
2. Bivariate analysis, a follow-up analysis to examine the relationship between the independent and dependent variables using the chi-square test with a significance level of 0.05 at the 95% confidence level.
3.1.Respondents Work Fatigue
Based on the frequency distribution of respondents by work fatigue in the Inpatient Room of Efarina Berastagi Hospital in 2016, 28 people (59.6%) were tired and 19 people (40.4%) were not tired. Based on the research results obtained with the Work Fatigue Measurement Tool Questionnaire (KAUPK2), the workers who experienced the most fatigue were in the tired category. Feelings of fatigue were usually felt after completing work activities and while at work.
The work fatigue values obtained from the KAUPK2 measurements, namely the distribution of respondents based on feelings about the symptoms of fatigue addressed by the KAUPK2 questions, show that the majority of respondents, namely 30 people (63.8%), stated that they found it difficult to think; 30 people (63.8%) felt tired of talking; 34 people (72.3%) felt nervous about something; 33 people (70.2%) felt that they never concentrated when dealing with work; 29 people (61.7%) felt they could not pay attention to something; 39 people (83.3%) tended to forget something; 23 people (48.9%) felt less confident about themselves; 22 people (46.8%) felt they were not diligent in carrying out their work; 24 people (51.1%) felt reluctant to look people in the eye; 30 people (63.8%) felt reluctant to work quickly; 31 people (66.0%) felt uneasy at work; 46 people (97.9%) felt tired all over; 33 people (70.2%) felt they were acting slowly; 29 people (61.7%) felt they could not walk anymore; 31 people (66.0%) felt tired even before work; 35 people (74.4%) felt that their thinking power had decreased; and 38 people (80.9%) felt anxious about something.
Based on this information, 59.6% of nurses in the wards of Efarina Berastagi Hospital fell into the tired category, indicating that nurses in the inpatient wards experience symptoms of fatigue such as being tired of talking, lack of concentration in dealing with something, drowsiness while working, a heavy feeling in the head, confused thoughts, and others, although the intensity with which these symptoms appear is still low. This situation should not be allowed to continue, because these symptoms can then lead to chronic fatigue (Suma'mur, 1996). If someone suffers from severe fatigue continuously, it will result in chronic fatigue, with fatigue appearing even before starting work. If fatigue continues and causes headaches, dizziness, nausea, and so on, the condition is called clinical fatigue, which results in absenteeism or reluctance to work (Sedarmayanti, 2009). Most of the respondents reported low job satisfaction with their jobs. Most respondents felt satisfied with only a few aspects of their work: the way supervision of task implementation was carried out by superiors (26 people, 55.3%), communication and cooperation between nurses on different shifts (24 people, 51%), and the existing staffing regulations for nurses (26 people, 55.3%). This is a picture of a negative reaction, because along with a decrease in job satisfaction, the productivity and quality of work as a nurse will also decrease, since individuals who do not have high job satisfaction have a negative attitude towards their job.
3.2.Respondent Work Productivity
The work productivity values were obtained from measurements using a work productivity questionnaire. The work productivity data take the form of a performance appraisal, namely a work evaluation using a work productivity questionnaire covering quality of work, promptness, initiative, capability, and communication. The level of labor productivity in this study was measured by giving productivity questionnaires to the head nurses in the inpatient rooms of Efarina Berastagi Hospital in 2016. Most of the respondents, namely 26 people (55.3%), were declared not suitable in achieving the targets set by the hospital, 27 people (57.4%) were rated as appropriate in completing assignments on time, and 24 people (51.1%). Inappropriate work productivity can be caused by employee fatigue, which can affect work productivity. Mental and physical fatigue is very important to pay attention to, because tired mental and physical conditions are closely related to work productivity. The higher the level of physical and mental work fatigue, the more productivity is reduced (Sedarmayanti, 2009).
3.3.The Relationship between Work Fatigue and Respondents' Work Productivity
The analysis carried out to test the relationship between work fatigue and the work productivity of nurses in the inpatient rooms of Efarina Berastagi Hospital in 2016 was the chi-square test, where Ho is rejected if the probability is smaller than the significance level of 0.05.
The probability obtained in the chi-square test was 0.006, which is less than 0.05 (0.006 < 0.05), so Ho is rejected and Ha is accepted. The conclusion is that there is a relationship between work fatigue and the work productivity of nurses in the inpatient rooms of Efarina Berastagi Hospital in 2016. From the results of the analysis, it can be seen that there is a significant relationship between fatigue and labor productivity. This relationship indicates that an increase in fatigue is followed by a decrease in labor productivity, or vice versa, a decrease in fatigue is followed by an increase in labor productivity.
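As a hedged illustration of how this chi-square test can be reproduced, the Python sketch below uses scipy on a 2x2 contingency table built from the fatigue-by-productivity counts reported above (24, 4, 8, 11). The use of the default Yates continuity correction is an assumption, not a detail stated by the authors, and the same approach applies to the job satisfaction test in the next subsection.

from scipy.stats import chi2_contingency

# Rows: tired / not tired; columns: work productivity not suitable / suitable.
# Counts are taken from the results reported in this paper.
table = [[24, 4],
         [8, 11]]
chi2, p, dof, expected = chi2_contingency(table)  # Yates correction is applied by default for 2x2 tables
print(round(chi2, 2), round(p, 3))  # p < 0.05, so Ho is rejected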
3.4.The Relationship between Job Satisfaction and Respondents' Work Productivity
The analysis carried out to test the relationship between job satisfaction and the work productivity of nurses in the inpatient rooms of Efarina Berastagi Hospital in 2016 was the chi-square test, where Ho is rejected if the probability is smaller than the 0.05 significance level.
The probability obtained in the chi-square test was 0.000, which is less than 0.05 (0.000 < 0.05), so Ho is rejected and Ha is accepted. The conclusion is that there is a relationship between job satisfaction and the work productivity of nurses in the inpatient rooms of Efarina Berastagi Hospital in 2016. These results are consistent with those of Marsono (2001), in which the variables of satisfaction and work motivation had a significant influence on employee work productivity, and with Jarwadi's (2001) study, in which work motivation variables had a significant influence on employee work productivity. In general, we can assume that job satisfaction and productivity are closely related to one another: if an employee has high work performance, he or she will obtain satisfaction at work; conversely, if he or she does not obtain satisfaction, the resulting performance is low. For this reason, companies need to pay attention to and continuously improve the job satisfaction and work productivity of their employees (Yanto, 2007).
4.CONCLUSION
From the results of the research that has been conducted on nurses in the Inpatient Rooms of Efarina Berastagi Hospital in 2016, the following conclusions and suggestions are obtained:
1. Of the 47 respondents, by age group the most were in the 25-34 year age group (57.4%); by gender, most were women (36 people, 76.6%); by marital status, most were already married (41 people, 87.2%); and by years of service, the most were those working <5 years (18 people, 27.7%).
2. 28 nurses (59.6%) felt tired and 19 nurses (40.4%) did not feel tired.
3. 25 nurses (53.2%) were dissatisfied and 22 nurses (46.8%) were satisfied.
4. Work productivity was not appropriate for 33 people (70.2%) and appropriate for 14 people (29.8%).
5. There is a significant relationship between work fatigue and job satisfaction and work productivity.
|
v3-fos-license
|
2016-05-12T22:15:10.714Z
|
2013-10-12T00:00:00.000
|
15697467
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcoralhealth.biomedcentral.com/track/pdf/10.1186/1472-6831-13-53",
"pdf_hash": "991746734ba823e4e66bc4c593d63e6bebe2c9fe",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44368",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "991746734ba823e4e66bc4c593d63e6bebe2c9fe",
"year": 2013
}
|
pes2o/s2orc
|
Developmental delays and dental caries in low-income preschoolers in the USA: a pilot cross-sectional study and preliminary explanatory model
Background Anecdotal evidence suggests that low-income preschoolers with developmental delays are at increased risk for dental caries and poor oral health, but there are no published studies based on empirical data. The purpose of this pilot study was two-fold: to examine the relationship between developmental delays and dental caries in low-income preschoolers and to present a preliminary explanatory model on the determinants of caries for enrollees in Head Start, a U.S. school readiness program for low-income preschool-aged children. Methods Data were collected on preschoolers ages 3–5 years at two Head Start centers in Washington, USA (N = 115). The predictor variable was developmental delay status (no/yes). The outcome variable was the prevalence of decayed, missing, and filled surfaces (dmfs) on primary teeth. We used multiple variable Poisson regression models to test the hypothesis that within a population of low-income preschoolers, those with developmental delays would have increased dmfs prevalence than those without developmental delays. Results Seventeen percent of preschoolers had a developmental delay and 51.3% of preschoolers had ≥1 dmfs. Preschoolers with developmental delays had a dmfs prevalence ratio that was 1.26 times as high as preschoolers without developmental delays (95% CI: 1.01, 1.58; P < .04). Other factors associated with increased dmfs prevalence ratios included: not having a dental home (P = .01); low caregiver education (P < .001); and living in a non-fluoridated community (P < .001). Conclusions Our pilot data suggest that developmental delays among low-income preschoolers are associated with increased primary tooth dmfs. Additional research is needed to further examine this relationship. Future interventions and policies should focus on caries prevention strategies within settings like Head Start classrooms that serve low-income preschool-aged children with additional targeted home- and community-based interventions for those with developmental delays.
Background
Dental caries is the most common disease in children [1]. Recent epidemiologic data from the U.S. National Health and Nutrition Examination Survey (NHANES) suggest that dental caries prevalence among preschool children ages 2-5 years increased by 15.1% (from 24.2% in 1988-1994 to 27.9% in 1999-2004) [2]. Furthermore, from 1999-2004, 47.8% of preschoolers from low-income households experienced caries and 35% had untreated caries (compared to 11.4% and 6% of preschoolers from higher income households, respectively) [3,4]. These data underscore the association between poverty and poor oral health [5][6][7] in preschoolers and raise public health concerns, particularly in regards to the U.S. Healthy People 2020 objectives that call for reductions in the percentage of preschoolers with dental caries experience and untreated dental decay to 33.3% and 23.8%, respectively [8].
Poor oral health is associated with school absenteeism, learning problems, and pain [9] as well as systemic disease, hospitalization, and in rare cases death [10]. Oral diseases during early childhood are likely to have health consequences over the life course [11], which highlights the importance of caries prevention strategies, particularly for low-income preschoolers.
The U.S. Head Start program was promulgated in 1965 to address disparities in school readiness for low-income preschoolers. Head Start emphasizes cognitive and social development as well as health promotion and nutrition [12]. The program focuses on low-income preschoolers and was founded on the premise that improving nutritional intake and health outcomes can help to reduce disparities in school readiness [13]. At the start of each school year, all Head Start enrollees are evaluated by an education specialist to identify special health care needs, which are defined as "deafness, speech or language impairments, visual impairments including blindness, serious emotional disturbance, orthopedic impairments, autism, traumatic brain injury" or developmental delays [12]. For Head Start children identified with a developmental delay, Lead Education Agencies are responsible for providing tailored Individualized Education Programs (IEPs) [14]. IEPs are written documents that describe the child's specific delay, skills that need to be developed, services the school will provide, and where the services will take place. In 2009, there were over 900,000 Head Start enrollees in the U.S. [15] and 12% of enrollees had an IEP [16].
In regards to dental care, nearly 85% of Head Start enrollees received preventive dental care and 88% had a dental examination in the 2010-2011 program year [16]. These data suggest that Head Start has reduced some of the documented barriers to dental care for low-income preschoolers [17,18]. However, dental caries prevalence among Head Start enrollees remains high, ranging from 38% in Connecticut to 86% in Florida [13][14][15][16][17][18][19][20][21][22][23]. A 2005 prospective study reported that providing dental care coordination services to the caregivers of Head Start enrollees improved dental use for children but did not improve oral health status [24]. Collectively, these findings suggest that interventions focusing solely on increasing dental care utilization are insufficient in preventing dental disease in low-income preschoolers served by the Head Start program.
Targeted interventions, such as school-based sealant programs, have the potential to improve the oral health of children at greatest risk for poor oral health [17]. Anecdotal evidence suggests that preschoolers with developmental delays are at increased risk for dental caries, but there are no published studies to support this hypothesis. The current pilot study was guided by an adapted version of Patrick's sociocultural oral health disparities model [25], which posits that the determinants of dental caries in vulnerable children are multifactorial. We tested two hypotheses: 1) low-income preschoolers with developmental delays have greater dental caries prevalence (measured by dmfs) than those without developmental delays; and 2) other factors are associated with dental caries in low-income preschoolers.
Methods
Study Design, Participants, and Location. This was a cross-sectional pilot study based on secondary data. The study focused on preschoolers ages 3-5 years in two Head Start classrooms in Washington, USA (N = 115). Both classrooms were located in Kittitas County, a rural county in eastern Washington. Over 92% of Kittitas County is White compared to 82.0% for Washington state [26]. The median household income was $42,769 and 22.3% of individuals were below the Federal Poverty Level ($58,890 and 12.5%, respectively, for Washington state) [26]. We received human subjects approval to conduct this study from the University of Washington Institutional Review Board.
Conceptual model
A sociocultural model on oral health disparities presented by Patrick and colleagues was adapted to generate a preliminary conceptual model [25]. This model posits that social and cultural factors from multiple levels influence oral health outcomes for vulnerable populations, including low-income preschool-aged children. The original model posits that these multilevel factors interrelate directly and indirectly to produce oral health disparities within vulnerable populations. Our parsimonious model conceptualized covariates as direct correlates of dental caries and each covariate was classified into one of four domains: Ascribed factors (immutable individual-level demographic characteristics: age, sex, race); Proximal factors (modifiable individual-level behavioral characteristics: communication difficulties; dental home); Immediate factors (family-level interpersonal factors: primary caregiver's education; primary caregiver's employment status; family structure; home health environment); Distal factors (system-level environment: community water fluoridation).
Data sources
There were two data sources: Head Start enrollment and health history forms. The enrollment form contained demographic information about the child (e.g., age, sex, race, Individual Education Program [IEP] participation) and the primary caregiver (e.g., education, employment, household structure). The health history form contained information on whether the child had difficulties communicating, had a dental home (or a place to take their child for dental care), lived in a smoke-free household, and lived in a fluoridated community. All data were from the 2010-2011 Head Start school year.
Outcome measure
The outcome measure was the number of decayed, missing, or filled surfaces (dmfs) on primary teeth, a composite measure of dental caries and treatment experience. We used the National Institute of Dental and Craniofacial Research (NIDCR) Early Childhood Caries Collaborating Centers (EC4) criteria [27], which are based on the World Health Organization (WHO) methods [28]. The WHO methods define decay on pit and fissure or smooth surfaces as "an unmistakable cavity, undermined enamel, or a detectable softened floor or wall" [28]. To account for trauma and natural exfoliation, a surface was classified as missing only if the tooth was missing because of caries. Surfaces restored with amalgam, composite, glass ionomer, or stainless steel crowns were classified as filled. Sealed surfaces were classified as sound. Consistent with EC4 criteria, if there was uncertainty about the status of a tooth surface, the surface was classified into the more conservative category. Surface-level caries data were collected by a single trained and calibrated pediatric dentist. Five-percent of the study population was randomly selected for a second caries exam to allow for an assessment of intrarater reliability. The Kappa statistic was used to assess for intrarater consistency in the caries data. The intrarater reliability for the caries exam data was found to be Kappa = 0.69 (95% CI: 0.61, 0.76; P < .001), which indicates substantial agreement.
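The Kappa statistic quoted above compares the observed agreement between the two caries exams with the agreement expected by chance. The Python sketch below is a generic illustration of that calculation, not the authors' analysis; the two example rating lists are hypothetical and stand in for surface-level calls from the first and repeat exams.

from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    # Observed agreement: proportion of surfaces on which the two exams give the same call.
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: sum over categories of the product of the marginal proportions.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    chance = sum(freq_a[c] * freq_b[c] for c in set(ratings_a) | set(ratings_b)) / n ** 2
    return (observed - chance) / (1 - chance)

# Hypothetical calls (s = sound, d = decayed) from a first and a repeat exam.
exam1 = ["s", "d", "s", "s", "d", "s", "d", "s"]
exam2 = ["s", "d", "s", "d", "d", "s", "s", "s"]
print(round(cohens_kappa(exam1, exam2), 2))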
Predictor variable
The main predictor variable was the child's developmental delay status, defined as whether the child had an Individualized Education Program (IEP) (no/yes). While there are limitations associated with using an IEP as a proxy for developmental delays (e.g., under-identification of disabilities), nearly 20% of enrollees in our study had an IEP, which falls between the 12% of Head Start children nationally with an IEP [16] and the 33% prevalence estimate of delay from a previous study [29].
Model covariates
There were 10 model covariates hypothesized as correlates of dmfs or as confounders of the relationship between developmental delays and dmfs. These covariates were classified into four domains (see Conceptual Model subsection).
There were two binary proximal covariates (no/yes): communication difficulty and dental home. Communication difficulty, assessed by a Head Start teacher, was measured using the communication subsection of the Ages and Stages Questionnaire, 2nd Edition (ASQ). The ASQ is a validated age-specific screener used to assess multiple developmental domains such as communication, motor, problem solving, and personal-social skills [32]. Children scoring greater than 38 points, 39 points, or 31 points on the communication subsection of the 36-, 48-, and 60-month ASQ, respectively, were classified as having no communication difficulties. The remaining children were classified as having communication difficulties. Dental home, which measured whether the child had a place to go for regular preventive and restorative dental care when needed, was assessed by asking the caregiver whether they needed assistance finding a dentist (no/yes).
There were four caregiver-reported immediate covariates: caregiver education (less than high school; high school; greater than high school) [30]; caregiver employment status (unemployed; in school/training; employed) [33]; family structure (defined as whether the child lived in a single parent or two parent household) [34]; and whether the child lived in a smoke-free home (no/yes) [35], a proxy for the home health environment.
There was one caregiver-reported distal covariate: whether the child lived in a community with fluoridated water (no/yes) [36].
Statistical analyses
We did not calculate statistical power based on previous work cautioning against power calculations for retrospective studies [37]. After generating descriptive statistics, we used the Pearson chi-square test to assess the relationships between model covariates and the main predictor variable (developmental delay status) (α = 0.05). Because the outcome was not normally distributed, we used the Wilcoxon-Mann-Whitney U test to compare median dmfs rates across model covariates. A multiple variable Poisson regression model was generated to test our hypothesis that the dmfs prevalence rate would be higher in children with developmental delays (GENLIN function with log link). Poisson regression results were presented as regression parameters (i.e., beta coefficients) with standard errors and prevalence ratios. There was no evidence of collinearity between model covariates (e.g., developmental delays and communication difficulties) and all covariates were included in the final regression model. We used PASW Statistics version 18.0 for Windows (Chicago, IL).
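The model itself was fit in PASW (GENLIN with a log link). Purely as an illustration of this type of model, the R sketch below fits a Poisson regression to simulated data and converts the coefficients to prevalence ratios; the data frame, variable names, and values are invented, and only a subset of the covariates is included.

```r
# Simulated stand-in for the study data: dmfs counts plus a few covariates.
set.seed(1)
n  <- 115
hs <- data.frame(
  dmfs  = rpois(n, 5),                  # caries/treatment count outcome
  delay = rbinom(n, 1, 0.17),           # developmental delay (IEP) indicator
  age   = sample(3:5, n, replace = TRUE),
  sex   = rbinom(n, 1, 0.5)
)

# Poisson regression with a log link; exponentiated coefficients are prevalence ratios.
fit <- glm(dmfs ~ delay + age + sex, family = poisson(link = "log"), data = hs)
summary(fit)               # beta coefficients with standard errors
exp(coef(fit))             # prevalence ratios
exp(confint.default(fit))  # Wald 95% confidence intervals on the ratio scale
```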
Descriptive statistics
There were 115 preschoolers in our study and 17.4% were identified with a developmental delay (Table 1). Thirteen percent of preschoolers had a communication problem and 91.3% had a dental home. Caregiver education level was evenly distributed across the three categories and 27.8% of caregivers were unemployed. Nearly 90% of preschoolers lived in a smoke-free home and 64% lived in communities with fluoridated water. A significantly larger proportion of preschoolers with a developmental delay were male compared with preschoolers without a developmental delay (75% and 44.2%, respectively; P = .012). Nearly 49% of children had zero dmfs (data not shown). The mean dmfs was 5.8 (standard deviation: 11.2 dmfs; median: 1.0 dmfs; maximum: 65 dmfs). The mean number of decayed, filled, and missing surfaces was 1.3, 4.0, and 0.5, respectively. There were no significant differences in the median dmfs rates across all model covariates (Table 2).
Regression models
The covariate-adjusted Poisson regression model indicated that developmental delays were significantly associated with dmfs (Table 3). Preschoolers with developmental delays had a dmfs prevalence ratio 1.26 times that of children without developmental delays (95% CI: 1.01, 1.58; P < .04). Of the 10 remaining model covariates, six covariates across all four model domains were significantly associated with dmfs (age, communication difficulties, dental home, caregiver education level, caregiver unemployment, and living in a community with fluoridated water). Older preschoolers as well as preschoolers with communication difficulties (ascribed and proximal factors, respectively), those with caregivers who finished high school or less (an immediate factor), and children with an unemployed caregiver (also an immediate factor) had increased dmfs prevalence ratios. Preschoolers with a dental home (a proximal factor) and those living in communities with fluoridated water (a distal factor) had significantly decreased dmfs prevalence ratios.
Discussion
This is the first published study to examine the relationship between developmental delays and dental caries in low-income preschool-aged children. We tested two hypotheses within a population of preschoolers in the Head Start program. The first hypothesis was that the dmfs prevalence ratio would be higher for Head Start preschoolers with developmental delays than for Head Start preschoolers without. Our findings support this hypothesis. There are no studies to which we can directly compare our results, but there are three potential explanations. First, preschoolers with developmental delays may not cooperate with home care behaviors such as toothbrushing, which leads to plaque accumulation and limited exposure to topical fluorides. Second, preschoolers with developmental delays may be exposed more frequently to fermentable carbohydrates (e.g., medications, sugar sweetened beverages, sweets). Third, caregivers of preschoolers with disabilities may experience higher levels of caregiver stress [38], which could exacerbate the preceding factors that contribute to poor oral health. These findings suggest that low-income preschool-aged children with developmental delays are a vulnerable subgroup among low-income preschoolers.
The second hypothesis was that other factors would be related to dmfs. Six model covariates were significantly associated with caries: age, communication difficulties, not having a dental home, lower caregiver education, unemployment, and living in a community with nonfluoridated water. Previous studies support our findings regarding age [27]. In terms of the significant proximal factors (communication difficulties and dental home), there are no studies to which we can directly compare our findings. However, two studies suggest a relationship between child temperament and caries [39,40]. In our study, there was low correlation between communication difficulties and developmental delays, suggesting that these measures capture different aspects of child behaviors. Additional research is needed to identify the mechanisms by which communication difficulties can lead to increased caries. Furthermore, with regard to the dental home variable, a preliminary study reported that young children have less tooth decay when their mothers have a dental home [41]. Our findings are the first to suggest an association between children having a dental home and lower caries experience rates. While dental homes are considered to be important by parents and dentists [42,43], there are few relevant studies to which we can compare our findings. Children with a dental home may have caregivers with good oral health behaviors (e.g., prevention-oriented dental care use, healthy eating, regular home oral hygiene). We recognize the limitations associated with our operationalization of the dental home, which is an area of dental research that requires additional attention. Future research should continue to test different operationalizations of the dental home concept, evaluate clinical outcomes associated with dental homes, and identify the specific features of dental homes that lead to good oral health.

There were also two significant immediate factors: caregiver education level and employment status. There is extensive literature on the oral health effects of low caregiver education, which is associated with low health literacy, negative oral health-related behaviors, and social disadvantage [44][45][46][47]. In terms of employment effects, compared to preschoolers with an unemployed caregiver, preschoolers with an employed caregiver had significantly fewer caries whereas preschoolers with a caregiver in school had significantly greater caries. A potential explanation is that employed caregivers may have greater flexibility to take time off from work to take their child to the dentist. Caregivers in school may rely on relatives for caretaking responsibilities, leaving them fewer opportunities to oversee enforcement of toothbrushing and healthy eating. Our findings conflict with a recent study from Australia, which reported no relationship between employment and caries in 20-month-old children but reported a significant interaction between employment and family structure [33]. Broadly, there is growing recognition that addressing the social determinants of pediatric health, such as caregiver education and employment, has the potential to improve various health outcomes, including oral health [48]. Our findings underscore the importance of identifying the specific factors associated with employment that could promote child health outcomes, such as time-flexible work policies [49], and examining how children's oral health is influenced by interactions between employment and family-level factors.
Dental health professionals also have a responsibility to partner with the health policy and public health communities to help craft social and economic policies that seek to improve the upstream determinants of health as a way to achieve oral health equity in vulnerable populations.
The only distal factor in our model (living in a community with fluoridated water) was significantly associated with fewer caries. Numerous studies support the benefits associated with community water fluoridation [36,50,51]. Because segments of the population are concerned about the safety of community water fluoridation or oppose it [52], there is a need for continued research on the behavioral and social determinants of opposition to water fluoridation. Policies and interventions must be developed to ensure that health professionals have the resources to inform patients and the public about the importance of community water fluoridation.
Also of interest are the two immediate factors (family structure and living in a smoke-free home) and the two ascribed factors (sex and race) that failed to reach statistical significance in our regression model. Our finding that family structure was not associated with caries is inconsistent with a previous study reporting that children from one-parent families had significantly higher caries rates than those from two-parent families [34]. Our results are also inconsistent with previous studies that link caregiver smoking and caries [35,[53][54][55]. One potential explanation is social desirability bias regarding reliable reporting of smoking status [56]. We would not expect differences in dental caries prevalence by sex, as demonstrated in our model, although a previous study found that female infants had greater odds of developing severe caries [31]. Furthermore, in our model, race failed to reach statistical significance, which is inconsistent with previous findings [57]. A possible explanation is the low variance in race in our study population, in which most non-White children were of Hispanic or Latino descent. Future research should continue to examine how features associated with households, the home health environment, and race/ethnicity are related to dental disease in young children.
Collectively, our findings support a preliminary conceptual model on dental caries for low-income preschoolers enrolled in the Head Start program (Figure 1). There are two features of this model. The first is that the correlates of primary tooth dmfs are found at multiple levels. Our model suggests that reducing dental caries in low-income preschool-aged populations requires complex interventions that reach beyond single-level approaches such as ensuring dental homes or community water fluoridation [58]. Limited financial and human resources coupled with persisting caries prevalence rates among vulnerable populations indicate the need for innovative strategies that address the multilevel determinants of poor oral health. This is related to the second feature of the model: the mutability of model covariates. Some of the model features (e.g., developmental delays, caregiver education level, employment) are immutable in the short-term, which represents opportunities to implement targeted interventions and policies. For instance, if children with developmental delays are at greater risk for dmfs, as demonstrated in our study, targeted interventions should focus on these preschoolers rather than all Head Start enrollees. Other model features (e.g., dental home, community water fluoridation) are mutable and may serve as active ingredients in a targeted intervention. For example, an intervention focusing on children with developmental delays could include case managers who work with caregivers and community dentists to ensure that the child is seen regularly by a dentist for checkups and treatment as necessary and behavioral interventions that reinforce use of fluoridated water, regular toothbrushing with fluoride toothpaste, and healthy diet. Additional research is needed to refine and validate our preliminary dental caries model so that appropriate interventions and policies can be developed and tested.
Increases in dental caries prevalence in preschool-aged children in the U.S. have renewed interest in population-based strategies to prevent and manage dental disease in young children [59]. Intensive multilevel interventions implemented within Head Start classrooms coupled with community- and home-based strategies for the highest risk children may be needed to achieve meaningful health improvements [60]. Head Start programs should implement and test preventive strategies within classrooms (e.g., twice daily toothbrushing with fluoridated toothpaste, diet control, iodine and fluoride varnish applications) [61,62]. A recent study conducted within Head Start classrooms suggests that fluoride-xylitol toothpastes are not more efficacious than fluoride-only toothpastes [63]. Research is needed to evaluate the efficacy and acceptability of additional preventive strategies that could be implemented within Head Start classrooms such as toothbrushing with higher concentration fluoride products and distributing snacks containing therapeutic levels of xylitol [64,65]. Head Start teachers and caregivers will require training about dental disease prevention and how to properly implement these strategies [66][67][68]. Beyond the classroom setting, there are promising opportunities to implement caregiver-, household-, and community-level interventions that target Head Start enrollees with developmental delays [69,70]. These efforts will require rigorous evaluation so that interventions can be modified as needed and disseminated to other settings.
This study has a number of strengths including adaptation of a conceptual framework that guided all stages of the study, assessment of intrarater reliability for the clinical caries data, and blinding of the caries examiner. However, as with all studies, there were limitations. The first is that our conceptual model is likely to be incomplete. Because of data limitations, we were unable to include all cultural, social, and environmental factors from Patrick's model (e.g., cultural attitudes toward oral health, norms, social capital, social disadvantage, area-level poverty). Future work could investigate additional cultural and biopsychosocial factors related to dental caries in young children [71]. Second, the data were cross-sectional and there is no assumption of causality. Longitudinal studies are needed to better understand how risk factors influence oral health outcomes over time. Third, the study focused on two Head Start classrooms in a rural county, which limits external generalizability of our study findings. There is a need to conduct larger studies that include Head Start classrooms from a variety of geographic settings.
Conclusions
Based on the results of this pilot study, we draw two conclusions. There was a significant positive association between developmental delays and dmfs prevalence in low-income preschool-aged children served by Head Start. In addition, factors such as having a dental home and living in a community with fluoridated water were associated with significantly lower dmfs prevalence ratios. Additional studies are needed to further examine the relationship between developmental delays and primary tooth caries in preschoolers, the mechanisms underlying this relationship, and multilevel strategies to reduce oral health disparities in vulnerable preschool-aged children.
An exploration of Canadian government officials’ COVID-19 messages and the public’s reaction using social media data
Governments can use social media platforms such as Twitter to disseminate health information to the public, as evidenced during the COVID-19 pandemic [Pershad (2018)]. The purpose of this study is to gain a better understanding of Canadian government and public health officials' use of Twitter as a dissemination platform during the pandemic and to explore the public's engagement with and sentiment towards these messages. We examined the account data of 93 Canadian public health and government officials during the first wave of the pandemic in Canada (December 31, 2019 to August 31, 2020). Our objectives were to: 1) determine the engagement rates of the public with Canadian federal and provincial/territorial governments and public health officials' Twitter posts; 2) conduct a hashtag trend analysis to explore the Canadian public's discourse related to the pandemic during this period; 3) provide insights on the public's reaction to Canadian authorities' tweets through sentiment analysis. To address these objectives, we extracted Twitter posts, replies, and associated metadata available during the study period in both English and French. Our results show that the public demonstrated increased engagement with federal officials' Twitter accounts as compared to provincial/territorial accounts. For the hashtag trends analysis of the public discourse during the first wave of the pandemic, we observed a topic shift in the Canadian public discourse over time between the period prior to the first wave and the first wave of the pandemic. Additionally, we identified 11 sentiments expressed by the public when reacting to Canadian authorities' tweets. This study illustrates the potential to leverage social media to understand public discourse during a pandemic. We suggest that routine analyses of such data by governments can provide governments and public health officials with real-time data on public sentiments during a public health emergency. These data can be used to better disseminate key messages to the public.
Introduction
The novel coronavirus (COVID-19, SARS-CoV-2) was first documented in December 2019 in Wuhan City, Hubei Province, China [2]. The virus spread rapidly around the world and by March 11, 2020, the World Health Organization declared the COVID-19 outbreak a pandemic [3]. Worldwide, as of June 20, 2021, a total of 178,433,920 cases of COVID-19 and 3,864,731 deaths were confirmed in 192 countries and regions [4], with cases surpassing those of the Middle East Respiratory Syndrome (MERS), the Severe Acute Respiratory Syndrome (SARS) and the previous H1N1 epidemics [5][6][7].
Unfortunately, the COVID-19 pandemic has led to difficult situations such as deaths, post-COVID psychological damage, and substantial pressure on healthcare systems. It presents critical risks for individuals and communities. This pandemic can be viewed as an unfortunate event that has caused direct harm to individuals [8]: it is a global catastrophe, it cannot be easily eradicated, and it brings new risks, such as virus mutations, that are unknown to the scientific community. As cases increased and community spread worsened, government and public health officials worldwide adopted measures to reduce transmission, including handwashing [9], social distancing, and self-isolation following exposure [10]. In this context, governments needed to use all available means to mitigate the risks associated with COVID-19. To this end, many officials turned to social media, in addition to traditional media sources (e.g., television, radio), as a platform to disseminate relevant health information to the public [11,12]. Through all these measures and actions, governments aimed to reduce the risks related to the pandemic.
The purpose of our study is to provide insights on how Canadian federal and provincial/territorial governments and Canadian and provincial/territorial public health officials used Twitter as a platform to disseminate COVID-19 information and to determine the public's engagement with and reaction to these messages. Specifically, our objectives are to 1) determine the engagement rates of the public with the Canadian federal and provincial/territorial government and public health officials' Twitter posts; 2) assess hashtag trends and topic shift related to the Canadian public discourse on Twitter regarding COVID-19; 3) conduct an in-depth sentiment analysis of Twitter users' responses to government and public health officials' COVID-19-related Twitter posts.
The paper is organized as follows: Section 2 provides the theoretical background of the study, Section 3 describes the adopted methodology. Section 4 is devoted to the results that are discussed in Section 5. Section 6 concludes the paper.
Theoretical background
In this paper, we consider the COVID-19 pandemic as an unfortunate event that has caused direct harm to individuals, including deaths, admission to intensive care for those with severe COVID-19, travel restrictions, job losses, etc. In this context, governments should use all means available to mitigate risks associated with the COVID-19 pandemic. To this end, they need to communicate the mitigation strategies to their citizens and make them aware of the risks they face. The importance of risk communication during public health emergencies has been consistently highlighted in the literature [13][14][15][16]. Risk communication describes any purposeful exchange of information about risks between interested parties (e.g., from a government organization to the public) [17]. In risk communication, there are three components to consider [18]: 1) the message features, which define the characteristics of the message to be shared with people, 2) the messengers, which are the entities that send the message, and 3) the audience, the people to whom the message is sent. Based on these components, governments (the messengers) communicate to the public (the audience) about the COVID-19 pandemic (the message).
This communication can be conducted in different ways and should be two-way [8]. To succeed, governments need both to provide information that mitigates the risks and to understand people's perceptions of (and reactions to) these risks. To this end, governments turned to social media platforms, since they allow this bidirectional communication between governments and people. In previous pandemic situations where social media was used (such as the H1N1 outbreak and the Ebola outbreak), social media was recognized as an excellent vector for communicating with people [17,19]. Specifically, successful risk communication strategies are built on trust in public health officials [20]. Hence, it is very important for governments to give people confidence in their preparedness to address the pandemic; they need to ensure that they are providing accurate information and that people are convinced by the measures put in place [21]. It then becomes interesting to investigate how governments used social media to disseminate messages to their citizens to better inform them about the risks that COVID-19 presents to society as a whole, and to see how people reacted to these messages.
The study conducted in this paper takes place in Canada. Canada is a federal country where power is shared between the federal government and provincial/territorial governments. There is a paucity of data describing how the public reacted to COVID-19-related public health messages provided by varying levels of government in Canada. The purpose of this study is therefore to 1) determine the engagement rates of the public with Canadian federal and provincial/territorial governments and public health officials' Twitter posts; 2) conduct a hashtag trend analysis to explore the Canadian public's discourse related to the pandemic during this period; 3) provide insights on the public's reaction to Canadian authorities' tweets through sentiment analysis.
Twitter as the target data platform
We chose Twitter as our research social media platform. Twitter is a microblogging and social networking platform on which registered users post and interact with 280-character messages known as tweets. We selected it since it is one of the world's largest social media platforms and allows users to disseminate real-time information to a wide audience [22]. Since its launch, Twitter has become an important channel for public communication [23]. Its usefulness, efficiency, and impact have been demonstrated particularly in the contexts of politics [24], natural disaster crises [25], brand communications [26], and everyday interpersonal exchanges [27]. Twitter has evolved into a key public health dissemination tool, as was observed during the COVID-19 pandemic. Furthermore, Twitter has been increasingly used to conduct research, as it allows for the study of large-scale, worldwide-web communications [28].
Conceptual framework
In the context of this research, we will look at the data posted by Canadian public officials addressed to the public and how the public reacted to Canadian public officials. In the first case, we will analyze the information communicated by governments to the public that we refer to as trends analysis. In the second case, we will look at the degree to which the public engaged with the messages posted by government officials (engagement) and how they reacted to them (sentiment). The conceptual model of this research is depicted in Fig 1.
Government and public health officials' social media accounts
The messengers are the Canadian public officials, whom we canvassed across four categories of Twitter accounts: provincial/territorial and federal government officials' accounts, and provincial/territorial and federal public health officials' accounts. We selected these accounts since government and public health authorities led the dissemination of COVID-19-related information to the public in Canada.
For the provincial/territorial and federal government official accounts, we downloaded tweets from the official department/organizational accounts (e.g., @Canada, @ONgov, @GouvQc), as well as the individual accounts of the corresponding organizations' leaders (e.g., Canada's Prime Minister Justin Trudeau, Ontario's Premier Doug Ford, and Quebec's Premier François Legault). We replicated this model for the public health officials' accounts, downloading tweets from organizational handles (e.g., @GovCanHealth, @ONThealth, @sante_qc) as well as the leaders of these organizations (e.g., Canada's Minister of Health the Honourable Patty Hajdu, Ontario's Minister of Health the Honourable Christine Elliott, and Quebec's Minister of Health and Social Services Christian Dubé). In Appendix A (Tables 1 and 2 in S1 File), we present the full list of Twitter handles used to obtain study data.
To ensure the validity of selected accounts, we limited our data collection to Twitter-verified accounts [29]. Verified accounts are often destined for well-known organizations or individuals and are indicated by a blue verified badge that appears next to the account holder's name. It is important to note that Twitter does not endorse posts from any account-verified or unverified.
Data access and format
The messages to be analyzed are tweets. Twitter offers two relevant Application Programming Interface (API) components to access tweets, data, and metadata. These applications are the Representational State Transfer (REST) API, used to retrieve past tweets matching established criteria within a search window available for Twitter searches; and the streaming API, used to subscribe to a continuous stream of new tweets matching the criteria, delivered via the API as soon as they become available. Each of these two APIs is offered by Twitter on three different levels, known as the standard API, the premium API, and the enterprise API [30]. To inform our study, we accessed tweets through the REST component of the premium API. We describe all the steps of data collection, filtering, and processing in Section 3.5.
Our data analysis was conducted using R, which is a programming language and free software for statistical computing and graphics commonly used by statisticians and data miners [31]. We used rtweet, which is a community-maintained R client for accessing Twitter's REST and stream APIs [32] in order to access the data and metadata needed to perform our analysis.
Mining tweets through the Twitter REST Premium API provided us with the text of the tweets as well as several metadata fields, including the sending user's Twitter name and numerical ID, the time of posting, geolocation information (when available), and various data points related to the sender's Twitter profile settings, which we briefly describe in Appendix B in Table 3 in S1 File [30].
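As a rough sketch of this kind of retrieval (not the authors' premium-API pipeline), the calls below use rtweet's standard get_timeline helper to pull recent timeline tweets for a few accounts; the handles and tweet counts are illustrative, and API credentials are assumed to be configured beforehand.

```r
library(rtweet)  # community-maintained R client for the Twitter APIs

# Illustrative handles only; the full list used in the study is in Appendix A in S1 File.
handles <- c("Canada", "GovCanHealth", "ONgov", "sante_qc")

# Pull each account's recent timeline. The standard API caps retrieval at roughly
# 3,200 tweets per account, which is why a premium/archive endpoint is needed for
# a complete historical collection such as the one described above.
tweets <- get_timeline(handles, n = 3200)

# The returned data frame carries the tweet text plus metadata such as creation
# time, language, favorite/retweet counts, and hashtags.
names(tweets)
```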
Data collection, filtering, and preprocessing
Between December 31, 2019, and August 31, 2020, we archived all the Twitter posts, replies, and associated metadata published by the Twitter accounts presented in Appendix A in S1 File, with no restriction. The dates of collected data fall within the timeframe of the first wave of the COVID-19 pandemic in Canada, which was confirmed using Google Trends data [33,34]. This first step of data collection yielded 65,793 archived tweets and 80,256 archived replies.
As a second step, we filtered the collected tweets to retain those related to COVID-19. To this end, we filtered collected tweets based on the hashtags present in the tweets' metadata. To facilitate this, we established a list of hashtags related to COVID-19 (see Appendix C in S1 File for the full list of included hashtags). To develop the list of COVID-19-related hashtags, we used relevant literature [35] and social media tools and guides [36][37][38]. Some of these hashtags were related specifically to the COVID-19 pandemic (e.g., #COVID-19, #2019nCov), others were related to public health messaging, to COVID-19 impacts (e.g., #StayHome, #StayHomeSaveLives), or to related topics (e.g., #N95, #PPE, which describe the personal protective equipment required during the pandemic). Tweets were required to contain at least one of these COVID-19-related hashtags to be retained in the dataset. Once the tweets were filtered, we collected all replies related to the retained tweets.
Finally, as a third step, we preprocessed the retained tweets and replies in order to use them in our data analysis. To this end, we first eliminated any non-English or non-French language tweets and replies from our dataset. Next, we removed retweets to reduce repetition in the dataset. Finally, we converted all tweet and reply text to lowercase to avoid duplication due to text case. At the end of these three steps, we had a total of 24,550 tweets and 46,731 replies. The three-step workflow is depicted in Fig 2. To give context to the analysis performed on the tweets and replies, we collected data on the number of COVID-19 confirmed cases in Canada on a daily basis from December 31, 2019, to August 31, 2020. These data were obtained from the COVID-19 Data Repository maintained by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University [39].
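A minimal sketch of the filtering and preprocessing steps is shown below. It assumes an rtweet-style data frame named tweets (as in the earlier sketch) with columns named hashtags, is_retweet, lang, and text; these column names, and the abbreviated hashtag list, are assumptions for illustration only.

```r
# Abbreviated hashtag list; the full filtering list appears in Appendix C in S1 File.
covid_tags <- c("covid19", "covid-19", "coronavirus", "2019ncov",
                "stayhome", "flattenthecurve", "ppe", "n95")

# Step 2: keep tweets whose hashtag list contains at least one COVID-19-related hashtag.
has_covid_tag <- vapply(tweets$hashtags,
                        function(h) any(tolower(h) %in% covid_tags),
                        logical(1))

# Step 3: drop retweets, keep English/French only, and lowercase the text.
kept <- tweets[which(has_covid_tag &
                     !tweets$is_retweet &
                     tweets$lang %in% c("en", "fr")), ]
kept$text <- tolower(kept$text)

nrow(kept)  # number of tweets retained after filtering and preprocessing
```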
We note that, in the context of this research, we were aware of the legal and ethical implications of collecting data from social media platforms. First, we used only public data from Twitter. Second, we applied for permission from Twitter Inc. to access Twitter data. Finally, this study was designed around anonymous, account-level data rather than user-specific data; the data used were anonymous, and no personal data were gathered or exploited for any purpose.
Data analysis
We performed three types of data analysis on our filtered dataset of COVID-19-related tweets (see Fig 3): 1. Analysis of the Canadian public's engagement with government and public health officials' tweets by looking at the number of retweets, the number of likes, and the ratio of interest.
2. Hashtag analysis to illustrate the evolution of the Canadian public discourse during the pandemic's first wave.
3. Sentiment analysis to provide insights on the public's reaction to the Canadian authorities' tweets.
To compare reach of federal accounts (which target all people living in Canada) versus provincial/territorial accounts (which target people living in that region), we present aggregate findings across provincial/territorial government and public health accounts, respectively.
3.6.1. Engagement metrics analysis. We established three engagement metrics with tweets: the number of retweets, the number of likes, and the ratio of interest. A retweet is a reposting of a tweet; Twitter's retweet feature helps users to quickly share a tweet with all their followers. Retweets can be considered as a sign of value as a user finds a tweet valuable enough to share it with their audience. Participants can also "like" Tweets by clicking on the heart icon. Likes can be considered as a sign of appreciation that users can express towards tweets. These metrics demonstrate the rate at which a tweet has been approved and shared by Twitter users. While the number of retweets and the number of likes are at the tweet level, the ratio of interest is at the user level. This ratio evaluates the interest to the tweets of a messenger. The ratio of interest is computed as the ratio of the number of interactions to the number of tweets, where the number of interactions is the sum of the number of likes and the number of retweets [40,41]. The ratio of interest provides the average number of interactions for each tweet. To provide context to the data, we transposed these tweets against the evolution of COVID-19 confirmed daily cases in Canada during the study period.
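As a small worked example, the ratio of interest can be computed per account as in the sketch below, which assumes the filtered data frame kept from the earlier sketches with rtweet-style screen_name, favorite_count, and retweet_count columns; dplyr is used here only for convenience.

```r
library(dplyr)

# Ratio of interest per account: (likes + retweets) / number of tweets,
# i.e. the average number of interactions each tweet received.
interest <- kept %>%
  group_by(screen_name) %>%
  summarise(
    n_tweets          = n(),
    interactions      = sum(favorite_count + retweet_count),
    ratio_of_interest = interactions / n_tweets
  ) %>%
  arrange(desc(ratio_of_interest))

interest
```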
3.6.2. Hashtag trends analysis.
A hashtag is used to index keywords or topics on Twitter. This function allows Twitter users to easily follow topics they are interested in. The hashtag trends analysis aims to show how the COVID-19 discourse evolved during the study period. The hashtag analysis was conducted by presenting the most popular hashtags in our dataset and by eliminating all the hashtags that were used to filter the tweets that we presented in Appendix C in S1 File. In doing so, we were able to observe the topic shift in this period. To contextualize the data, we crossed the evolution of trending hashtags over time with the number of daily COVID-19 confirmed cases in Canada.
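A minimal sketch of such a monthly ranking is shown below. It again assumes the filtered data frame kept and the abbreviated covid_tags list from the earlier sketches, with rtweet-style created_at and hashtags columns; dplyr and tidyr are used for convenience.

```r
library(dplyr)
library(tidyr)

top_hashtags <- kept %>%
  mutate(month = format(created_at, "%Y-%m")) %>%
  select(month, hashtags) %>%
  unnest(cols = hashtags) %>%                              # one row per (tweet, hashtag)
  mutate(hashtags = tolower(hashtags)) %>%
  filter(!is.na(hashtags), !hashtags %in% covid_tags) %>%  # drop the filtering hashtags
  count(month, hashtags, sort = TRUE) %>%
  group_by(month) %>%
  slice_max(n, n = 10)                                     # ten most frequent hashtags per month

top_hashtags
```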
3.6.3. Sentiment analysis. The objective of sentiment analysis is to understand the emotions underlying a data source. Sentiment analysis is the computational and automated study of people's opinions, sentiments, emotions, and attitudes towards products, services, issues, or events [42]. In this sense, sentiment analysis should allow understanding and tracking of the public's "mood" about a particular entity to create actionable knowledge. This knowledge could be used to dig, explain, and predict social phenomena [43,44]. Sentiment analysis can be conducted using an automated mining of attitudes, opinions, views, and emotions from text, speech, and database sources [45]. This mining is produced through Natural Language Processing (NLP) and Machine Learning (ML) techniques [46].
In this study, we capture the sentiments of the public (the audience, the third component of risk communication) in response to government and public health officials' tweets. The public response reflects the importance of two-way risk communication: we not only study government-to-citizen communication channels but also the replies and reactions of the public to governments' posts [8]. We based our classification of tweet replies on the 10 sentiment categories identified in Chew & Eysenbach's 2010 study of Twitter data during the H1N1 pandemic (see Appendix D, Table 4 in S1 File) [47]. We adopted the sentiments identified by Chew & Eysenbach since, to the best of our knowledge, theirs is the most complete classification of sentiments and it was applied in a pandemic context comparable to that of this study. We refined these 10 sentiment categories to ensure that they captured the full spectrum of sentiments in our dataset. To do so, we first conducted a training phase, followed by an automated text classification phase. We used MonkeyLearn [48], an online machine learning platform, to perform this analysis (see Fig 4).
In the training phase, two researchers (AC, AJP) manually coded 1,424 randomly selected tweet replies from our existing dataset using the 10 sentiment categories. Where needed, the coders added new categories to capture new sentiments. The coders continued to conduct iterative rounds of coding until a Cohen's Kappa of >0.6 was reached, which indicates a good level of coder agreement [49]. In the first round, using a dataset of 500 tweets from the 1,424 tweets, the Kappa coefficient was 0.46. The coders reconciled discrepancies and refined the coding framework using consensus, which included the addition or removal of sentiment codes. Using another set of 924 tweets, the coders independently categorized the sentiments and achieved a Cohen's Kappa of 0.67 [50].
Following the training phase, we defined two new sentiment categories and adapted four categories from the Chew & Eysenbach framework. The two categories that we created are "racism and stigma" and "information sharing." For the "racism and stigma" sentiment, we observed several tweets with racist and discrimination-based expressions. For the "information sharing" sentiment, we observed that users, in some replies, shared news and information referring to the pandemic. In addition to these two new sentiments, we modified four categories from the Chew & Eysenbach framework, as described hereafter:
• We changed the label of the category "misinformation" to "distrust." We observed that the replies we initially categorized as misinformation were better represented as distrust (e.g., distrust of authority or the media).
• We changed the category labelled "questions" to "information requests and inquiries," as most of the questions that we observed in our data included help or clarification requests.
• We observed that replies in the category "personal opinion or interest" frequently expressed personal opinions along with suggestions (e.g., suggestions for COVID-related policy changes or suggested public health measures). To capture both aspects, we revised the label of this category to "personal opinion or suggestion."
• Initially, the category "resources" referred to replies pointing to additional information, beyond the tweet, such as a link to an article. However, in many instances, we found tweets providing additional information but without providing a link to an external resource. As a result, we revised this category to encompass both "information sharing and resources."
The final 11 sentiment categories used in this paper, including definitions and examples, are listed in Table 1.
Once we established our new sentiment analysis coding framework, we launched the trained machine learning classification model to analyze the remaining dataset. We first validated the machine algorithm using an agreement assessment and a machine learning classifier performance evaluation.
For the agreement validation, we compared manually coded tweets to machine-coded tweets. The Cohen's Kappa coefficient value was 0.47; discrepancies were mainly attributed to the sentiments of "Concern" and "Frustration," which were not differentiated by the machine model. We enriched the training dataset with additional sentences for these categories and reran the analysis. Strong agreement between the manual and automated coding was achieved, with a Cohen's Kappa coefficient value of 0.74. Finally, for the machine learning validity evaluation, we measured the performance of the classifier using the following metrics (a small worked sketch of these metrics follows the list):
• Accuracy: the number of correct predictions the classifier has made divided by the total number of predictions [51]. In our case, accuracy was 79%.
• Precision: the proportion of texts predicted as belonging to a given category or tag that were predicted correctly [52]. Precision was computed at 65%.
• Recall: the ratio of texts correctly classified as belonging to a given category to the total number of texts in that category; it reflects the completeness of the classification for each category [52]. The value of recall was 78%.
• F1 score: the harmonic mean of precision and recall, widely adopted to evaluate classification performance for each category [52]. The greater the F1 score, the better the performance of the model. The F1 measure was 83%.
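The sketch below works through these four metrics on a small, invented three-class confusion matrix; the real classifier had 11 categories, and the macro-averaging shown here is an assumption, since the averaging scheme is not stated above.

```r
# Invented confusion matrix: rows are true sentiments, columns are predicted sentiments.
cm <- matrix(c(50, 10,  5,
                8, 40,  7,
                4,  6, 45),
             nrow = 3, byrow = TRUE,
             dimnames = list(true = c("concern", "frustration", "relief"),
                             pred = c("concern", "frustration", "relief")))

accuracy  <- sum(diag(cm)) / sum(cm)   # correct predictions over all predictions
precision <- diag(cm) / colSums(cm)    # per class: correct / all predicted as that class
recall    <- diag(cm) / rowSums(cm)    # per class: correct / all truly in that class
f1        <- 2 * precision * recall / (precision + recall)  # harmonic mean per class

round(c(accuracy        = accuracy,
        macro_precision = mean(precision),
        macro_recall    = mean(recall),
        macro_f1        = mean(f1)), 2)
```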
Table 1. Sentiment categories, definitions, and example replies (https://doi.org/10.1371/journal.pone.0273153.t001).
• Concern: Replies that express COVID-19-related fear, anxiety, worry, or sadness for self or others. May also express skepticism. Example: "omg ppl stay home for the love of god" #stayathome
• Distrust: Replies that contradict the reference standard or contain unsubstantiated information. May make speculations or express distrust of authority or the media. May include conspiracy theories or misinformation. Examples: "Deflecting much? You lied about masks! It was all BS! You told Canadians we didn't know how to safely wear mask"; "#CoronaVirus #CoVid19 LancetGate: Big Pharma Corruption And Their COVID-19 Lies"
• Downplay: Replies that attempt to de-emphasize the potential risks of COVID-19 or bring it into perspective. May also express a lack of concern or disinterest. Examples: "there's nothing to be afraid of."; "don't forget to tell everyone that the normal flu has 2x the cases and 8 deaths this year"
• Frustration: Replies that express anger, annoyance, scorn, or volatile contempt. May include coarse language. Examples: "This team should be fired! Shame on you!!"; "You are a disgrace and a fraud #RESIGN"
• Humour or sarcasm: Comedic or sarcastic replies. Examples: "You look so funny when you want to be credible"; "you have a funny way of showing your appreciation"
• Information requests and inquiries: Replies that include questions or demand clarifications or help. Examples: "Here is a question: The man who died at his home of covid19 (not hospital), was he tested for covid 19?"; "What measures exactly?"
• Information sharing and resources: Replies containing COVID-19 news, updates, or any related information. May be a title or summary of a linked article. Example: "there are 598 cases in continuing care facilities, 921 cases at [location]"
• Personal experiences: Replies where users mention a direct (personal) or indirect (e.g., family or acquaintance) experience with COVID-19. Example: "me and my wife are both feeling sick. sore throat. tired. minor cough. chest tightness. we work at [location]. so lots of exposure to the public. the phone line is busy."
• Personal opinion or suggestion: Replies where users express opinions about the COVID-19 pandemic (i.e., their perceptions of the SARS-CoV-2 virus, the COVID-19 situation or news) and provide suggestions. Examples: "help the front-line staff and give them proper equipment including n95 masks. please communicate with the health minister"; "while social distancing may be happening. self isolation isn't. that is disappointing. please make sure people who are sick have the space they need to heal away from other people."
• Racism and stigma: Replies related to racist and discrimination-based expressions. Example: "CCP restricted Wuhan ppl to go to Beijing in Jan 2020. Why? Because CCP knew that Wuhan Coronavirus was dangerous. CCP allowed Wuhan ppl to go to Canada, USA, etc in Jan 2020. Why? Because CCP used Wuhan Coronavirus as a bioweapon to attack the West. #ChinaLiedPeopleDied"
• Relief: Replies that express joy, happiness, or sense of peace. May also express gratitude and acknowledgement. Examples: "Please keep the great job that you are doing"; "thank you! so glad you and your family are well. this is a reassuring message and so appreciated."

Engagement metrics

In Fig 5 and in Table 2, we present the results related to the engagement metrics. In Appendix E in S1 File, we present in Figs 1-4 in S1 File the evolution of the number of tweets, likes, and retweets for the four different categories of accounts. Provincial/territorial government officials of Canada tweeted a total of 6,840 COVID-related tweets from the 30 Twitter accounts analyzed (an average of 228 tweets per account during the study period). Federal government officials tweeted a total of 1,090 COVID-related tweets from a total of 3 Twitter accounts analyzed (an average of 363 tweets per account). The calculated ratio of interest of the federal government officials' accounts was 652.65, which is higher than that of the provincial/territorial government officials at 88.34, meaning that the number of interactions per tweet was greater for federal government officials' accounts compared to provincial/territorial governments' accounts. This was observed for the entire duration of the study, with a peak of engagement with federal government officials' accounts during the month of March. Provincial/territorial health officials tweeted a total of 11,199 COVID-related tweets from 56 Twitter accounts analyzed (an average of 200 tweets per account). Federal health officials tweeted a total of 5,421 COVID-related tweets from 4 Twitter accounts analyzed (an average of 1,355 tweets per account). The calculated ratio of interest of the federal health officials' accounts was 215.90, which is higher than that of the provincial/territorial health officials at 128.65, meaning that the public in Canada engaged more with the federal health officials than with the provincial/territorial health officials. The periods that recorded the highest engagement with the federal health officials were the months of March and April. Furthermore, we observe that the ratio of interest for tweets generated by federal government officials was greater than that for federal public health officials. However, the ratio of interest was higher for the provincial/territorial health officials' accounts as compared to provincial/territorial government officials' accounts.
Overall, the public demonstrated a greater level of engagement with federal Twitter accounts with an overall ratio of interest of 165.90 as compared to provincial/territorial Twitter accounts with an overall ratio of interest of 157.11. We also observe that, on average, each federal Twitter account tweeted more than a provincial/territorial Twitter account.
Hashtag trends
In Table 3, we present the results of the hashtag analysis, depicting the monthly hashtag trends in Canadian officials' tweets. In Appendix F in S1 File, we present the top 10 hashtags for the four different categories of accounts. Table 3 illustrates the topic shift over time in Canadian officials' COVID-related discourse. We observe that, generally, the COVID-19-related Canadian discourse was consistent throughout the first wave of the pandemic, focusing on COVID-19 mitigation messages. We observe an immediate shift in public discourse from the period preceding the first wave of the COVID-19 pandemic (January and February) to the period of the first wave (beginning March 2020), with top trending tweets related to COVID-19 mitigation strategies (e.g., #SocialDistancing, #TestAndTrace, and #StayHome) and COVID-19 mitigation goals (e.g., #FlattenTheCurve, #PlankTheCurve, and #StopTheSpread), while during provincial lockdowns we saw trends such as #StayAtHome emerge. Additionally, we observed changes in language over time; for instance, #SocialDistancing, which was trending in March 2020, was replaced by #PhysicalDistancing for the remainder of the study period. Throughout the period, we observed messages of solidarity and encouragement such as #StrongerTogether and #TogetherApart.
When looking at the trends data from each of the four account sources, we observed that provincial/territorial governments used more COVID-19-related hashtags than the federal government (443 versus 135, respectively). While all government accounts used hashtags related to COVID-19 mitigation strategies such as #PhysicalDistancing, we observed some trend differences. For instance, the federal government used hashtags on the economic fallout of the COVID-19 pandemic more often than provincial/territorial accounts (e.g., #EconomicResponse, mentioned 21 versus 4 times by federal and provincial/territorial accounts, respectively). Additionally, provincial/territorial government accounts demonstrated greater use of hashtags related to mental health as compared to federal accounts (e.g., #MentalHealth, mentioned 47 versus 10 times by provincial/territorial versus federal government accounts, respectively). Finally, we looked at the top 10 hashtags used during the period of data collection for the four types of accounts. We observed that federal public health officials used their top 10 COVID-19 hashtags more often than provincial/territorial public health officials (1,437 versus 1,280 hashtags, respectively). Similarly, all public health officials used hashtags related to COVID-19 mitigation strategies; however, there were slight differences. For instance, federal health officials highlighted the importance of testing and screening (e.g., #TestAndTrace and #Depistage) more often than provincial/territorial health authorities (374 versus 54 times, respectively).

Sentiment analysis

Table 4 presents the results related to the sentiment analysis of the overall collected tweets. In Appendix G in S1 File, we present in Figs 5-8 in S1 File the sentiment analysis for the four different categories of accounts. As stated in Section 3.6.3, we identified 11 sentiments in response to Canadian officials' COVID-19-related tweets. The proportions of the overall sentiments were stable during the study period. The most commonly reported sentiment was concern at 22%, followed by information requests and inquiries at 16%. These were followed by personal opinions or suggestions (12%), relief (12%), frustration (11%), information sharing and resources (11%), and personal experiences (10%). Sentiments related to downplay and stigma were each found in 2% of tweets, and sentiments related to distrust and sarcasm were each found in 1% of tweets. When looking at these sentiments in federal versus provincial/territorial government accounts, we observe that the public expressed slightly more concern towards federal accounts (29% versus 23%, respectively). Users also demonstrated more frustration towards the federal government accounts (18% versus 12%). In contrast, sentiments of information sharing and resources were greater in response to provincial/territorial government accounts compared to the federal government (11% versus 6%, respectively). On the other hand, we observe that Twitter users expressed slightly more concern towards provincial/territorial health officials (24%) compared to federal health officials (18%). Additionally, the public posted more information requests and inquiries in response to federal health officials' tweets (20%) compared to provincial/territorial health officials (14%). Finally, when comparing federal accounts, we observed that the public expressed more concern and frustration towards federal government officials (29% and 18%, respectively) compared to federal public health officials (18% and 9%, respectively). The positive sentiment of relief was expressed slightly more for federal health officials (14%) compared to provincial/territorial health officials (10%). This sentiment was comparable for provincial/territorial government and public health officials (12% and 13%). Rates of concern expressed in response to all provincial/territorial accounts were comparable (23% and 24%), representing one quarter of all sentiments. We present in Appendix G in S1 File the detailed results that depict the evolution of sentiments over time.
General observations
Our results from the hashtag trends analysis showed that the COVID-19 discourse in Canada generally remained stable during the first wave of the pandemic. We observed subtle differences between federal and provincial/territorial accounts, mostly related to the frequency of use of certain hashtags (e.g., provincial/territorial governments placed greater emphasis on #MentalHealth messaging as compared to federal officials). We also observed subtle changes in language over time; for instance, the hashtag #SocialDistancing was quickly replaced with #PhysicalDistancing, which was sustained for the duration of the study period.
Regarding the sentiment analysis, we made three main observations. First, we identified that distrust accounted for only 1% to 2% of sentiments. This ratio is quite low compared to emergent data from countries such as the United Kingdom, where surveys showed 31% of the population did not trust the government to control the spread of the pandemic [53]. Balaet et al. further showed that 22% of study participants believed there were "ulterior motives" behind the government's COVID-19 response, with minority populations also tending to show distrust towards the government [54]. Consistent with this, our results are well aligned with a larger study that took place on Twitter, showing that Canada is one of the countries most trusted by its citizens in terms of COVID-19 management (score of 4.1, with 5 representing the highest scoring metric) [55]. Further understanding the factors that drive trust or distrust towards governments during public health emergencies can strengthen response efforts going forward. Second, we observed that approximately 2% of all sentiments were related to racism or stigma.
Governments and public health leaders should embed an equity lens in the development of their key messages to discredit such sentiments and reduce discrimination. For instance, we observed trending of the hashtag #ProtectTheVulnerable. Despite positive intentions, potentially stigmatizing terms such as "vulnerable" should be avoided in public discourse, particularly by government and health official accounts (http://www.bccdc.ca/Health-Info-Site/Documents/Language-guide.pdf). White papers such as the Chief Public Health Officer of Canada's Report on the State of Public Health in Canada 2020 highlight the importance of embedding equity into policies and language to dismantle systems of oppression and racism in Canada, which were exacerbated during the pandemic [56][57][58]. Additional guidance, such as that from the British Columbia Centre for Disease Control, provides guidelines on using inclusive language for written and digital content [59,60]. Approaching public health messaging using an equity and inclusivity lens may foster trust, particularly for historically marginalized populations and groups. For example, and as noted in the hashtag trends analysis, we observed some changes in discourse over the course of the pandemic, for instance the shift from #SocialDistancing to #PhysicalDistancing.

Finally, we noted the public's appetite for clear guidance on how to navigate the pandemic. Notably, approximately 33% of all sentiments were related to information requests or sharing. Leaders can leverage the public's desire for such information by providing specific, plain-language, evidence-based recommendations to the public in real time. Specifically, the federal government and federal health accounts should disseminate this information, given the levels of public engagement that we observed towards these accounts. In this context, and to increase the visibility of their messages, government officials should leverage hashtags to better communicate with people. Studies suggest that leveraging "organically developed" hashtags, rather than creating new ones, may improve the visibility of certain messages [61]. Besides hashtags, government officials can engage influencers and experts in online conversations on social media [61] to help share verified facts about COVID-19 while aiming to reduce fear and anxiety.
Contributions to the literature
This research has four main contributions to the literature. First, our findings showed that the public demonstrated a greater ratio of interest towards the federal officials' accounts compared to provincial/territorial officials' accounts; this was observed for both government and health officials' accounts. We hypothesized that the public would be more trusting of higher levels of government. Our results are consistent with the Statistics Canada crowdsourcing surveys which showed that, during COVID-19, Canadians were more trusting of the federal government compared to lower levels (61.5%, 55.8%, 54.7% trust in federal, provincial/territorial, and municipal governments, respectively). Also, Canadians were more trusting of health authorities compared to government (74.4%, 74.3%, 65.1% trust in federal, provincial/territorial, and municipal health authorities, respectively) [62]. Thus, as the perceived most trustworthy source by Canadians, federal health authorities should be equipped with timely, relevant, and actionable messaging during health emergencies.
Second, we noted that public health messaging between government and health authorities was not always consistent. For instance, provincial/territorial government accounts focused more on mental health compared to federal governments. Additionally, federal public health authorities used more hashtags related to testing and screening compared to provincial/territorial health authorities. While the differences we observed were minor, the findings provide an opportunity for reflection on how public trust is impacted by conflicting or inconsistent messaging. Risk messaging that follows established evidence-based guidance, such as the World Health Organization's COVID-19 outbreak communication guidelines [63], may mitigate opportunities for misinformation while fostering public trust. Such recommendations include establishing trust with communities ahead of health emergencies, establishing relationships with partners (e.g., varying levels of governments or health authorities) to facilitate the rapid development and announcement of public health guidance, and planning to identify spokespersons and lead agencies (e.g., federal health authorities) to gain buy-in with politicians and other stakeholders.
Our study also explored public engagement with health and government authorities through a triangulation of varying data sources and methods, specifically an engagement analysis, hashtag and trend analyses, and a sentiment analysis. To the best of our knowledge, there are limited examples in the literature that used multiple sources as we have presented here [64,65]. Rather, identified studies focused on a singular form of analysis, typically engagement or sentiment analysis only [66][67][68][69]. Combining our sources provided additional depth to our research findings and demonstrated the feasibility of using machine learning methods to inform public health responses. Future work can continue to build on the methods we have presented here. For instance, initiatives such as the Early AI-supported Response with Social Listening study track real-time COVID-19-related discussions in over 30 countries (https://www.who-ears.com/#/). Additional research to explore public sentiments by population group [70,71] (e.g., gender, age group, newcomer groups) can provide insights on how to optimize messaging targeting such groups.
Finally, our study adds a methodological contribution by advancing the sentiment categories developed by Chew & Eysenbach (2010) to analyze Twitter discourse during the H1N1 outbreak. Specifically, we added two new categories and updated four existing categories. In contrast to those authors' approach of automated sentiment classification based on search queries of keywords and phrases, we applied the Support Vector Machine algorithm, which is the dominant algorithm for sentiment analysis [43]. We therefore demonstrate the feasibility of using natural language processing and machine learning techniques to analyze large social media datasets. Importantly, the sentiments classified by these techniques correlated highly with manually coded sentiments, demonstrating the validity of the approach. Such methods can be leveraged by government, health officials, and researchers to gauge real-time public sentiments during public health emergencies and craft corresponding messages.
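For readers interested in the general approach, the following is a minimal sketch of an SVM-based sentiment classifier for tweets in the spirit described above (TF-IDF features plus a linear SVM). The example texts, label names, and preprocessing choices are illustrative assumptions, not the authors' actual pipeline or data.

```python
# Illustrative sketch only: TF-IDF + linear SVM sentiment classifier.
# Labels and example tweets are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

texts = [
    "Thank you for the clear guidance on masks",
    "Grateful for the daily updates from public health",
    "Why is testing so slow in my province?",
    "Where can I find the latest case numbers?",
]
labels = ["gratitude", "gratitude", "information request", "information request"]

model = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True, ngram_range=(1, 2), min_df=1)),  # word/bigram features
    ("svm", LinearSVC(C=1.0)),                                                 # linear SVM classifier
])
model.fit(texts, labels)
print(model.predict(["Thanks for keeping us informed"]))
```

In practice such a model would be trained on the manually coded tweets and validated against held-out annotations, as the study describes.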
Limitations
Our study is not without limitations. First, our findings are limited to public discourse on Twitter, which may not be reflective of the overall Canadian public discourse across communication channels. Second, we restricted our study to English and French Twitter posts only, which may limit the representativeness of our results given the diversity of the Canadian population. Third, we are aware that there may be a selection bias in our data collection, as we only gathered tweets and responses posted by Canadian government and health officials, thus overlooking other Twitter accounts that may have high traction and influence in the Twittersphere (e.g., celebrities, influencers, community leaders, religious leaders). Additionally, our data set was limited to tweets and replies that contained specific COVID-related hashtags, which we developed using relevant literature [35] and social media tools and guides [36][37][38]. There is a possibility that we may have overlooked additional hashtags relevant to this discourse. Fourth, the interpretation of hashtag and emoji usage during the classification phase of our sentiment analysis is limited by our lack of knowledge of the authors' intent. Both are influenced by the context, culture, age and gender of the authors, making them open to interpretation [72,73]. A large number of emojis are available, each expressing a particular sentiment, and they are increasingly used; combining emojis with hashtags and sentiments could therefore provide substantial new information for identifying the sentiments behind a post. Consequently, it would be interesting to develop techniques to better interpret hashtags and emojis based on their context. Fifth, we did not disaggregate data by province/territory; additional research to do so would provide more insight on regional and context-specific considerations. Finally, our trends analysis only looked at the messages posted by governments in a one-way communication. Future research can investigate two-way communication models between the public and health/government authorities; such two-way communication is critical to optimizing risk communications [8]. These limitations do not diminish the originality and impact of our research, which resulted in the development of practical recommendations and important insights to support leaders in communicating during future crises.
Conclusion
We demonstrated the feasibility of leveraging machine learning and natural language processing to assess public discourse during a public health emergency using social media. Our findings suggest that members of the Canadian public demonstrated increased engagement with federal officials' Twitter accounts compared to provincial/territorial accounts. Hashtag trend analyses illustrated the topic shift in the Canadian public discourse, which initially focused on COVID-19 mitigation strategies and evolved to address emerging issues such as the mental health effects of COVID-19. Additionally, we identified 11 sentiments in response to officials' COVID-19-related posts. We provided suggestions on how government and public health officials can optimize their messaging during public health emergencies. Future research can explore these trends in the context of the second and third waves, to determine the discourse of officials and the public over time in Canada.
|
v3-fos-license
|
2018-04-03T06:02:45.546Z
|
2017-03-01T00:00:00.000
|
45464108
|
{
"extfieldsofstudy": [
"Medicine",
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://figshare.com/articles/journal_contribution/Antioxidant_activity_and_chemical_composition_of_i_Juniperus_excelsa_i_ssp_i_polycarpos_i_wood_extracts/3487127/files/5512724.pdf",
"pdf_hash": "19c7810b0017300177850788274c6465d8f60613",
"pdf_src": "TaylorAndFrancis",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44374",
"s2fieldsofstudy": [
"Environmental Science",
"Materials Science",
"Chemistry"
],
"sha1": "6bfe5a849fe62cf4341eb1b32dfeb4a1f1022935",
"year": 2017
}
|
pes2o/s2orc
|
Antioxidant activity and chemical composition of Juniperus excelsa ssp. polycarpos wood extracts
Abstract Extracts from the wood of Juniperus excelsa ssp. polycarpos were analysed for their antioxidant activity using the DPPH method and compared with ascorbic acid and butylated hydroxytoluene. The most active extracts were analysed for their chemical composition using gas chromatography–mass spectrometry. The acetone extract was found to be moderately active as an antioxidant agent (58.38%), which was lower than the value for vitamin C (98.56%) at a concentration of 14.20 mg/mL. The major components identified in the acetone extract as trimethylsilyl (TMS) derivatives were pimaric acid TMS (24.56%), followed by α-d-glucopyranoside,1,3,4,6-tetrakis-O-(TMS)-β-d-fructofuranosyl 2,3,4,6-tetrakis-O-(TMS) (21.39%), trifluoromethyl-bis-(TMS)methyl ketone (9.32%), and cedrol (0.72%). The water:methanol (1:1 v/v) partition of the acetone extract afforded 12 fractions; among them, the F9 fraction was found to have good antioxidant activity (88.49%) at a concentration of 14.20 mg/mL. The major compounds identified in the F9 fraction were α-d-glucopyranoside, 1,3,4,6-tetrakis-O-(TMS) (20.22%) and trifluoromethyl-bis-(TMS)methyl ketone (5.10%).
Introduction
Persian juniper (Juniperus excelsa ssp. polycarpos) is a dioecious tree up to 6-7 m tall, or a low shrub with a dense head (Emami et al. 2011). It is widely distributed in areas such as south-east Arabia, Iran, the Caucasus, Baluchistan, Afghanistan, the north-west Himalaya (Townsend & Guest 1966), Armenia, India, Uzbekistan, and Pakistan (Franco 1964).
The compounds extracted from different Juniperus species and their antibiotic activities have been reported (Angioni et al. 2003; Filipowicz et al. 2003). Potential uses of the extracted compounds include aromatherapy, fragrances, soaps, candles, lotions, and cosmetic materials (Yesenofski 1996).
Recent studies also indicated inhibitory effects of the extracted compounds against various types of pathogenic fungi and similar micro-organisms (Soković et al. 2004). The most abundant compounds in the fruit cones of Juniperus communis were terpenes (32.1%), which are used to treat indigestion and as a disinfectant in dyspepsia, in addition to some other antibiotic effects (Lamparsky & Klime 1985).
Cedrol has been found in the essential oil of conifers, especially in the genera Cupressus and Juniperus (Connolly & Hill 1991). Its main uses are in the chemistry of aroma compounds (Breitmaier 2006). The results of Lindh et al. (2015) suggested that cedrol strongly attracts pregnant female mosquitoes, which led to the creation of cedrol-baited traps. Sabinene was the most abundant compound in Juniperus thurifera L. var. africana oils from dried leaves, and the oil showed good antibacterial activity (Bahri et al. 2013). α-Pinene, germacrene D, myrcene, abietadiene, and cis-calamenene were the main components of the essential oils from Juniperus oxycedrus ssp. macrocarpa (S. & m.) Ball. and Juniperus oxycedrus L. ssp. rufescens (L. K.) berries, and these oils showed good antioxidant capacity (Hanène et al. 2012). Fatty acids and their methyl esters, such as hexadecanoic and octadecanoic acids, are relatively common essential-oil constituents in higher plants (Shabi et al. 2010). Juniperus wood has high resistance against wood-eating pests, and humidity has no effect on it. The scent extracted from the tree also repels snakes, scorpions and other blood-sucking insects (Zargari 1983).
In the present study, for the first time, the aim was firstly, to assess in vitro antioxidant activity of wood extracts from J. excelsa ssp. polycarpos with voucher specimen number 1893, and secondly to analyse the chemical composition of the extracts by GC/MS.
Antioxidant activity
Statistically, there were significant differences among the treatments (F1 to F12, water:methanol, n-hexane, acetone, butylated hydroxytoluene (BHT), and vitamin C) and their concentrations (Table S1). The lowest antioxidant activity (22.25%) was observed at a concentration of 0.44 mg/mL, which was lower than the antioxidant activity of vitamin C (60.66%) at the same concentration. Moderate activity came from the acetone extract (53.98%) at 14.20 mg/mL, which was lower than the antioxidant activity of vitamin C (98.56%) at the same concentration (Table S2). The same trend was observed with the reference (BHT). Emami et al. (2011) reported that the essential oils from various parts of both J. excelsa subsp. polycarpos and Juniperus excelsa subsp. excelsa had relatively low antioxidant activity, but these activities suggested the possible use of these essential oils at very low concentrations for preserving food materials.
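For context, antioxidant activity in DPPH assays is conventionally reported as a radical-scavenging (inhibition) percentage; the exact formula used by the authors is not shown in this excerpt, so the standard definition is given here only as a reference:

\[
\mathrm{RSA}\,(\%) \;=\; \frac{A_{\mathrm{control}} - A_{\mathrm{sample}}}{A_{\mathrm{control}}} \times 100,
\]

where \(A_{\mathrm{control}}\) is the absorbance of the DPPH solution without extract and \(A_{\mathrm{sample}}\) is the absorbance in the presence of the extract (or of the reference antioxidants vitamin C and BHT).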
Wood extract
The acetone extract of the fresh wood of J. excelsa ssp. polycarpos was obtained in 12% (v/w) yield. Seventeen compounds were identified as trimethylsilyl (TMS) derivatives (Table S3).
Conclusion
The chemical composition and antioxidant activity of extracts from the wood of J. excelsa ssp. polycarpos were reported for the first time.
|
v3-fos-license
|
2022-05-27T06:22:18.354Z
|
2022-05-26T00:00:00.000
|
249064922
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "5b1d2e091e4a064cbf5096c7b76db2c5578f1d99",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44375",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry",
"Environmental Science"
],
"sha1": "1feaf31d36dfd0607037d5586ade33689fb5f9bb",
"year": 2022
}
|
pes2o/s2orc
|
Chemically Recyclable Poly(β-thioether ester)s Based on Rigid Spirocyclic Ketal Diols Derived from Citric Acid
Incorporating rigid cyclic acetal and ketal units into polymer structures is an important strategy toward recyclable high-performance materials from renewable resources. In the present work, citric acid, a widely used platform chemical derived from biomass, has been efficiently converted into di- and tricyclic diketones. Ketalization with glycerol or trimethylolpropane afforded rigid spirodiols, which were obtained as complex mixtures of isomers. After a comprehensive NMR analysis, the spirodiols were converted into the respective di(meth)acrylates and utilized in thiol–ene polymerizations in combination with different dithiols. The resulting poly(β-thioether ester ketal)s were thermally stable up to 300 °C and showed glass-transition temperatures in a range of −7 to 40 °C, depending on monomer composition. The polymers were stable in aqueous acids and bases, but in a mixture of 1 M aqueous HCl and acetone, the ketal functional groups were cleanly hydrolyzed, opening the pathway for potential chemical recycling of these materials. We envision that these novel bioderived spirodiols have a great potential to become valuable and versatile bio-based building blocks for several different kinds of polymer materials.
Alternative method using 2-MeTHF as solvent. tB (826 mg, 2.23 mmol) in a round-bottom flask was dissolved in dry 2-MeTHF (30 mL). The flask was flushed with argon, capped with a rubber septum, and cooled on an ice bath. Thereafter, Et3N (0.81 mL) and acryloyl chloride (0.42 mL, 5.22 mmol, 2.25 eq) were added dropwise simultaneously. The ice bath was removed, and the mixture was stirred for 48 hours at room temperature. Completion of the reaction was estimated by TLC. The reaction was quenched by addition of saturated aq. NaHCO3 (50 mL) and extracted three times with EtOAc (3 x 50 mL). The organic phases were combined, dried over MgSO4, and concentrated under reduced pressure. The product was purified by flash chromatography over silica gel (30% EtOAc in petroleum ether). The pure product was obtained as an oily viscous liquid (825 mg, yield 77%).

Synthesis of glycerol-spiro-diacrylate gBa

gB (3.915 g, 13.67 mmol) was dissolved in dry CH2Cl2 (30 mL). The flask was capped with a rubber septum, flushed with argon, and cooled using an ice bath. Acryloyl chloride (3.27 mL, 34.18 mmol, 2.4 eq) and Et3N (4.76 mL) were added dropwise simultaneously. The ice bath was removed, and the mixture was stirred overnight at room temperature. The reaction was quenched by addition of saturated aq. NaHCO3 (50 mL) and extracted three times with CH2Cl2 (3 x 50 mL). The organic phases were combined, dried over MgSO4, and concentrated under reduced pressure. The mixture was purified by flash chromatography over silica gel (30% EtOAc in petroleum ether). The pure product was obtained as an oily viscous liquid (3.219 g, yield 60%).
Polymerizations
The di(meth)acrylate (typically about 500 mg) was dissolved in CHCl3 (ca 100 mg/mL) and 1 equivalent of dithiol was added. The mixture was cooled on an ice bath and 0.1 equivalents of DBU was added as a solution in chloroform. The ice bath was removed shortly afterwards, and the mixture was stirred at room temperature for 24 hours. The polymer was then precipitated in 100 mL of MeOH and allowed to stir slowly overnight (16 h), after which the polymer had settled to the bottom. The solvent was decanted, and the polymer residue was left to dry for 5-10 minutes, after which a small amount of CH2Cl2 (2-3 mL) was added to dissolve the polymer for casting a film. The film was cast in a small Petri dish and left to dry at room temperature overnight, after which it was removed from the dish for further drying under reduced pressure.
Polymerization of poly(tTa-HDT)
tTa (386.8 mg, 0.73 mmol) was dissolved in 5 mL of CHCl3, HDT (112.1 mg, 114 µL, 0.78 mmol) was added, and the mixture was cooled on an ice bath. DBU (11.9 mg, 0.078 mmol) was added as a solution in CHCl3, and the ice bath was removed. The mixture was stirred at room temperature for 24 hours and then precipitated in 100 mL of MeOH while stirring slowly. The next day, the solvent was decanted, the polymer residue was dissolved in CH2Cl2 and cast into a Petri dish to obtain a thin film (373.5 mg, yield 70.3%).

Polymerization of poly(tTa-TBBT)

tTa (2.60 mmol) was dissolved in 10 mL of CHCl3, TBBT (670.8 mg, 2.67 mmol) was added, and the mixture was cooled on an ice bath. DBU (42.9 mg, 0.28 mmol) was added as a solution in CHCl3, and the ice bath was removed. The mixture was stirred at room temperature for 24 hours and then precipitated in 100 mL of MeOH while stirring slowly. The next day, the solvent was decanted, the polymer residue was dissolved in CH2Cl2 and cast into a Petri dish to obtain a thin film (1.665 g, yield 76.7%). Some of the precipitate (170 mg) remained insoluble in CH2Cl2 and was collected separately.

Alternative method in 2-MeTHF

tTa (183.1 mg, 0.34 mmol) was dissolved in 6 mL of 2-MeTHF, TBBT (88.6 mg, 0.35 mmol) was added, and the mixture was cooled on an ice bath. DBU (5.6 mg, 0.03 mmol) was added as a solution in 2-MeTHF, and the ice bath was removed. The mixture was stirred at room temperature for 24 hours and then precipitated in 100 mL of MeOH while stirring slowly. The next day, the solvent was decanted, the polymer residue was dissolved in CH2Cl2 and cast into a Petri dish to obtain a thin film (110.1 mg, yield 38.4%) (1H NMR spectrum, Fig. S27).
NMR analysis Glycerol spirodiol gB and diacrylate gBa NMR analysis
Ketalization of cis-bicyclo [3.3.0]octane-3,7-dione B with glycerol results in numerous isomers. At first the formation of 1,2 or 1,3 ketals is possible. In first case hydroxymethyl group can be connected to endo or exo position of C3 and C7 of cis-bicyclooctane ring and further isomers are obtained from the mutual different orientation of hydroxymethyl groups in diketals. 1 H NMR spectrum at 800 MHz is non-informative about the composition of mixture of compounds (Fig. S1). It is hard to resolve even numerous first order multiplets from 4-CH2OH substituted 1,3 dioxolane ring. For example, the number of signals from the vicinal couplings of H-4 between 4.00 and 3.95 ppm with neighbor methylene protons must be 128. 13 C NMR spectrum of ketalization product reveals the formation of complex mixture of compounds. For 13 C NMR spectrum the most informative starting points are the regions of spiro carbons with connected to them two carbon and two oxygen atoms. For the naming of these isomers generic names were used (see Fig. S2) In principle spiro connected to bicyclo[3.3.0]octane hydroxymethyl group at C4 of 1,3dioxolane ring can have 2 different configurations, but they are barely observed due to low barrier conformational mobility of 1,3-dioxolane ring. Geometry optimizations by AM1 and Gaussian calculations show that these isomers differ in their energies in the order of only 100 cal/mol and have in most stable conformation diversely twisted bicyclo[3.3.0]octane 5membered rings which are characterized also by the different dihedral angles between the bicyclo[3.3.0]octane bridgehead H atoms. Different calculations give these angles values from nearly zero to more than 30 degrees. The 1,3-dioxolane parts of isomers are characterized by a low inversion barrier of conversion from the different mutual orientation of substituents on the 1,3-dioxolane ring. No NMR study of this conversion was found, but an ESR study from 1973 has found that the inversion barrier in 2-methyl 1,3-dioxolane is as low as 5.6±0.2 kcal/mol 1 . This needs temperatures below -100 °C degrees to observe different conformers in NMR spectra. Room temperature linewidths in present mixture are quite narrow to resolve 0.003 ppm differences in 13 C chemical shifts, but at the same time they already demonstrate the small exchange broadening effects. This is seen in the 13 C spectrum of bicyclo[3.3.0]octane bridgehead carbons in a mixture of glycerol di-and monoketals at room temperature (Fig. S4). Resolution enhancement reveals the presence of dynamic broadening in signals from diketals. The monoketal itself is also not free from the exchange effects, because the keto ring signals are even sharper compared to the other monoketal signals. The number of observed isomers led to the conclusion that mutual orientation of substituents in 1,3-dioxolane ring isomers are still separable in NMR spectra. Additionally, exo and endo substitution cis and trans orientations of hydroxymethyl substituents were observed. For further analysis the configuration of one hydroxymethyl group was fixed and remaining substitution patterns were fixed toward this substituent. The analysis of 6 isomeric diketals was based on NMR spectra of glycerol monoketals and 2,2-dimethyl-1,3-dioxolan-4-yl-methanol (solketal, Fig. S3) 13 C NMR spectrum of monoketals shows the presence of 2 compounds defined as exo and endo isomers with the chemical shift differences between the corresponding atoms from 0.01 to 0.5 ppm. 
The largest difference is observed on methylene groups of 1,3-dioxolane ring due to their exo or endo orientation on bicyclo[3.3.0] ring in beta position from spiro carbon. As a model compound for the assignment of exo or endo methylene groups the chemical shifts of 3methoxy isomers of cis-bicyclooctane derivatives were used 2 . In this study endo methoxy carbons on C3 of bicyclooctane were shifted to low field. The same regularity is observed also for 1,3-dioxolane ring methine carbon atoms in present isomers. Further confirmation of assignment of exo or endo configuration of hydroxymethyl substituents follows from 1 H chemical shift differences of bicyclo[3.3.0]octane bridgehead proton chemical shifts, which result from long range deshielding effects of CH2OH groups in exo isomers by shifting bridgehead protons to low fields by about 0.02 ppm. Very small 1 H chemical shift differences in two monoketal isomers complicate the use of NOESY experiments for the analysis of interactions between the spiro and bicyclooctane ring protons in these isomers. Another model compound, solketal behaves differently from the 2-methyl-1,4dioxaspiro [4.5]decane with 4-methylsubstituted 1,3-dioxolane ring. For the last compound half chair conformation was declared on the basis of vicinal H-H spin-spin coupling constants with methine proton as 5.7 and 8.4 Hz 3 . In solketal and in present monoketal and diketals these coupling constants have very similar values (in solketal 6.5 and 6.6 Hz in CDCl3 and 6.3 and 6.4 Hz in DMSO, in both monoketals 6.7 and 6.3 Hz in CDCl3 and 6.5 and 6.1 Hz in DMSO). These results justify the use of solketal as adequate model for the analysis of present isomers. Methyl atoms on C2 of solketal have different 1 H and 13 C chemical shifts. These chemical shifts were assigned by NOESY experiments, which show that both proton and carbon chemical shifts are for methyl groups cis oriented to hydroxymethyl group shifted towards low fields. This result is in accordance with 13 C NMR studies of stereoisomeric 2,4-dialkyl 1,3dioxolanes 4 . Full assignment of 13 C chemical shifts in monoketals was achieved by 13 C-13 C INADEQUATE correlation experiments. This results in assignment of connections between the bridgehead and methylene carbons of bicyclo[3.3.0]octane ring, which are important for the assignment of cis and trans isomers of unsymmetrical endo-exo isomers. Correlations between the bridgehead carbons were not observed due to too low intensities of outer signals of AB spin systems. With the information from solketal and monoketals the diketal mixture was analyzed by various 2D FT experiments (COSY, NOESY, HSQ, HMBC, SELECTIVE HMBC, INADEQUATE). Spectra were measured in CDCl3, MeOD and DMSO-d6. Best resolution of bicyclo[3.3.0]octane bridgehead protons was observed in DMSO solution (Fig. S1), being complex band of overlapping signals, but still giving possibility to assign by 2D FT bridgehead 10 carbon signals to definite isomers (Fig. S4). INADEQUATE experiment was used to sort out signals to all isomers. In Fig. S5 the connectivity diagram of bridgehead carbons is demonstrated and in Fig. S6 the assignment of methylene carbon atoms in 6 isomers is shown. Acrylic acid diesters from the mixture of spirodiols have retained the same relative concentrations of 6 the isomers. Expanded 13 C NMR spectrum is quite similar to the spectrum of diols (Fig. S7). 
In the 13C NMR spectra, typical esterification effects are observed in the alcohol parts of the isomers: regular low-field shifts of ~2 ppm at the alpha position and high-field shifts of ~3 ppm at the beta position are registered. At more remote positions, the different types of carbon atoms are shifted marginally to higher field. The terminal acrylic carbons are no longer separated into 8 components, and the carbonyl carbons show 2 signals representing only the exo and endo orientation towards C3 and C7 of the bicycle. Esterification of the spirodiols results in smaller variations of the bicyclo[3.3.0]octane bridgehead carbon chemical shifts: in the diesters they occupy less than 0.30 ppm, whereas in the spirodiols they span a 0.50 ppm range. The most surprising result in the 1H NMR spectra is the resolution of the vinyl protons into 6 of the possible 8 types. Fig. S8 shows the 1H signals from the high-field half of the terminal Z-vinyl protons, which exhibit only a geminal 1.4 Hz coupling constant. These chemical shift differences result from a 22-bond distance between the terminal vinyl H atoms in these isomeric acrylic acid diesters.

Fig. S1. Room-temperature 800 MHz 1H NMR spectrum of the isomeric dispirodiol mixture in DMSO solution.
[4.3.3]propellane spirodiol tT NMR analysis
1 H spectrum of tT points to dynamic effects in the molecule. Two bands from carbocyclic sixmembered ring protons at 1.4 ppm (Fig. S11) are not unresolved equatorial and axial protons, but they are result of intramolecular exchange process. This exchange is even better seen in 13 C spectrum (Fig. S10) from the observed linewidths, where signals from all 5 rings of these molecules are influenced. Linewidths in this spectrum reflect the chemical shift differences in exchanging positions of molecules. They are smallest in quaternary carbons resulting in their opposite to normal most intensive signal intensities. In reported NMR data for unsubstituted [4.3.3]propellane 2 singlet signals with intensity ratio 2 to 3 at 1.40 and 1.58 ppm were reported for 1 H at 80 MHz and assigned 13 C chemical shifts fit with present data of isomeric propellanes except needed obvious exchange of assignment of six membered ring C2, C5 and five membered rings methylene groups signals. Nothing was reported about intramolecular exchange processes for [4.3.3]propellane. The simplest model compound for dynamics study should be 1,1,2,2-tetramethyl cyclohexane, but data for inversion barrier in this compound were not available. For 1,1-dimethyl cyclohexane experimental NMR studies have reported for ΔG of 10.2 5 and 10.5 6 kcal/mol. These values are very close to reported values on unsubstituted cyclohexane. 7 Thus the observed exchange broadening is specific to present isomers. Our NMR probehead was not suitable for low temperature experiments where temperatures lower than -50 °C are needed. AM1 calculations show that trans isomer is more stable by 90 cal/mol and in both isomers the dihedral angle at bridgehead in 6-membered carbocycle is 36.6 degrees. In NMR spectra all methylene protons with 14.4 Hz geminal spin-spin coupling constants in 5-membered rings resonate within 0.07 ppm. For methylene carbons this interval is nearly 100 times larger, demonstrating the advantages of 13 C NMR spectroscopy in stereochemical studies.
|
v3-fos-license
|
2021-07-14T13:25:45.649Z
|
2021-05-15T00:00:00.000
|
235814432
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2072-4292/13/10/1936/pdf?version=1621308001",
"pdf_hash": "ed852cdccbabf99fa54926f746c7b18f84ad12e0",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44376",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "3daf2d08cec2834ae99f89412f005d4c49b53ce2",
"year": 2021
}
|
pes2o/s2orc
|
CscGAN: Conditional Scale-Consistent Generation Network for Multi-Level Remote Sensing Image to Map Translation
Automatic remote sensing (RS) image to map translation is a crucial technology for intelligent tile map generation. Although existing methods based on generative adversarial networks (GANs) can generate unannotated maps at a single level, they have limited capacity for handling multi-resolution map generation at different levels. To address this problem, we propose a novel conditional scale-consistent generation network (CscGAN) to simultaneously generate multi-level tile maps from multi-scale RS images, using only a single and unified model. Specifically, the CscGAN first uses the level labels and map annotations as prior conditions to guide hierarchical feature learning at different scales. Then, a multi-scale generator and two multi-scale discriminators are introduced to describe both high-resolution and low-resolution representations, aiming to improve the similarity of the generated maps and thus produce high-quality multi-level tile maps. Meanwhile, a level classifier is designed to further explore the characteristics of tile maps at different levels. Moreover, the CscGAN is optimized jointly by a multi-scale adversarial loss, a level classification loss, and a scale-consistent loss in an end-to-end manner. Extensive experiments on multiple datasets and study areas demonstrate that the CscGAN outperforms state-of-the-art methods in multi-level map translation, with great robustness and efficiency.
Introduction
Electronic maps are of great importance for urban computing and location-based services such as navigation and autonomous vehicles. However, electronic maps are traditionally obtained through field surveys or manual image interpretation, which is time-consuming and labor-intensive. Hence, automatic electronic map production is of great value in addressing these limitations and has been widely considered [1][2][3][4][5][6][7][8][9][10][11]. Recently, domain mapping or image-to-image translation based methods have been intensively studied and applied to automatic electronic map production, targeting the automatic and efficient translation of remote sensing (RS) images into tile maps [2][3][4]. Although promising results have been achieved in one-level map translation [2][3][4][8], simultaneously creating multi-level tile maps still faces several challenges, such as scale variation, text annotation loss and ground target change across levels (see Figure 1). To address these challenges, we propose the CscGAN, a novel deep generation network that can simultaneously translate multi-scale RS images into the corresponding tile maps at different levels, with significant robustness and efficiency.
Electronic map production can generally be divided into the following two categories: traditional computer-aided cartography-based methods and deep learning-based methods. Computer-aided cartography usually includes four stages, namely map design, data input, symbolized editing and graphic output. However, in the process of computer-aided map production, a lot of work still depends on manual expert participation. Recently, deep learning-based methods have been intensively developed for automatic multi-level map translation, owing to their strong capacity for feature representation and generation [2][3][4][12]. They significantly simplify the overall process and labour cost of map production and provide the possibility of quickly generating large-range electronic maps. Existing deep learning-based methods can be divided into the following two broad categories: one-to-one mappings [2][3][4][13,14] and many-to-many mappings [5,6,15]. The one-to-one mapping-based methods, such as pix2pix [2], CycleGAN [3] and GcGAN [4], use the source domain image as an input condition and then use one generator to output the target domain image, while one discriminator outputs the probability that the target domain image is real or fake. The many-to-many mapping-based methods, such as StarGAN [5] and its improved versions [6], can simultaneously generate multi-domain images based on different target labels. Although these solutions have achieved satisfactory results in image-to-image translation, it is still difficult to directly use them for multi-level tile map generation, mainly due to the following two limitations.
1.
One-to-one mapping-based methods usually implement multi-level map generation in one of two ways: separate training at each level, or uniform training over multiple levels. In the former, tile maps at each level are trained separately, which makes the time and space complexity of the model very high; moreover, due to the low utilization of the training set during separate training, a larger number of training samples is needed. In the latter, the unified training of RS images from different levels can easily cause confusion of multi-level information and loss of detailed information at finer levels. As shown in Figure 1, the uniformly trained pix2pix could not discriminate which level an RS image came from, generating confusing content at the 17th level (possibly from the 18th level) while losing important content such as the green land at the finer level (see the red rectangles in Figure 1).
2.
Many-to-many mapping-based methods usually use one generator and auxiliary information to complete multi-level map generation, leading to errors in the detailed content generated at higher levels. For example, as shown in Figure 1, although StarGAN generated the green land in the 18th-level map, the generated land contained a lot of false information and also lost many details (see the red rectangle in Figure 1b).
To address the above-mentioned limitations in multi-level map translation, we proposed a novel conditional scale-consistent GAN (CscGAN) that simultaneously generates multi-level tile maps from multi-scale RS images, using only a single and unified model with great robustness and efficiency. The CscGAN consists of a multi-scale generator, two multi-scale discriminators, and a map-level classifier, where the annotation images with level labels are used as the prior conditions to guide the network for hierarchical feature generation.
The main contributions of this paper are as follows:
1.
A single and unified multi-level map generation model, called CscGAN, is proposed to learn the mappings among multiple levels, training effectively and efficiently from multi-scale RS images and annotation images with different levels and resolutions. As far as we know, this is the first model to simultaneously generate different tile maps in multiple levels.
2.
Two multi-scale discriminators and a multi-scale generator are designed for jointly learning both high-resolution and low-resolution representations, aiming to produce high-quality tile maps with rich details at different levels.
3.
A map-level classifier is introduced to guide the network for discriminating the learned representations from which level, improving the stability and efficiency of adversarial training in multi-level map generation.
4.
We construct and label a new RS-image-to-map dataset for multi-level map generation and analysis, referred to as the "self-annotated RS-image-to-map dataset". Extensive experiments on two datasets and across study areas show that the CscGAN outperforms the state-of-the-art methods in the quality of map translation at different levels.
The remainder of this paper is organized as follows: Section 2 describes related work. Section 3 introduces the details of the used dataset in this study. Section 4 presents the proposed CscGAN for tile map generation with multiple levels in detail. Section 5 discusses the experimental results on publicly available and self-annotated datasets and study areas. Finally, this paper is concluded in Section 6.
Related Work
In this section, methods that are related to Image-to-Image translation and tile map translation are discussed.
Image-to-Image Translation
Image-to-image translation has been a recent development and research hotspot in the field of generative adversarial networks (GANs) [12]. GAN-based image-to-image translation generally consists of a generator and a discriminator that play a game during training to reach a Nash equilibrium and finally generate realistic fake data. It is known to be a challenge to optimize the generator and discriminator of a GAN during training [16][17][18]. To address this problem, many training algorithms have been developed for generative tasks over the past few years. DCGAN [19] used a convolutional neural network as the generative network and proposed a series of suggestions that make GAN training more stable. WGAN [16] adopted the Wasserstein distance in the objective function of the GAN, which can effectively address vanishing or exploding gradients during training. WGAN-GP [20] directly restricted the gradient of the discriminator based on WGAN. LSGAN [21] also modified the objective function and changed the classification task in the discriminator to a regression task, which can effectively alleviate gradient vanishing. Additionally, to produce high-resolution images, existing methods such as [13,[22][23][24][25]] first produce lower-resolution images and then refine them into higher-resolution images.
Automatic Map Translation from RS Images
Automatic map translation from RS images has recently attracted more and more attention from academia and industry [26]. Pix2pix [2] was first used in map translation: it used RS images as the generator's input to generate the corresponding Google map, and then used the generated Google map to train the discriminator. The recently proposed CycleGAN architecture has been evaluated on RS-image-to-map translation [3]; it does not need paired data thanks to an added cycle-consistency loss. Similar to CycleGAN, GcGAN [4] proposed a geometric-consistency-constrained GAN to generate maps from RS images. GANs have also been used to create spoof satellite images and spoof images of the ground truth conditioned on the satellite image of the location [27]. Conditional GANs have also been used to generate ground-level views of locations from overhead satellite images (Deng, Zhu, and Newsam [28]). Semantic segmentation, which is very similar to map translation, has also been used to predict class probabilities from the spectral features of RS images [7][8][9].
In general, the existing methods mostly generated high-quality unannotated maps at a single level, and the generation of more detailed text annotations at multiple levels is still an open research problem.
Maps Dataset
The maps dataset is widely used in RS image-to-map translation task [2,4,29,30]. The data were collected from Google Maps at a single level. In the experiments, 1096 RS images and 1096 electronic tile maps are used together for training, and 1098 RS images and 1098 electronic tile maps are used for testing. Table 1 shows the maps dataset for the training, the testing in our experiment. Note that the dataset has no level attribute. Some examples from the map dataset are shown in Figure 2.
Table 1. Number of image pairs used for training and testing.
Dataset | Training | Testing
Maps dataset | 1096 | 1098
Self-annotated RS-image-to-map dataset | 6150 | 615

3.2. Self-Annotated RS-Image-to-Map Dataset and Study Areas
This dataset was collected, annotated and built independently by the authors. The pairs of RS images, annotation images and maps at multiple levels were scraped from Google Maps. The RS images, annotation images and the corresponding maps were collected from regions in Shanghai and Hubei, China, covering levels 14, 15, 16, 17 and 18. In this dataset, there are 1476 pairs of tile images (namely maps and RS images) at each level, where the size of each tile is 256 × 256 pixels. Figure 3 shows examples at different levels from the dataset, and Table 2 lists the detailed information of the dataset. In this study, since the number and resolution of tiles at lower levels are insufficient for training (for example, there are only 506 tile maps at level 8 in China), we chose the levels from 14 to 18. Additionally, at each level the data were split into training (1230 pairs), validation (123 pairs) and test (123 pairs) sets, out of 1476 pairs in total. Table 1 gives the detailed training and testing schemes for the self-annotated RS-image-to-map dataset.
To evaluate the performance of the different areas, we selected two study areas in Shanghai (including, Songjiang District, Pudong New District, Minhang District, Qingpu District) and Hubei (including Wuhan city, Yingcheng city, Xiaogan city, Huanggang city), respectively. These two areas are relatively developed in China, with intricate roads and rich annotated information, which are very suitable for map translation evaluation. Table 2 shows the detailed information of the study areas, including the latitude, longitude range, scale and spatial resolution at each level.
Methods
In this section, a brief overview of the proposed end-to-end CscGAN for multi-scale RS images to multi-level map translation is first presented. Then, the learning process of each component of the approach is described.
The Overview of the Method
In this paper, we propose a new multi-level map translation network (termed CscGAN) based on multi-scale RS images and their annotation images, which incorporates a multi-scale generator, two multi-scale discriminators, and a map-level classifier into a GAN framework, as shown in Figure 4. CscGAN allows simultaneous training of multi-level RS data with different scales within a single network, where the annotation images with level labels are used as prior conditions to guide the network to perform hierarchical feature learning. Specifically, given an RS image x and its annotation image x_a with the level label c as the conditional input, the multi-scale generator G is optimized to produce tile map distributions at two different resolutions, using two residual branches of different scales, i.e., G : (x, x_a | c) -> {G_1(x, x_a | c), G_2(x, x_a | c)} (see the proposed CscGAN training pipeline in Figure 4). Meanwhile, the two multi-scale discriminators D_i are optimized to distinguish the generated maps G_i(x, x_a | c) from the real tile maps, learning hierarchical features D_{i,j}(y_i | x_i, x_{ai}) at different levels, where i indexes the scale and j the sub-discriminator features. Furthermore, the map-level classifier is introduced to guide the whole network to learn the map representations most relevant to the corresponding level according to the conditional input. Overall, high-quality tile maps with rich details at different levels can be simultaneously translated by the CscGAN with the following objective functions:

\min_{D} L_D = L_{adv}^{D} + \lambda_{cls} L_{cls}^{r}, \qquad \min_{G} L_G = L_{adv}^{G} + \lambda_{cls} L_{cls}^{f} + \lambda_{L1} L_{L1},

where L_D and L_G are respectively the losses of the multi-scale discriminators and the multi-scale generator.
Here, x is a real RS image from the true data distribution p_data(x), and y is a tile map from the distribution p_data(y). p is the number of generator branches, and we set p = 2 in all of our experiments. \lambda_{cls} and \lambda_{L1} are hyper-parameters that control the relative importance of the level classification loss L_{cls} and the distance loss L_{L1} during training, respectively. We set \lambda_{cls} = 1 and \lambda_{L1} = 100 in all of our experiments, similar to [2]. In order to stabilize the training process, we use the least-squares loss [21] instead of the traditional GAN objective. Each component of the CscGAN is introduced in detail below.
Multi-Scale Generator
Due to resolution variation in both RS images and tile maps at different levels, a traditional single-branch generator has difficulty generating both high- and low-resolution maps directly, owing to overfitting and unstable training [24]. Therefore, a multi-scale generator G with two parallel scale branches is designed to generate multi-resolution maps of different levels, aiming to model both high-resolution and low-resolution image feature distributions at the same time and to mitigate overfitting during training. The detailed architecture of the proposed multi-scale generator is presented in Figure 5. It consists of a backbone adopted from CycleGAN [30] and two scale-generation branches (see G_1 and G_2 in Figure 5). In the small-scale branch G_1, the generator first produces low-resolution tile maps capturing basic colors and structures via two stride-2 convolutions and 7 residual blocks, so some detailed text annotations might be omitted; then, in the large-scale branch G_2, the generator focuses on the previously ignored text information to generate higher-resolution maps. Specifically, the small-scale branch G_1 outputs a lower-resolution map of size 128 × 128, and the large-scale branch G_2 outputs a higher-resolution map of size 256 × 256. To effectively learn discriminative representations at the resolution of each branch, a multi-scale adversarial loss is proposed and calculated at each scale i as

L_{adv}^{i} = \mathbb{E}_{x_i, x_{ai}, y_i}\big[(D_i(x_i, x_{ai}, y_i) - 1)^2\big] + \mathbb{E}_{x_i, x_{ai}}\big[D_i(x_i, x_{ai}, G_i(x, x_a | c))^2\big],

where x_i, x_{ai} and y_i respectively represent the RS image, the annotation image and its map at the ith resolution, and D_i is the multi-scale discriminator described in the following section. Additionally, the multi-scale generator must not only fool the discriminator but also approach the ground truth via the following L1 scale-consistent distance loss at each scale:

L_{L1}^{i} = \mathbb{E}\big[\lVert G_i(x, x_a | c) - y_i \rVert_1\big],

which forces the generated map to stay close to the ground-truth map, and thus consistent with the prior annotation x_a, at each scale.
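To make the per-scale objective concrete, the following is a minimal PyTorch sketch (not the authors' released code) of the generator-side loss described above: a least-squares adversarial term plus an L1 term at each of the two output resolutions. The interfaces of G and of the per-scale discriminators are assumptions, and lambda_l1 follows the value quoted in the text (100).

```python
# Illustrative sketch of the per-scale generator objective (LSGAN + L1).
import torch
import torch.nn.functional as F

def generator_loss(G, discriminators, x, x_a, level, y_pyramid, lambda_l1=100.0):
    """x: RS image, x_a: annotation image, level: level label,
    y_pyramid: [y_128, y_256] ground-truth maps at the two scales (assumed)."""
    fakes = G(x, x_a, level)                  # assumed to return [fake_128, fake_256]
    loss = 0.0
    for D_i, fake_i, y_i in zip(discriminators, fakes, y_pyramid):
        # Resize the conditioning inputs to the resolution of this branch.
        x_i = F.interpolate(x, size=fake_i.shape[-2:], mode="bilinear", align_corners=False)
        xa_i = F.interpolate(x_a, size=fake_i.shape[-2:], mode="bilinear", align_corners=False)
        pred = D_i(x_i, xa_i, fake_i)                    # patch predictions on the fake map
        adv = torch.mean((pred - 1.0) ** 2)              # least-squares adversarial term
        l1 = torch.mean(torch.abs(fake_i - y_i))         # scale-consistent L1 term
        loss = loss + adv + lambda_l1 * l1
    return loss / len(fakes)
```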
Multi-Scale Discriminator
Since the multi-scale generator produces maps at two different resolutions (see the Multi-Scale Generator section above), two multi-scale discriminators, i.e., D_1 and D_2 in Figure 4, are adopted to respectively connect to the two generator branches (G_1 and G_2), explicitly enforcing the CscGAN to learn better alignment between the RS image and the conditioning text annotation images at multiple levels. The framework of each multi-scale discriminator is shown in Figure 6. For each multi-scale discriminator D_i, we first use PatchGAN [2,3] as the backbone, which classifies each 70 × 70 patch of an image as real or fake. However, some detailed information may be lost in this process. To alleviate this information loss, each multi-scale discriminator D_i is designed as a tree-like structure containing three sub-discriminators that hierarchically learn features of different levels. Since multi-resolution images are produced by the two generator branches, one discriminator is used for each scale. During training, each discriminator takes real RS images and their corresponding text annotation images as positive sample pairs. The total multi-scale adversarial loss used to optimize the two multi-scale discriminators is defined as

L_{adv} = \frac{1}{p}\sum_{i=1}^{p}\frac{1}{m}\sum_{j=1}^{m}\Big( \mathbb{E}_{x_i, x_{ai}, y_i}\big[(D_{i,j}(y_i | x_i, x_{ai}) - 1)^2\big] + \mathbb{E}_{x_i, x_{ai}}\big[D_{i,j}(G_i(x, x_a | c) | x_i, x_{ai})^2\big] \Big),

where m is the total number of sub-discriminators in each multi-scale discriminator (set to 3 in this study) and j indexes the jth sub-discriminator. The total multi-scale adversarial loss L_{adv} is thus averaged over all generator branches.
Finally, the proposed multi-scale discriminator D_i learns multi-resolution probability distributions over both the input sources x, x_a and the tile map y, that is, D : (x, y, c) -> {D_i(y | x, x_a), M_i(c | x, x_a, y)}, where M_i is the proposed map-level classifier used to classify the input data into the relevant level, as described below.
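As a rough illustration of the discriminator update implied above, the sketch below (not the authors' code) averages a least-squares real/fake objective over the sub-discriminator outputs of one multi-scale discriminator. D_i is assumed to return a list of m patch-prediction maps, one per sub-discriminator.

```python
# Illustrative sketch of the least-squares discriminator objective,
# averaged over the m sub-discriminators of one multi-scale discriminator.
import torch

def discriminator_loss(D_i, x_i, xa_i, y_real_i, y_fake_i):
    real_preds = D_i(x_i, xa_i, y_real_i)            # assumed: list of m patch outputs
    fake_preds = D_i(x_i, xa_i, y_fake_i.detach())   # stop gradients into the generator
    loss = 0.0
    for real, fake in zip(real_preds, fake_preds):
        loss = loss + torch.mean((real - 1.0) ** 2) + torch.mean(fake ** 2)
    return loss / len(real_preds)
```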
Map-Level Classifier
In this section, a map-level classifier is introduced to guide the network in discriminating which level the learned representations come from. To make use of the prior conditions, the map-level classifier M is placed on top of the multi-scale discriminator D, as shown in Figure 7, improving the stability and efficiency of adversarial training for map generation at different levels. Figure 7 illustrates the training process of the map-level classifier and the multi-scale generator. Given the level label c, a one-hot vector is first used to encode c (e.g., as [0, 1, 0, 0, 0]) for the categorical attribute. Then, a level classification loss on real images is used to optimize the classifier and the multi-scale discriminator D_i, while a level classification loss on fake images is used to optimize the multi-scale generator G. In detail, the map-level classification loss on real images is given by

L_{cls}^{r} = \mathbb{E}_{x_i, x_{ai}, y_i, c}\big[-\log M_i(c | x_i, x_{ai}, y_i)\big],

where M_i(c | x_i, x_{ai}, y_i) represents a probability distribution over map-level labels computed by the map-level classifier M_i, and x_i and y_i represent the RS images and tile maps at the ith resolution branch, respectively. By minimizing the objective L_{cls}^{r}, M_i learns to classify a real RS image x_i to its corresponding level c. Additionally, the map-level classification loss on fake images is defined as

L_{cls}^{f} = \mathbb{E}_{x_i, x_{ai}, c}\big[-\log M_i(c | x_i, x_{ai}, G_i(x, x_a | c))\big],

where G_i is the ith generator branch in the multi-scale generator. By minimizing this objective, the generator learns to produce fake maps that are classified to the relevant level c. The level classifier contains three stride-1 convolutions and four stride-2 convolutions, and its output size is 1 × 1 × N, where N is the number of levels in the experiments.
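The classification terms above amount to cross-entropy losses over the level labels; a minimal sketch (not the authors' code, and with an assumed interface in which M_i returns unnormalized level logits) is shown below.

```python
# Illustrative sketch of the map-level classification terms:
# cross-entropy on real maps (updates the discriminator/classifier)
# and on generated maps (updates the generator).
import torch
import torch.nn.functional as F

def level_cls_loss_real(M_i, x_i, xa_i, y_real_i, level_labels):
    logits = M_i(x_i, xa_i, y_real_i).flatten(1)   # (batch, N_levels), assumed shape
    return F.cross_entropy(logits, level_labels)

def level_cls_loss_fake(M_i, x_i, xa_i, y_fake_i, level_labels):
    logits = M_i(x_i, xa_i, y_fake_i).flatten(1)
    return F.cross_entropy(logits, level_labels)
```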
Experiments and Analysis
In this section, we thoroughly evaluate the proposed approach on two challenging pair datasets, that is, the public maps dataset and a self-annotated RS-image-to-map dataset, and two different study areas including Shanghai and Wuhan in China.
Evaluation Metrics
To quantitatively and thoroughly evaluate the proposed model, we also perform quantitative evaluation using the following metrics: Peak Signal to Noise Ratio (PSNR) [31], Structural Similarity (SSIM) [31,32], Pixel Accuracy [4], and the metrics in a classification task (Accuracy, Precision, Recall, F1 score).
Peak Signal to Noise Ratio (PSNR)
PSNR directly measures the difference in pixel values. Suppose x and y represent the pixel values of the generated image and the original image, respectively, and the size of each image is m × n pixels. The mean squared error (MSE) is first calculated as

\mathrm{MSE} = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\big(x(i, j) - y(i, j)\big)^2,

where i and j index the pixel positions in an image. Then, the PSNR is expressed as

\mathrm{PSNR} = 10\,\log_{10}\!\left(\frac{\mathrm{MAX}_I^{2}}{\mathrm{MSE}}\right),

where \mathrm{MAX}_I^{2} is the square of the maximum possible pixel value of the image. For example, if each pixel is represented by 8 bits, then \mathrm{MAX}_I^{2} equals 255^2.
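A direct NumPy implementation of these two definitions, assuming 8-bit images (MAX_I = 255), is sketched below.

```python
# PSNR from the MSE definition above, for 8-bit images by default.
import numpy as np

def psnr(generated, reference, max_val=255.0):
    generated = generated.astype(np.float64)
    reference = reference.astype(np.float64)
    mse = np.mean((generated - reference) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)
```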
Structural Similarity (SSIM)
SSIM estimates the holistic similarity between two images. SSIM is designed by modelling any image distortion as a combination of the following three factors: loss of structure, luminance distortion, and contrast distortion [31,32]. The SSIM is calculated as

\mathrm{SSIM}(x, y) = l(x, y)\, c(x, y)\, s(x, y),

with

l(x, y) = \frac{2\mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}, \qquad c(x, y) = \frac{2\sigma_x \sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}, \qquad s(x, y) = \frac{\sigma_{xy} + C_3}{\sigma_x \sigma_y + C_3}.
The luminance comparison function l(x, y) measures the closeness of the average luminance of the two images x and y; it equals 1 only if \mu_x = \mu_y. The contrast comparison function c(x, y) measures the closeness of the contrast of the two images, where contrast is measured by the standard deviations \sigma_x and \sigma_y; this term equals 1 only if \sigma_x = \sigma_y. The structure comparison function s(x, y) measures the correlation between image x and image y [31], where \sigma_{xy} is the covariance between the two images. The SSIM index takes values in [0, 1]: zero means there is no correlation between the images, and one means x = y. To avoid a null denominator, three small positive constants C_1, C_2, and C_3 are introduced.
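The following sketch implements the luminance/contrast/structure decomposition above over a whole image; the choice of constants (C1 = (0.01 L)^2, C2 = (0.03 L)^2, C3 = C2/2) is a common convention, not stated in this excerpt, and practical evaluations usually apply SSIM over local windows (e.g., skimage.metrics.structural_similarity).

```python
# Global (single-window) SSIM following the l * c * s decomposition.
import numpy as np

def ssim_global(x, y, L=255.0):
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    C3 = C2 / 2.0                                   # assumed convention
    mu_x, mu_y = x.mean(), y.mean()
    sigma_x, sigma_y = x.std(), y.std()
    sigma_xy = ((x - mu_x) * (y - mu_y)).mean()
    l = (2 * mu_x * mu_y + C1) / (mu_x ** 2 + mu_y ** 2 + C1)
    c = (2 * sigma_x * sigma_y + C2) / (sigma_x ** 2 + sigma_y ** 2 + C2)
    s = (sigma_xy + C3) / (sigma_x * sigma_y + C3)
    return l * c * s
```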
Pixel Accuracy
The third evaluation metric is the pixel accuracy used in GcGAN [4] to assess the accuracy of aerial-photo-to-map translation. Formally, given a pixel i with ground-truth RGB value (r_i, g_i, b_i) and predicted RGB value (\hat{r}_i, \hat{g}_i, \hat{b}_i), the pixel accuracy (acc) over N pixels is computed as

\mathrm{acc} = \frac{1}{N}\sum_{i=1}^{N} \mathbb{1}\big[\max(|r_i - \hat{r}_i|, |g_i - \hat{g}_i|, |b_i - \hat{b}_i|) < \theta\big].

Since maps only contain a limited number of different RGB values, it is reasonable to compute pixel accuracy using this strategy (\theta = 5 in this paper).
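A short implementation of this metric is sketched below; whether the threshold comparison is strict or inclusive is an assumption, since the excerpt does not specify it.

```python
# Pixel accuracy: a pixel counts as correct when every RGB channel is
# within theta (=5) of the ground truth (strict inequality assumed).
import numpy as np

def pixel_accuracy(pred_rgb, gt_rgb, theta=5):
    """pred_rgb, gt_rgb: uint8 arrays of shape (H, W, 3)."""
    diff = np.abs(pred_rgb.astype(np.int32) - gt_rgb.astype(np.int32))
    correct = np.all(diff < theta, axis=-1)
    return correct.mean()
```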
Accuracy, Precision, Recall, F1 Score, ROC Curves
As with other classification tasks, we use Accuracy, Precision, Recall and F1 score to evaluate the level classifier's performance:

\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad \mathrm{Precision} = \frac{TP}{TP + FP}, \qquad \mathrm{Recall} = \frac{TP}{TP + FN}, \qquad F1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}},

where TP, TN, FP and FN denote true positives, true negatives, false positives and false negatives, respectively. We also use the receiver operating characteristic (ROC) curve as a performance indicator for the level classifier.
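These metrics can be computed directly with scikit-learn, as sketched below; the averaging scheme for the multi-level case ('weighted' here) is an assumption, since the excerpt does not state which one was used.

```python
# Standard multi-class metrics for the level classifier.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def level_classifier_metrics(y_true, y_pred):
    acc = accuracy_score(y_true, y_pred)
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="weighted", zero_division=0
    )
    return {"accuracy": acc, "precision": prec, "recall": rec, "f1": f1}
```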
Training Details
CscGAN was implemented using the PyTorch deep learning framework [33]. We adopted mini-batch SGD and applied the Adam solver with a batch size of 1, a learning rate of 0.0002, and momentum parameters β1 = 0.5, β2 = 0.999.
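For reference, the stated optimizer settings correspond to the following PyTorch setup; the two convolution modules are placeholders standing in for the actual generator and discriminator networks.

```python
# Optimizer setup matching the stated hyper-parameters
# (Adam, lr=0.0002, betas=(0.5, 0.999), batch size 1).
import torch

G = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)   # placeholder for the multi-scale generator
D = torch.nn.Conv2d(6, 1, kernel_size=3, padding=1)   # placeholder for a multi-scale discriminator

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
```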
Experimental Machine Configuration
The network models were trained on a PC with an Intel (R) Core TM i7-6700 CPU at 4.00 GHz with 32 GB memory and an NVIDIA GeForce RTX 2080. All the models are tested on an NVIDIA GeForce GTX 960M.
Evaluation of Maps Dataset
The proposed CscGAN was compared with existing state-of-the-art methods, including pix2pix [2] and CycleGAN [30], on the maps dataset, as shown in Table 3. Pix2pix uses an RS image x to realize the translation from the RS image x to the tile map y. CycleGAN also achieves translation from the RS image x to the tile map y, but its data requirements are not as strict as those of pix2pix. The PSNR [31], SSIM [31,32], and pixel accuracy [4] described in Section 5.1 were used to evaluate these methods. Table 3 lists the experimental results, and Figure 8 shows the visualization results generated by the different methods on the maps dataset. Compared to the state-of-the-art methods, the proposed CscGAN performed best on all evaluation metrics. Since the maps dataset has only one level, the CscGAN here did not include the map-level classifier. The results generated by the CscGAN are more similar to the ground truth than those generated by the other methods. For PSNR, SSIM and pixel accuracy, our method increases by 0.611, 0.039, and 5.234%, respectively, compared to the second-highest model (pix2pix). As shown in Figure 8c,d, both pix2pix and CycleGAN cannot generate large areas such as rivers and green space well, whereas the rivers and green spaces generated by CscGAN are significantly better than those of the other methods. This indicates that the multi-scale generator enables the network to capture more detailed information. Table 4 reports the comparison results of pix2pix [2], CycleGAN [30], StarGAN [5], and the proposed CscGAN on the self-annotated RS-image-to-map dataset. Since this dataset includes five levels, the one-to-one mapping-based methods, like pix2pix and CycleGAN, need to be trained as five independent models. As shown in Table 4, the proposed CscGAN exhibited significantly improved performance for multi-level map generation in several evaluation metrics. Compared to existing methods, our method shows the largest growth in PSNR, SSIM and pixel accuracy, by over 5%, 0.18% and 13%, respectively. We conjecture that the multi-scale generator and multi-scale discriminator can model more detailed information, so that the CscGAN can generate better results at multiple levels. Additionally, Table 5 lists the total parameter sizes and inference times of the different models. Compared to the other methods, the proposed CscGAN has a smaller parameter size (only 81.5 MB), which makes its training time much lower than that of the other methods. The tiny increase in inference time shows that the proposed CscGAN achieves the best performance with only a small additional computational cost, which means that the proposed method achieves an excellent balance between accuracy and efficiency.
Table 5. Total parameter sizes and inference times of the different models.

Model          Params Size (MB)   Inference Time (ms)
pix2pix [2]          269.9              0.011
CycleGAN [3]         539.5              0.008
StarGAN [5]           64.9              0.010
CscGAN                81.5              0.014

In addition, due to space limitations, we show the generated results of the different methods at each level in Figure A1. Compared to CycleGAN and pix2pix, CscGAN produced more detailed and precise content in the tile maps. Furthermore, compared to the results of StarGAN, CscGAN also achieved competitive visualization results at multiple levels, especially for detailed content such as text annotations and subtle roads in the high-level maps.
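For reference, the PSNR and SSIM scores used throughout these comparisons can be computed with standard image-quality routines; a minimal sketch (assuming 8-bit RGB numpy arrays and a recent scikit-image version):

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_quality(gt, pred):
    """PSNR (dB) and SSIM between a ground-truth tile map and a generated one."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=255)
    return psnr, ssim
```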
Ablation Experiment and Study
The impact of each component of CscGAN on the final performance is verified in this section. Table 6 presents the ablation results of gradually adding the level classifier, the multi-scale discriminator and the multi-scale generator to the baseline pix2pix [2] framework. The ablation experiments were quantified by PSNR [31], SSIM [32], and pixel accuracy [4]. As seen from Table 6, after adding the map-level classifier, the PSNR, SSIM and pixel accuracy are remarkably higher than the baseline (increases of 0.521%, 0.018%, and 1.994%, respectively). After adding the multi-scale discriminator, the PSNR increases further (by about 0.114); a possible reason is that the improved discriminator indirectly strengthens the generator. Finally, after adding the multi-scale generator, the PSNR, SSIM and pixel accuracy increase again (by 0.01%, 0.004%, and 1.183%). In addition to improving the quality of the generated results, the multi-scale approach can effectively enhance the stability of GAN training, especially for the generation of high-resolution images.
Furthermore, Figure 9 shows the generated visualization results with and without the multi-scale generator. As shown in Figure 9d, without the multi-scale generator the training process was very unstable, resulting in very poor results. In contrast, with the multi-scale generator the training instability is alleviated and correct maps can be generated (see Figure 9e).

Table 6. Ablation study of the proposed CscGAN: impact of integrating the different components (Level Cls, Mult D, and Mult G) into the baseline on the RS-image-to-map dataset. +Map-level Cls: add a level classifier to pix2pix. +Mult D: add a multi-scale discriminator on top of the previous model. +Mult G: add a multi-scale generator on top of the previous model. Note: the best results are presented in bold.
To further study the effectiveness of the level classifier, Figures 10 and 11 respectively provide the confusion matrices and ROC curves of real/fake map classification by the level classifier on the self-annotated RS-image-to-map dataset. From Figure 10a, it can be observed that the map-level classifier reached an accuracy of 94.8%, a precision of 94.97%, a recall of 94.8%, and an F1 score of 0.95 for real-map classification. Additionally, we used the multi-scale generator to generate fake maps as input to the level classifier; the classification results are shown in Figure 10b. Most of the fake maps are successfully classified to the corresponding level by the level classifier (accuracy 89.27%, precision 90.14%, recall 89.27%, F1 score 0.89). Moreover, Figure 12a,b presents the generated maps at the 17th and 18th levels with and without the level classifier. Clearly, the level classifier makes the generated map details richer and more accurate.
Generalization Analysis of Cross Study Areas
To verify the generalization ability of the proposed CscGAN across different areas, we used RS images of the Shanghai area for training and of the Hubei area for testing. Table 2 lists the training and testing information of the study areas used. The comparison experiments were conducted with pix2pix [2], CycleGAN [30], StarGAN [5], and CscGAN on the cross study areas. Table 7 reports the evaluation results in terms of PSNR [31], SSIM [32] and pixel accuracy [4] for these models. Compared to the other state-of-the-art methods, the results of the proposed CscGAN demonstrate that it can be better reused for multi-level map generation in other study areas. For SSIM, the highest results at each level were achieved by the proposed CscGAN. For PSNR and Acc, although the results of the proposed method are slightly lower than those of StarGAN at the 17th and 18th levels, the average results over the three metrics are significantly improved (increases of 0.039%, 0.003%, and 0.456%). Additionally, to illustrate the visual quality of the generated maps, generation results at each level are shown in Figure A2. The CscGAN generates detailed information well at finer levels, e.g., rivers, text annotations, and green areas.
Result Analysis and Discussion of Study Areas with Different Levels
To clearly discuss the quality of the maps generated from large RS images at different levels, Figures 13-17 show the results for levels 14 to 18. Figure 13 presents the large RS image, annotation image, ground truth, and generated map of Songjiang District, Shanghai, at the 14th level; the results generated by the proposed CscGAN are very similar to the RS image and the ground truth. The generated results in Figure 13c show that both the coarse information, including green land, rivers, and some rough roads, and the detailed information, including map annotations and words, are similar to the ground truth. For clarity, the zoomed local areas (A1 and A2) are shown on the right of the figure. Note that, because there is a slight mismatch between the RS images and the real maps (see Figure 13a,b), it is difficult for the discriminator to distinguish fake from real features in the low-resolution images at the 14th level. Figure 14 exhibits the maps generated by the proposed CscGAN in the Pudong New Area of Shanghai at the 15th level. It can be observed that the roads generated at the 15th level are finer than those at the 14th level. Additionally, Figures 15-17 respectively depict the generation results at levels 16 to 18 in Minhang District and Qingpu District, Shanghai. At finer levels, the detailed annotations and content in the RS images become finer, so the generated maps are clearer and more accurate than at the previous levels. Moreover, the generated lettering annotations are realistic at each level.
Conclusions
This paper proposed an end-to-end trainable map generation network, termed CscGAN, to perform high-quality map generation at multiple levels from multi-scale RS images using only a single, unified model. In CscGAN, we designed two multi-scale discriminators and a multi-scale generator to jointly learn both high-resolution and low-resolution representations with rich details at different levels, and a map-level classifier to further guide the network towards learning the map representations most relevant to the corresponding level. Furthermore, to carry out experiments at different map levels, we constructed a new dataset with multi-level RS images, annotation images and corresponding tile maps. Experiments on two map datasets (namely the maps dataset and the self-annotated RS-image-to-map dataset) and two different study areas (i.e., Songjiang District, Pudong New District, Minhang District, and Qingpu District in Shanghai, and Wuhan, Yingcheng, Xiaogan, and Huanggang in Hubei) demonstrate that the CscGAN can train on multiple levels of data simultaneously using a single model and achieves much-improved performance and greater robustness compared to other methods. However, at finer levels, dense building contours are still easily blurred. In future work, a powerful edge-constrained network will be explored within our CscGAN framework to provide more reliable synthetic maps.
|
v3-fos-license
|
2017-04-20T17:48:54.157Z
|
2014-06-12T00:00:00.000
|
6855134
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0099014&type=printable",
"pdf_hash": "246a670b4ceb13a820a6a2f2e0348cc6ead9348d",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44378",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "246a670b4ceb13a820a6a2f2e0348cc6ead9348d",
"year": 2014
}
|
pes2o/s2orc
|
Characteristics of Myocardial Postsystolic Shortening in Patients with Symptomatic Hypertrophic Obstructive Cardiomyopathy before and Half a Year after Alcohol Septal Ablation Assessed by Speckle Tracking Echocardiography
Objectives Postsystolic shortening (PSS) has been proposed as a marker of myocardial dysfunction. Percutaneous transluminal septal myocardial ablation (PTSMA) is an alternative therapy for patients with hypertrophic obstructive cardiomyopathy (HOCM) that results in sustained improvements in atrial structure and function. We investigated the effects of PTSMA on PSS in HOCM patients using speckle tracking imaging. Methods Conventional echocardiographic and PSS parameters were obtained in 18 healthy controls and 30 HOCM patients before and half a year after PTSMA. Results Compared with the healthy controls, the number of segments having PSS and the average value of PSS were significantly increased in the HOCM patients. At 6 months after PTSMA, both the number of segments having PSS (10.5±2.8 vs. 13.2±2.6; P<0.001) and the average value of PSS (−1.24±0.57 vs. −1.55±0.56; P = 0.009) were significantly reduced. Moreover, the reductions in the average value of PSS correlated well with the reductions in the E-to-Ea ratio (r = 0.705, P<0.001). Conclusions Both the number of segments having PSS and the average value of PSS were significantly increased in the HOCM patients. PTSMA has a favourable effect on PSS, which may partly account for the persistent improvement in LV diastolic function in HOCM patients after PTSMA.
Introduction
Postsystolic shortening (PSS), a phenomenon that has been known for years, is myocardial shortening occurring after the point of aortic valve closure. PSS is found in one-third of normal myocardium [1]; however, it is increased by different cardiovascular diseases [2][3][4]. The presence and degree of PSS have been found to be associated with severe cardiac abnormalities [5]. PSS has been suggested as a marker of myocardial ischemia and fibrosis [6][7]. A previous study observed that patients with hypertrophic cardiomyopathy (HCM) had more PSS than normal controls [4]. Percutaneous transluminal septal myocardial ablation (PTSMA) is an alternative type of therapy for patients with hypertrophic obstructive cardiomyopathy (HOCM) that may result in the long-term improvement of symptoms and partly reverse myocardial ischemia and fibrosis [8][9][10][11]. However, it is unclear whether PTSMA can decrease PSS in HOCM patients. Speckle tracking echocardiography (STE) is a novel ultrasonic technique that can be used to measure myocardial deformation during the cardiac cycle and is well suited for the quantification of PSS. This study was designed to quantitatively analyse and compare myocardial longitudinal PSS by STE in HOCM patients before and half a year after PTSMA to determine the effect of PTSMA on PSS.
Methods
This study was reviewed and approved by the Ethics Committee of Beijing Fuwai Hospital. Written informed consent was obtained from all patients and control subjects.
Study population
The study population consisted of 38 patients with symptomatic obstructive HCM (HOCM) who were referred to our centre for PTSMA between May 2012 and December 2012. However, 5 patients underwent surgical myectomy and mitral-valve replacement, and 3 patients failed to return for the follow-up examination. The final cohort included 30 patients. The diagnoses of HCM were obtained by means of 2-dimensional echocardiography in patients with interventricular septal thicknesses ≥1.5 cm who had no other causes attributed to their left ventricular (LV) hypertrophy. The selection criteria for PTSMA were as follows: the persistence of symptoms despite the maximum tolerated dosage of medication; a left ventricular outflow tract (LVOT) gradient >50 mmHg at rest or >100 mmHg after provocation; accessible septal branches, particularly of the left anterior descending coronary artery; and the absence of a significant intrinsic abnormality of the mitral valve or of other conditions for which cardiac surgery was indicated. PTSMA was performed as previously described [12][13]. PTSMA success was defined as an improvement in the New York Heart Association (NYHA) class and a reduction in the LVOT pressure gradient by 50% of the baseline. Eighteen age- and gender-matched healthy controls were included from the subjects who visited our hospital for annual routine medical examinations.
Echocardiography
Each subject underwent an echocardiographic evaluation using a commercially available echocardiographic scanner (IE33, Philips Medical Systems, Best, Netherlands) equipped with an S5-1 transducer (frequency transmitted, 1.7 MHz; frequency received, 3.4 MHz) before and half a year after the PTSMA procedure. A single-lead electrocardiogram was recorded continuously. A 2-dimensionally guided M-mode echocardiography was performed to measure the thicknesses of the interventricular septum and left ventricular posterior wall (LVPW). The wall thickness was measured at the level of the mitral valve and papillary muscles in each of the four myocardial segments and at the apical level in the anterior and posterior segments using parasternal short-axis views. The maximum LV wall thickness was defined as the greatest thickness in any single segment. The left ventricular ejection fraction (LVEF) and the left atrial end-systolic volume (LAV) were calculated using a modified Simpson's biplane method in the apical 4- and 2-chamber views. The maximal early and late diastolic inflow velocities (E and A waves), E-to-A ratio, and deceleration time (DT) of the E wave were obtained using a pulsed-wave Doppler. The sample volume was placed immediately below the level of the mitral leaflet tips in the apical 4-chamber view. The LV outflow tract (LVOT) gradient was measured using a continuous-wave Doppler in the apical 5-chamber view. The TDI of the mitral annulus movement was performed using the apical 4-chamber view. A 1.5-mm sample volume was placed at the lateral side of the mitral annulus. The velocity of the mitral annulus was measured in early diastole (Ea) and with atrial contraction (Aa). Analyses of the isovolumic relaxation time (IVRT) were performed.
Strain Data Acquisition and Analysis
Three cardiac cycles were recorded in apical 4-, 2-, and 3-chamber views using grey-scale acquisition at a frame rate above 80 s−1. The off-line longitudinal strain data analysis was performed with QLAB 6.0 software (Philips Medical Systems, Andover, Massachusetts, USA). Eighteen segments from 6 LV walls were assessed, namely, the basal, mid, and apical segments for the inferior and anterior septum, and the anterior, anterolateral, inferolateral and inferior wall. The peak negative systolic strain was recorded to assess segmental myocardial systolic function. PSS was defined as the segmental shortening in the diastole beyond the minimum systolic segment length (the peak negative strain in the diastole minus the peak negative strain in the systole). If the minimum segment length was within the systole, PSS was set to zero (Figure 1). The average values of PSS from all 18 segments were then calculated and were considered to be the global PSS.
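A minimal sketch of this per-segment PSS definition (assuming `strain` is a 1-D array of longitudinal strain values over one cardiac cycle, in %, and `avc_index` marks the frame of aortic valve closure; negative strain denotes shortening, as above):

```python
import numpy as np

def postsystolic_shortening(strain, avc_index):
    """PSS = peak negative strain in diastole minus peak negative strain in systole;
    set to zero if the overall minimum already occurs during systole."""
    systolic_peak = strain[:avc_index + 1].min()   # peak negative systolic strain
    diastolic_peak = strain[avc_index + 1:].min()  # peak negative strain after AVC
    if diastolic_peak >= systolic_peak:
        return 0.0  # minimum segment length reached within systole -> no PSS
    return float(diastolic_peak - systolic_peak)   # negative value, as reported

def global_pss(segment_strains, avc_index):
    """Average PSS over all 18 LV segments (the 'average value of PSS')."""
    return float(np.mean([postsystolic_shortening(s, avc_index)
                          for s in segment_strains]))
```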
The reproducibility of the PSS measurements was assessed by inter- and intra-observer agreement. Intra-observer agreement was determined by having one observer repeat the measurements of these variables in all 30 patients. Inter-observer agreement was determined by having a second observer measure these variables in the same patients. We used the Cohen kappa coefficient to determine inter- and intra-observer agreement for the presence of PSS in the different myocardial segments. Spearman's coefficient was used to assess inter- and intra-observer agreement for the average value of PSS in each subject.
Statistical analyses
The data are presented as the mean ± SD for the continuous variables or as percentages for the categorical variables. The clinical characteristics were compared using the t-test for the continuous variables and the chi-square test for the categorical variables. The correlations between changes in the number of myocardial segments with PSS and average values of PSS and the improvement of the E/Ea ratio at half a year after the PTSMA procedure were examined using Pearson's test. All probability values were for 2-tailed tests. A value of P < 0.05 was considered indicative of a statistically significant result. Data processing and statistical analyses were performed using SPSS 17.0 software (SPSS, Chicago, IL, USA).
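A minimal sketch of the statistical analyses described here and in the reproducibility assessment (group comparison, observer agreement and correlation), using SciPy and scikit-learn rather than SPSS; all variable names are placeholders:

```python
from scipy import stats
from sklearn.metrics import cohen_kappa_score

def compare_groups(hocm_values, control_values):
    """Two-sample t-test for a continuous echocardiographic variable."""
    return stats.ttest_ind(hocm_values, control_values)

def observer_agreement(pss_present_obs1, pss_present_obs2,
                       avg_pss_obs1, avg_pss_obs2):
    """Cohen's kappa for per-segment presence of PSS, Spearman's rho for average PSS."""
    kappa = cohen_kappa_score(pss_present_obs1, pss_present_obs2)
    rho, p = stats.spearmanr(avg_pss_obs1, avg_pss_obs2)
    return kappa, rho, p

def pss_vs_e_ea(delta_pss, delta_e_ea):
    """Pearson correlation between reductions in average PSS and in the E/Ea ratio."""
    return stats.pearsonr(delta_pss, delta_e_ea)
```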
Results
The basic clinical characteristics of the HOCM patients and healthy controls are summarised in Table 1. There were no significant differences between the two groups in terms of age, gender, height, weight, heart rate, systolic blood pressure or diastolic blood pressure. Most of the patients experienced severe symptoms of heart failure. The mean NYHA functional class was 2.7±0.3, and 73.3% of the patients were in the NYHA functional class III despite optimal medical therapy that consisted of beta-blockers in 23 (76.7%) patients and calcium-channel blockers in 7 (26.7%) patients. Because of serious side effects or contraindications, 3 patients received neither beta-blockers nor calcium-channel blockers.
The conventional echocardiographic parameters in HOCM patients before PTSMA and in the control group are shown in Table 2. According to disease characteristics, patients with HOCM exhibited higher LVEF, greater LAV, and thicker LV walls for both the ventricular septum and the posterior wall. The mean baseline resting LVOT gradient for the HOCM patients was 92.4±32.2 mmHg. All of the HOCM patients had abnormal diastolic function before the PTSMA procedure. The transmitral E-wave velocities, A-wave velocities and E/A ratios were similar between the two groups, whereas the DT and IVRT were significantly prolonged in patients with HOCM. The TDI parameters from the lateral mitral annulus showed that the early diastolic peak velocities were significantly lower in HOCM patients. Additionally, the E/Ea ratio in HOCM patients was more than twice as high as in healthy controls. Figure 2 shows the comparison of the number of myocardial segments having PSS and the average value of PSS between the two groups. As shown, patients with HOCM had a greater number of segments with PSS and greater PSS values than healthy controls (13.20±2.62 vs 5.74±2.05, P < 0.001; −1.55±0.57 vs −1.05±0.64, P = 0.012). In the HOCM patient group, the inter- and intra-observer agreement was very high for both the presence of PSS in the different myocardial segments (kappa = 0.911, P < 0.001; kappa = 0.831, P < 0.001) and the average value of PSS (r = 0.871, P < 0.001; r = 0.810, P < 0.001).
During the ablation procedure, the mean amount of alcohol that was injected was 2.37±0.90 ml. Right-bundle branch blocks occurred at a rate of 36.7% (11 patients). Transitory trifascicular blocks occurred at a rate of 23.3% (7 patients). No patient underwent a permanent pacemaker implantation following the procedure. There was no peri-interventional mortality during the observational period.
Changes in the conventional echocardiographic parameters in the patients with HOCM at half a year following the PTSMA procedure are shown in Table 2. Patients exhibited an obvious decrease in the LVOT gradient. The PTSMA produced a significant reduction in the interventricular septal thickness. The left ventricular posterior wall also decreased in size, but this change was not statistically significant. The left atrial volume indexed to body surface area was significantly reduced at half a year following PTSMA. There were no significant changes in the E-wave velocity, A-wave velocity, or the E/A ratio. However, the reductions in DT and IVRT were indicative of the amelioration of the LV diastolic dysfunction. Similar trends in the TDI parameters of the diastolic function were also observed. There was an obvious increase in the Ea velocities, resulting in a significant reduction in the E-to-Ea ratio.
As shown in Figures 3, 4 and 5, both the number of myocardial segments having PSS and the average value of PSS were significantly reduced (13.20±2.62 vs 10.53±2.83, P < 0.001; −1.55±0.57 vs −1.24±0.57, P = 0.009). In addition, a significant correlation between the reduction of the average value of PSS and the reduction of the E-to-Ea ratio was observed (r = 0.705, P < 0.001; Figure 5).
Discussion
Our study provides unprecedented data regarding the PSS parameters in HOCM patients before and half a year after a PTSMA procedure. We found that compared with healthy controls, both the number of myocardial segments having PSS and the average value of PSS were significantly increased in the HOCM patients. At half a year following PTSMA, both parameters of PSS were significantly improved. Moreover, the reduction of the average value of PSS in the patients was significantly correlated with the reduction of the E-to-Ea ratio.
In 1986, Bertha and colleagues observed myocardial segments shortening after the point of aortic valve closure into the early diastole using the sonomicrometry technique; they named the phenomenon "postsystolic shortening" [14]. Subsequent studies demonstrated that PSS is a nonspecific feature that occurs in both healthy hearts and in different heart diseases [2][3][4]. However, PSS is observed more frequently and with greater amplitude in ischemic myocardium than in normal myocardium [3]. Previous studies showed that PSS could identify ischemic myocardium and reflect the severity of myocardial ischemia [8][9][10][11]. Myocardial fibrosis is another major reason for increased PSS. Plaksej and colleagues observed that with the progression of NYHA functional classes in heart failure patients, the concentration of circulating markers of myocardial fibrosis increased and the PSS index also increased [15]. Tsai further found that in hypertension patients, increased serum procollagen type I carboxyterminal propeptide (PICP), which is considered a circulating marker of myocardial fibrosis, was correlated with increased PSS [16]. Currently, a precise explanation for the association between myocardial fibrosis and PSS is still lacking. However, previous studies found that myocardial fibrosis can hinder the unfolding of the left ventricle [17] and delay myocardial relaxation, which may partly contribute to the delayed myocardial shortening into the early diastole.
Hypertrophic cardiomyopathy (HCM) is a genetically transmitted myocardial disease characterised by varying degrees of myocardial hypertrophy. Microvascular dysfunction, characterised by a blunted vasodilator reserve in the absence of an epicardial coronary stenosis, is very common in patients with HCM [18][19]. It is mostly a result of the intimal and medial hyperplasia of the intramural coronary arteries and subsequent lumen reduction [20]. Extravascular compression following hypertrophy of the LV wall and the elevated LV end-diastolic pressure is another major reason [21]. Microvascular dysfunction, in turn, can lead to myocardial ischemia because of hypoperfusion in the corresponding area. Myocardial fibrosis is a prominent pathological feature of HCM. Myocardial fibrosis is closely related to the symptoms in HCM patients and is an independent marker of an unfavourable prognosis in this disease [22][23]. Thus, it is reasonable to consider whether PSS increases in patients with HCM. However, few studies have focused on PSS in HCM patients. In 2003, Stoylen described PSS in a patient with apical HCM in a case study [24]. In 2006, Ito and colleagues investigated PSS in 30 HCM patients and 30 healthy controls using strain imaging based on tissue Doppler [4]. They found that, compared with the healthy controls, PSS was noticeably more frequent and the postsystolic index used to assess the severity of PSS was higher in HCM patients. In our study, we assessed PSS in HCM patients using speckle tracking imaging, which is angle independent and less susceptible to signal noise compared with Doppler strain. Nonetheless, our results were similar to those of previous studies. We found that both the number of myocardial segments having PSS and the average value of PSS were significantly increased in the HCM patients.
Currently, PTSMA is considered to be an alternative therapy to surgical myomectomy for patients with HOCM that results in the immediate relief of a LVOT obstruction, the regression of LV hypertrophy and the sustained improvement in LV diastolic function [8,9,25]. Thus, it is reasonable to hypothesise that PTSMA can improve microvascular dysfunction by relieving the extravascular compression in HOCM patients. Recent evidence supports that PTSMA has a favourable effect on microvascular dysfunction. Soliman evaluated the intramyocardial flow dynamics with adenosine myocardial contrast echocardiography in healthy volunteers and 14 HOCM patients before and 6 months after PTSMA. Soliman found that 6 months after PTSMA, both the myocardial flow reserve and the septal hyperemic endo-to-epi myocardial blood flow ratio were significantly improved [21]. In another study [26], Timmer performed a 15O-water PET study to obtain the resting myocardial blood flow and the coronary vasodilator reserve in 15 HOCM patients before and half a year after the PTSMA. In that study, Timmer observed results similar to Soliman's and further showed that, with the improvement of microvascular dysfunction, myocardial energy was restored in the HOCM patients after PTSMA. Furthermore, some studies have suggested that PTSMA can also partly reverse myocardial fibrosis in HOCM patients [11,27,28]. Thus, it is reasonable to consider whether PTSMA can reduce PSS in HOCM patients. However, to our knowledge, there is no study that focuses on the effect of PTSMA on PSS in HOCM patients. Our study showed that PTSMA can significantly reduce PSS in patients with HOCM, which may result from improvements in microvascular dysfunction and myocardial fibrosis.
Impaired LV diastolic function is the most common pathophysiological feature of HCM and has been implicated as the primary determinant of symptoms related to heart failure in HCM patients [29][30]. PSS can delay myocardial relaxation, resulting in an increased LV filling pressure. Ito reported that the number of segments having PSS correlated significantly with the isovolumic relaxation time in patients with HCM [4]. Ito's results indicated that PSS might contribute to the impaired LV diastolic function in patients with HCM. Significant and sustained improvement in the LV diastolic function has been observed in the short and long term following PTSMA [25]. However, the mechanism of improvement in the LV diastolic function is still unclear. In this study, we found that the decrease in the average value of PSS correlated well with the reduction of the E-to-Ea ratio, which is widely used to estimate the LV filling pressure. Our results indicated that the reduction in PSS might partly account for the sustained improvement of the LV diastolic function in HOCM patients after a successful PTSMA.
This study had several limitations. In addition to the relatively small sample size and the retrospective study design, the invasive measurements of the LV diastolic function were not performed during the course of this study. Instead, we used the E-to-Ea ratio to reflect the LV filling pressure. However, Geske reported that there was only a modest correlation between the estimated LV filling pressure with the use of the E-to-Ea ratio and the directly measured pressure in HCM patients [31].
Conclusions
In conclusion, compared with the healthy controls, the number of segments having PSS and the average value of PSS were significantly increased in the HOCM patients prior to the septal ablation. PTSMA was found to have a favourable effect on PSS, which may partly account for the persistent improvement in the LV diastolic function in HOCM patients after PTSMA. A larger, prospective study with invasive catheters to evaluate the LV filling pressure is necessary to confirm our results.
|
v3-fos-license
|
2017-05-28T20:41:40.000Z
|
2016-11-29T00:00:00.000
|
208108675
|
{
"extfieldsofstudy": [
"Physics",
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-017-4852-3.pdf",
"pdf_hash": "01e88398b34ab2ccac8037ee892b437dd8af3035",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44379",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "5e40e49b35d5e27a61fdc91455ea93342519c1ee",
"year": 2017
}
|
pes2o/s2orc
|
Performance of the ATLAS trigger system in 2015
During 2015 the ATLAS experiment recorded \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$3.8\,{\mathrm{fb}}^{-1}$$\end{document}3.8fb-1 of proton–proton collision data at a centre-of-mass energy of \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$13\,{\mathrm{TeV}}$$\end{document}13TeV. The ATLAS trigger system is a crucial component of the experiment, responsible for selecting events of interest at a recording rate of approximately 1 kHz from up to 40 MHz of collisions. This paper presents a short overview of the changes to the trigger and data acquisition systems during the first long shutdown of the LHC and shows the performance of the trigger system and its components based on the 2015 proton–proton collision data.
Introduction
The trigger system is an essential component of any collider experiment as it is responsible for deciding whether or not to keep an event from a given bunch-crossing interaction for later study. During Run 1 (2009 to early 2013) of the Large Hadron Collider (LHC), the trigger system [1][2][3][4][5] of the ATLAS experiment [6] operated efficiently at instantaneous luminosities of up to 8 × 10 33 cm −2 s −1 and primarily at centre-of-mass energies, √ s, of 7 TeV and 8 TeV. In Run 2 (since 2015) the increased centre-of-mass energy of 13 TeV, higher luminosity and increased number of proton-proton interactions per bunch-crossing (pile-up) meant that, without upgrades of the trigger system, the trigger rates would have exceeded the maximum allowed rates when running with the trigger thresholds needed to satisfy the physics programme of the experiment. For this reason, the first long shutdown (LS1) between LHC Run 1 and Run 2 operations was used to improve the trigger system with almost no component left untouched.
After a brief introduction of the ATLAS detector in Sect. 2, Sect. 3 summarises the changes to the trigger and data acquisition during LS1. Section 4 gives an overview of the trigger menu used during 2015, followed by an introduction to the reconstruction algorithms used at the high-level trigger in Sect. 5. The performance of the different trigger signatures is shown in Sect. 6 for the data taken with 25 ns bunch-spacing in 2015 at a peak luminosity of 5 × 10 33 cm −2 s −1 with comparison to Monte Carlo (MC) simulation.
ATLAS detector
ATLAS is a general-purpose detector with a forward-backward symmetry, which provides almost full solid angle coverage around the interaction point.1 The main components of ATLAS are an inner detector (ID), which is surrounded by a superconducting solenoid providing a 2 T axial magnetic field, a calorimeter system, and a muon spectrometer (MS) in a magnetic field generated by three large superconducting toroids with eight coils each. The ID provides track reconstruction within |η| < 2.5, employing a pixel detector (Pixel) close to the beam pipe, a silicon microstrip detector (SCT) at intermediate radii, and a transition radiation tracker (TRT) at outer radii. A new innermost pixel-detector layer, the insertable B-layer (IBL), was added during LS1 at a radius of 33 mm around a new and thinner beam pipe [7]. The calorimeter system covers the region |η| < 4.9, the forward region (3.2 < |η| < 4.9) being instrumented with a liquid-argon (LAr) calorimeter for electromagnetic and hadronic measurements. In the central region, a lead/LAr electromagnetic calorimeter covers |η| < 3.2, while the hadronic calorimeter uses two different detector technologies, with steel/scintillator tiles (|η| < 1.7) or lead/LAr (1.5 < |η| < 3.2) as absorber/active material. The MS consists of one barrel (|η| < 1.05) and two end-cap sections (1.05 < |η| < 2.7). Resistive plate chambers (RPC, three doublet layers for |η| < 1.05) and thin gap chambers (TGC, one triplet layer followed by two doublets for 1.0 < |η| < 2.4) provide triggering capability as well as (η, φ) position measurements. A precise momentum measurement for muons with |η| up to 2.7 is provided by three layers of monitored drift tubes (MDT), with each chamber providing six to eight η measurements along the muon trajectory. For |η| > 2, the inner layer is instrumented with cathode strip chambers (CSC), consisting of four sensitive layers each, instead of MDTs.

1 ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the z-axis along the beam pipe. The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upward. Cylindrical coordinates (r, φ) are used in the transverse plane, φ being the azimuthal angle around the z-axis. The pseudorapidity is defined in terms of the polar angle θ as η = − ln tan(θ/2).
The Trigger and Data Acquisition (TDAQ) system shown in Fig. 1 consists of a hardware-based first-level trigger (L1) and a software-based high-level trigger (HLT). The L1 trigger decision is formed by the Central Trigger Processor (CTP), which receives inputs from the L1 calorimeter (L1Calo) and L1 muon (L1Muon) triggers as well as several other subsystems such as the Minimum Bias Trigger Scintillators (MBTS), the LUCID Cherenkov counter and the Zero-Degree Calorimeter (ZDC). The CTP is also responsible for applying preventive dead-time. It limits the minimum time between two consecutive L1 accepts (simple dead-time) to avoid overlapping readout windows, and restricts the number of L1 accepts allowed in a given number of bunch-crossings (complex dead-time) to prevent front-end buffers from overflowing. In 2015 running, the simple dead-time was set to 4 bunch-crossings (100 ns). A more detailed description of the L1 trigger system can be found in Ref. [1]. After the L1 trigger acceptance, the events are buffered in the Read-Out System (ROS) and processed by the HLT. The HLT receives Region-of-Interest (RoI) information from L1, which can be used for regional reconstruction in the trigger algorithms. After the events are accepted by the HLT, they are transferred to local storage at the experimental site and exported to the Tier-0 facility at CERN's computing centre for offline reconstruction.
Several Monte Carlo simulated datasets were used to assess the performance of the trigger. Fully simulated photon+jet and dijet events generated with Pythia8 [8] using the NNPDF2.3LO [9] parton distribution function (PDF) set were used to study the photon and jet triggers. To study tau and b-jet triggers, Z → τ τ and tt samples generated with Powheg-Box 2.0 [10][11][12] with the CT10 [13] PDF set and interfaced to Pythia8 or Pythia6 [14] with the CTEQ6L1 [15] PDF set were used.

Fig. 1 The ATLAS TDAQ system in Run 2 with emphasis on the components relevant for triggering. L1Topo and FTK were being commissioned during 2015 and not used for the results shown here.
Changes to the Trigger/DAQ system for Run 2
The TDAQ system used during Run 1 is described in detail in Refs. [1,16]. Compared to Run 1, the LHC has increased its centre-of-mass energy from 8 to 13 TeV, and the nominal bunch-spacing has decreased from 50 to 25 ns. Due to the larger transverse beam size at the interaction point (β * = 80 cm compared to 60 cm in 2012) and a lower bunch population (1.15 × 10 11 instead of 1.6 × 10 11 protons per bunch) the peak luminosity reached in 2015 (5.0 × 10 33 cm −2 s −1 ) was lower than in Run 1 (7.7 × 10 33 cm −2 s −1 ). However, due to the increase in energy, trigger rates are on average 2.0 to 2.5 times larger for the same luminosity and with the same trigger criteria (individual trigger rates, e.g. jets, can have even larger increases). The decrease in bunch-spacing also increases certain trigger rates (e.g. muons) due to additional interactions from neighbouring bunch-crossings (out-of-time pile-up). In order to prepare for the expected higher rates in Run 2, several upgrades and additions were implemented during LS1. The main changes relevant to the trigger system are briefly described below.
In the L1 Central Trigger, a new topological trigger (L1Topo) consisting of two FPGA-based (Field-Programmable Gate Array) processor modules was added. The modules are identical hardware-wise and each is programmed to perform selections based on geometric or kinematic association between trigger objects received from the L1Calo or L1Muon systems. This includes the refined calculation of global event quantities such as missing transverse momentum (with magnitude E miss T ). The system was fully installed and commissioned during 2016, i.e. it was not used for the data described in this paper. Details of the hardware implementation can be found in Ref. [17]. The Muon-to-CTP interface (MUCPTI) and the CTP were upgraded to provide inputs to and receive inputs from L1Topo, respectively. In order to better address sub-detector specific requirements, the CTP now supports up to four independent complex dead-time settings operating simultaneously. In addition, the number of L1 trigger selections (512) and bunch-group selections (16), defined later, were doubled compared to Run 1. The changes to the L1Calo and L1Muon trigger systems are described in separate sections below.
In Run 1 the HLT consisted of separate Level-2 (L2) and Event Filter (EF) farms. While L2 requested partial event data over the network, the EF operated on full event information assembled by separate farm nodes dedicated to Event Building (EB). For Run 2, the L2 and EF farms were merged into a single homogeneous farm allowing better resource sharing and an overall simplification of both the hardware and software. RoI-based reconstruction continues to be employed by time-critical algorithms. The functionality of the EB nodes was also integrated into the HLT farm. To achieve higher readout and output rates, the ROS, the data collection network and data storage system were upgraded. The on-detector front-end (FE) electronics and detector-specific readout drivers (ROD) were not changed in any significant way.
A new Fast TracKer (FTK) system [18] will provide global ID track reconstruction at the L1 trigger rate using lookup tables stored in custom associative memory chips for the pattern recognition. Instead of a computationally intensive helix fit, the FPGA-based track fitter performs a fast linear fit and the tracks are made available to the HLT. This system will allow the use of tracks at much higher event rates in the HLT than is currently affordable using CPU systems. This system is currently being installed and expected to be fully commissioned during 2017.
Level-1 calorimeter trigger
The details of the L1Calo trigger algorithms can be found in Ref. [19], and only the basic elements are described here. The electron/photon and tau trigger algorithm (Fig. 2) identifies an RoI as a 2 × 2 trigger tower cluster in the electromagnetic calorimeter for which the sum of the transverse energy from at least one of the four possible pairs of nearest neighbour towers (1 × 2 or 2 × 1) exceeds a predefined threshold. Isolation-veto thresholds can be set for the electromagnetic (EM) isolation ring in the electromagnetic calorimeter, as well as for hadronic tower sums in a central 2×2 core behind the EM cluster and in the 12-tower hadronic ring around it. The E T threshold can be set differently for different η regions at a granularity of 0.1 in η in order to correct for varying detector energy responses. The energy of the trigger towers is calibrated at the electromagnetic energy scale (EM scale). The EM scale correctly reconstructs the energy deposited by particles in an electromagnetic shower in the calorimeter but underestimates the energy deposited by hadrons. Jet RoIs are defined as 4 × 4 or 8 × 8 trigger tower windows for which the summed electromagnetic and hadronic transverse energy exceeds predefined thresholds and which surround a 2 × 2 trigger tower core that is a local maximum. The location of this local maximum also defines the coordinates of the jet RoI.
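A schematic illustration of the 2 × 2 cluster condition described above (a toy numpy sketch, not the L1Calo firmware; `em` is the 2 × 2 array of EM trigger-tower transverse energies at the EM scale, and the isolation sums are passed in as precomputed values):

```python
import numpy as np

def em_tau_roi_passes(em, threshold,
                      em_ring=0.0, had_core=0.0, had_ring=0.0,
                      em_iso_max=None, had_core_max=None, had_ring_max=None):
    """True if at least one 1x2 / 2x1 pair of nearest-neighbour towers in the
    2x2 cluster exceeds `threshold`, and no isolation veto fires."""
    em = np.asarray(em, dtype=float)
    pair_sums = [em[0, 0] + em[0, 1], em[1, 0] + em[1, 1],   # horizontal pairs
                 em[0, 0] + em[1, 0], em[0, 1] + em[1, 1]]   # vertical pairs
    if max(pair_sums) <= threshold:
        return False
    # Optional isolation vetoes (EM ring, hadronic core, hadronic ring).
    for value, cut in ((em_ring, em_iso_max),
                       (had_core, had_core_max),
                       (had_ring, had_ring_max)):
        if cut is not None and value > cut:
            return False
    return True
```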
In preparation for Run 2, due to the expected increase in luminosity and consequent increase in the number of pile-up events, a major upgrade of several central components of the L1Calo electronics was undertaken to reduce the trigger rates.
For the preprocessor system [20], which digitises and calibrates the analogue signals (consisting of ∼7000 trigger towers at a granularity of 0.1 × 0.1 in η × φ) from the calorimeter detectors, a new FPGA-based multi-chip module (nMCM) was developed [21] and about 3000 chips (including spares) were produced. They replace the old ASIC-based MCMs used during Run 1. The new modules provide additional flexibility and new functionality with respect to the old system. In particular, the nMCMs support the use of digital autocorrelation Finite Impulse Response (FIR) filters and the implementation of a dynamic, bunch-by-bunch pedestal correction, both introduced for Run 2. These improvements lead to a significant rate reduction of the L1 jet and L1 E miss T triggers. The bunch-by-bunch pedestal subtraction compensates for the increased trigger rates at the beginning of a bunch train caused by the interplay of in-time and out-of-time pile-up coupled with the LAr pulse shape [22], and linearises the L1 trigger rate as a function of the instantaneous luminosity, as shown in Fig. 3 for the L1 E miss T trigger. The autocorrelation FIR filters substantially improve the bunch-crossing identification (BCID) efficiencies, in particular for low energy deposits. However, the use of this new filtering scheme initially led to an early trigger signal (and incomplete events) for a small fraction of very high energy events. These events were saved into a stream dedicated to mistimed events and treated separately in the relevant physics analyses. The source of the problem was fixed in firmware by adapting the BCID decision logic for saturated pulses and was deployed at the start of the 2016 data-taking period.
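A purely conceptual sketch of the two preprocessor features mentioned above, FIR filtering of the digitised tower samples and a per-bunch pedestal subtraction (plain numpy; the actual nMCM firmware, filter coefficients and pedestal determination are not reproduced here):

```python
import numpy as np

def filter_and_correct(adc_samples, fir_coefficients, pedestal_per_bcid, bcid):
    """Apply a FIR filter to the ADC samples of one trigger tower and subtract
    the dynamically determined pedestal for this bunch-crossing (illustrative only)."""
    filtered = np.convolve(adc_samples, fir_coefficients, mode="same")
    return filtered - pedestal_per_bcid[bcid]
```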
Fig. 3 The per-bunch trigger rate for the L1 missing transverse momentum trigger with a threshold of 50 GeV (L1_XE50) as a function of the instantaneous luminosity per bunch. The rates are shown with and without pedestal correction applied.

The preprocessor outputs are then transmitted to both the Cluster Processor (CP) and Jet/Energy-sum Processor (JEP) subsystems in parallel. The CP subsystem identifies electron/photon and tau lepton candidates with E T above a programmable threshold and satisfying, if required, certain isolation criteria. The JEP receives jet trigger elements, which are 0.2 × 0.2 sums in η × φ, and uses these to identify jets and to produce global sums of scalar and missing transverse momentum. Both the CP and JEP firmware were upgraded to allow an increase of the data transmission rate over the custom-made backplanes from 40 to 160 Mbps, allowing the transmission of up to four jet or five EM/tau trigger objects per module. A trigger object contains the E T sum, η − φ coordinates, and isolation thresholds where relevant. While the JEP firmware changes were only minor, substantial extra selectivity was added to the CP by implementing energy-dependent L1 electromagnetic isolation criteria instead of fixed threshold cuts. This feature was added to the trigger menu (defined in Sect. 4) at the beginning of Run 2. In 2015 it was used to effectively select events with specific signatures, e.g. EM isolation was required for taus but not for electrons.
Finally, new extended cluster merger modules (CMX) were developed to replace the L1Calo merger modules (CMMs) used during Run 1. The new CMX modules transmit the location and the energy of identified trigger objects to the new L1Topo modules instead of only the threshold multiplicities as done by the CMMs. This transmission happens with a bandwidth of 6.4 Gbps per channel, while the total output bandwidth amounts to above 2 Tbps. Moreover, for most L1 triggers, twice as many trigger selections and isolation thresholds can be processed with the new CMX modules compared to Run 1, considerably increasing the selectivity of the L1Calo system.
Level-1 muon trigger
The muon barrel trigger was not significantly changed with respect to Run 1, apart from the regions close to the feet that support the ATLAS detector, where the presence of support structures reduces trigger coverage. To recover trigger acceptance, a fourth layer of RPC trigger chambers was installed before Run 1 in the projective region of the acceptance holes. These chambers were not operational during Run 1. During LS1, these RPC layers were equipped with trigger electronics. Commissioning started during 2015 and they are fully operational in 2016. Additional chambers were installed during LS1 to cover the acceptance holes corresponding to two elevator shafts at the bottom of the muon spectrometer but are not yet operational. At the end of the commissioning phase, the new feet and elevator chambers are expected to increase the overall barrel trigger acceptance by 2.8 and 0.8% points, respectively. During Run 1, a significant fraction of the trigger rate from the end-cap region was found to be due to particles not originating from the interaction point, as illustrated in Fig. 4. To reject these interactions, new trigger logic was introduced in Run 2. An additional TGC coincidence requirement was deployed in 2015 covering the region 1.3 < |η| < 1.9 (TGC-FI). Further coincidence logic in the region 1.0 < |η| < 1.3 is being commissioned by requiring coincidence with the inner TGC chambers (EIL4) or the Tile hadronic calorimeter. Figure 5a shows the muon trigger rate as a function of the muon trigger pseudorapidity with and without the TGC-FI coincidence in separate data-taking runs. The asymmetry as a function of η is a result of the magnetic field direction and the background particles being mostly positively charged. In the region where this additional coincidence is applied, the trigger rate is reduced by up to 60%, while only about 2% of offline reconstructed muons are lost in this region, as seen in Fig. 5b.
Trigger menu
The trigger menu defines the list of L1 and HLT triggers and consists of:

• primary triggers, which are used for physics analyses and are typically unprescaled;
• support triggers, which are used for efficiency and performance measurements or for monitoring, and are typically operated at a small rate (of the order of 0.5 Hz each) using prescale factors;
• alternative triggers, using alternative (sometimes experimental or new) reconstruction algorithms compared to the primary or support selections, and often heavily overlapping with the primary triggers;
• backup triggers, with tighter selections and lower expected rate;
• calibration triggers, which are used for detector calibration and are often operated at high rate but storing very small events with only the relevant information needed for calibration.
The primary triggers cover all signatures relevant to the ATLAS physics programme including electrons, photons, muons, tau leptons, (b-)jets and E miss T which are used for Standard Model (SM) precision measurements including decays of the Higgs, W and Z bosons, and searches for physics beyond the SM such as heavy particles, supersymmetry or exotic particles. A set of low transverse momentum ( p T ) dimuon triggers is used to collect B-meson decays, which are essential for the B-physics programme of ATLAS.
The trigger menu composition and trigger thresholds are optimised for several luminosity ranges in order to maximise the physics output of the experiment and to fit within the rate and bandwidth constraints of the ATLAS detector, TDAQ system and offline computing. For Run 2 the most relevant constraints are the maximum L1 rate of 100 kHz (75 kHz in Run 1) defined by the ATLAS detector readout capability and an average HLT physics output rate of 1000 Hz (400 Hz in Run 1) defined by the offline computing model. To ensure an optimal trigger menu within the rate constraints for a given LHC luminosity, prescale factors can be applied to L1 and HLT triggers and changed during data-taking in such a way that triggers may be disabled or only a certain fraction of events may be accepted by them. Support triggers may run at a constant rate, or certain triggers may be enabled later in the LHC fill when the luminosity and pile-up have decreased and the required resources are available. Further flexibility is provided by bunch groups, which allow triggers to include specific requirements on the LHC proton bunches colliding in ATLAS. These requirements include paired (colliding) bunch-crossings for physics triggers, empty or unpaired crossings for background studies or searches for long-lived particle decays, and dedicated bunch groups for detector calibration.
Trigger names used throughout this paper consist of the trigger level (L1 or HLT, the latter often omitted for brevity), multiplicity, particle type (e.g. g for photon, j for jet, xe for E miss T , te for E T triggers) and p T threshold value in GeV (e.g. L1_2MU4 requires at least two muons with p T > 4 GeV at L1, HLT_mu40 requires at least one muon with p T > 40 GeV at the HLT). L1 and HLT trigger items are written in upper case and lower case letters, respectively. Each HLT trigger is configured with an L1 trigger as its seed. The L1 seed is not explicitly part of the trigger name except when an HLT trigger is seeded by more than one L1 trigger, in which case the L1 seed is denoted in the suffix of the alternative trigger (e.g. HLT_mu20 and HLT_mu20_L1MU15 with the first one using L1_MU20 as its seed). Further selection criteria (type of identification, isolation, reconstruction algorithm, geometrical region) are suffixed to the trigger name (e.g. HLT_g120_loose).
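As an illustration of this naming convention, a toy parser for the simple single-object names quoted above (real menu names can be more elaborate, e.g. multi-object combinations, so this regular expression is only a sketch):

```python
import re

# level _ [multiplicity] object-type threshold [_ qualifiers]
TRIGGER_RE = re.compile(r"^(L1|HLT)_(\d*)([A-Za-z]+?)(\d+)(?:_(.+))?$")

def parse_trigger(name):
    m = TRIGGER_RE.match(name)
    if m is None:
        raise ValueError(f"unrecognised trigger name: {name}")
    level, mult, obj, threshold, suffix = m.groups()
    return {"level": level,
            "multiplicity": int(mult) if mult else 1,
            "object": obj,                # e.g. MU, mu, g, j, xe, te
            "threshold_GeV": int(threshold),
            "qualifiers": suffix}         # e.g. 'loose', or an explicit L1 seed

# Examples from the text:
# parse_trigger("L1_2MU4")        -> 2 muons with pT > 4 GeV at L1
# parse_trigger("HLT_mu40")       -> 1 muon with pT > 40 GeV at the HLT
# parse_trigger("HLT_g120_loose") -> qualifiers == 'loose'
```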
Physics trigger menu for 2015 data-taking
The main goal of the trigger menu design was to maintain the unprescaled single-electron and single-muon trigger p T thresholds around 25 GeV despite the expected higher trigger rates in Run 2 (see Sect. 3). This strategy ensures the collection of the majority of the events with leptonic W and Z boson decays, which are the main source of events for the study of electroweak processes. In addition, compared to using a large number of analysis-specific triggers, this trigger strategy is simpler and more robust at the cost of slightly higher trigger output rates. Dedicated (multi-object) triggers were added for specific analyses not covered by the above. Table 1 shows a comparison of selected primary trigger thresholds for L1 and the HLT used during Run 1 and 2015, together with the typical thresholds for offline reconstructed objects used in analyses (the latter are usually defined as the p T value at which the trigger efficiency reaches the plateau). Trigger thresholds at L1 were either kept the same as during Run 1 or slightly increased to fit within the allowed maximum L1 rate of 100 kHz. At the HLT, several selections were loosened compared to Run 1 or thresholds lowered thanks to the use of more sophisticated HLT algorithms (e.g. multivariate analysis techniques for electrons and taus).

Figure 6a, b show the L1 and HLT trigger rates grouped by signatures during an LHC fill with a peak luminosity of 4.5 × 10 33 cm −2 s −1 . The preventive dead-time limits the rate of L1 accepts; the four complex dead-time settings used were 15/370, 42/381, 9/351 and 7/350, where the first number specifies the number of triggers and the second number specifies the number of bunch-crossings, e.g. 7 triggers in 350 bunch-crossings. The single-electron and single-muon triggers contribute a large fraction to the total rate. While running at these relatively low luminosities it was possible to dedicate a large fraction of the bandwidth to the B-physics triggers. Support triggers contribute about 20% of the total rate. Since the time for trigger commissioning in 2015 was limited due to the fast rise of the LHC luminosity (compared to Run 1), several backup triggers, which contribute additional rate, were implemented in the menu in addition to the primary physics triggers. This is the case for electron, b-jet and E miss T triggers, which are discussed in later sections of the paper.

Fig. 6 caption (fragment): ... and tau groups. The rate increase around luminosity block 400 is due to the removal of prescaling of the B-physics triggers. The combined group includes multiple triggers combining different trigger signatures such as electrons with muons, taus, jets or E miss T .
Event streaming
Events accepted by the HLT are written into separate data streams. Events for physics analyses are sent to a single Main stream, replacing the three separate physics streams (Egamma, Muons, JetTauEtMiss) used in Run 1. This change reduces event duplication, thus reducing the storage and CPU resources required for reconstruction by roughly 10%. A small fraction of these events, at a rate of 10 to 20 Hz, are also written to an Express stream that is reconstructed promptly offline and used to provide calibration and data quality information prior to the reconstruction of the full Main stream, which typically happens 36 h after the data are taken. In addition, there are about twenty additional streams for calibration, monitoring and detector performance studies. To reduce event size, some of these streams use partial event building (partial EB), which writes only a predefined subset of the ATLAS detector data per event. For Run 2, events that contain only HLT reconstructed objects, but no ATLAS detector data, can be recorded to a new type of stream. These events are of very small size, allowing recording at high rate. These streams are used for calibration purposes and Trigger-Level Analysis as described in Sect. 6.4.4. Figure 7 shows typical HLT stream rates and bandwidth during an LHC fill.
Fig. 7 a HLT stream rates and b bandwidth during an LHC fill in October 2015 with a peak luminosity of 4.5 × 10 33 cm −2 s −1 . Partial Event Building (partial EB) streams only store relevant subdetector data and thus have smaller event sizes. The other physics-related streams contain events with special readout settings and are used to overlay with MC events to simulate pile-up.

Events that cannot be properly processed at the HLT or have other DAQ-related problems are written to dedicated debug streams. These events are reprocessed offline with the same HLT configuration as used during data-taking and accepted events are stored into separate data sets for use in physics analyses. In 2015, approximately 339,000 events were written to debug streams. The majority of them (∼90%) are due to online processing timeouts that occur when the event cannot be processed within 2-3 min. Long processing times are mainly due to muon algorithms processing events with a large number of tracks in the muon spectrometer (e.g. due to jets not contained in the calorimeter). During the debug stream reprocessing, 330,000 events were successfully processed by the HLT, of which about 85% were accepted. The remaining 9000 events could not be processed due to data integrity issues.
HLT processing time
The HLT processing time per event is mainly determined by the trigger menu and the number of pile-up interactions. The HLT farm CPU utilisation depends on the L1 trigger rate and the average HLT processing time. Figure 8 shows (a) the HLT processing time distribution for the highest luminosity run in 2015 with a peak luminosity of 5.2 × 10 33 cm −2 s −1 and (b) the average HLT processing time as a function of the instantaneous luminosity. At the highest luminosity point the average event processing time was approximately 235 ms. An L1 rate of 80 kHz corresponds to an average utilisation of 67% of a farm with 28,000 available CPU cores. About 40, 35 and 15% of the processing time are spent on inner detector tracking, muon spectrometer reconstruction and calorimeter reconstruction, respectively. The muon reconstruction time is dominated by the large rate of low-p T B-physics triggers. The increased processing time at low luminosities observed in Fig. 8b is due to additional triggers being enabled towards the end of an LHC fill to take advantage of the available CPU and bandwidth resources. Moreover, trigger prescale changes are made throughout the run, giving rise to some of the observed features in the curve. The clearly visible scaling with luminosity is due to the pile-up dependence of the processing time. It is also worth noting that the processing time cannot naively be scaled to higher luminosities, as the trigger menu changes significantly in order to keep the L1 rate below or at 100 kHz.
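The quoted farm utilisation follows directly from the L1 input rate, the mean processing time and the core count; a one-line check using only the numbers given in the text:

```python
l1_rate_hz = 80e3          # L1 accept rate
mean_time_s = 0.235        # average HLT processing time per event
cpu_cores = 28_000         # available HLT farm cores

utilisation = l1_rate_hz * mean_time_s / cpu_cores
print(f"{utilisation:.0%}")  # ~67%, matching the value quoted above
```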
Trigger menu for special data-taking conditions
Special trigger menus are used for particular data-taking conditions; they are needed either to collect dedicated event samples for specific measurements or to cope with specific LHC bunch configurations. In the following, three examples of dedicated menus are given: a menu for a low number of bunches in the LHC, a menu for collecting enhanced minimum-bias data for trigger rate predictions, and a menu used during beam-separation scans for luminosity calibration (van der Meer scans).
When the LHC contains a low number of bunches (and thus few bunch trains), care is needed not to trigger at resonant frequencies that could damage the wire bonds of the IBL or SCT detectors, which reside in the magnetic field. The dangerous resonant frequencies are between 9 and 25 kHz for the IBL and above 100 kHz for the SCT detector. To avoid this risk, both detectors have implemented in their readout firmware a so-called fixed-frequency veto that prevents triggers falling within a dangerous frequency range [23]. The IBL veto poses the most stringent limit on the acceptable L1 rate in this LHC configuration. In order to provide trigger menus appropriate to each LHC configuration during the start-up phase, the trigger rate was estimated after simulating the effect of the IBL veto. Figure 9 shows the simulated IBL rate limit for two different bunch configurations and the expected L1 trigger rate of the nominal physics trigger menu. At a low number of bunches the expected L1 trigger rate slightly exceeds the allowed L1 rate imposed by the IBL veto. In order not to veto important physics triggers, the required rate reduction was achieved by reducing the rate of supporting triggers.
Fig. 9 Simulated limits on the L1 trigger rate due to the IBL fixed-frequency veto for two different filling schemes and the expected maximum L1 rate from rate predictions. The steps in the latter indicate a change in the prescale strategy. The simulated rate limit is confirmed with experimental tests. The rate limit is higher for the 72-bunch train configuration since the bunches are more equally spread across the LHC ring. The rate limitation was only crucial for the low-luminosity phase, where the required physics L1 rate was higher than the limit imposed by the IBL veto. The maximum number of colliding bunches in 2015 was 2232.

Certain applications such as trigger algorithm development, rate predictions and validation require a data set that is minimally biased by the triggers used to select it. This special data set is collected using the enhanced minimum-bias trigger menu, which consists of all primary lowest-pT L1 triggers with increasing pT thresholds and a random trigger for very high cross-section processes. This trigger menu can be enabled in addition to the regular physics menu and records events at 300 Hz for a period of approximately one hour to obtain a data set of around one million events. Since the correlations between triggers are preserved, per-event weights can be calculated and used to convert the sample into a zero-bias sample, which is used for trigger rate predictions during the development of new triggers [24]. This approach requires a much smaller total number of events than a true zero-bias data set.

During van der Meer scans [25], which are performed by the LHC to allow the experiments to calibrate their luminosity measurements, a dedicated trigger menu is used. ATLAS uses several luminosity algorithms (see Ref. [26]), amongst which one relies on counting tracks in the ID. Since the different LHC bunches do not have exactly the same proton density, it is beneficial to sample a few bunches at the maximum possible rate. For this purpose, a minimum-bias trigger selects events for specific LHC bunches and uses partial event building to read out only the ID data at about 5 kHz for five different LHC bunches.
High-level trigger reconstruction
After L1 trigger acceptance, the events are processed by the HLT using finer-granularity calorimeter information, precision measurements from the MS and tracking information from the ID, which are not available at L1. As needed, the HLT reconstruction can either be executed within RoIs identified at L1 or for the full detector. In both cases the data is retrieved on demand from the readout system. As in Run 1, in order to reduce the processing time, most HLT triggers use a two-stage approach with a fast first-pass reconstruction to reject the majority of events and a slower precision reconstruction for the remaining events. However, with the merging of the previously separate L2 and EF farms, there is no longer a fixed bandwidth or rate limitation between the two steps. The following sections describe the main reconstruction algorithms used in the HLT for inner detector, calorimeter and muon reconstruction.
Inner detector tracking
For Run 1 the ID tracking in the trigger consisted of custom tracking algorithms at L2 and offline tracking algorithms adapted for running in the EF. The ID trigger was redesigned for Run 2 to take advantage of the merged HLT and include information from the IBL. The latter significantly improves the tracking performance and in particular the impact parameter resolution [7]. In addition, provision was made for the inclusion of FTK tracks once that system becomes available later in Run 2.
Inner detector tracking algorithms
The tracking trigger is subdivided into fast tracking and precision tracking stages. The fast tracking consists of trigger-specific pattern recognition algorithms very similar to those used at L2 during Run 1, whereas the precision stage relies heavily on offline tracking algorithms. Despite the similar naming, the fast tracking described here is not related to the FTK hardware tracking that will only become available during 2017. The tracking algorithms are typically configured to run within an RoI identified by L1. The offline tracking was reimplemented in LS1 to run three times faster than in Run 1, making it more suitable for use in the HLT. To reduce CPU usage even further, the offline track-finding is seeded by tracks and space-points identified by the fast tracking stage.
Inner detector tracking performance
The tracking efficiency with respect to offline tracks has been determined for electrons and muons. The reconstructed tracks are required to have at least two (six) pixel (SCT) clusters and to lie in the region |η| < 2.5. The closest trigger track within a cone of size ΔR = √((Δη)² + (Δφ)²) = 0.05 around the offline reconstructed track is selected as the matching trigger track. Figure 10 shows the tracking efficiency for the 24 GeV medium electron trigger (see Sect. 6.2) as a function of the η and pT of the offline track. The tracking efficiency is measured with respect to offline tracks with pT > 20 GeV for tight offline electron candidates from the 24 GeV electron support trigger, which does not use the trigger tracks in the selection but is otherwise identical to the physics trigger. The efficiencies of the fast track finder and precision tracking exceed 99% for all pseudorapidities. There is a small efficiency loss at low pT due to bremsstrahlung energy loss by electrons. Figure 11a shows the tracking performance of the ID trigger for muons with respect to loose offline muon candidates with pT > 6 GeV selected by the 6 GeV muon support trigger, as a function of the offline muon transverse momentum. The efficiency is significantly better than 99% for all pT for both the fast and precision tracking. Shown in Fig. 11b is the resolution of the transverse track impact parameter with respect to offline as a function of the offline muon pT. The resolution of the fast (precision) tracking is better than 17 µm (15 µm) for muon candidates with offline pT > 20 GeV.
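The matching described above reduces to selecting, for each offline track, the closest trigger track within ΔR = 0.05. The sketch below illustrates this; the track representation (plain dictionaries with eta and phi) and the helper names are illustrative assumptions, not ATLAS software.

```python
import math


def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance dR = sqrt((d_eta)^2 + (d_phi)^2), with d_phi wrapped into [-pi, pi]."""
    deta = eta1 - eta2
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(deta, dphi)


def match_trigger_track(offline_track, trigger_tracks, dr_max=0.05):
    """Return the closest trigger track within dr_max of the offline track, or None."""
    best, best_dr = None, dr_max
    for trk in trigger_tracks:
        dr = delta_r(offline_track["eta"], offline_track["phi"], trk["eta"], trk["phi"])
        if dr <= best_dr:
            best, best_dr = trk, dr
    return best


# Example with made-up values: one offline track and two HLT candidates.
offline = {"eta": 0.52, "phi": 1.10}
trigger = [{"eta": 0.55, "phi": 1.12}, {"eta": 0.90, "phi": -2.00}]
print(match_trigger_track(offline, trigger))
```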
Multiple stage tracking
For the hadronic tau and b-jet triggers, tracking is run in a larger RoI than for electrons or muons. To limit CPU usage, multiple stage track reconstruction was implemented.
A two-stage processing approach was implemented for the hadronic tau trigger. First, the leading track and its position along the beam line are determined by executing fast tracking in an RoI that is fully extended along the beam line. Figure 13 shows the performance of the tau two-stage tracking with respect to the offline tau tracking for tracks with pT > 1 GeV originating from decays of offline tau lepton candidates with pT > 25 GeV, but with very loose track matching in ΔR to the offline tau candidate. Figure 13a shows the efficiency of the fast tracking from the first and second stages, together with the efficiency of the precision tracking for the second stage. The second-stage tracking efficiency is higher than 96% everywhere, and improves to better than 99% for tracks with pT > 2 GeV. The efficiency of the first-stage fast tracking has a slower turn-on, rising from 94% at 2 GeV to better than 99% for pT > 5 GeV. This slow turn-on arises from the narrow width (Δφ < 0.1) of the first-stage RoI and the loose tau selection, which result in a larger fraction of low-pT tracks from tau candidates bending out of the RoI (and thus not being reconstructed) than for a wider RoI. The transverse impact parameter resolution with respect to offline for loosely matched tracks is shown in Fig. 13b and is around 20 µm for tracks with pT > 10 GeV reconstructed by the precision tracking. The tau selection algorithms based on this two-stage tracking are presented in Sect. 6.5.1.
For b-jet tracking a similar multi-stage tracking strategy was adopted. However, in this case the first-stage vertex tracking takes all jets identified by the jet trigger with ET > 30 GeV and reconstructs tracks with the fast track finder in a narrow region in η and φ around the jet axis for each jet, but with |z| < 225 mm along the beam line. Following this step, the primary vertex reconstruction [27] is performed using the tracks from the fast tracking stage. This vertex is used to define wider RoIs around the jet axes, with |Δη| < 0.4 and |Δφ| < 0.4 but with |Δz| < 20 mm relative to the primary vertex z position. These RoIs are then used for the second-stage reconstruction, which runs the fast track finder in the wider η and φ regions followed by the precision tracking, secondary vertexing and b-tagging algorithms. The performance of the primary vertexing in the b-jet vertex tracking can be seen in Fig. 14a, which shows the vertex-finding efficiency with respect to offline vertices in jet events with at least one jet with transverse energy above 55, 110, or 260 GeV and with no additional b-tagging requirement. The efficiency is shown as a function of the number of offline tracks with pT > 1 GeV that lie within the boundary of the wider RoI (defined above) from the selected jets. The efficiency rises sharply and is above 90% for vertices with three or more tracks, and rises to more than 99.5% for vertices with five or more tracks. The resolution in z with respect to the offline z position, as shown in Fig. 14b, is better than 100 µm for vertices with two or more offline tracks and improves to 60 µm for vertices with ten or more offline tracks.
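The two-stage RoI definition for the b-jet tracking can be summarised as: a first pass that is narrow in η and φ but wide in z around each jet axis, a primary-vertex fit on the resulting tracks, and then RoIs that are wider in η and φ but narrow in z around that vertex. The sketch below only illustrates the RoI bookkeeping; the data structures, the η-φ half-widths of the first stage and the placeholder vertex value are assumptions, not the actual trigger code.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class RoI:
    eta: float
    phi: float
    half_width_eta: float
    half_width_phi: float
    z_centre: float
    half_width_z: float


def first_stage_rois(jets: List[dict], et_min: float = 30.0) -> List[RoI]:
    """Narrow in eta/phi around each jet axis, wide (|z| < 225 mm) along the beam line.
    The 0.2 half-widths are illustrative; the text only says 'a narrow region in eta and phi'."""
    return [RoI(j["eta"], j["phi"], 0.2, 0.2, 0.0, 225.0)
            for j in jets if j["et"] > et_min]


def second_stage_rois(jets: List[dict], z_vertex: float, et_min: float = 30.0) -> List[RoI]:
    """Wider in eta/phi (|deta|, |dphi| < 0.4) but narrow in z (|dz| < 20 mm) around the vertex."""
    return [RoI(j["eta"], j["phi"], 0.4, 0.4, z_vertex, 20.0)
            for j in jets if j["et"] > et_min]


# Illustrative usage: the fast tracking and vertex fit are stand-ins for the real algorithms.
jets = [{"eta": 0.4, "phi": 1.2, "et": 55.0}, {"eta": -1.1, "phi": -2.6, "et": 80.0}]
stage1 = first_stage_rois(jets)
z_pv = 12.3            # would come from the primary-vertex fit on the stage-1 tracks
stage2 = second_stage_rois(jets, z_pv)
print(len(stage1), len(stage2))
```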
Inner detector tracking timing
The timing of the fast tracking and precision tracking stages of the electron trigger executed per RoI can be seen in Fig. 15 for events passing the 24 GeV electron trigger. The fast tracking takes on average 6.2 ms per RoI with a tail at the per-mille level at around 60 ms. The precision tracking execution time has a mean of 2.5 ms and a tail at the per-mille level of around 20 ms. The precision tracking is seeded by the tracks found in the fast tracking stage and hence requires less CPU time.
The time taken by the tau tracking in both the single-stage and two-stage variants is shown in Fig. 16. Figure 16a shows the processing times per RoI for the fast tracking stages: individually for the first and second stages of the two-stage tracking, and separately for the single-stage tracking with the wider RoI in η, φ and z. The fast tracking in the single-stage tracking has a mean execution time of approximately 66 ms, with a very long tail. In contrast, the first-stage tracking with an RoI that is wide only in the z direction has a mean execution time of 23 ms, driven predominantly by the narrower RoI width in φ. The second-stage tracking, although wider in η and φ, takes only 21 ms on average because of the significant reduction in the RoI z-width along the beam line. Figure 16b shows a comparison of the processing time per RoI for the precision tracking. The two-stage tracking executes faster, with a mean of 4.8 ms compared to 12 ms for the single-stage tracking. Again, this is due to the reduction in the number of tracks to be processed from the tighter selection in z along the beam line.

Fig. 16 The ID trigger tau tracking processing time for (a) the fast track finder and (b) the precision tracking, comparing the single-stage and two-stage tracking approaches.
Calorimeter reconstruction
A series of reconstruction algorithms are used to convert signals from the calorimeter readout into objects, specifically cells and clusters, that then serve as input to the reconstruction of electron, photon, tau, and jet candidates and the reconstruction of E miss T . These cells and clusters are also used in the determination of the shower shapes and the isolation properties of candidate particles (including muons), both of which are later used as discriminants for particle identification and the rejection of backgrounds. The reconstruction algorithms used in the HLT have access to full detector granularity and thus allow improved accuracy and precision in energy and position measurements with respect to L1.
Calorimeter algorithms
The first stage in the reconstruction involves unpacking the data from the calorimeter. The unpacking can be done in two different ways: either by unpacking only the data from within the RoIs identified at L1 or by unpacking the data from the full calorimeter. The RoI-based approach is used for well-separated objects (e.g. electron, photon, muon, tau), whereas the full calorimeter reconstruction is used for jets and global event quantities (e.g. ET^miss). In both cases the raw unpacked data is then converted into a collection of cells. Two different clustering algorithms are used to reconstruct the clusters of energy deposited in the calorimeter, the sliding-window and the topo-clustering algorithms [28]. While the latter provides performance closer to the offline reconstruction, it is also significantly slower (see Sect. 5.2.3).
The sliding-window algorithm operates on a grid in which the cells are divided into projective towers. The algorithm scans this grid and positions the window in such a way that the transverse energy contained within the window is the local maximum. If this local maximum is above a given threshold, a cluster is formed by summing the cells within a rectangular clustering window. For each layer the barycentre of the cells within that layer is determined, and then all cells within a fixed window around that position are included in the cluster. Although the size of the clustering window is fixed, the central position of the window may vary slightly at each calorimeter layer, depending on how the cell energies are distributed within them.
The topo-clustering algorithm begins with a seed cell and iteratively adds neighbouring cells to the cluster if their energies are above a given energy threshold that is a function of the expected root-mean-square (RMS) noise (σ ). The seed cells are first identified as those cells that have energies greater than 4σ . All neighbouring cells with energies greater than 2σ are then added to the cluster and, finally, all the remaining neighbours to these cells are also added. Unlike the sliding-window clusters, the topo-clusters have no predefined shape, and consequently their size can vary from cluster to cluster.
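The seed-and-grow scheme just described can be illustrated with a simplified stand-alone implementation on a 2D grid of cell energies. The grid geometry, the neighbour definition and the single noise value per cell are simplifications, and cluster splitting or sharing of cells between clusters is not handled, so this is only a sketch of the algorithm's logic, not the calorimeter software.

```python
import numpy as np
from collections import deque


def topo_clusters(energy, sigma_noise, seed_thr=4.0, grow_thr=2.0):
    """Simplified topological clustering on a 2D grid of cell energies.

    Cells with E > seed_thr*sigma act as seeds; neighbouring cells with
    E > grow_thr*sigma are added iteratively; finally all remaining
    neighbours of the grown cluster are included.
    Returns a list of clusters, each a list of (i, j) cell indices.
    """
    significance = energy / sigma_noise
    used = np.zeros(energy.shape, dtype=bool)
    clusters = []

    def neighbours(i, j):
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if (di, dj) != (0, 0):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < energy.shape[0] and 0 <= nj < energy.shape[1]:
                        yield ni, nj

    for i, j in zip(*np.where(significance > seed_thr)):
        if used[i, j]:
            continue
        cluster = [(i, j)]
        used[i, j] = True
        queue = deque([(i, j)])
        while queue:                      # growth stage: add cells above grow_thr
            ci, cj = queue.popleft()
            for ni, nj in neighbours(ci, cj):
                if not used[ni, nj] and significance[ni, nj] > grow_thr:
                    used[ni, nj] = True
                    cluster.append((ni, nj))
                    queue.append((ni, nj))
        for ci, cj in list(cluster):      # final stage: add the remaining neighbours
            for ni, nj in neighbours(ci, cj):
                if not used[ni, nj]:
                    used[ni, nj] = True
                    cluster.append((ni, nj))
        clusters.append(cluster)
    return clusters


# Toy example: a 6x6 grid with one energetic deposit over 1 (arbitrary unit) noise per cell.
rng = np.random.default_rng(0)
E = rng.normal(0.0, 1.0, size=(6, 6))
E[2, 2] += 50.0
E[2, 3] += 10.0
print(topo_clusters(E, sigma_noise=1.0))
```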
The reconstruction of candidate electrons and photons uses the sliding-window algorithm with rectangular clustering windows of size Δη × Δφ = 0.075 × 0.175 in the barrel and 0.125 × 0.125 in the end-caps. Since the magnetic field bends the electron trajectory in the φ direction, the size of the window is larger in that coordinate in order to contain most of the energy. The reconstruction of candidate taus and jets and the reconstruction of ET^miss all use the topo-clustering algorithm. For taus the topo-clustering uses a window of 0.8 × 0.8 around each of the tau RoIs identified at L1. For jets and ET^miss, the topo-clustering is done for the full calorimeter. In addition, the ET^miss is also determined based on the cell energies across the full calorimeter (see Sect. 6.6).
Calorimeter algorithm performance
The harmonisation between the online and offline algorithms in Run 2 means that the online calorimeter performance is now much closer to the offline performance. The ET resolutions of the sliding-window clusters and the topo-clusters with respect to their offline counterparts are shown in Fig. 17. The ET resolution of the sliding-window clusters is 3% for clusters above 5 GeV, while the ET resolution of the topo-clustering algorithm is 2% for clusters above 10 GeV. The slight shift in cell energies between the HLT and offline is due to the fact that out-of-time pile-up effects were not corrected in the online reconstruction, resulting in slightly higher reconstructed cell energies in the HLT (this was changed for 2016). In addition, the topo-cluster-based reconstruction shown in Fig. 17b suffered from a mismatch of some calibration constants between online and offline during most of 2015, resulting in a shift towards lower HLT cell energies.
Calorimeter algorithm timing
Due to the optimisation of the offline clustering algorithms during LS1, offline clustering algorithms can be used in the HLT directly after the L1 selection. At the data preparation stage, a specially optimised infrastructure with a memory caching mechanism allows very fast unpacking of data, even from the full calorimeter, which comprises approximately 187,000 cells. The mean processing time for the data preparation stage is 2 ms per RoI and 20 ms for the full calorimeter, and both are roughly independent of pile-up. The topo-clustering, however, requires a fixed estimate of the expected pile-up noise (cell energy contributions from pile-up interactions) in order to determine the cluster-building thresholds and, when there is a discrepancy between the expected pile-up noise and the actual pile-up noise, the processing time can show some dependence on the pile-up conditions. The mean processing time for the topo-clustering is 6 ms per RoI and 82 ms for the full calorimeter. The distributions of the topo-clustering processing times are shown in Fig. 18a for an RoI and Fig. 18b for the full calorimeter. The RoI-based topo-clustering can run multiple times if there is more than one RoI per event. The topo-clustering over the full calorimeter runs at most once per event, even if the event satisfied both the jet and ET^miss selections at L1. The mean processing time of the sliding-window clustering algorithm is not shown but is typically less than 2.5 ms per RoI.
Tracking in the muon spectrometer
Muons are identified at the L1 trigger by the spatial and temporal coincidence of hits either in the RPC or TGC chambers within the rapidity range of |η| < 2.4. The degree of deviation from the hit pattern expected for a muon with infinite momentum is used to estimate the p T of the muon with six possible thresholds. The HLT receives this information together with the RoI position and makes use of the precision MDT and CSC chambers to further refine the L1 muon candidates.
Muon tracking algorithms
The HLT muon reconstruction is split into fast (trigger specific) and precision (close to offline) reconstruction stages, which were used during Run 1 at L2 and EF, respectively.
In the fast reconstruction stage, each L1 muon candidate is refined by including the precision data from the MDT chambers in the RoI defined by the L1 candidate. A track fit is performed using the MDT drift times and positions, and a pT measurement is assigned using lookup tables, creating MS-only muon candidates. The MS-only muon track is back-extrapolated to the interaction point using the offline track extrapolator (based on a detailed detector description instead of the lookup-table-based approach used in Run 1) and combined with tracks reconstructed in the ID to form a combined muon candidate with refined track parameter resolution.
In the precision reconstruction stage, the muon reconstruction starts from the refined RoIs identified by the fast stage, reconstructing segments and tracks using information from the trigger and precision chambers. As in the fast stage, muon candidates are first formed using the muon detectors only (MS-only) and are subsequently combined with ID tracks, leading to combined muons. If no matching ID track can be found, combined muon candidates are searched for by extrapolating ID tracks to the MS; this latter inside-out approach is slower. The combined muon candidates are used for the majority of the muon triggers. However, MS-only candidates are used for specialised triggers that cannot rely on the existence of an ID track, e.g. triggers for long-lived particles that decay within the ID volume.
Muon tracking performance
Comparisons between online and offline muon track parameters using Z → μμ candidate events are presented in this section, while muon trigger efficiencies are described in Sect. 6.3. Distributions of the residuals between online and offline track parameters (1/pT, η and φ) are constructed in bins of pT, and two subsequent Gaussian fits are performed on the core of the distribution to extract the widths, σ, of the residual distributions as a function of pT. The inverse-pT residual widths, σ((1/pT)_online − (1/pT)_offline), are shown in Fig. 19 as a function of the offline muon pT for the precision MS-only and precision combined reconstruction. The resolution for combined muons is better than the resolution for MS-only muons due to the higher precision of the ID track measurements, especially at low pT. As the tracks become closer to straight lines at high pT, it becomes more difficult to precisely measure the pT of both the MS and ID tracks, and hence the resolution degrades. The pT resolution for low-pT MS-only muons is degraded when muons in the barrel are bent out of the detector before traversing the entire muon spectrometer. The resolution is generally better in the barrel than in the end-caps due to the difference in detector granularity. The η residual widths, σ(η_online − η_offline), and φ residual widths, σ(φ_online − φ_offline), are shown as a function of pT in Fig. 20 for both the MS-only and combined algorithms. As the trajectories are straighter at high pT, the precision of their position improves and so the residual widths decrease with pT. Good agreement between track parameters calculated online and offline is observed.
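The residual widths quoted above come from two consecutive Gaussian fits, the second restricted to the core of the distribution found by the first. A minimal sketch of such a procedure is given below; the ±2σ core window, the use of an unbinned fit from scipy and the synthetic input data are assumptions made for illustration, not the analysis code used for these plots.

```python
import numpy as np
from scipy.stats import norm


def core_gaussian_width(residuals, core_nsigma=2.0):
    """Estimate the width of the core of a residual distribution.

    First Gaussian fit on all entries, then a second fit restricted to
    +- core_nsigma * sigma around the first fitted mean.
    """
    mu0, sigma0 = norm.fit(residuals)                  # first pass on everything
    core = residuals[np.abs(residuals - mu0) < core_nsigma * sigma0]
    mu1, sigma1 = norm.fit(core)                       # second pass on the core only
    return mu1, sigma1


# Toy example: a Gaussian core plus broader non-Gaussian tails.
rng = np.random.default_rng(1)
res = np.concatenate([rng.normal(0.0, 1.0, 10_000),    # core
                      rng.normal(0.0, 5.0, 500)])      # tails
print(core_gaussian_width(res))
```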
Muon tracking timing
Figure 21 shows the processing times per RoI for (a) the fast MS-only and fast combined algorithms and (b) the precision muon algorithm. The large time difference between the fast and precision algorithms, with the precision reconstruction using too much time to be run by itself at the full L1 muon trigger rate, motivates the need for the two-stage reconstruction. Part of the structure observed in the timing distributions is due to algorithm caching [29].

6 Trigger signature performance

The following sections describe the different selection criteria placed upon the reconstructed objects described in Sect. 5 in order to form individual trigger signatures that identify leptons, hadrons, and global event quantities such as ET^miss. For each case the primary triggers used during 2015 are listed together with their output rate and performance. Where possible, the trigger efficiency measured in data is compared with MC simulation. The following methods are used to derive an unbiased measurement of the trigger efficiency:

• Tag-and-probe method, which uses a sample of offline-selected events that contain a pair of related objects reconstructed offline, such as electrons from a Z → ee decay, where one object has triggered the event and the other one is used to measure the trigger efficiency (a simplified sketch of this bookkeeping is given after this list);
• Bootstrap method, where the efficiency of a higher trigger threshold is determined using events triggered by a lower threshold.
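In both methods the efficiency is ultimately a ratio of probe counts: probes passing the trigger under study divided by all probes. The sketch below illustrates the tag-and-probe counting for a dilepton resonance; the event representation, the mass window and the simple binomial uncertainty are illustrative assumptions rather than the selections used in any specific measurement.

```python
import math


def tag_and_probe_efficiency(events, mass_window=(80.0, 100.0)):
    """Tag-and-probe efficiency from dilepton events.

    Each event is a dict with the pair invariant mass and a per-leg trigger
    decision. The tag must have fired the trigger; the other leg (the probe)
    is then unbiased and used for the measurement.
    """
    n_probes, n_pass = 0, 0
    for ev in events:
        if not (mass_window[0] < ev["mass"] < mass_window[1]):
            continue
        for tag, probe in ((0, 1), (1, 0)):       # try both legs as the tag
            if ev["trig"][tag]:
                n_probes += 1
                n_pass += ev["trig"][probe]
    eff = n_pass / n_probes if n_probes else float("nan")
    err = math.sqrt(eff * (1 - eff) / n_probes) if n_probes else float("nan")
    return eff, err


# Illustrative toy events: (pair mass in GeV, per-leg trigger decisions).
events = [{"mass": 91.0, "trig": (True, True)},
          {"mass": 90.2, "trig": (True, False)},
          {"mass": 60.0, "trig": (True, True)}]    # outside the mass window, ignored
print(tag_and_probe_efficiency(events))
```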
Trigger efficiencies are computed with respect to an offline-selected data sample. The ratio of the measured trigger efficiency to the simulated one is used as a correction factor in physics analyses. Unless otherwise specified, performance studies use good-quality data corresponding to an integrated luminosity of 3.2 fb^-1 collected during 2015 with a bunch spacing of 25 ns. Trigger rates shown in the following sections are usually extracted from multiple data-taking runs to cover the maximum range in instantaneous luminosity. Due to different beam and detector conditions between runs, this can result in slightly different trigger rates for nearby luminosity values.
Minimum-bias and forward triggers
Studies of the total cross-section, hadronisation, diffraction, hadrons containing strange quarks and other nonperturbative properties of pp interactions require the use of a high-efficiency trigger for selecting all inelastic interactions that result in particle production within the detector. The MBTS minimum-bias trigger is highly efficient, even for events containing only two charged particles with p T > 100 MeV and |η| < 2.5.
The primary minimum-bias and high-multiplicity data set at √s = 13 TeV was recorded in June 2015. The average pile-up μ varied between 0.003 and 0.03, and the interaction rate had a maximum of about 15 kHz. More than 200 million interactions were recorded during a one-week data-taking period. Most of the readout bandwidth was dedicated to the loosest L1_MBTS_1 trigger (described below), recording events at 1.0 to 1.5 kHz on average.
Reconstruction and selection
The MBTS are used as the primary L1 hardware triggers for recording inelastic events with minimum bias, as reported in Refs. [30,31]. The plastic scintillation counters composing the system were replaced during LS1 and consist of two planes of twelve counters, each plane formed of an inner ring of eight counters and an outer ring of four counters. These rings are sensitive to charged particles in the interval 2.07 < |η| < 3.86. Each counter is connected to a photomultiplier tube and provides a fast trigger via a constant fraction discriminator and is read out through the Tile calorimeter data acquisition system. The MBTS triggers require a certain multiplicity of counters to be above threshold in a bunch-crossing with colliding beams. The L1_MBTS_1 and L1_MBTS_2 triggers require any one or two of the 24 counters to be above threshold, respectively. The coincidence of two hits in the latter suppresses beam-induced backgrounds from low-energy neutrons and photons. The L1_MBTS_1_1 trigger requires at least one counter to be above threshold in both the +z and −z hemispheres of the detector and is used to seed the high-multiplicity HLT triggers. The same trigger selections are also applied to empty (no beam present) and unpaired (one beam present) beam-crossings to investigate beam-induced backgrounds. No additional HLT selection is applied to L1_MBTS_1 and L1_MBTS_2 triggered events.
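The three MBTS items differ only in how the counter hits are counted: any single counter, any two counters, or at least one counter in each hemisphere. The sketch below spells out that counting logic; the hit representation (a list of per-counter side and above-threshold flags) is an illustrative assumption, not the L1 firmware.

```python
def mbts_decisions(hits):
    """Emulate the L1 MBTS multiplicity logic.

    `hits` is a list of (side, fired) tuples, one per counter, where side is
    '+' or '-' (the z hemisphere) and fired is True if the counter is above
    threshold in the bunch-crossing.
    """
    n_total = sum(fired for _, fired in hits)
    n_plus = sum(fired for side, fired in hits if side == "+")
    n_minus = sum(fired for side, fired in hits if side == "-")
    return {
        "L1_MBTS_1": n_total >= 1,                     # any one of the 24 counters
        "L1_MBTS_2": n_total >= 2,                     # any two counters
        "L1_MBTS_1_1": n_plus >= 1 and n_minus >= 1,   # at least one per hemisphere
    }


# Example: two counters fired, both on the +z side, so L1_MBTS_1_1 does not fire.
hits = [("+", True), ("+", True)] + [("-", False)] * 12 + [("+", False)] * 10
print(mbts_decisions(hits))
```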
The mb_sptrk trigger is used to determine the efficiency of the MBTS. It is seeded using a random trigger on filled bunches and requires at least two reconstructed space-points in the Pixel system and three in the SCT, along with at least one reconstructed track with p T > 200 MeV. Studies using MC simulation and a fully unbiased data sample have demonstrated that this control trigger is unbiased with respect to the offline selection.
The primary high-multiplicity trigger (e.g. used in the measurement of two-particle correlations [32]) is mb_sp900_trk60_hmt_L1MBTS_1_1 and requires at least 900 reconstructed space-points in the SCT and at least 60 reconstructed tracks with p T > 400 MeV. This higher p T requirement for the high-multiplicity trigger is compatible with the p T cut used for physics analysis and reduces the computational complexity of the track-finding algorithms in the HLT to an acceptable level.
Trigger efficiencies
The MBTS trigger efficiency is defined as the ratio of events passing the MBTS trigger, the control trigger (mb_sptrk) and the offline selection to events passing the control trigger and the offline selection. The efficiency is shown in Fig. 22 for two offline selections as a function of the number of selected tracks compatible in transverse impact parameter (|d0| < 1.5 mm) with the beam line (n_sel^BL), for (a) pT > 100 MeV and (b) pT > 500 MeV. The efficiency is close to 95% in the first bin, quickly rising to 100%, for L1_MBTS_1 and L1_MBTS_2. The L1_MBTS_1_1 trigger, which requires at least one hit on both sides of the detector, only approaches 100% efficiency for events with around 15 tracks. The primary reason for the lower efficiency of the L1_MBTS_1_1 trigger compared to L1_MBTS_1 or L1_MBTS_2 is that at low multiplicities about 30% of the inelastic events are due to diffractive interactions in which usually one proton stays intact, so that particles from the interaction are only produced on one side of the detector. Systematic uncertainties in the trigger efficiency are evaluated by removing the cut on the transverse impact parameter with respect to the beam line from the track selection and applying a longitudinal impact parameter cut with respect to the primary vertex (for events where a primary vertex is reconstructed). This results in a shift of less than 0.1%. The difference in response between the two hemispheres is additionally evaluated to be at most 0.12%.
The L1_MBTS_1 trigger is used as the control trigger for the determination of the efficiency turn-on curves for the high-multiplicity data set. The efficiency is parameterised as a function of the number of offline tracks. Figure 23 shows the efficiency for three different selections of the minimum number of SCT space-points and reconstructed tracks and for two selections of the offline track pT requirement (above 400 and 500 MeV). In the case of matching offline and trigger pT selections (pT > 400 MeV), shown in Fig. 23a, the triggers are 100% efficient for a value of five tracks above the offline threshold (e.g. trk60 becomes fully efficient for 65 offline tracks). If the offline requirement is raised to 500 MeV, as shown in Fig. 23b, the trigger is 100% efficient for the required number of tracks.
Electrons and photons
Events with electrons and photons in the final state are important signatures for many ATLAS physics analyses, from SM precision physics, such as Higgs boson, top quark, W and Z boson properties and production rate measurements, to searches for new physics. Various triggers cover the energy range between a few GeV and several TeV. Low-E T triggers are used to collect data for measuring the properties of J/ψ → ee, diphoton or low mass Drell-Yan production. Single-electron triggers with E T above 24 GeV, dielectron triggers with lower thresholds and diphoton triggers are used for the signal selection in a wide variety of ATLAS physics analyses such as studies of the Higgs boson.
Electron and photon reconstruction and selection
At L1 the electron and photon triggers use the algorithms described in Sect. 3.1. The isolation and hadronic leakage veto cuts are not required for EM clusters with transverse energy above 50 GeV.
At the HLT, electron and photon candidates are reconstructed and selected in several steps in order to reject events as fast as possible, thus allowing algorithms which reproduce closely the offline algorithms and require more CPU time to run at a reduced rate later in the trigger sequence. At first, fast calorimeter algorithms build clusters from the calorimeter cells (covering 0.025 × 0.025 in η × φ space) within the RoI (Δη × Δφ = 0.4 × 0.4) identified by L1. Since electrons and photons deposit most of their energy in the second layer of the EM calorimeter, this layer is used to find the cell with the largest deposited transverse energy in the RoI. EM calorimeter clusters of size 3 × 7 cells in the barrel (|η| < 1.4) and 5 × 5 cells in the end-cap (1.4 < |η| < 2.47) are used to reconstruct electrons and photons. The identification of electrons and photons is based on the cluster ET as well as cluster shape parameters such as R_had, R_η and E_ratio,3 the latter being used for electron candidates and a few tight photon triggers. Electron candidates are required to have tracks from the fast tracking stage with pT > 1 GeV that match the clusters within Δη < 0.2.
The second step relies on precise offline-like algorithms. The energy of the clusters is calibrated for electron and photon triggers separately using a multivariate technique in which the response of the calorimeter layers is corrected in data and simulation [33]. Precision tracks extrapolated to the second layer of the EM calorimeter are required to match the clusters within Δη < 0.05 and Δφ < 0.05. Electron identification relies on a multivariate technique using a likelihood (LH) discriminant with three operating points named loose LH, medium LH and tight LH. An additional working point named very loose LH is used for supporting triggers. The LH-based identification makes use of variables similar to the cut-based identification employed during Run 1 [2] but has better background rejection for the same signal efficiency. The discriminating variables used offline are also used by the trigger, exploiting the characteristic features of energy deposits in the EM calorimeters (longitudinal and lateral shower shapes), track quality, track-cluster matching, and particle identification by the TRT. All variables are described in Refs. [34,35]. The composition of the likelihood is the same as in the offline reconstruction with the exception of the momentum loss due to bremsstrahlung, Δp/p, which is not accounted for in the online environment. The photon identification relies only on the cluster shower-shape variables, and three working points are also defined: loose, medium and tight.
Not applied during 2015 but foreseen for higher luminosities during Run 2 is an additional isolation requirement for the lowest-threshold unprescaled single-electron trigger. The isolation parameter is calculated as the sum of the pT values of all tracks in a cone of size ΔR = 0.2 around the electron, for tracks with pT > 1 GeV and |Δz0 sin θ| < 0.3, where Δz0 is the distance along z between the longitudinal impact parameter of the track and that of the leading track in the RoI. The ratio of this quantity to the EM cluster ET, namely ΣpT/ET, is used to estimate the energy deposited by other particles.
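The isolation parameter is simply a scalar pT sum over nearby tracks satisfying pT and longitudinal impact parameter requirements, divided by the cluster ET. The sketch below spells this out; the track fields, the ΔR helper and the illustrative numbers (in the same length units as z0) are assumptions, not the trigger implementation.

```python
import math


def delta_r(eta1, phi1, eta2, phi2):
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)


def track_isolation_ratio(electron, tracks, leading_track,
                          cone=0.2, pt_min=1.0, dz_sin_theta_max=0.3):
    """Relative track isolation: sum of track pT in a cone around the electron,
    for tracks with pT > pt_min and |dz0 * sin(theta)| < dz_sin_theta_max with
    respect to the leading track in the RoI, divided by the cluster ET."""
    pt_sum = 0.0
    for trk in tracks:
        if trk["pt"] <= pt_min:
            continue
        if abs((trk["z0"] - leading_track["z0"]) * math.sin(trk["theta"])) >= dz_sin_theta_max:
            continue
        if delta_r(electron["eta"], electron["phi"], trk["eta"], trk["phi"]) < cone:
            pt_sum += trk["pt"]
    return pt_sum / electron["cluster_et"]


# Illustrative usage with made-up values (GeV, radians, mm).
ele = {"eta": 0.3, "phi": 0.5, "cluster_et": 30.0}
lead = {"z0": 1.0}
trks = [{"pt": 2.0, "eta": 0.31, "phi": 0.52, "z0": 1.1, "theta": 1.2},
        {"pt": 5.0, "eta": 1.50, "phi": 0.50, "z0": 1.0, "theta": 0.9}]
print(track_isolation_ratio(ele, trks, lead))
```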
Electron and photon trigger menu and rates
The primary L1 and HLT electron and photon triggers used in 2015 are listed in Table 1. The lowest-threshold single-electron trigger (e24_lhmedium_L1EM20VH) applies a 24 GeV transverse energy threshold and requires the electron to pass medium LH identification requirements. The trigger is seeded by L1_EM20VH, which requires ET > 20 GeV and applies an ET-dependent veto against energy deposited in the hadronic calorimeter behind the electromagnetic cluster of the electron candidate (hadronic veto, denoted by H in the trigger name). The ET threshold varies slightly as a function of η to compensate for passive material in front of the calorimeter (denoted by V in the trigger name). To recover efficiency in the high transverse energy regime, this trigger is complemented by a trigger requiring a transverse energy above 120 GeV with loose LH identification (e120_lhloose). With a maximum instantaneous luminosity of 5.2 × 10^33 cm^-2 s^-1 reached during the 2015 data-taking, the rates of the electron triggers could be sustained without the use of additional electromagnetic or track isolation requirements at L1 or HLT. The lowest-threshold dielectron trigger (2e12_lhloose_L12EM10VH) applies a 12 GeV transverse energy threshold and requires the two electrons to pass loose LH identification requirements. The trigger is seeded by L1_2EM10VH, which requires two electrons with ET above 10 GeV and a hadronic energy veto.
The primary single-photon trigger used in 2015 is g120_loose. It requires a transverse energy above 120 GeV and applies loose photon identification criteria. It is seeded by L1_EM22VHI, which requires an isolated electromagnetic cluster (denoted by I in the trigger name) with E T above 22 GeV and applies a hadronic veto and η-dependent E T thresholds as described above. As mentioned earlier, the electromagnetic isolation and hadronic veto requirements are not applied for E T above 50 GeV. The two main diphoton triggers are g35_loose_g25_loose, which requires two photons above 35 and 25 GeV thresholds and loose photon identification requirements, and 2g20_tight, which requires two photons with E T above 20 GeV and tight identification. Both triggers are seeded by L1_2EM15VH, which requires two electromagnetic clusters with E T above 15 GeV and a hadronic veto. Figures 24 and 25 show the rates of the electron and photon triggers as a function of the instantaneous luminosity. These trigger rates scale linearly with the instantaneous luminosity.
Electron and photon trigger efficiencies
The performance of electron triggers is studied using a sample of Z → ee events. The tag-and-probe method utilises events triggered by a single-electron trigger and requires two offline reconstructed electrons with an invariant mass between 80 and 100 GeV. After identifying the electron that triggered the event (tag electron), the other electron (probe electron) is unbiased by the trigger selection, thus allowing its use to measure the electron trigger efficiency. HLT electrons (L1 EM objects) are matched to the probe electron if their separation is ΔR < 0.07 (0.15). The trigger efficiency is calculated as the ratio of the number of probe electrons passing the trigger selection to the number of probe electrons. The efficiency of the combination of the lowest unprescaled single-electron trigger e24_lhmedium_L1EM20VH and the high transverse momentum electron trigger e120_lhloose with respect to the offline objects is shown in Fig. 26 as a function of the offline reconstructed electron transverse energy and pseudorapidity. The figure also shows the efficiency of the L1 trigger (L1_EM20VH) seeding the lowest unprescaled single-electron trigger. A sharp turn-on can be observed for both the L1 and overall (L1 and HLT) efficiency, and the HLT inefficiency with respect to L1 is small. Inefficiencies observed around pseudorapidities of −1.4 and 1.4 are due to the transition region between the barrel and end-cap calorimeters.

Fig. 26 Efficiency of the L1_EM20VH trigger and the logical 'or' of the e24_lhmedium_L1EM20VH and e120_lhloose triggers as a function of (a) the probe electron transverse energy ET and (b) pseudorapidity η. The offline reconstructed electron candidate is required to have an ET value at least 1 GeV above the trigger threshold.
The photon trigger efficiency is computed using the bootstrap method as the efficiency of the HLT trigger relative to a trigger with a lower ET threshold. Figure 27 shows the efficiency of the main single-photon trigger and of the photons of the main diphoton trigger as a function of the offline reconstructed photon transverse energy and pseudorapidity, for data and MC simulation. Very good agreement is observed between data and simulation.

Fig. 27 Efficiency of the HLT photon triggers g20_tight, g25_loose, g35_loose, and g120_loose relative to a looser HLT photon trigger as a function of (a) the transverse energy ET and (b) pseudorapidity η of the photon candidates reconstructed offline and satisfying the tight identification and isolation requirements. The offline reconstructed photon candidate is required to have an ET value at least 5 GeV above the trigger threshold. The transition region between the barrel and end-cap calorimeter (1.37 < |η| < 1.52) is excluded.
Muons
Muons are produced in many final states of interest to the ATLAS physics programme, from SM precision physics to searches for new physics. Muons are identified with high purity compared to other signatures and cover a wide transverse momentum range, from a few GeV to several TeV. Muon trigger thresholds in the pT range from 4 to 10 GeV are used to collect data for measurements of processes such as J/ψ → μμ, low-pT dimuons, and Z → ττ [36,37]. Higher pT thresholds are used to collect data for new-physics searches as well as for measuring the properties and production rates of SM particles such as the Higgs, W and Z bosons, and top quarks [38-40].
Muon reconstruction and selection
The trigger reconstruction algorithms for muons at L1 and the HLT are described in Sects. 3.2 and 5.3, respectively. The selection criteria depend on the algorithm used for reconstruction. The MS-only algorithm selects solely on the p T of the muon candidate measured by the muon spectrometer; the combined algorithm makes selections based on the match between the ID and MS tracks and their combined p T ; and the isolated muon algorithm applies selection criteria based on the amount of energy in the isolation cones.
Muon trigger menu and rates
The lowest-threshold single-muon trigger (mu20_iloose_L1MU15) requires a minimum transverse momentum of 20 GeV for combined muon candidates in addition to a loose isolation: the scalar sum of the track pT values in a cone of size ΔR = 0.2 around the muon candidate is required to be smaller than 12% of the muon transverse momentum. The isolation requirement reduces the rate by a factor of approximately 2.5 with a negligible efficiency loss. The trigger is seeded by L1_MU15, which requires a transverse momentum above 15 GeV. At a transverse momentum above 50 GeV this trigger is complemented by a trigger not requiring isolation (mu50), to recover a small efficiency loss in the high transverse momentum region.
The lowest-threshold unprescaled dimuon trigger (2mu10) requires a minimum transverse momentum of 10 GeV for combined muon candidates. The trigger is seeded by L1_2MU10, which requires two muons with transverse momentum above 10 GeV. Figure 28 shows the rates of these triggers as a function of the instantaneous luminosity. The trigger rates scale linearly with the instantaneous luminosity. Dimuon triggers with lower pT thresholds and further selections (e.g. on the dimuon invariant mass) were also active and are discussed in Sect. 6.8. Additionally, an asymmetric dimuon trigger (mu18_mu8noL1) is included, where mu18 is seeded by L1_MU15 and mu8noL1 performs a search for a muon in the full detector at the HLT. By requiring only one muon at L1, the dimuon trigger does not suffer the loss of efficiency that it would otherwise have if two muons were required at L1. This trigger is typically used by physics searches involving two relatively high-pT muons to improve the acceptance with respect to the standard dimuon triggers.
Muon trigger efficiencies
The L1 and HLT muon efficiencies are determined using a tag-and-probe method with Z → μμ candidate events. Events are required to contain a pair of reference muons with opposite charge and an invariant mass within 10 GeV of the Z mass. Reference muons reconstructed offline using both ID and MS information are required to be inside the trigger fiducial volume. The absolute efficiency of the L1_MU15 trigger and the absolute and relative efficiencies of the logical 'or' of mu20_iloose and mu50 as a function of the pT of the offline muon track are shown in Fig. 29. The L1 muon trigger efficiency is close to 70% in the barrel and 90% in the end-caps. The different efficiencies are due to the different geometrical acceptance of the barrel and end-cap trigger systems and to local detector inefficiencies. The HLT efficiency relative to L1 is close to 100% both in the barrel and in the end-caps. Figure 30 shows the muon trigger efficiency as a function of the azimuthal angle φ of the offline muon track for (a) the barrel and (b) the end-cap regions. The reduced barrel acceptance can be seen in the eight bins corresponding to the sectors containing the toroid coils and in the two feet sectors around φ ≈ −1.6 and φ ≈ −2.0, respectively.
Jets
Jet triggers are used for signal selection in a wide variety of physics measurements and detector performance studies. Precision measurements of inclusive jet, dijet and multi-jet topologies rely on the events selected with the single-jet and multi-jet triggers. Events selected by the single-jet triggers are also used for the calibration of the calorimeter jet energy scale and resolution. All-hadronic decays of tt events can be studied using multi-jet signatures and the all-hadronic decay of the weak bosons, Higgs bosons and top quarks can be selected in high transverse momentum ('boosted') topologies using large-radius jets. Searches for physics beyond the SM, such as high-mass dijet resonances, supersymmetry or large extra dimensions, often utilise single-jet and multijet unprescaled triggers with a high transverse momentum threshold.
Jet reconstruction
A detailed description of the jet triggers used during Run 1 can be found in Ref. [5]. Jets are reconstructed in the HLT using the anti-kt jet algorithm [43] with a radius parameter of R = 0.4 or R = 1.0. The inputs to the algorithm are calorimeter topo-clusters that are reconstructed from the full set of calorimeter cell information and calibrated by default at the EM scale. The jets are calibrated in a procedure similar to that adopted for offline physics analyses [44]. First, contributions to the jet energy from pile-up collisions are subtracted on an event-by-event basis using the calculated area of each jet and the measured energy density within |η| < 2. Second, the response of the calorimeter is corrected using a series of pT- and η-dependent calibration factors derived from simulation.
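The calibration sequence described above amounts to an area-based pile-up subtraction followed by a response correction. A minimal sketch of that sequence is shown below; the response value and the other numbers are placeholders for illustration, not the pT- and η-dependent factors actually used.

```python
def calibrate_jet_pt(pt_raw, area, rho, response):
    """Two-step jet calibration sketch.

    1) Area-based pile-up subtraction: pT -> pT - rho * A, with rho the
       pile-up energy density measured in |eta| < 2 and A the jet area.
    2) Response correction: divide by the simulated calorimeter response
       r(pT, eta) so that the calibrated jet matches the true jet on average.
    """
    pt_subtracted = pt_raw - rho * area
    return pt_subtracted / response


# Illustrative numbers: a 100 GeV EM-scale jet with area 0.5, rho = 10 GeV per
# unit area, and a placeholder response of 0.75.
print(calibrate_jet_pt(pt_raw=100.0, area=0.5, rho=10.0, response=0.75))
```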
The jet reconstruction in the HLT is highly flexible and some triggers use non-standard inputs or a calibration procedure that differs from the default outlined above. For example, the clusters can be reconstructed using cells from a restricted region in the calorimeter defined using the RoIs identified by the L1 trigger. The clusters can also be calibrated using local calibration weights that are applied after classifying each cluster as electromagnetic or hadronic in origin. Furthermore, the jet calibration can be applied in four ways: no jet calibration, pile-up subtraction only, jet response correction only, or both pile-up subtraction and jet response corrections (default). Finally, the jet reconstruction can be run twice to produce reclustered jets [45], in which the input to the second jet-finding is the output from the first, e.g. to build large-R jets from small-R jets.
Jet trigger menu and rates
The jet trigger menu consists of single-jet triggers, which require at least one jet above a given transverse energy threshold; multi-jet triggers, which require at least N jets above a given transverse energy threshold; HT triggers, which require the scalar sum of the transverse energy of all jets in the event, HT, to be above a given threshold; and analysis-specific triggers for specific topologies of interest. The jet triggers use at L1 either a random trigger (on colliding bunches) or an L1 jet algorithm. The random trigger is typically used for triggers that select events with offline jet pT < 45 GeV, to avoid bias due to inefficiencies of the L1 jet algorithm for low-pT jets. In the following, only the most commonly used jet triggers are discussed.
The lowest-threshold unprescaled single-jet trigger for standard jets (R = 0.4) selects events that contain a jet at L1 with transverse energy above 100 GeV (L1_J100) and a jet in the HLT with transverse energy above 360 GeV (j360). This trigger has a rate of 18 Hz at a luminosity of 5 × 10^33 cm^-2 s^-1. The lowest-threshold unprescaled multi-jet triggers are 3j175, 4j85, 5j60 and 6j45, which have rates of 6, 20, 15 and 12 Hz, respectively. The lowest-threshold unprescaled HT trigger used in 2015 is ht850, with a rate of 12 Hz, where one jet with transverse energy above 100 GeV is required at L1 and HT is required to be above 850 GeV at the HLT.
In addition to the unprescaled triggers, a set of lower-threshold triggers select events that contain jets with lower transverse momentum and are typically prescaled to give an event rate of 1 Hz each. The lowest-threshold single-jet trigger in 2015 is j15, which uses a random trigger at L1. Multiple thresholds for single jets exist between j15 and j360 to cover the entire pT spectrum.
Jet trigger efficiencies
Jet trigger efficiencies are determined using the bootstrap method with respect to the pT of the jet. The single-jet trigger efficiencies for L1 and the HLT are shown in Fig. 31 for both the central and forward regions of the calorimeter. The ranges in |η| are chosen to ensure that the probe jet is fully contained within the |η| region of study. Good agreement is observed between simulation and data. The sharp HLT efficiency turn-on curves in Fig. 31 are due to the good agreement between the energy scale of jets in the HLT and offline, as shown in Fig. 32.
The multi-jet trigger efficiencies are dominated by the trigger efficiency of the N th leading jet and are shown in Fig. 33 for (a) L1 and (b) HLT as a function of the N th leading jet transverse momentum. Good agreement is found for the efficiency as a function of the N th jet for different jet multiplicities with the same threshold (e.g. L1_6J15, L1_4J15 and 4j45, 5j45) and between data and simulation for the HLT.
Finally, the efficiencies of the HT and large-R (R = 1.0) triggers are shown in Fig. 34. The HT trigger efficiencies are measured with respect to the HLT_j150_L1J40 trigger. There is a small offset in the efficiency curves for data and simulation for both thresholds. For the large-R trig-
Jets and trigger-level analysis
Searches for dijet resonances with sub-TeV masses are statistically limited by the bandwidth allocated to inclusive single-jet triggers. Due to large SM multi-jet backgrounds, these triggers must be prescaled in order to fit within the total physics trigger output rate of 1 kHz. However, as the properties of jets reconstructed at the HLT are comparable to those of jets reconstructed offline, one can avoid this rate limitation by using Trigger-Level Analysis (TLA) triggers that record partial events, containing only the relevant HLT jet objects needed for the search, to a dedicated stream. Using Trigger-Level Analysis triggers allows a factor of 100 increase in the event recording rates and results in a significant increase in the number of low-pT jets, as shown in Fig. 35. Dedicated calibration and jet identification procedures are applied to these partially built events, accounting for differences between offline jets and trigger jets as well as for the lack of detector data other than from the calorimeters. These procedures are described in detail in Ref. [46].
Tau leptons
Tau leptons are a key signature in many SM measurements and searches for new physics, for example through decays into tau lepton pairs. Most (about 65%) of tau leptons decay hadronically. Hence an efficient trigger on hadronic tau decays is crucial for many analyses using tau leptons.
Dedicated tau trigger algorithms were designed and implemented based on the main features of hadronic tau decays: narrow calorimeter energy deposits and a small number of associated tracks. Due to the high production rate of jets with features very similar to hadronic tau decays, keeping the rate of tau triggers under control is particularly challenging.
Tau reconstruction and selection
At L1 the tau trigger uses the algorithms described in Sect. 3.1. The isolation requirement was tuned with 13 TeV simulation to yield an efficiency of 98% and is not applied for tau candidates with a transverse energy above 60 GeV.
At the HLT three sequential selections are made. First, a minimum requirement is applied to the transverse energy of the tau candidate. The energy is calculated using the locally calibrated topo-clusters of calorimeter cells contained in a cone of size ΔR = 0.2 around the L1 tau RoI direction taken from the L1 cluster. A dedicated tau energy calibration scheme is used. Second, two-stage fast tracking (Sect. 5.1.3) is used to select tau candidates with low track multiplicity. A leading track is sought within a narrow cone (ΔR = 0.1) around the tau direction, followed by a second fast tracking step using a larger cone (ΔR = 0.4) but with the tracks required to originate from within a fixed interval along the beam line around the leading track. Tracks with pT > 1 GeV are counted in the core cone region ΔR < 0.2 and in the isolation annulus 0.2 < ΔR < 0.4 around the tau candidate direction. A track multiplicity requirement selects tau candidates with 1 ≤ N_trk(ΔR < 0.2) ≤ 3 and N_trk(0.2 < ΔR < 0.4) ≤ 1. Finally, the HLT precision tracking is run, and a collection of variables built from calorimeter and track information is input to a Boosted Decision Tree (BDT), which produces a score used for the final tau identification. The implementation of those variables follows closely their offline counterparts as described in Ref. [47]. In addition, the same BDT training is used offline and online to ensure a maximal correlation between online and offline identification criteria. The performance of the offline training was found to be comparable to that of a dedicated online training. To ensure a robust response under differing pile-up conditions, corrections as a function of the average number of interactions per bunch-crossing are applied to the discriminating variables. Working points of the BDT are tuned separately for 1-prong and 3-prong candidates. The baseline medium working point operates with an efficiency of 95% (70%) for true 1-prong (3-prong) taus.
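The track-multiplicity selection reduces to counting tracks with pT > 1 GeV in the core cone and in the isolation annulus around the tau direction. The sketch below implements that counting; the track representation and helper names are illustrative assumptions, not the trigger code.

```python
import math


def delta_r(eta1, phi1, eta2, phi2):
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)


def tau_track_counts(tau, tracks, pt_min=1.0):
    """Count tracks in the core cone (dR < 0.2) and the isolation annulus
    (0.2 < dR < 0.4) around the tau candidate direction."""
    n_core = n_iso = 0
    for trk in tracks:
        if trk["pt"] <= pt_min:
            continue
        dr = delta_r(tau["eta"], tau["phi"], trk["eta"], trk["phi"])
        if dr < 0.2:
            n_core += 1
        elif dr < 0.4:
            n_iso += 1
    return n_core, n_iso


def passes_track_multiplicity(tau, tracks):
    """Online selection: 1 <= N_core <= 3 and N_iso <= 1."""
    n_core, n_iso = tau_track_counts(tau, tracks)
    return 1 <= n_core <= 3 and n_iso <= 1


# Illustrative usage with made-up values.
tau = {"eta": 0.1, "phi": -1.0}
tracks = [{"pt": 5.0, "eta": 0.12, "phi": -1.02},
          {"pt": 1.5, "eta": 0.35, "phi": -1.00}]
print(passes_track_multiplicity(tau, tracks))
```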
Tau trigger menu and rates
The primary tau triggers consist of triggers for single high transverse momentum taus, and combined τ + X triggers, where X stands for an electron, muon, a second tau or E miss T . The transverse momentum thresholds used in the single-tau and ditau triggers in 2015 are indicated in Table 1. For all tau triggers the L1 isolation, HLT track multiplicity and online medium identification requirements are applied to the tau candidates.
Due to L1 rate limitations, the combined triggers τ + (e, μ) and τ + ET^miss require the presence of an additional jet candidate at L1 with transverse momentum above 25 and 20 GeV, respectively. Variants of these triggers with higher thresholds for the tau transverse momentum and without the L1 jet requirement are also included in the trigger menu. Figure 36 shows the L1 and HLT output rates as a function of the instantaneous luminosity for the primary single-tau, ditau, τ + e, τ + μ and τ + ET^miss triggers.

Fig. 36 Trigger rates as a function of instantaneous luminosity for several (a) L1 and (b) HLT tau triggers.
Tau trigger efficiencies
The efficiency of the tau trigger was measured using a tag-and-probe (T&P) method in an enriched sample of Z → τ_μ τ_had → μ + 2ν + τ_had events, where τ_μ is a tau lepton decaying to μνν and τ_had is a tau lepton decaying hadronically. Events are selected by the lowest unprescaled single-muon trigger and are tagged by an offline reconstructed and isolated muon with transverse momentum above 22 GeV. The presence of an offline reconstructed tau candidate with transverse momentum above 25 GeV, one or three tracks, fulfilling the medium identification criteria and with electric charge opposite to the muon charge is also required. This reconstructed tau candidate is the probe with respect to which the tau trigger efficiency is measured. The event selection used to enhance the sample with Z → τ_μ τ_had events and therefore the purity of the probe tau candidate is similar to the one described in Ref.
[47]: to reject Z(→ μμ) + jets and W(→ μν) + jets events, the invariant mass of the muon and the offline tau candidate is required to be between 45 and 80 GeV; the transverse mass, mT = √(2 pT(μ) ET^miss (1 − cos Δφ(μ, ET^miss))), is required to be smaller than 50 GeV; and the variable built from the azimuthal separations of the muon and the offline tau candidate from ET^miss, cos Δφ(μ, ET^miss) + cos Δφ(τ, ET^miss), is required to be above −0.5. The dominant sources of background events in the resulting sample are W(→ μν) + jets and multi-jet events, and their contributions are determined in data as described in Ref. [47]. The multi-jet contribution is estimated from events where the offline tau candidate and the muon have the same electric charge. The W(→ μν) + jets contribution is estimated from events with high mT.
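The selection above combines an invariant-mass window, a transverse-mass cut and a requirement on the sum of the cosines of the azimuthal separations from ET^miss. The sketch below encodes those three cuts; the event fields and function name are illustrative assumptions, and the mT expression is the standard transverse-mass definition rather than text taken from the reference.

```python
import math


def transverse_mass(pt_mu, met, dphi_mu_met):
    """m_T = sqrt(2 * pT(mu) * ETmiss * (1 - cos dphi(mu, ETmiss)))."""
    return math.sqrt(2.0 * pt_mu * met * (1.0 - math.cos(dphi_mu_met)))


def passes_ztautau_selection(ev):
    """Tag-and-probe event selection sketch:
    45 < m(mu, tau) < 80 GeV,  m_T < 50 GeV,
    cos dphi(mu, MET) + cos dphi(tau, MET) > -0.5."""
    if not (45.0 < ev["m_mu_tau"] < 80.0):
        return False
    if transverse_mass(ev["pt_mu"], ev["met"], ev["dphi_mu_met"]) >= 50.0:
        return False
    return math.cos(ev["dphi_mu_met"]) + math.cos(ev["dphi_tau_met"]) > -0.5


# Illustrative event (energies in GeV, angles in radians).
event = {"m_mu_tau": 60.0, "pt_mu": 30.0, "met": 25.0,
         "dphi_mu_met": 2.0, "dphi_tau_met": 0.6}
print(passes_ztautau_selection(event))
```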
Distributions of the transverse momentum, pseudorapidity, track multiplicity and BDT discriminant score for the HLT tau candidates matched to the offline probe tau candidates are shown in Fig. 37. The HLT tau candidates pass the tau25_medium trigger, which requires an isolated L1 RoI with transverse momentum above 12 GeV and a tau candidate at the HLT with transverse momentum above 25 GeV satisfying the track multiplicity and the online medium identification criteria. The observed distributions in data are in good agreement with simulation.
The estimated background is subtracted from data and the uncertainty in this subtraction is considered as a systematic uncertainty in the measured efficiency. This systematic uncertainty includes uncertainties in the background contri-butions estimated from both simulation and data. Figure 38a shows the measured efficiency for the tau25_medium trigger as a function of the transverse momentum of the offline tau candidate. The efficiency loss of the HLT with respect to L1 is mainly due to the HLT's track multiplicity selection and its BDT selection, which uses slightly different input variables online and offline. In Fig. 38b this efficiency is compared with simulation. The statistical uncertainties in data and simulation are shown together with the systematic uncertainties associated with the background subtraction procedure in data.
Missing transverse momentum
The E miss T trigger is used in searches where the final state contains only jets and large E miss T . The E miss T trigger can also be the most efficient trigger for selecting final states that contain highly energetic muons. An example is searches for supersymmetric particle production where jets, leptons and invisible particles are produced. Another major use is for multi-particle final states where the combination of E miss T with other trigger objects such as jets, electrons, or photons enables lower thresholds to be used for these other objects than would otherwise be possible. Finally, the E miss T trigger collects data samples used for detector performance studies. For example, the data set used for electron efficiency calculations in events consistent with a W boson is selected with an E miss T trigger.
E_T^miss reconstruction and selection
The very large rate of hadronic jet production means that, even with reasonably good calorimeter resolution, jet energy mismeasurement can lead to an unaffordably large E_T^miss trigger rate. The improvements in the L1 E_T^miss determination, including the L1 dynamic pedestal correction described in Sect. 3.1, have been important in maintaining L1 performance. In particular, they have permitted the L1_XE50 trigger to be used without prescale throughout 2015.
To fulfil the desired broad E_T^miss-based physics programme, different HLT algorithmic strategies based on cells, jets or topo-clusters, in addition to two methods for correcting for the effects of pile-up, were developed during LS1 and deployed during 2015 data-taking. In the topo-cluster-based reconstruction (xe_tc_lcw), the momentum components (p_x,j, p_y,j) of each topo-cluster j are calculated in the approximation that the particles contributing energy to the cluster are massless and, in a manner similar to the cell-based algorithm, the missing transverse momentum is calculated from the negative vector sum of these components.
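A minimal sketch of this negative vector sum is given below, assuming each topo-cluster is supplied as a calibrated energy with a direction (η, φ); this is an illustration of the formula rather than the actual HLT code.

```python
import math

def met_from_topoclusters(clusters):
    """Missing transverse momentum (GeV) from the negative vector sum of
    topo-cluster momenta, treating each cluster as massless.
    Each cluster is a dict with calibrated energy 'e' (GeV), 'eta' and 'phi'."""
    sum_px = sum_py = 0.0
    for cl in clusters:
        pt = cl["e"] / math.cosh(cl["eta"])    # massless: pT = E / cosh(eta)
        sum_px += pt * math.cos(cl["phi"])
        sum_py += pt * math.sin(cl["phi"])
    mpx, mpy = -sum_px, -sum_py                # negative vector sum
    return math.hypot(mpx, mpy)
```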
• Pile-up suppression algorithm (xe_tc_pueta): This algorithm is based on the topo-cluster E_T^miss algorithm described above, but includes a further pile-up suppression method that is intended to limit the degradation of the E_T^miss resolution at very high pile-up. The method starts by calculating the average topo-cluster energy and its standard deviation in ten regions of pseudorapidity covering, in equal steps, −5.0 < η < 5.0 in the calorimeter. In each pseudorapidity region, known as a ring, the topo-clusters with energy more than 2σ above the ring average are omitted and the average energy of the residual topo-clusters is calculated. This average represents an estimate of the energy contribution from pile-up in that ring. The pile-up energy density in each ring is obtained by dividing the average energy by the solid angle of the ring. This energy density is then multiplied by the solid angle of each topo-cluster and subtracted from the energy of that topo-cluster to obtain a topo-cluster energy measurement corrected for pile-up. The E_T^miss is recalculated as described above using the (p_x,j, p_y,j) of the topo-clusters after the pile-up subtraction (a simplified sketch of both pile-up corrections is given after this list).
• Pile-up fit algorithm (xe_tc_pufit): Starting again from the topo-cluster E_T^miss described above, a different pile-up suppression method is used in this algorithm. The calorimeter is partitioned into 112 towers, each of size Δη × Δφ ≈ 0.71 × 0.79. For each tower, the p_x and p_y components of all the topo-clusters with centres in that tower are summed to obtain the transverse momentum p_T,k of that kth tower. The transverse energy sum of the tower, E_T,k, is also calculated from the scalar sum of the p_T of the individual clusters. If E_T,k < 45 GeV, the tower is determined to be below threshold and its energy is assumed to be due to pile-up. The average pile-up E_T density is calculated as Σ_k E_T,k / Σ_k A_k over all the towers below threshold, where A_k is the area in (η, φ) coordinates of tower k. A fit estimates the E_T contributed by pile-up in each tower above threshold using the average pile-up E_T density and constraining the event-wide E_T^miss from pile-up to be zero within resolution. These estimated pile-up contributions are subtracted from the corresponding E_T measurements for towers above threshold, and these corrected E_T values are used to calculate E_T^miss.

Figure 39 shows the E_T^miss distribution of the various HLT algorithms for events accepted into the Main physics stream. The differences observed between the cell-based and the topo-cluster-based E_T^miss distributions are caused in part by different calibration; the cell-based algorithm is calibrated at the EM scale, while algorithms based on topo-clusters generally have larger values of E_T^miss as they include a correction for the calorimeter response to hadrons (hadronic scale). Differences between the E_T^miss distributions for the various pile-up correction schemes are small, since these algorithms were optimised to improve the resolution at large pile-up values of 80 overlapping interactions that will only be achieved in future LHC runs.

The cell-based xe trigger with a threshold of 70 GeV remained unprescaled throughout the 2015 data-taking period. The typical output rate for this trigger was approximately 50 Hz, as seen in Fig. 40b. The topo-cluster-based algorithms, all of which are calibrated at the hadronic scale, had rates of approximately 110 Hz at the equivalent nominal threshold of 70 GeV. The output rate from these algorithms is larger for the same nominal threshold due in part to the different calibration methods. Prescaled triggers at a set of lower L1 and HLT thresholds, with HLT output rates of order 1 Hz each, were included in the menu to record a sample of data from which the efficiency of the unprescaled, primary physics triggers could be calculated. Further triggers based on the significance of the observed E_T^miss, known as xs triggers [48], were used to select W → eν events for electron reconstruction performance studies. Triggers used during Run 1 for selecting events based on the scalar sum of the transverse energy of all calorimeter cells, ΣE_T, were found to have a high sensitivity to pile-up [48], and so were not used during the proton-proton run in 2016. (A ΣE_T trigger was used during heavy-ion collisions at L1.)
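The two pile-up corrections referenced in the list can be sketched as follows. This is a simplified, assumption-laden illustration: cluster solid angles are taken as given inputs, the tower grid is only approximately the 112-tower layout described above, and the constrained fit of xe_tc_pufit is replaced by the below-threshold density estimate that seeds it.

```python
import math
import statistics

def ring_solid_angle(eta_lo, eta_hi):
    """Solid angle of a full-phi pseudorapidity ring (uses cos(theta) = tanh(eta))."""
    return 2.0 * math.pi * abs(math.tanh(eta_hi) - math.tanh(eta_lo))

def pueta_corrected_energies(clusters, n_rings=10, eta_max=5.0, nsigma=2.0):
    """Ring-based pile-up subtraction in the spirit of xe_tc_pueta.
    Each cluster is a dict with energy 'e' (GeV), 'eta', 'phi' and an
    assumed precomputed 'solid_angle'.  Returns corrected cluster energies."""
    width = 2.0 * eta_max / n_rings
    edges = [-eta_max + i * width for i in range(n_rings + 1)]
    rings = [[] for _ in range(n_rings)]
    for cl in clusters:
        idx = min(max(int((cl["eta"] + eta_max) / width), 0), n_rings - 1)
        rings[idx].append(cl)
    corrected = []
    for i, ring in enumerate(rings):
        if not ring:
            continue
        energies = [cl["e"] for cl in ring]
        mean, sigma = statistics.mean(energies), statistics.pstdev(energies)
        # omit clusters more than nsigma above the ring average, then use the
        # average of the residual clusters as the pile-up estimate for the ring
        residual = [e for e in energies if e <= mean + nsigma * sigma]
        density = statistics.mean(residual) / ring_solid_angle(edges[i], edges[i + 1])
        corrected.extend(max(cl["e"] - density * cl["solid_angle"], 0.0) for cl in ring)
    return corrected

def pufit_pileup_density(clusters, n_eta=14, n_phi=8, threshold=45.0):
    """Average pile-up E_T density (GeV per unit eta-phi area) from towers
    below threshold, as in the first step of xe_tc_pufit."""
    d_eta, d_phi = 10.0 / n_eta, 2.0 * math.pi / n_phi
    towers = {}
    for cl in clusters:
        key = (int((cl["eta"] + 5.0) / d_eta), int((cl["phi"] % (2.0 * math.pi)) / d_phi))
        towers[key] = towers.get(key, 0.0) + cl["e"] / math.cosh(cl["eta"])
    below = [et for et in towers.values() if et < threshold]
    area = len(below) * d_eta * d_phi
    return sum(below) / area if area > 0.0 else 0.0
```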
E_T^miss trigger efficiencies
Since E_T^miss is a global observable calculated from many contributions, each of which has its own detector resolution, the efficiency of the E_T^miss trigger for any particular analysis inevitably depends on the event selection used in that analysis. The efficiency turn-on curves of the various E_T^miss trigger algorithms are shown in Fig. 41 for W → eν and W → μν selections. The selection is similar to that of the W boson cross-section measurement [39], requiring exactly one lepton (electron or muon) with p_T > 25 GeV, transverse mass m_T > 50 GeV, and a single-lepton trigger (24 GeV single-electron or 20 GeV single-muon). The efficiencies are shown as a function of a modified offline E_T^miss calculation with no muon correction, emulating the calorimeter-only E_T^miss calculation used in the trigger. The event kinematics for the same E_T^miss are very different for the decays into electron and muon, since the energy of the electron for W → eν is included in both the online and offline calculations of E_T^miss, whereas this is not the case for the muon in W → μν.
Events with high p T muons are recorded by the muon triggers.
The turn-on curves are shown for different nominal HLT E_T^miss thresholds, selected such that they give rates close to that of the xe algorithm at its lowest unprescaled (70 GeV) threshold. All the HLT algorithms, with their stated thresholds, are close to fully efficient with respect to the offline E_T^miss for values of E_T^miss > 200 GeV. At that value of E_T^miss, the L1_XE50 trigger itself has an efficiency in the range of 95-99%, depending on the exact event selection required. The topo-cluster-based algorithms, and in particular xe_tc_mht, have higher efficiency in the turn-on region than the cell-based algorithm. In the region where the triggers approach full efficiency, the topo-cluster-based HLT algorithms show good linearity at values close to unity. The L1 and the xe HLT algorithms also show stable linearity in the trigger efficiency plateau, but at a lower value, reflecting their calibration at the EM scale rather than the hadronic scale.
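Turn-on curves such as those in Fig. 41 are simply per-bin pass fractions. The sketch below shows one way to compute such a curve; the event tuples, bin edges and threshold are illustrative, and no background subtraction or statistical uncertainties are included.

```python
def turn_on_curve(events, online_threshold, bin_edges):
    """Trigger efficiency versus offline E_T^miss (GeV): the fraction of
    events in each offline bin whose online E_T^miss exceeds the threshold.
    'events' is an iterable of (offline_met, online_met) pairs."""
    n_bins = len(bin_edges) - 1
    n_pass, n_all = [0] * n_bins, [0] * n_bins
    for offline, online in events:
        for i in range(n_bins):
            if bin_edges[i] <= offline < bin_edges[i + 1]:
                n_all[i] += 1
                if online > online_threshold:
                    n_pass[i] += 1
                break
    return [p / a if a else 0.0 for p, a in zip(n_pass, n_all)]

# Example (hypothetical): efficiency of a 70 GeV online threshold in 20 GeV
# bins up to 300 GeV: turn_on_curve(event_list, 70.0, list(range(0, 320, 20)))
```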
The E_T^miss resolution is defined as the RMS of the x-component of the core of the p_T^miss distribution. Since the resolution is dominated by the stochastic fluctuations in calorimeter energy measurements, it is shown in Fig. 42b as a function of the offline value of ΣE_T (reconstructed offline without muon corrections). The expected approximate scaling of the E_T^miss resolution with √ΣE_T can be observed. The stochastic contribution to the resolution can be seen to be accompanied by an offset that varies from algorithm to algorithm and that is lower in the cell-based, electromagnetically calibrated L1 and xe algorithms. Such differences are expected because different noise suppression schemes are used to define calorimeter cells and topological clusters.

Figure 43 shows the efficiency of the trigger-level E_T^miss algorithm for W → μν events for several ranges of the number of reconstructed vertices. The effect of pile-up on the E_T^miss turn-on curves can be seen in this figure for the topo-cluster algorithm (xe_tc_lcw), which does not employ any pile-up correction methods. Some degradation of efficiency is observed for larger numbers of proton-proton vertices N_vtx. The larger pile-up both increases the trigger rate, through increasing the probability to pass the trigger at lower E_T^miss, and degrades the efficiency in the turn-on region.
b-Jet reconstruction and selection
Several b-hadron properties are exploited to identify (tag) bjets. The b-hadrons have a mean lifetime of ∼1.5 ps and often travel several millimetres before decaying. Consequently, a secondary vertex (SV) displaced from a primary interaction point characterises the decay. Reconstructed tracks associated with this SV have large transverse and longitudinal (z 0 ) impact parameters with respect to the primary vertex. In addition, b-hadrons go through hard fragmentation and have a relatively high mass of about 5 GeV. Thus, in addition to the decay length, b-jets can be distinguished from light-quark jets by having a large invariant mass, a large fraction of jet energy carried by tracks and a large track multiplicity.
As track and vertex reconstruction are crucial for the identification of b-jets, the b-jet trigger relies heavily on the performance of the ID tracking described in Sect. 5.1. Several improvements in the ID tracking made for Run 2 have directly benefited the b-jet trigger. The new IBL improves the impact parameter resolution of reconstructed tracks, leading to better b-jet identification and overall performance of the b-jet triggers [7]. Another improvement for Run 2 is the multiple-stage tracking described in Sect. 5.1.3. This new approach provides improved primary vertex finding and mitigates CPU requirements in the face of increased pile-up.
The basic inputs to b-tagging are reconstructed jets, reconstructed tracks and the position of the primary vertex. The jet reconstruction used in the trigger is described in Sect. 6. During Run 1, the b-jet triggers used a combination of two likelihood-based algorithms, IP3D and SV1 [49]. The IP3D algorithm discriminates between b- and light-jets using the two-dimensional distribution of the longitudinal and transverse impact parameter significances. The SV1 algorithm exploits properties of the secondary vertex such as the invariant mass of tracks matched to the vertex, the fraction of the jet energy associated with the secondary vertex and the number of two-track vertices. These Run 1 algorithms, optimised for Run 2 conditions, were used during 2015 data-taking. Three operating points, loose, medium and tight, are defined to correspond to b-jet identification efficiencies obtained from simulated tt events of 79, 72 and 62%, respectively.
Another major development in the b-jet trigger for Run 2 is the adaptation of the offline b-tagging algorithms [50] for use in the trigger. The use of the offline MV2 multivariate b-tagging algorithm provides better online b-jet identification and leads to a higher level of coherence between the online and offline b-tagging decisions. The MV2 algorithm uses inputs from the IP3D, SV1 and JetFitter algorithms. The JetFitter algorithm exploits the topological structure of weak b-and c-hadron decays inside the jet. The MV2 algorithm used in the trigger was optimised to identify b-jets using a training sample with a background composition of 80% (20%) light-(c-) jets and is referred to as MV2c20. Operating points analogous to loose, medium and tight were defined for MV2c20 and give light-flavour rejections similar to the corresponding operating points of the Run 1 b-tagging algorithm. Triggers utilising the MV2c20 b-tagging algorithm were run in 2015 for commissioning purposes. MV2c20 is the baseline b-tagging algorithm for 2016. Figure 45 shows the expected performance of the MV2c20 and the IP3D+SV1 trigger taggers in Run 2 compared to the actual performance of the IP3D+SV1 tagger that was achieved during Run 1. Figure 46 shows the efficiency of the online b-tagging as a function of jet p T for the three operating points. The efficiencies are calculated in a pure sample of b-jets from fully leptonic tt decays and are computed with respect to jets identified by the 70% working point of the MV2c20 algorithm. Events used in the efficiency calculation require an online jet with p T greater than 40 GeV. A significant gain in trigger efficiency is seen when moving to the MV2 b-tagging algorithms.
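The relationship between a working point, its b-jet efficiency and its light-jet rejection can be illustrated with a short sketch. The score lists and helpers below are hypothetical; they are not the MV2c20 training or the ATLAS tuning procedure, just the bookkeeping used to characterise any such discriminant.

```python
def working_point_performance(b_scores, light_scores, cut):
    """b-jet efficiency and light-jet rejection (1 / mistag rate) for a given
    cut on a b-tagging discriminant where higher scores are more b-like."""
    eff_b = sum(s > cut for s in b_scores) / len(b_scores)
    mistag = sum(s > cut for s in light_scores) / len(light_scores)
    rejection = 1.0 / mistag if mistag > 0.0 else float("inf")
    return eff_b, rejection

def cut_for_efficiency(b_scores, target_eff):
    """Discriminant cut giving approximately the requested b-jet efficiency,
    e.g. target_eff = 0.70 for a 70% working point."""
    ranked = sorted(b_scores, reverse=True)
    index = max(int(round(target_eff * len(ranked))) - 1, 0)
    return ranked[index]
```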
b-Jet trigger menu and rates
Several b-jet triggers have been implemented with different combinations of jets and b-tagged jets, using different p_T thresholds and b-tagging operating points. The operating points, thresholds and multiplicities for several of the primary b-jet triggers are listed in Table 1. The jet multiplicities vary between one and four, with up to two b-tagged jets. The b-jet triggers are typically seeded at L1 using either a single jet with E_T > 100 GeV or three jets with E_T > 25 GeV and pseudorapidity |η| < 2.5. Rates of various b-jet triggers as a function of luminosity are shown in Fig. 47. The benefit of exploiting b-tagging in the HLT can be seen by comparing the thresholds used in jet triggers with and without b-tagging. The threshold for the lowest unprescaled single-jet trigger without b-tagging is 360 GeV. A loose b-tagging requirement in the trigger allows this threshold to be lowered to 225 GeV. For the four-jet trigger, 85 GeV thresholds are used when no b-tagging is applied. Requiring two jets to satisfy the tight b-tagging requirement allows the four-jet threshold to be lowered to 35 GeV.

Fig. 48 Trigger rates for (a) low-p_T dimuon L1 triggers with various muon p_T thresholds and (b) primary HLT B-physics triggers, as a function of instantaneous luminosity. Panel (b) shows triggers requiring two muons to pass various p_T thresholds, to have an invariant mass within the J/ψ mass window, and to form a good vertex (full markers); also shown are triggers requiring two muons with p_T > 6 and 4 GeV and either having an invariant mass in a different window (B^0_(s), ϒ(1, 2, 3S)) or forming a B → μμX candidate after combination with additional tracks found in the ID (open markers). As L1_2MU4 was prescaled at luminosities above 4 × 10^33 cm^-2 s^-1, the rate of 2mu4_bJpsimumu seeded from this L1 trigger drops above that luminosity.
B-physics
The trigger selection of events for B-physics analyses is primarily based on the identification of b-hadrons through decays including a muon pair in the final state. Examples are decays with charmonium, B → J/ψ(→ μμ)X , rare decays B 0 (s) → μμ, and semileptonic B → μμX . Decays of prompt charmonium and bottomonium are also identified through their dimuon decays, and are therefore similar to b-hadron decays, apart from the lack of measurable displacement from the pp interaction point.
B-physics reconstruction and selection
The primary suite of triggers requires two muons at L1. Their rate is substantially reduced compared to single-muon L1 triggers. However, this results in inefficiencies at high transverse momentum, where the opening angle of the two muons becomes small for low-mass resonances and the granularity at L1 is not sufficient to form separate RoIs. At the HLT, muons are reconstructed using the same algorithms as described in Sect. 5.3, with the additional requirement that the two muons have opposite charges and form a good vertex (where the fit is performed using the ID track parameters) within a certain invariant mass window. The primary triggers use three dimuon mass windows: 2.5 to 4.3 GeV, intended for the selection of J/ψ and ψ(2S) decays into muon pairs (including charmonia produced in b-hadron decays), 4.0 to 8.5 GeV for B^0_(s) → μμ decays, and 8 to 12 GeV for ϒ(1, 2, 3S) → μμ decays. These invariant mass selections are indicated by the bJpsimumu, bBmumu and bUpsimumu suffixes in the trigger names, respectively.
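The mass-window logic maps directly onto a small lookup. The sketch below is illustrative; the window boundaries are taken from the text above, and overlapping windows simply return more than one selection.

```python
def bphysics_mass_selections(m_mumu):
    """Return the primary B-physics selections (trigger-name suffixes) whose
    dimuon invariant-mass window (GeV) contains m_mumu."""
    windows = {
        "bJpsimumu": (2.5, 4.3),   # J/psi and psi(2S) -> mumu
        "bBmumu":    (4.0, 8.5),   # B0(s) -> mumu
        "bUpsimumu": (8.0, 12.0),  # Upsilon(1,2,3S) -> mumu
    }
    return [name for name, (lo, hi) in windows.items() if lo <= m_mumu <= hi]

# Example: bphysics_mass_selections(4.2) -> ['bJpsimumu', 'bBmumu']
```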
Additional primary and supporting triggers are also implemented. Triggers using a single L1 muon RoI with an additional track found at the HLT do not have similar opening angle issues, but suffer from high rates and run with high prescale factors. These combined muon triggers are, however, essential components in data-driven estimates of the dimuon trigger efficiencies. Triggers requiring three muons at L1 help to maintain the lowest muon p T thresholds for certain event signatures with a likely presence of a third muon. Finally, for selecting semileptonic decays, such as B 0 → μμK * 0 (→ K + π − ), searches for additional ID tracks and a combined vertex fit are performed assuming a few exclusive decay hypotheses. This reduces the rate with respect to a simple dimuon vertex selection thus allowing the dimuon mass window to be widened to the full kinematically allowed range. The corresponding trigger names use the bBmumuxv2 suffix.
B-physics trigger menu and rates
Dimuon trigger rate restrictions at L1 define the lowest muon transverse momentum thresholds for primary B-physics triggers in 2015 data-taking. HLT triggers using L1_2MU4 were unprescaled up to a luminosity of 4 × 10^33 cm^-2 s^-1. Above this, triggers seeded from L1_MU6_2MU4, which requires two muons with p_T above 4 and 6 GeV, were unprescaled. The overall loss of events collected with the former amounts to 15%. Higher-threshold triggers seeded from L1_2MU6 and L1_2MU10 were also active. Figure 48 shows the L1 rates for low-p_T dimuon triggers as well as the HLT rates for various primary triggers seeded from them, as a function of the instantaneous luminosity.

Fig. 49 Invariant mass distribution of offline-selected dimuon candidates passing the lowest thresholds of dimuon B-physics triggers. Triggers targeting different invariant mass ranges are illustrated with different colours, and the differing thresholds are shown with different shadings. No accounting for overlaps between triggers is made, and the distributions are shown overlaid, not stacked. For comparison, the number of candidates passing the lowest unprescaled single-muon trigger and the supporting dimuon trigger is also shown.
The invariant mass distribution of offline reconstructed dimuon candidates passing the suite of primary triggers is shown in Fig. 49. For comparison, the number of candidates passing the lowest unprescaled single-muon trigger is also shown, as well as the supporting dimuon trigger with wide invariant mass range.
B-physics trigger efficiencies
To evaluate the efficiency of the B-physics selection at the HLT, two supporting triggers, with and without the opposite-sign and vertex criteria, are used. The first trigger requires that the events contain two opposite-sign muons that form a good fit to a common vertex, using the ID track parameters of the identified muons, with χ² < 20 for the one degree of freedom. This selection is the same as used in the primary dimuon triggers but has a wider invariant mass window. The second trigger differs by the absence of the muon charge selection and vertex fit. The efficiency is calculated using a sample collected by these triggers.
Fig. 50 The efficiency of the opposite-sign muon requirement and vertex quality selection applied for dimuon B-physics triggers as a function of p_T(μμ) for three rapidity regions. Supporting dimuon triggers with and without the selection criteria applied are used to determine the efficiency. The integrated luminosity shown takes into account the high prescale factors applied to the supporting triggers.

For the efficiency measurement, events are selected by requiring two offline reconstructed combined muons satisfying the tight quality selection criteria, p_T(μ) > 4 GeV and |η(μ)| < 2.3. The offline muons are fit to a common vertex, using their ID track parameters, with a fit quality of χ²/dof < 10 and invariant mass |m(μμ) − m_J/ψ| < 0.3 GeV. The number of J/ψ candidates is determined from a fit to the offline dimuon invariant mass distribution. The efficiency of the opposite-sign muon requirement and vertex quality selection is shown in Fig. 50 as a function of the offline dimuon transverse momentum p_T(μμ), calculated using the track parameters extracted after the vertex fit, for three slices of J/ψ rapidity. The observed small drop in efficiency at high p_T(μμ) is due to the increasing collinearity of the two muons.
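Conceptually, the efficiency in Fig. 50 is the ratio, in bins of p_T(μμ), of J/ψ yields obtained with and without the opposite-sign and vertex requirement. The sketch below illustrates this ratio using simple candidate counts; in the actual measurement the yields come from fits to the dimuon mass distribution and the supporting-trigger prescales are taken into account.

```python
def selection_efficiency(pt_with, pt_without, bin_edges):
    """Efficiency of the opposite-sign and vertex-quality requirement in bins
    of pT(mumu) (GeV): candidate counts from the supporting trigger with the
    selection applied divided by counts from the trigger without it."""
    def bin_counts(values):
        counts = [0] * (len(bin_edges) - 1)
        for v in values:
            for i in range(len(bin_edges) - 1):
                if bin_edges[i] <= v < bin_edges[i + 1]:
                    counts[i] += 1
                    break
        return counts
    numerator, denominator = bin_counts(pt_with), bin_counts(pt_without)
    return [n / d if d else 0.0 for n, d in zip(numerator, denominator)]
```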
Conclusion
A large number of trigger upgrades and developments for the ATLAS experiment were made during the first long shutdown of the LHC in preparation for the Run 2 data-taking. A summary of the various updates as well as the first Run 2 performance studies can be found in this paper.
Many improvements in the L1 trigger were implemented, including the addition of completely new systems. Upgrades in the L1 calorimeter trigger included the implementation of a dynamic pedestal correction to mitigate pile-up effects. In the L1 muon trigger, a new coincidence logic between the muon end-cap trigger and the innermost muon chamber has been used since 2015, and it is being extended to include a coincidence with the hadronic calorimeter in order to suppress the fake-muon rate. New chambers were also installed to increase the trigger coverage. In addition, the new central trigger processor doubles the number of L1 trigger thresholds, and the L1 output rate limit has been increased from 70 to 100 kHz. Furthermore, a new topological processor was installed and is being commissioned. A new HLT architecture was developed to unify the Level-2 and Event Filter scheme used in Run 1, improving the flexibility of the system. The HLT software was also upgraded, making the algorithms and selections closer to the offline reconstruction to maximise the efficiency, and making use of newly installed systems such as the innermost pixel layer, the IBL.
The trigger menu was revisited and redesigned to cope with the greater rates due to the higher centre-of-mass energy and increasing instantaneous luminosity. The different trigger signatures were set up according to the physics needs, considering different luminosity scenarios. The ATLAS trigger system was successfully commissioned with the first data acquired at 13 TeV. First performance studies of the different trigger signatures and trigger efficiencies with respect to the offline quantities are presented using the 13 TeV proton-proton collision data with a 25 ns bunch separation collected during 2015.
|
v3-fos-license
|
2016-05-04T20:20:58.661Z
|
2016-03-24T00:00:00.000
|
12077999
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0151957&type=printable",
"pdf_hash": "a067e57c7cc4e1a7fb7f7d4a5ddb8c4dedaa17f5",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44380",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "a067e57c7cc4e1a7fb7f7d4a5ddb8c4dedaa17f5",
"year": 2016
}
|
pes2o/s2orc
|
Cupric Oxide (CuO) Oxidation Detects Pyrogenic Carbon in Burnt Organic Matter and Soils
Wildfire greatly impacts the composition and quantity of organic carbon stocks within watersheds. Most methods used to measure the contributions of fire altered organic carbon–i.e. pyrogenic organic carbon (Py-OC) in natural samples are designed to quantify specific fractions such as black carbon or polyaromatic hydrocarbons. In contrast, the CuO oxidation procedure yields a variety of products derived from a variety of precursors, including both unaltered and thermally altered sources. Here, we test whether or not the benzene carboxylic acid and hydroxy benzoic acid (BCA) products obtained by CuO oxidation provide a robust indicator of Py-OC and compare them to non-Py-OC biomarkers of lignin. O and A horizons from microcosms were burned in the laboratory at varying levels of fire severity and subsequently incubated for 6 months. All soils were analyzed for total OC and N and were analyzed by CuO oxidation. All BCAs appeared to be preserved or created to some degree during burning while lignin phenols appeared to be altered or destroyed to varying extents dependent on fire severity. We found two specific CuO oxidation products, o-hydroxybenzoic acid (oBd) and 1,2,4-benzenetricarboxylic acid (BTC2) that responded strongly to burn severity and withstood degradation during post-burning microbial incubations. Interestingly, we found that benzene di- and tricarboxylic acids (BDC and BTC, respectively) were much more reactive than vanillyl phenols during the incubation as a possible result of physical protection of vanillyl phenols in the interior of char particles or CuO oxidation derived BCAs originating from biologically available classes of Py-OC. We found that the ability of these compounds to predict relative Py-OC content in burned samples improved when normalized by their respective BCA class (i.e. benzene monocarboxylic acids (BA) and BTC, respectively) and when BTC was normalized to total lignin yields (BTC:Lig). The major trends in BCAs imparted by burning persisted through a 6 month incubation suggesting that fire severity had first order control on BCA and lignin composition. Using original and published BCA data from soils, sediments, char, and interfering compounds we found that BTC:Lig and BTC2:BTC were able to distinguish Py-OC from compounds such as humic materials, tannins, etc. The BCAs released by the CuO oxidation procedure increase the functionality of this method in order to examine the relative contribution of Py-OC in geochemical samples.
Introduction
Fire can significantly reduce the amount of carbon at the ecosystem level and leave residual organic materials, such as black carbon and polycyclic aromatic hydrocarbons (PAHs; Table 1). Black carbon is a heterogeneous, aromatic, and C-rich residue [1], and PAHs are compounds composed of several fused benzenoid rings [2]. Together, the broad class of compounds produced as a result of incomplete combustion, including black carbon and PAHs, is referred to as pyrogenic organic carbon (Py-OC) in this paper. Pyrogenic organic carbon has been found to make up a large proportion of organic matter in soils and sediments from a variety of environments [3][4][5][6]. Because fire regimes (fire frequency and severity) have changed and will continue to change as a result of climate change and fire suppression [7,8], it is critical to understand the role of wildfire in altering organic matter composition and carbon stocks in order to predict changes in globally relevant Py-OC dynamics.

The contribution of Py-OC to organic carbon (OC) pools has been assessed through various methods that examine specific compound classes (e.g. black carbon only, PAHs only). The CuO oxidation procedure yields products from multiple biochemical precursors, including lignin, cutin, fatty acids, and specific amino acids [9][10][11], allowing inferences to be made on the source of organic matter in soils and sediments. CuO oxidation also yields a suite of benzoic acid products that have been utilized to measure the relative degradation state of organic matter [12][13][14]. In addition, upon CuO oxidation, condensed aromatic structures such as PAHs and/or black carbon yield a series of benzene products with one or more carboxylic acid groups that may be used as tracers for Py-OC [15]. Dickens et al. [15] showed that thermal alteration of pine and alder wood produces highly elevated yields of benzene carboxylic acid products (BCAs). However, unburned samples (such as tannic acid, vascular plant tissues, and algae) also yielded some of the BCA compounds, leading the authors to question whether CuO oxidation products could be used to quantify Py-OC in natural and artificial samples.

In this paper, we revisit this question and show that the yields and compositions of BCA products, especially when compared to the yields and compositions of other biomarkers such as lignin-derived phenols, provide information about the contribution of Py-OC and about the alteration of organic matter in soils. Specifically, we examined Py-OC in soils from microcosms burned under controlled conditions at varying levels of fire severity and subjected to laboratory incubations [16,17]. In combination with published studies [15], we use these data to evaluate the production and consumption of Py-OC during burning and post-fire degradation.
Burning and Incubation Experiments
Details of the laboratory burns can be found in Hatten and Zabowski [16] and Hatten and Zabowski [17], while details regarding the soils that were used in this experiment can be found in Hatten et al. [18] and Zabowski et al. [19] and Fig 1. Briefly, O and A horizons were collected from a ponderosa pine forest in Eastern Washington (47°27'00"N/ 120°37'54"W; 1070 m elevation). These soils were collected from a fire prone environment and likely contain some Py-OC created by past fires, including the last major fires that affected the region in 1929 and 1890 [18]. Char was evident in the A horizon; however, none was apparent through visual examination in the O horizon. In a previous study we found that the unburned O-horizon did contain some Py-OC as measured by the chemo-thermal oxidation method, which is only able to isolate and quantify the most recalcitrant soot fraction of the Py-OC continuum [20,21]. The Py-OC in the unburned O-horizon has probably been introduced into the O-horizon through bioturbation at the O/A interface and atmospheric deposition of Py-OC from nearby fires. After air drying, O horizons were separated into Oi (unaltered pine litter) and Oe (slightly altered organic material or duff). Mineral soil (A horizon) was sieved to 2 mm, homogenized, and stored in airtight containers.
A cylindrical form (36 cm diameter) was used to reassemble one soil microcosm for each fire severity treatment and associated control. Sieved A horizon material was added to a depth of 2 cm and tamped to a bulk density of 0.8 g cm -3 . Air-dried Oe horizon was placed on top of the mineral material to a depth of 4 cm at a bulk density of 0.09 g cm -3 and air-dried Oi horizon was placed on top of the Oe material to a depth of approximately 2 cm at a bulk density of 0.03 g cm -3 .
Flaming combustion was induced with the short (ca. 15 sec) application of a propane torch to the surface of the O horizon. Fire intensity was controlled by adjusting fuel moisture and applying heated air. An O horizon with 18% moisture content was used for the low-severity treatments. To achieve moderate-and high-severity burn levels the fuels needed to be dried (using a convection oven at 40°C) to 9% fuel moisture content. For the high-severity burns a heat gun was positioned 30 cm above the surface of the O horizon and aimed at the center of the circular form. Heated air was supplied until completion of flaming combustion.
The temperature of the burning soil microcosms was monitored with three thermocouples connected to a datalogger that recorded temperature once every minute. The thermocouples were placed on the surface of the O horizon, at the interface of the O horizon and mineral soil (0 cm), and 2 cm into the mineral soil. After burning, the soil columns were disassembled into O and A (0-1 cm) horizons. The A horizon from 1-2 cm depth was not analyzed for this study. To control for additions of organic matter from the O horizon into the A horizon, control columns were also assembled, but not burned, and then disassembled similarly to the burn treatments.
Mass loss was determined by recording the mass of O horizon before and after burning. Since the mass loss from the A horizon could not be directly measured we calculated it from organic matter content. We assumed that all mass loss that occurred from the A horizon during burning was a result of combusted organic matter. Loss on ignition (550°C for 6 hours) was used to measure organic matter in both horizons.
Following the laboratory burns, the O and A horizons were incubated at 24°C for 180 days. The rate of CO_2 production (i.e. the C-mineralization rate) of each incubated soil was analyzed and described in Hatten and Zabowski [22]. Briefly, 100 g of air-dried mineral soil or 10 g of organic soil was placed into a 2-L canning jar. Inoculating solution was produced by shaking 400 g of A horizon soil with 4 L of deionized water for 24 h, and the solution was separated from the residual soil using qualitative filter paper with a vacuum applied to a Buchner funnel. The A horizons were moistened with enough inoculating solution to bring the moisture content to 35% gravimetric moisture content (ca. 85% of field capacity), while the O horizons were moistened with solution to achieve 100% gravimetric moisture content. Approximately 35 and 10 mL of inoculant were added to organic and mineral soils, respectively. The 2-L canning jar was capped with an airtight lid, and gases in the incubation vessel were renewed by removing the lid for at least 15 min and allowing a fan to circulate air over the chambers. Carbon dioxide was adsorbed in NaOH traps, which were collected at times of gas exchange, and estimated gravimetrically after drying. At the time of gas exchange, soil gravimetric moisture content was brought back up to 35 and 100% for A and O horizons, respectively, using deionized water. For the first 4 weeks, gases were exchanged and the soil moisture adjusted weekly. The interval between gas exchanges was then lengthened to 2 weeks. Mass losses were not measured during incubations, so we used an alternative approach [23] to estimate the loss/production of individual constituents (see below).
Elemental and Py-OC Analyses
The contents of OC and nitrogen (N) in all samples were measured via high-temperature combustion after removal of inorganic carbonates by vapor-phase acidification [24][25][26]. Briefly, subsamples of ground and homogenized soil were placed in 8 x 5 mm silver capsules, placed in a desiccator, and exposed to concentrated HCl fumes for 24-36 h to remove any inorganic carbon [24]. After removing excess acid by oven drying for 24 h, the OC and N contents of the soils were determined by high-temperature combustion using a Thermoquest NC-2500.
Ground and homogenized O and A horizons were analyzed by alkaline CuO oxidation [27,28] to obtain the yields of a variety of products derived from burned and unburned organic matter. Briefly, sub-samples containing 2-5 mg OC were extracted with 50 mg of ammonium iron(II) sulfate hexahydrate. Alkaline CuO oxidations were carried out under an N_2 atmosphere with oxygen-purged 2 N NaOH at 150°C for 90 minutes using a microwave digestion system. Ethylvanillin and trans-cinnamic acid were added as recovery standards after digestion, and the solution was acidified to pH 1 with concentrated HCl. Samples were then liquid-liquid extracted with ethyl acetate. Anhydrous sodium sulfate was added to the ethyl acetate phase to remove water, and the extracts were evaporated to dryness under a stream of N_2 in a hot water bath heated to 40°C. The CuO reaction products were redissolved in pyridine and derivatized with bis-trimethylsilyl trifluoroacetamide + 10% trimethylchlorosilane. The yields of individual lignin oxidation products and benzene carboxylic acids (BCAs) were quantified by gas chromatography-mass spectrometry. The compounds were separated chromatographically on a 30 m x 250 μm DB-1 capillary gas chromatography column (0.25 μm film thickness), using an initial temperature of 100°C, a temperature ramp of 4°C min^-1 and a final temperature of 300°C. The mass spectrometer was run in electron impact mode, monitoring positive ions over a range of 50-650 amu. External calibration standards were determined for all compounds using ions specific to each chemical structure. The calibrations, which were performed on a regular basis to test the response of the gas chromatograph-mass spectrometer, were fit to either a linear or polynomial function (r^2 > 0.99) over the concentration ranges measured in the samples.
In general, we quantified two broad classes of reaction products (lignin phenols and benzene carboxylic acids, or BCAs) that we hypothesize to be unaltered and altered by burning, respectively. Products of unaltered, hereafter called unburned, organic matter included those derived from lignin (Lig = VP + SP + CP), where VP = vanillyl phenols (vanillin (Vl) + acetovanillone + vanillic acid (Vd)), SP = syringyl phenols (syringaldehyde + acetosyringone + syringic acid), and CP = cinnamyl phenols (p-coumaric acid + ferulic acid). Products attributed to thermally altered, hereafter called burned, organic matter were the BCAs, grouped into benzene monocarboxylic acids (BA), benzene dicarboxylic acids (BDC) and benzene tricarboxylic acids (BTC). The reproducibility of the analysis was determined on selected duplicate samples. We found that individual compounds within the Lig class could be reproduced to within 0.4-1.4% (mean = 0.9%) of the soil-mass-normalized values.
Total CuO oxidation product yield (COP) was calculated as the sum of all quantified CuO oxidation products (VP, SP, CP, BA, BDC, and BTC). All CuO oxidation products were quantified using external calibration standards. The calibrations were performed on a regular basis and were highly linear (R^2 > 0.99). The soil-normalized values of individual biomarkers are listed in S1 Table.

Data Analyses

Trends in constituents caused by burning and incubation were assessed using Pearson's correlation (R) before and after incubation. We used the maximum temperature achieved during the burning treatment as the indicator of fire severity. We also used this approach when assessing data from Dickens et al. [15]. Care must be taken when exploring trends in data sets with small sample sizes. We considered a trend significant when the p-value was less than 0.05. This level of significance corresponds to |R| > 0.950 when n = 4, i.e. a highly linear correlation, with a 5% probability of random occurrence (type I error). Furthermore, the relationship between burn temperature and a constituent may have been non-linear; therefore, by assuming a linear relationship, we are taking a conservative approach when determining the trend between burn temperature and a constituent.
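As a concrete illustration of this significance criterion, the short Python sketch below computes the Pearson correlation and its p-value with SciPy; the input arrays are placeholders for the maximum burn temperatures and constituent yields.

```python
from scipy import stats

def burn_trend(max_temps, yields, alpha=0.05):
    """Pearson correlation between maximum burn temperature and a constituent
    yield; the trend is called significant when p < alpha (with n = 4 this
    corresponds to |R| > 0.950)."""
    r, p = stats.pearsonr(max_temps, yields)
    return r, p, p < alpha

# Example with hypothetical values for four microcosms (control plus three
# burn severities): burn_trend([25, 100, 180, 258], [4.1, 3.2, 2.0, 1.1])
```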
Mass-normalized compound yields under-represent losses and over-represent the production of constituents since they do not take into account mass loss occurring during the experimental treatment [29]. Thus, in our study, mass-normalized yields alone do not provide an accurate representation of the quantitative loss, preservation or creation of individual compounds due to the burning and incubation processes. To address this issue, for each constituent we calculated the proportion of initial material remaining in the sample using the formula %IY = [(Y_t/Y_c) × (1 − %ML/100)] × 100, where %IY is the percent of initial yield after treatment, Y_t and Y_c are the constituent yields of treated and control (or pre-incubation) soils, respectively, and %ML represents the percent mass loss incurred in the sample. Mass loss was measured for burned O horizons and assumed to be negligible for burned A horizons. According to this formulation of %IY, constituents with %IY < 100% were lost from the sample (due to combustion, mineralization, or alteration) during burning or incubation, whereas those with %IY > 100% were produced. Components with a %IY of 100% behaved conservatively, reflecting no changes in overall yields during burning.
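A minimal implementation of %IY following this definition is given below; the variable names are illustrative.

```python
def percent_initial_yield(y_treated, y_control, pct_mass_loss):
    """%IY = (Y_t / Y_c) * (1 - %ML / 100) * 100: the share of the initial
    (control) amount of a constituent remaining after treatment, corrected
    for the bulk mass lost from the sample."""
    return (y_treated / y_control) * (1.0 - pct_mass_loss / 100.0) * 100.0

# Example: a constituent whose mass-normalized yield equals the control
# (ratio 1.0) in an O horizon that lost 50% of its mass has %IY = 50%.
# percent_initial_yield(1.0, 1.0, 50.0) -> 50.0
```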
Mass loss was not recorded during the incubation, and therefore %IY cannot be used to assess the relative response of the constituents to the incubation. Instead, we used COP to normalize for differences in the total yield of compounds. While it is thought that some components of Py-OC are less reactive than lignin phenols [30], some of the products we are examining (i.e. BA and BDC) have been hypothesized to be products of organic matter degradation [12,14]. After incubation, the relative yield (%RY) of each constituent from each sample was calculated using the formula %RY = [(Y_i/COP_i)/(Y_b/COP_b)] × 100, where %RY is the percent of relative yield after the incubation, Y_b and Y_i are the constituent yields of the post-burn (pre-incubation) and incubated soils, respectively, and COP_b and COP_i are the corresponding total CuO oxidation product (lignin plus BCA) yields. According to this calculation, constituents with %RY > 100% are either produced or display lower reactivity than the average COP during incubations, whereas those with %RY < 100% are more reactive and consumed preferentially during incubations.
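The corresponding calculation of %RY, following the definition above, is equally short; the inputs are the constituent and total COP yields before and after incubation.

```python
def percent_relative_yield(y_incubated, cop_incubated, y_preincubation, cop_preincubation):
    """%RY = [(Y_i / COP_i) / (Y_b / COP_b)] * 100: the COP-normalized yield
    of a constituent after incubation relative to its pre-incubation value.
    %RY > 100 indicates production or below-average reactivity; %RY < 100
    indicates preferential consumption."""
    return ((y_incubated / cop_incubated) /
            (y_preincubation / cop_preincubation)) * 100.0
```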
To determine whether a constituent was more or less resistant to burning or incubation than bulk OC, we used a paired-sample t-test between the constituent's %IY and %RY values and those calculated for bulk OC in both O and A horizons.
Compositional Changes as a Result of Fire Severity
The three burn experiments revealed significant alterations to OC content and composition that were correlated with increasing maximum temperature at the surface of the O and A horizons ( Table 2). The low-severity burn treatments charred the Oi horizon (i.e. unaltered litter), leaving the underlying Oe horizon (i.e. duff or partially decomposed litter) visually unaltered. As expected, the low severity fire did not heat the mineral soil as drastically as the moderate and high severity burn treatments. The moderate and high severity burn treatments charred nearly the entire O horizon, while the high severity burns consumed most of the O horizon. These high rates of consumption led to higher maximum temperatures at the surface and in the mineral soil for the moderate and high severity fires. Mass loss of the O horizon (31.7-80.2%) and maximum temperature of the O (100-258°C) and A horizons (100-234°C) were significantly correlated (r = 0.987 and 0.945 with p = 0.001 and 0.015, respectively). This led to decreases of OC contents in both O and A horizons.
While there was not a significant trend in the N concentration of the post-burn residue, there was a significant negative trend in the OC:N of the remaining organic matter in both the O and A horizons with increasing fire severity ( Table 2). The slight negative trend in OC concentration led to this relationship. Carbon has been found to be less thermally stable than N which leads to an enrichment of N in post-fire residues [31][32][33][34][35][36].
In the O horizon the yields of all lignin phenols had a significant negative relationship with burn temperature, consistent with many studies that found a decrease in lignin with increasing burn temperature [37][38][39]. Only the cinnamyl phenols (CP) had a statistically significant relationship with maximum temperature of the A horizon. This suggests that CP may be slightly less resistant to burning than the other lignin phenols, a trend that has been observed by others [39].
The yields of both BDC and BTC were positively correlated with fire severity in O and A horizons, although the trend was not statistically significant. On the other hand, yields of BA from the O horizon decreased with burn severity, due to a high degree of consumption that may have occurred by burning at high temperatures. Indeed, the O horizons treated to high severity burning all showed lower soil-normalized BA, BDC, and BTC. Yields of all BCAs from the A horizon were positively related with burn severity, but only BTC had a statistically significant relationship.
The ratios of vanillic acid to vanillin (Vd:Vl) and 3,5-dihydroxybenzoic acid to vanillyl phenols (3,5-Bd:VP) are commonly used indicators of microbial alteration of lignin moieties [12,14,15] and have been found to also respond to leaching/sorption [40], microbial decay [29], and burning plant components [39,41]. We did not find a significant relationship with burn severity suggesting that the complex mixtures of organic matter sources in our samples (i.e. field collected soil horizons) may have contributed to obscure possible effects of burning on these indicators.
Lower mass-normalized yields of lignin phenols in the burned samples relative to the control were contrasted by elevated BCA yields in burned O and A horizons (Fig 2). The trend was driven by a consistent decrease in lignin phenols with increased burning severity. In fact, lignin phenols represented the majority (>56%) of all CuO oxidation products in all samples except the O horizon treated at high severity (24%). The relative proportion of BCAs increased from 9% to 74% in the O horizon and from 21% to 40% in the A horizon as a result of burning at the highest severity. These results suggest that lignin is consumed while BCAs are preferentially preserved or produced by burning.
To correct for mass losses and compare the compositions of burned samples to those of unburned controls, we calculated the %IY of all measured organic constituents, including OC, N, lignin and non-lignin CuO products such as the BCAs (Fig 3). The %IY of OC and of the major classes of lignin phenols and BCAs from the O horizon were near or less than 100%, suggesting that these compounds were consumed by burning (Fig 3A). While the %IY of many of the lignin biomarker classes significantly decreased with increasing fire severity (Pearson's correlation for VP, SP, CP, and Lig was R < -0.95 and p < 0.05), none of the BCA classes showed a significant linear relationship with burn temperature. Many of the BCAs showed non-linear behavior that was difficult to assess with such a small sample size and would also appear as a statistically non-significant relationship in our analysis. Organic carbon, N, and individual BCAs displayed higher %IY at all three fire-severity levels than the lignin phenols, consistent with a higher resistance to fire. Three individual compounds released from the O horizon were exceptions to the general decrease in %IY with burn severity. The %IY of oBd was 110 to 288% for all three levels of burn severity, indicating that this constituent was formed during the burning process (mass-normalized data for individual compounds are shown in S1 Table). Similarly, mBDC and BTC2 also appear to have been formed during the low- and moderate-severity burn treatments of the O horizon (%IY 149-188%). The high-severity burning of the O horizon (highest temperature) may have resulted in an overall decrease of these constituents, perhaps as a result of further alteration of these Py-OC products to moieties not detected by our procedure (e.g., more highly condensed ring structures).

The response of constituents recovered from the A horizon to burning revealed that %IY values for OC and the lignin phenol classes decreased as a function of burn severity, with the lowest values (%IY < 70%) consistently observed in the high-severity treatments. Notably, the %IY values of the low-severity treatment were slightly greater than 100%, which may have resulted from the incorporation of a small amount of organic matter from the unburned O horizon [16]. It is also possible that low-temperature alteration promoted enhanced yields of lignin monomers from the CuO oxidation of their macromolecular precursor. Based on these results, it appears that bulk OC and most lignin phenols behaved conservatively under low-severity conditions but were lost (especially the lignin compounds) during medium- and high-severity treatments. In contrast, most BCA products displayed %IY values > 100% after medium- and high-severity burns, consistent with their net production during the pyrogenic process. Note that, in contrast to OC, the %IY values for N remain ~100% for all three severity treatments, indicating that bulk N in the mineral soil horizons behaves conservatively through the burning process.
In both the O and A horizons, several BCAs (Bd, mBd, oBd, mBDC, BTC2, and BTC3) and classes (BA and BTC) had significantly higher %IY relative to OC. This suggests that the precursors of these compounds were preserved or created relative to bulk OC. These results are in contrast to the significantly lower %IY for the lignin phenol classes CP and Lig recovered from both horizons. Interestingly, the %IY values for N were significantly higher than those for bulk OC, possibly as a result of N having higher volatilization temperatures [32]. Benzoic acid (Bd) also showed a significantly stronger response to burning than bulk OC; however, since Bd has many different sources in the environment [11], it may not be a robust indicator of Py-OC. The %IY of mBd, oBd, mBDC, BTC2, BTC3, and the entire class of BTCs was higher than that of bulk OC in both O and A horizons, suggesting that these compounds may be robust indicators of Py-OC derived from either organic or mineral substrates.
Compositional Changes as a Result of Burning and Decomposition
The yields of many of the CuO oxidation products (lignin and BCAs) recovered from the incubated samples were still correlated with the maximum temperature of the burn treatments (Table 3). This suggests that even after incubation, burn severity is controlling the composition of OC in both the O and A horizons.
All of the CuO oxidation products recovered from the O horizon after burning that exhibited significant relationships between yield and maximum burn temperature continued to have significant relationships after incubation. Indeed, VP, SP, CP, and Lig had negative relationships (r<-0.975, p<0.05) with burn temperature whereas BDC had a positive relationship with burn temperature. Many of the CuO oxidation products recovered from the A horizon did not have significant relationships with burn temperature after burning or burning and incubation; however the strength of the relationship increased after incubation, probably as a result of less variability around the linear relationship between the CuO oxidation product yields and maximum burn temperature. CuO oxidation products with significant positive relationships with burn temperature included BDC and BTC (r>0.981, p<0.05).
While Vd:Vl and 3,5-Bd:VP are hypothesized to increase with OC degradation [12,14,15], we did not observe a significant relationship between these ratios and the CO_2-C loss [16] from either the O or the A horizons as a result of incubation (data not shown; r > -0.730, p > 0.206). It should be noted that, even though they were not statistically significant, all these relationships were negative, suggesting that the response to burning was stronger than the response to degradation over a six-month incubation (Table 3).
Lower contributions of mass-normalized lignin phenols in the burned samples relative to the control were contrasted by elevated BCAs in burned O and A horizons after incubation (Fig 4). While the differences were much less pronounced post-incubation, the post-burn pattern was still apparent. The relative proportion of BCAs was 9% in the incubated control compared with 37% in the incubated O horizon burned at high severity, and 16% compared with 36% in the corresponding A horizons. The persistence of the post-burn pattern indicates that the effects of burning were preserved through the incubation. These results suggest that these biomarkers may be robust indicators of Py-OC.
To assess the response of constituents to the incubation, we calculated the yield of each relative to the response of the total COP yield (%RY; Fig 5). These data were normalized by the pre-incubation samples for each treatment so that we could examine how each constituent responded to the incubation. The %RY of OC decreased with incubation of the O and A horizons as a result of mineralization. Aside from an outlying VP measurement in the high-severity treatment, the COPs of both O and A horizons behaved similarly during the incubation according to the %RY. Generally, VP, SP, and CP increased similarly across horizons and treatments (average %RY = 114% across O and A horizons). Several BA compounds (mBd, 3,5-Bd, and oBd) and the BA class increased relative to bulk OC, suggesting that these compounds may be products of organic matter decomposition and not robust indicators of Py-OC. These results are consistent with reports that mBd, oBd, and 3,5-Bd are actively produced during the degradation of soil organic matter [11,12,14,15]. Because of these trends, constituents such as mBd, 3,5-Bd, oBd, and the BA class will need to be used with caution when tracing Py-OC in the environment. Several of the BCAs with two or three carboxylic acid groups (oBDC, BTC1, BTC2, BTC3) and the BTC class decreased relative to OC during the incubation, suggesting that these compounds are not decomposition products and are likely degraded in the environment.
OC Normalized CuO Oxidation Product Composition
Organic matter associated with soils and sediments is a mixture of organic- and mineral-dominated sources; therefore, it is important that any measure or indicator of Py-OC be independent of the source. In addition, these indicators should not be strongly altered by processes such as degradation. To examine the effects independent of matrix (i.e. organic versus mineral) and incubation, and to bring in additional data from other studies, we normalized all the CuO oxidation products by the OC content (Table 4).
Potential Py-OC Indicating Ratios
To more clearly isolate a Py-OC signal, we explored ratios of individual constituents and classes of constituents (Table 4). Compound ratios are relatively insensitive to processes such as mass losses or additions and are analytically more robust than absolute yield determinations [42]. We focused on those constituents that had the strongest response to burning as measured by %IY (oBd, mBDC, and BTC2). Individual biomarkers within each class typically behaved similarly during burning and incubation. Therefore, we normalized the individual compounds by their class (i.e. oBd by BA, mBDC by BDC, and BTC2 by BTC). We also explored the use of a ratio that describes the contribution of Py-OC relative to unburned organic matter by selecting the most robust class of BCAs and normalizing it by total lignin content (i.e. BTC:Lig). Most of our selected ratios were significantly and positively related to burn temperature; mBDC:BDC was positively related to burn temperature but did not have a statistically significant linear relationship with it. oBd:BA, BTC2:BTC, and BTC:Lig had significant relationships with burn temperature across incubated and non-incubated O and A horizons, suggesting that these three ratios may be robust indicators of the charring temperature at which the Py-OC was created.
In general, we found significant negative Pearson correlations between maximum burn temperature and all of the OC-normalized lignin classes, and significant positive correlations with all of the OC-normalized BCA classes except BA. These results suggest a significant effect of burning across all sample types that is independent of matrix or incubation.
Discussion
We examined lignin and Py-OC CuO oxidation products after burning soil microcosms (O and A horizons) at three levels of fire severity and subjecting those soils to a six-month incubation. In general, as fire severity increased, lignin phenols decreased and BCAs increased. Overall, OC and the major classes of lignin phenols and BCAs were consumed by burning. However, relative to bulk OC, the precursors to BCAs were either preserved or created during burning. We found that these trends were preserved after incubation with natural microbial inocula.
Outside Data Sources
Complete di- and tri-carboxylic benzoic acids are not often reported as products of CuO oxidation; as such, we could find only one other published study that reported lignin products and BCAs in relation to burning or heating temperatures [15]. We incorporated the OC-normalized lignin and BCA data reported by Dickens et al. [15] for charred pine wood into those portions of our study to demonstrate the applicability of this method across several sample types (Figs 6 & 7). Dickens et al. (2007) did not report all of the same constituents we have (e.g., oBd was omitted from their report), so we focused on the constituents common to the two studies. Dickens et al. (2007) report on red pine (Pinus resinosa) wood heated to six different temperatures and red alder (Alnus rubra) wood heated to two different temperatures. Including the wood samples from that study did not change our initial interpretation of the overall trends. We found that OC-normalized VP decreased with burn temperatures above 100°C while BDC and BTC both increased, with both BDC and BTC reaching peak yields around 280°C. It is also apparent that the source of material (O versus A horizon), and therefore any matrix effect, did not affect the OC-normalized yields when the samples were burned or incubated.
As noted above, oBd:BA, BTC2:BTC, and BTC:Lig had significant positive relationships with burn temperature across incubated and non-incubated O and A horizons, suggesting that these three ratios may be robust indicators of the charring temperature at which the Py-OC was created, whereas mBDC:BDC increased with burn temperature without a statistically significant linear relationship. We combined our data with burned wood from Dickens et al. (2007) and found that these ratios provide robust indicators of Py-OC across a range of sample types (Fig 7). mBDC:BDC appears to reach a peak around 190°C, while BTC2:BTC appears to peak around 300°C, suggesting that BTC2:BTC may be a more robust indicator of Py-OC and that these two ratios may be used in tandem to examine relative differences in the burning conditions that created the Py-OC in natural samples.
Interfering Compounds
Benzene carboxylic acids have been shown to have sources other than Py-OC [14,15]. Dickens et al. [15] made an extensive evaluation of whether Py-OC could be quantified using CuO oxidation-derived BCAs, measuring chars produced in the lab as well as chars collected from soil profiles and the boles of trees. These have been plotted with our non-incubated and incubated moderate- and high-severity O and A horizon samples as ">200°C" in Fig 8. They also assessed unburned pine and alder wood, which have been plotted with our non-incubated and incubated control and low-severity O and A horizons as "<150°C". When these burned and unburned materials are plotted together, there appears to be a distinction between burned and unburned organic matter based on BTC:Lig (<0.04 and >0.05 for the unburned and burned material, respectively). However, Dickens et al. [15] found that glucose and tannic acid both produced lignin phenols and BTCs and therefore had BTC:Lig ratios that would be classified as Py-OC if BTC:Lig were the only indicator. Additionally, Dickens et al. [15] examined other interfering compounds, including humified materials (as melanoidin), specific organic compounds (protein), organic matter (mangrove, brown and white rot, brown algae, and decomposed oak wood), and aromatic materials created as a result of geologic processes (graphite and bituminous coal) and fossil fuel combustion (n-hexane soot). With the use of BTC2:BTC, interfering sources can be distinguished from Py-OC. Burned material and several chars of various ages can be clearly differentiated from interfering substances using a threshold of BTC2:BTC > 0.35, as indicated by the values obtained from the char samples analyzed by Dickens et al. [15].
Environmental samples of soils and sediments are mixtures of materials that may have experienced burning and materials that have not. Therefore, the Py-OC signature could be influenced by contributions of interfering compounds such as tannins. We can calculate the quantity of tannic acid required to raise a sample's Py-OC signature above our threshold with a two-end-member mixing model based on the average BTC2:BTC and BTC:Lig composition of unburned organic matter and tannic acid. Using this approach, we found that a sample would need to be composed of >70% tannic acid in order to develop a signal that would fall within the bounds of our Py-OC end members. Leaves can have tannin contents up to 20% [43], while wood typically has tannin contents of less than 1% by weight [44]. Tannins have been shown to be roughly as degradable as the bulk organic matter, so this material is not likely to increase in relative concentration through degradation [43]. This suggests that, whether fresh or degraded, tannins are not likely to significantly influence the Py-OC signature of typical soil and sediment samples as measured by the CuO oxidation procedure.
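The sketch below illustrates one way such a two-end-member mixing calculation can be set up in code. The end-member BTC and lignin yields and the threshold are hypothetical placeholders, so the crossing fraction it prints will differ from the >70% figure reported above.

```python
# Two-end-member mixing: what fraction of tannic acid (on an OC basis) would be
# needed for an unburned sample to cross a Py-OC threshold in BTC:Lig?
# End-member yields per unit OC below are hypothetical placeholders.
unburned = {"BTC": 0.02, "Lig": 1.00}   # unburned organic matter end member
tannic   = {"BTC": 0.08, "Lig": 0.60}   # tannic acid end member

threshold = 0.05  # BTC:Lig value above which material would be read as Py-OC

def mixed_ratio(f_tannic: float) -> float:
    """BTC:Lig of a mixture containing the fraction f_tannic of tannic acid."""
    btc = f_tannic * tannic["BTC"] + (1 - f_tannic) * unburned["BTC"]
    lig = f_tannic * tannic["Lig"] + (1 - f_tannic) * unburned["Lig"]
    return btc / lig

# Scan mixing fractions to find where the threshold is first crossed.
for f in (i / 100 for i in range(101)):
    if mixed_ratio(f) >= threshold:
        print(f"Threshold crossed at ~{f:.0%} tannic acid")
        break
else:
    print("Threshold never crossed")
```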
Our results suggest that the BTC:Lig parameter can be used to assess the overall Py-OC content, while the BTC2:BTC ratio can be used to distinguish burned materials from other potentially interfering compounds. The ratio BTC:Lig may therefore correlate with some of the more widely used measures of black carbon in soils. Dickens et al. [15] assessed the total CuO oxidation yield of BCAs from soils against black carbon determined by the ultraviolet-nuclear magnetic resonance (UV-NMR) [45] and benzene polycarboxylic acid (BPCA) [46] methods and found that they were not well correlated and that the relationships were driven by outlier samples (Fig 9). Using the data reported by Dickens et al. [15], we found that BTC:Lig was significantly correlated with black carbon determined by both the UV-NMR (r = 0.912, p = 0.012, n = 5) and BPCA (r = 0.883, p = 0.022, n = 5) methods. Furthermore, the OC-normalized BTC was significantly correlated with BPCA (r = 0.933, p = 0.007, n = 5), but not with the NMR method (r = 0.737, p = 0.118, n = 5). We also found that the BTC2:BTC of all soils was above the threshold used to distinguish burned materials from other potentially interfering compounds, suggesting that the BTC:Lig ratio could be used to quantify the relative amounts of Py-OC in these soils. This supports the assertion that BCA COPs and BCAs recovered by the BPCA method have similar sources and that all of these methods (COP BCA, BPCA, and NMR) can be used to quantify relative amounts of black carbon in some soils and sediments [46,47].
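Correlations of this kind can be reproduced with scipy, as in the hedged sketch below; the five paired values are hypothetical stand-ins, not the published data.

```python
from scipy.stats import pearsonr

# Hypothetical paired measurements for five soils (placeholders, not the
# values reported by Dickens et al. [15]).
btc_lig  = [0.02, 0.05, 0.08, 0.11, 0.15]   # CuO-derived BTC:Lig
bc_bpca  = [0.4, 1.1, 1.9, 2.6, 3.8]        # black carbon by BPCA (arbitrary units)
bc_uvnmr = [0.5, 1.0, 2.1, 2.4, 3.9]        # black carbon by UV-NMR (arbitrary units)

for name, bc in [("BPCA", bc_bpca), ("UV-NMR", bc_uvnmr)]:
    r, p = pearsonr(btc_lig, bc)
    print(f"BTC:Lig vs {name}: r = {r:.3f}, p = {p:.3f}, n = {len(bc)}")
```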
Lability of Py-OC and Lignin
We examined the lignin and Py-OC signatures before and after a six-month incubation to examine the relative stability of these biomarkers. Interestingly, the %RY of the BDCs and BTCs decreased relative to lignin phenols as a result of the incubation. Since vascular plants are the source of lignin phenols and these were closed incubations, this result reflects either (1) the preservation of lignin relative to BDC and BTC or (2) higher rates of mineralization or transformation among these BCA classes relative to lignin. Since we did not collect reliable mass loss data for the incubations, it is difficult to determine which of these explanations is responsible for the observed trends. Either way, the lignin that remained after burning was more resistant to degradation than BDC and BTC, which is counter to the widely held understanding of the relative reactivity of lignin and black carbon [30]. There may be a couple of reasons for this result. First, residual lignin may be concentrated at the center of char particles and thereby physically protected from microbial degradation. Second, as previously mentioned, the CuO oxidation procedure may not be able to release BDCs and BTCs from the more recalcitrant chars (e.g., soot and turbostratic char). Therefore, the BDCs and BTCs in this study may have been derived from the more weakly bound amorphous chars or from freely available [48] BCAs in the environment that may be more biologically available for mineralization or transformation. However, the major finding of our incubation was that the composition of BTCs and the relative contribution of BTC to lignin were robust indicators of Py-OC in samples that had been recently burned and had experienced moderate diagenesis.
Limitations of CuO Oxidation for the Assessment of Py-OC
Like the benzene polycarboxylic acid (BPCA) method, CuO oxidation also releases BTCs as products of oxidative hydrolysis of charred organic matter [46,47,49]. While the proportion of BPCAs made up by the BTC class has been shown to decrease with burn temperature, the total amount produced increases up to a burn temperature of about 600°C [49]. With increasing temperature there is an increase in crystalline structure, which probably translates to differences in the relative reactivity of the char [50] and in the ability of char-derived products to be recovered by the CuO oxidation procedure. The BTCs recovered by the CuO oxidation procedure may originate as BTCs bound in the amorphous regions of char or as freely available BTCs. BTC yields from charred pine wood have been shown to peak at 300°C but decrease when the temperature reaches 350°C [15]. Since the temperatures in our study did not exceed 257°C, the char was likely dominated by amorphous forms of black carbon. Further research is necessary to fully determine the portion of the char macromolecule being oxidatively hydrolyzed by the CuO oxidation procedure and whether this procedure is suitable for tracing higher-order, more crystalline forms of char such as turbostratic chars or soot. However, amorphous chars appear to dominate the black carbon continuum up to about 500°C, which is well over the maximum temperature experienced in this study and considerably higher than soil temperatures measured in many wild or prescribed fires, the main exceptions being shrubland or chaparral fires and fires burning in concentrated accumulations of fuel such as logs or slash piles [32]. Therefore, BTCs recovered using CuO oxidation appear able to trace Py-OC generated under most natural and managed fires.
Summary and Implications
Our exploration of the response of individual compounds and classes of CuO oxidation-derived lignin phenols and BCAs found that lignin is consumed by burning while BCAs are preferentially preserved or produced by burning. We found that BDCs and BTCs were much more reactive to microbial degradation after burning than their lignin counterparts. However, the trends in BCAs imparted by burning survived a six-month incubation, suggesting that fire severity had first-order control on BCA and lignin composition. These results also suggest that some of these compounds, in particular the BTCs, are robust indicators of Py-OC. We tested these ratios against samples that were specifically designed to produce a Py-OC signal without burning or charring (e.g., tannic acid) or to reduce the Py-OC signal (e.g., incubated samples from this study). When combined with other CuO oxidation products, such as lignin, we found that key indicators could be used to estimate the relative contribution of Py-OC in soils.
This method may be another tool that biogeochemists can use to examine the relative contributions of Py-OC to soils and sediments. Furthermore, because of the large amount of information that can be gathered from the CuO oxidation procedure, it may also be a useful tool that paleoecologists can use to assess the fire frequency and fire severity regimes of past ecosystems, climate-vegetation-wildfire interactions, and their subsequent interaction with carbon and sediment erosion from watersheds. Robust determinations of past fire regimes and their interactions with biogeochemical cycles may be developed when these markers and other CuO oxidation products are combined with other measures, such as charcoal and pollen records.
|
v3-fos-license
|
2021-08-08T05:23:41.189Z
|
2021-07-26T00:00:00.000
|
236944698
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1660-4601/18/15/7900/pdf",
"pdf_hash": "57f116335a4e324e2ac43c0fda2c41c80828da38",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44383",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Business",
"Environmental Science"
],
"sha1": "57f116335a4e324e2ac43c0fda2c41c80828da38",
"year": 2021
}
|
pes2o/s2orc
|
How Can Agricultural Corporate Build Sustainable Competitive Advantage through Green Intellectual Capital? A New Environmental Management Approach to Green Agriculture
Based on natural resource-based theory, this study constructed a relational model between green intellectual capital, green innovation, and agricultural corporate sustainable competitive advantage. The sample included a total of 341 agricultural companies in China, and multiple regression methods were used for the analysis. The results showed that green product innovation and green process innovation had a mediation effect between green human capital, green structural capital, green relational capital, and the sustainable competitive advantage of agricultural corporates. Beyond the simple moderation effect, a new integrated moderated-mediation effect model was established. It was shown that environmental leadership, green organizational identification, and green dynamic capability had different moderated-mediation effects under different conditions. The study is expected to close previous research gaps and insufficiencies in agricultural corporate environmental management and green agriculture. The empirical results and conclusions offer meaningful theoretical guidance to managers, researchers, practitioners, and policy makers in the green and sustainable development of agricultural corporates. The new environmental management path can help agricultural corporates conduct green innovation effectively, adapt to the green agricultural products market, and achieve sustainable competitive advantage. Ultimately, this will help to accelerate the development of green agriculture.
Introduction
Global warming, desertification, haze, and other environmental problems have become the focus of all industries in the process of global economic development [1]. Against the background of excessive resource consumption, serious environmental pollution, and increasing demand for green agricultural products, it is inevitable and necessary for agricultural development to take a green and sustainable path. It is urgent to ease the tension between agricultural economic development and environmental carrying capacity. "Our Common Future", published by the World Commission on Environment and Development (WCED) in 1987, put forward the concept of "sustainable development", pointing out that the earth's existing resources and energy are far from meeting the needs of human development and that environmental protection has a positive and far-reaching impact on sustainable development [2].
As the most important microeconomic subjects in the green development of agriculture, agricultural enterprises have certain particularities compared with enterprises in other industries. However, few scholars have explored the issues of green intellectual capital, green innovation, and the sustainable development of agricultural enterprises from the perspective of environmental management, and these issues await further scholarly investigation [9,17]. In addition, existing studies still do not solve the problem of how agricultural enterprises can transform "green" capital into a source of sustainable competitive advantage, nor do they explain in detail how the various types of green intellectual capital influence green product innovation and green process innovation to give agricultural enterprises a sustainable competitive advantage. Previous studies did not examine green intellectual capital in its different dimensions, nor did they explore the different effects of these dimensions on green innovation.
Although "leadership", "organizational identification", and "dynamic capability" have been well studied in academia and widely applied in management practice, few studies focus on their combination with natural environmental factors, and their application in the field of enterprise environmental management has been overlooked. Agricultural enterprises' environmental behaviors require the guidance and encouragement of leaders, the recognition and support of organization members, and strong adaptability and a quick response to the green environment [22][23][24]. Existing studies lack a comprehensive consideration of static resources and dynamic capabilities and ignore the important role of green dynamic capabilities in environmental management and green innovation. Moreover, they have not deeply explored the boundary conditions of sustainable competitive advantage formation from the perspective of the natural resource-based view, nor have they formed a relatively complete enterprise sustainable development model in the field of environmental management.
In the context of resource scarcity and ecological damage, and under strict environmental laws and increasing environmental pressure from stakeholders, it is particularly necessary for agricultural enterprises to combine sustainable development with the problems of natural resources and the environment [25]. Can agricultural enterprises establish sustainable competitive advantages through the accumulation and application of green intellectual capital and green innovation? How can different kinds of green intellectual capital and green innovation be applied to make different environmental management strategy choices? What contingency factors affect green innovation and the competitive advantage of agricultural corporates? This study will try to answer these questions by exploring the influence mechanism of green human capital, green structural capital, and green relational capital on green product innovation, green process innovation, and sustainable competitive advantage. The research will integrate environmental leadership, green organizational identification, and green dynamic capability into the overall framework and propose an integrated moderated-mediation effect model in which three moderating variables change simultaneously. In addition, the different influence paths will be compared and analyzed.
The study found that green product innovation and green process innovation had a mediation effect between green human capital, green structural capital, green relational capital, and the sustainable competitive advantage of agricultural corporates. Environmental leadership, green organizational identification, and green dynamic capability had positive moderation effects, and they had different moderated-mediation effects under different conditions. The study is expected to close the previous gaps and insufficiencies, draw meaningful research conclusions and management implications, and provide theoretical guidance to agricultural enterprises in the process of green intellectual capital management, green innovation, and the green and sustainable development of agriculture.
The remainder of the paper is structured as follows: The literature review is conducted and a theoretical framework is presented in Section 2. Section 3 presents materials and methods. Empirical results are presented in Section 4. Sections 5 and 6 provide discussions, some key conclusions, limitations, and further research.
Green Intellectual Capital and Agricultural Corporate Sustainable Competitive Advantage
Porter (1995) put forward the importance of the sustainability of competitive advantage [1]. Sustainable competitive advantage ensures the sustainable and long-term dominant position of the enterprise [26]. Jones et al. (2018) established a sustainable competitive advantage model from the perspective of stakeholder management based on resource-based view theory [27]. Sustainable competitive advantage is not limited to a certain calendar time but transcends the concept of a certain point in time [28]. It means that an enterprise has certain special resources or capabilities that cannot be quickly imitated or replaced by competitors. Most scholars who support the theory of endogenous competitive advantage believe that the sustainable competitive advantage of enterprises comes from the heterogeneous resources within enterprises [29].
The natural resource-based view (NRBV) focuses on the relationship between enterprises and the natural environment and constructs a sustainable economic development model between enterprises and the environment from the perspective of resources and capabilities [8]. Ahmad (2015) proposed that the sustainable competitive advantage of enterprises has three attributes, economic sustainability, environmental sustainability, and social sustainability, and that it comes from the intellectual capital accumulated by enterprises through knowledge management [26]. Green intellectual capital was put forward against the background of serious environmental pollution and refers to the intellectual capital related to enterprise environmental management [9]. Chang and Chen (2012) divided green intellectual capital into green human capital, green structural capital, and green relational capital [17]. The following analysis is organized according to this classification.
First, in terms of human capital, Neves and Borges (2018) established an environmental management model of green human resource management based on NRBV theory [30]. The sustainable competitive advantage of agricultural enterprises requires green human resources and efficient green human resource management [31]. Employees' knowledge about environmental protection and green technology, as well as skills and commitment to environmental protection and green innovation, are the basic guarantee for agricultural enterprises to obtain sustainable competitive advantages [32].
Second, green structural capital includes, for example, an environmental management system, an environmental protection corporate culture, environmental protection commitments, a knowledge management system, a green information technology system, a green logo, a green brand, and a green corporate image. Environment-oriented agricultural enterprises integrate environmental issues into their corporate culture, decision-making, and operation systems, and all of these elements are important resources of sustainable competitive advantage for agricultural enterprises [9,33].
Third, in terms of green relational capital, the enterprise establishes long-term relationships of trust, commitment, and cooperation with suppliers, customers, partners, and investors, offering green products and services, setting environmental standards for upstream and downstream products, and sharing environmental knowledge. This can not only promote the enterprise's image and increase customer satisfaction and loyalty, but also increase the trust of stakeholders and strengthen competitive advantage [34].
In the process of establishing a sustainable competitive advantage, agricultural enterprises cannot do without green intellectual capital that is valuable, scarce, unique, difficult to imitate, and difficult to replace. Green intellectual capital is the key strategic resource for agricultural enterprises to obtain sustainable competitive advantage and sustainable development in the process of environmental management. Thus, based on NRBV theory, we proposed the following hypothesis.
Hypothesis 1.
Green human capital (H1a), green structural capital (H1b), and green relational capital (H1c) have a positive influence on the sustainable competitive advantage.
The Mediation Effect of Green Innovation
Green innovation is the sum of new ideas and behaviors applied by agricultural enterprises in the process of environmental management for production, or a series of innovative behaviors in the production process [35]. Some scholars define green innovation in a broader sense, that is, all innovation behaviors that can reduce the negative impact on the environment belong to green innovation [36]. De Marchi (2012), Guoyou et al. (2013), and other scholars, classifying by the object of innovation, hold that green innovation includes the innovation of the green concept and design of the product itself, as well as the innovation of resource conservation, pollution prevention and control, waste recycling, and other behaviors in the production process [19,[37][38][39]. Most scholars divide green innovation into green product innovation and green process innovation [9,23]. Papagiannakis et al. (2014) argued that the accumulated resources and capabilities of agricultural enterprises in the past could stimulate more environmental behaviors and enable agricultural enterprises to choose higher-level environmental behaviors, such as green innovation [40]. Lin and Ho (2008) found that the quality of human resources and organizational incentives had a significant positive impact on agricultural enterprises' intention to adopt green innovation [41]. Talke et al. (2006) believe that the development of knowledge and ability plays an important role in enterprise innovation [42]. Innovation behavior cannot be separated from the support of enterprise resources. Intellectual capital is one of the most important resources through which an enterprise enhances its innovation ability [21].
In the process of green innovation, agricultural enterprises need employees to provide knowledge, experience, and skills of environmental management. Green human capital is the basic element of green innovation in agricultural enterprises [30]. The existing environmental management system of agricultural enterprises can help them break through the original environmental standards and take the initiative to innovate. Corporate green culture creates a good atmosphere for corporate green innovation. Shu et al. (2020) believed that green management has a stronger positive effect on innovation than on financial performance [43]. Agricultural enterprises can promote green product innovation and process innovation with green structure capital. The establishment of green cooperative relations between agricultural enterprises and suppliers or strategic partners will facilitate the sharing of green knowledge, accelerate the process of green innovation, and promote collaborative innovation [9]. In particular, partnerships with universities and research institutions will promote the development of green products and green technologies.
According to innovation compensation theory, green innovation can enable agricultural enterprises to improve product quality and the production process, as well as boost productivity, increase the resource utilization ratio, and save energy. Green innovations also have a positive effect on a firm's environmental performance [18,44]. Therefore, green innovation has "double externalities": it can not only reduce negative external effects, but also bring the positive spillover effects of "innovation compensation" and the "first-mover advantage". Yao et al. (2021) confirmed the positive impact of product innovation and green process innovation on brand equity [45]. Porter and Van der Linde (1995) pointed out that there is an "environmental premium" in green market transactions [1]; that is, consumers tend to pay a higher price for environmentally friendly products [46]. Agricultural enterprises can not only demand higher product premiums from consumers through green innovation to make up for environmental management costs, but also establish barriers to entry for industry competitors [47]. Therefore, the sustainable competitive advantage of agricultural enterprises comes not only from green intellectual capital itself, but also from the green product innovation and process innovation enabled by the accumulation and application of green intellectual capital. Thus, we proposed the following hypothesis.
Hypothesis 2.
Green product innovation (H2a) and green process innovation (H2b) have a mediation effect between green intellectual capital and sustainable competitive advantage.
The Moderation Effect of Environmental Leadership
Leaders' environmental protection values and attitudes towards environmental issues affect enterprises' enthusiasm in implementing environmental strategies [48,49]. The vision and policy of environmental protection established by the leader in the enterprise determines the level of environmental issues in the overall strategy of the enterprise, thus affecting the enterprise's environmental behavior [50].
Environmental leadership involves implementing factors concerning environmental protection and sustainable development, which will influence employees to carry out work tasks without threatening the natural environment, reduce the negative impact on the environment in production operations, and make changes that benefit the environment at work [22]. This type of leadership reflects the corporate leader's concern for environmental protection and sustainable development [51]. The stronger the environmental leadership is, the more the organization can be motivated to realize the vision of green and sustainable development [52]. Environmental leadership embodies the characteristics of transformational leadership and guides agricultural enterprises to actively carry out green innovation [53,54].
The process of agricultural enterprises applying green intellectual capital for innovation is influenced by environmental leadership. First of all, environmental leadership runs through the whole dynamic process of individuals influencing others to implement environmental management and environmental protection [55]. Environmental leadership influences individual consciousness and behavior and mobilizes organization members to identify and strive to realize the enterprise's long-term vision of ecological and sustainable development [52]. The level of environmental leadership affects the enthusiasm of agricultural enterprises to adopt green innovation, and managers can motivate employees with environmental technologies to participate in enterprise environmental behavior and green innovation [9].
Secondly, effective environmental leaders pay more attention to ecologically centered values under environmental commitment and ethical considerations, as well as paying more attention to the application of environmental resources and incentives for green innovation. Organizational leaders not only affect employees' attitude and commitment, but also affect organizational performance and other organizational outputs, including environmental performance, degree of greening, and efficiency effect of green change [56]. Agricultural enterprises with strong environmental leadership are more likely to carry out green innovation through the application of green structural capital.
Finally, environmental leadership can enhance strategic communication, knowledge sharing, and cooperation between agricultural enterprises and customers, suppliers, or other partners [57]. When environmental leadership is strong, agricultural enterprises are more likely to make use of the green relationship capital established with stakeholders for enterprise innovation [58]. Environmental leadership enables agricultural enterprises to build close relationships with suppliers or partners, learn from each other, and apply green technologies and capabilities, share environmental information and resources, and apply them to green innovation to improve innovation performance. Thus, we proposed the following hypothesis.
Hypothesis 3.
Environmental leadership has a positive moderation effect on the relationship between green intellectual capital and green product innovation (H3a) or green process innovation (H3b).
The Moderation Effect of Green Organizational Identification
From the psychological point of view, identification refers to a specific emotional connection, and it is an explanatory plan jointly made by the members of the organization, giving specific meaning to their behaviors and choices [59]. Organizational identification is a set of beliefs about what is core, enduring, and different [60]. Although organizational identification has been widely discussed in previous studies, few studies have focused on the natural environmental factors in organizational identification and applied organizational identification to the field of corporate green innovation.
Based on the dual demands of corporate economic development and corporate social responsibility fulfillment, the framework of organizational identification is inseparable from the consideration of environmental protection. According to the theory of organizational identification, organizational green behavior is embedded in the cognitive and emotional foundation of organizational members, which makes green organizational identification closely related to organizational environmental strategic behavior. Fernández et al. (2003) proposed that when an enterprise identifies with its own environmental behavior, the enterprise will integrate this emotional connection into its management behavior and motivate the enterprise to carry out environmentally friendly corporate strategic behavior [61].
Green organizational identification refers to the common beliefs about environmental management and green innovation that bind individuals and organizations together. Chen (2011) believes that green organizational identification is an organizational identification mode about environmental management and green innovation jointly established by organization members that give significance to environmental protection behaviors [55]. Green organizational identification helps members clearly understand the relationship between the organization's environmental protection objectives and actions and build a shared interpretation model based on understanding and mining the profound meaning of surface behaviors. Through the structural equation model, Chen and Chang (2013) verified that green organizational identification has a positive effect on green intangible assets and green competitive advantages [23]. According to organization identity theory, green organizational identification is the key factor of environmental management [62]. Green organizational identification has a positive effect on enterprise environmental behavior such as green innovation [63,64]. Sharma (2000) found that the integration of organizational identification can integrate and summarize different knowledge structures and promote the generation of organizational innovation behaviors [65]. When the enterprise has a sense of identity towards environmental problems, the enterprise will actively develop clean energy and adopt clean technology in strategic practice to protect the natural environment. The green organizational identification of agricultural enterprises can enhance the corporate social responsibility and influence the innovation behavior of agricultural enterprises by integrating knowledge and behavior selection. The stronger this sense of identity is, the more the agricultural enterprises can apply green human capital, green structural capital, and green relational capital to innovatively integrate green elements in product design, packaging, production, and other processes to carry out green product innovation and green process innovation. Thus, we proposed the following hypothesis.
Hypothesis 4.
Green organizational identity has a positive moderation effect on the relationship between green intellectual capital and green product innovation (H4a) or green process innovation (H4b).
The Moderation Effect of Green Dynamic Ability
With the acceleration of knowledge spillover and technological progress, as well as the rapidly changing green consumer market, the comparative advantage of green resources may dissipate, and competitive advantage derived from static analysis is challenged [66]. Traditional resource-capability theory is limited to static, inside-out analysis and cannot tell agricultural enterprises how to obtain a sustainable competitive advantage in a rapidly changing and unpredictable dynamic market. In the fast-changing global competitive environment, agricultural enterprises with keen insight and quick reaction ability can effectively coordinate and allocate internal and external resources and obtain a sustainable competitive advantage.
Scholars regard dynamic capability as the key factor that affects the competitive advantage of enterprises [67,68]. The "dynamic" of dynamic capability originates from the uncertainty of the external environment, which brings both opportunities and threats to agricultural enterprises [69]. Agricultural enterprises need to identify and grasp opportunities for resource reorganization [70]. Green dynamic capability refers to the ability of agricultural enterprises to make timely internal and external adjustments related to environmental management, according to the dynamic development and change of the environment, in order to adapt to environmental protection policy orientation and rapidly changing green market demand [23]. This capability enables agricultural enterprises to recombine internal and external green resources through organizational learning and to establish new enterprise environmental strategy routines that break through the path dependence of the original environmental strategy [71]. Green dynamic capabilities include the ability to quickly identify new green opportunities, the ability to identify and develop new green knowledge or green technology, and green innovation ability [72].
Chen and Chang (2013) divided green dynamic capability into green environment adaptation ability, green resource integration ability, organizational learning and absorption ability, and green change ability [23]. First, only by quickly identifying stakeholders' requirements for cleaner production and consumers' demand for green products, and by making strategic, operational, or organizational adjustments in a timely manner, can agricultural enterprises translate green product innovation and process innovation into sustainable competitive advantages.
Second, in the process of green innovation, enterprises need to identify, dig, acquire, and apply green resources from different levels [73,74]. Both environmental protection knowledge and green information technology are important green resources for agricultural enterprises, which need to be updated and reconfigured constantly to respond to the changes of the external environment. As knowledge spirals within the enterprise, green innovation can be generated and bring more benefits. The stronger the dynamic ability, the more efficient the knowledge use and integration, the higher the probability of innovation success, and the more lasting the competitive advantage of agricultural enterprises.
Third, in addition to making effective use of existing environmental knowledge, green innovation also requires enterprises to identify, acquire, analyze, and understand new environmental knowledge and to process, digest, and apply new environmental knowledge and technologies [75,76]. Innovation that draws on referenced knowledge is also included [77]. The establishment of green knowledge sharing and transfer mechanisms, the effective dissemination of green knowledge and information, and learning and training in environmental knowledge have a positive impact on the transformation of green innovation into a sustainable competitive advantage for agricultural enterprises [78].
Last, according to the dynamic ability theory and Schumpeter's innovation-based competition theory, agricultural enterprises should carry out a green revolution according to the market demand for green products and the competitive situation of the green market to obtain sustainable competitive advantage [79]. Such green "creative destruction" can enable agricultural enterprises to make more rapid responses and decisions in the face of environmental pressure from stakeholders and changing green market demands, improving the success rate of enterprise product and process innovation. Therefore, green dynamic capability can make the competitive advantage brought by green product innovation and green process innovation more sustainable. Thus, we proposed the following hypothesis.
Hypothesis 5.
Green dynamic capability has a positive moderation effect on the relationship between green product innovation and sustainable competitive advantage (H5a) and between green process innovation and sustainable competitive advantage (H5b).
An Integrated Moderated-Mediation Effect Model
The study has proposed that green innovation has a mediation effect between green intellectual capital and agricultural corporate sustainable competitive advantage; that environmental leadership and green organizational identity have a positive moderation effect between green intellectual capital and green innovation; and that green dynamic capacity has a positive moderation effect between green innovation and agricultural corporate sustainable competitive advantage. Therefore, according to the mediating and moderation effects proposed by Edwards and Lambert (2007) [80], we believe that environmental leadership, green organizational identity, and green dynamic capacity also moderate the mediation effect [81]. The higher the environmental leadership, green organizational identity, and green dynamic capacity are, the higher the mediation effect of green innovation between green intellectual capital and agricultural corporate sustainable competitive advantage is. In view of this, this study proposes an integrated mediation and moderation effect model (Figure 1). Hence, it is proposed that:
Hypothesis 6.
Under higher environmental leadership, green organizational identity, and green dynamic capacity, the mediating effect of green innovation between green intellectual capital and agricultural corporate sustainable competitive advantage is higher.
Data Collection and Sample
"Food and safety come as the first". Agricultural products are related to the natural environment and human life. China is a big agricultural country. The environmental behavior of agricultural enterprises directly affects the environmental protection and the safety of agricultural products. However, in recent years, the pollution phenomenon is serious, especially as the agricultural product processing industry brings a negative impact on the environment. With the improvement of consumers' awareness of environmental protection and the increasing demand for green food, pollution-free agricultural products marked with green environmental protection, organic food, and other green agricultural products, the requirements for agricultural enterprises to produce ecological environmental protection and high-quality and safe green agricultural products are also increasing day by day.
We conducted empirical research by means of a questionnaire survey. We selected several agricultural universities that have cooperative teaching and research relationships with the department of agricultural economic management at the authors' university. Beginning in September 2019, our team visited six universities: Jilin Agricultural University, Shenyang Agricultural University, Northeast Agricultural University, Nanjing Agricultural University, Zhejiang A&F University, and Fujian A&F University. We interviewed teachers majoring in economic management from these universities, contacted MBA students working in agricultural enterprises and members of training courses for corporate executives, and obtained lists of local agricultural enterprises. Based on the report "Regional distribution of China's top 500 agricultural enterprises by 2020", northeast China, the Yangtze River Delta, and the Pearl River Delta, where agricultural production enterprises are concentrated, were selected as the main investigation areas. The three northeastern provinces contain important agricultural production bases in China, while the Yangtze River Delta and the Pearl River Delta are relatively concentrated centers of innovation and application and also host many agricultural production and processing enterprises.
The surveyed agricultural enterprises come from Changchun, Harbin, Shenyang, and other cities in Northeast China; Suzhou, Nanjing, Nantong, Hangzhou, Ningbo, and Wenzhou in the Yangtze River Delta region; and Guangzhou, Shenzhen, Zhuhai, and Huizhou in the Pearl River Delta region. The sample covers nine industries: food processing, food manufacturing, beverage manufacturing, tobacco processing, textiles, wood processing, furniture manufacturing, paper and paper products, and rubber products. Due to the global COVID-19 pandemic encountered during this survey, face-to-face interviews were cancelled from the middle of January 2020 onward, and all interviews were replaced by telephone interviews and online questionnaires.
Based on the previous studies, this study designed the questionnaire and adjusted and modified the measurement items appropriately according to the Chinese context. In this study, data were collected by means of questionnaires, and the research objects were department managers or general managers of agricultural enterprises in the above regions. In order to improve the rate of questionnaire recovery and prevent emails from being automatically blocked, members of the research group called each agricultural product enterprise to explain the purpose of the study and the contents of the questionnaire and explained that the questionnaire was filled in anonymously to guarantee a degree of confidentiality of the questionnaire.
The authors conducted a preliminary survey in the three northeastern provinces and, according to its results, repeatedly improved the wording and arrangement of the questionnaire items. Finally, 600 formal questionnaires were distributed and 370 were recovered, for a recovery rate of 61.67%. Twenty-nine incomplete and invalid questionnaires were excluded, leaving 341 valid questionnaires. Table 1 summarizes the characteristics of the enterprises.
Variables and Measure
Our dependent variable is the agricultural corporate sustainable competitive advantage. Its measure was adopted from Ahmad (2015) [26], which contained 12 items from both financial and nonfinancial perspectives. Example items are "the customer loyalty is higher for green products or services, and many are regular and introduced customers," "the green products and services make enterprises keep a high growth rate of sales revenue during a period of time," "investors have a better evaluation of enterprise environmental protection behavior, and are willing to continue the investment," and "The outstanding performance of enterprises in environmental protection attracts and retains talents." Our independent variables are green human capital, green structural capital, and green relational capital. Their measures were adopted from Chen and Chang (2013) [23]. Green human capital contained 5 items, and example items are "the productivity and contribution of environmental protection of the employees in the firm is better than those of its major competitors," and "the cooperative degree of team work about environmental protection in the firm is more than that of its major competitors". Green structural capital contained 9 items, and example items are "the investments in environmental protection facilities in the firm are more than those of its major competitors," and "the management system of environmental protection in the firm is superior to that of its major competitors". Green relational capital contained 5 items, and example items are "the firm designs its products or services in compliance with the environmentalism desires of its customers," and "the cooperation relationships about environmental protection of the firm with its upstream suppliers are stable".
The mediator variables are green product innovation and green process innovation. These measures were adopted from Kam-Sing (2012) and Song and Yu (2018) [62,82]. Green product innovation contained 4 items, and example items are "Enterprise chooses materials that consume the least energy and resources during product development and design," and "Enterprise chooses materials with the least environmental pollution during product development and design". Green process innovation contained 4 items, and example items are "Enterprise reduces the discharge of solids, water and other pollution in the production process," and "Enterprise reduces the use of raw materials in the production process." The first moderator variable is environmental leadership. Its measure was adopted from Chen (2011) [55], which contained four items, and example items are "Enterprise's leaders encourage organizations to establish a common vision of environmental values," and "Enterprise's leaders educate your employees about environmental protection regularly." The second moderator variable is green organizational identity. Its measure was also adopted from Chen (2011) [55], which contained 6 items, and example items are "the enterprise's top managers, middle managers, and employees feel that the enterprise have formulated a well-defined set of environmental goals and missions," and "the enterprise's top managers, middle managers, and employees have a strong sense of the enterprise's history about environmental management and protection". The third moderator variable is green dynamic capacity. Its measure was adopted from Makkonen et al. (2014) and Chen and Chang (2013) [23,69], which contained 8 items, and example items are "Enterprise keeps abreast of consumer green demand and industry green technology changes, and take appropriate measures," and "Enterprises continue to learn and absorb knowledge about environmental protection and green innovation." The control variables are enterprise scale and enterprise type. Delgado-Ceballos et al. (2012) found that enterprise size is also one of the factors affecting enterprise environmental behavior [83], while Huang et al. (2014) found that different types of enterprises have different environmental behaviors. The enterprise scale is represented by the number of employees [49]. Enterprise types are divided into state-owned agricultural enterprises and non-state-owned agricultural enterprises measured by dummy variables.
Measurement Validation
All variables were measured with instruments previously developed and used worldwide. Before the questionnaires were distributed, the instruments were translated into Chinese and then back-translated into English to ensure consistency. According to the Chinese context and the purpose of this study, the questionnaire was modified appropriately. The instruments had also been pilot-tested on MBA students from Jilin University, Northeast Normal University, and Jilin University of Finance and Economics to ensure that translation did not affect the validity and reliability of the measures.
We analyzed reliability first, using Cronbach's alpha coefficients as the judgment standard. The results are presented in Table 2. One item of green structural capital, GIC11, did not meet the threshold; deleting it increased the Cronbach's alpha coefficient of green structural capital, so it was removed. All other constructs' Cronbach's alpha values were greater than 0.8, and deleting any further item did not increase these values. From Table 2, we know that the Cronbach's alpha coefficients of the constructs ranged from 0.818 (green relational capital) to 0.907 (green dynamic capability), which can be regarded as reliable because all constructs were above the acceptable threshold of 0.50 [84]. The reliability of the scale was therefore high. Since all the measuring instruments were based on western research, it was necessary to evaluate their validity in the Chinese context. We used confirmatory factor analysis (CFA) to assess the validity of all the instruments.
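For reference, Cronbach's alpha and the item-deletion check described here can be computed directly from raw item scores, as in the sketch below; the toy response matrix is a hypothetical placeholder rather than the survey data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Toy 5-item Likert responses for a handful of respondents (hypothetical).
responses = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 3, 2, 2, 3],
    [4, 4, 5, 4, 4],
])
print(f"alpha = {cronbach_alpha(responses):.3f}")

# Item-deletion check, mirroring the decision to drop GIC11: recompute alpha
# with each item removed and see whether it increases.
for j in range(responses.shape[1]):
    reduced = np.delete(responses, j, axis=1)
    print(f"alpha without item {j + 1}: {cronbach_alpha(reduced):.3f}")
```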
We developed a measurement structure of all the variables and covaried them in a single model. The nine-factor model produced the best fit to the data (χ2/df = 2.947; comparative fit index (CFI) = 0.957; Tucker-Lewis index (TLI) = 0.943; root mean square error of approximation (RMSEA) = 0.073; standardized root mean square residual (SRMR) = 0.049). The eight-factor to one-factor models produced a poor fit to the data. In the nine-factor model, all standardized factor loadings were higher than 0.50 [85]. We also used the average variance extracted (AVE) method to analyze the convergent and discriminant validity of all nine latent variables [86]. The AVE values exceeded the recommended threshold of 0.50. In addition, the lowest composite reliability (CR) value was 0.874, which was higher than the suggested threshold of 0.70. Thus, all constructs had high convergent validity. For assessing discriminant validity, we used the most rigorous and powerful method, in which the square root of the AVE of each construct must exceed its correlation with any other latent variable. From Table 3 we can see that the square root of the AVE of each construct was greater than its correlations with the other constructs, so we concluded that these constructs were distinct from each other (see Table 3). Therefore, the convergent and discriminant validity of the scale was high. Second, we used a multivariate t-test to assess nonresponse bias by comparing early and late responses for all variables [87]. The nonsignificant results showed that nonresponse bias was not present. Moreover, we took measures to ensure that common method bias (CMB) was minimized. In order to reduce common method variance, all items within each construct were randomized [88]. We performed Harman's single-factor test to assess whether CMB could be an issue. The 60 items in the questionnaire were loaded, and exploratory factor analysis was performed using unrotated principal component analysis. The KMO was 0.858. The results revealed several distinct factors with eigenvalues greater than 1, which together accounted for 68.036% of the variance, and the first factor accounted for only 21.880% of the variance. Third, we checked all the correlations between items; there were no extremely high correlations, so common method variance was not problematic. Therefore, we concluded that CMB was not an issue in this study.
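The AVE and CR values reported here follow standard formulas based on standardized factor loadings; a minimal sketch is given below, with hypothetical loadings rather than the study's estimates.

```python
import numpy as np

def ave_and_cr(loadings):
    """Average variance extracted (AVE) and composite reliability (CR)
    computed from the standardized loadings of one latent construct."""
    lam = np.asarray(loadings, dtype=float)
    error_var = 1.0 - lam ** 2                           # residual variance per indicator
    ave = np.mean(lam ** 2)                              # mean squared loading
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + error_var.sum())
    return ave, cr

# Hypothetical standardized loadings for one five-item construct.
ave, cr = ave_and_cr([0.78, 0.82, 0.75, 0.80, 0.77])
print(f"AVE = {ave:.3f} (threshold 0.50), CR = {cr:.3f} (threshold 0.70)")

# Harman's single-factor check amounts to inspecting the variance share of the
# first unrotated principal component of all items, e.g. with scikit-learn:
#   from sklearn.decomposition import PCA
#   share = PCA(n_components=1).fit(item_matrix).explained_variance_ratio_[0]
```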
Hypothesis Testing
This study used hierarchical multiple regression analysis to examine the effect of green intellectual capital on sustainable competitive advantage, considering the mediating role of green innovation and the moderating roles of environmental leadership, green organizational identification, and green dynamic capability. We tested for collinearity by calculating the variance inflation factor (VIF) for each of the regression coefficients in the model. Values were all below the suggested cut-off threshold of 10 (ranging from 1.067 to 1.965), suggesting a limited threat of multicollinearity [89].
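Collinearity diagnostics of this kind can be computed with statsmodels, as in the hedged sketch below; the small predictor DataFrame is a hypothetical stand-in for the survey scores.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical predictor scores (placeholders for the survey scales).
X = pd.DataFrame({
    "green_human_capital":      [3.2, 4.1, 3.8, 2.9, 4.4, 3.6, 3.1, 4.0],
    "green_structural_capital": [3.0, 4.3, 3.5, 3.1, 4.2, 3.4, 2.8, 3.9],
    "green_relational_capital": [3.5, 4.0, 3.9, 3.0, 4.5, 3.3, 3.2, 4.1],
})
X_const = sm.add_constant(X)

# VIF for each regressor (index 0 is the constant, so start at 1).
for i, name in enumerate(X_const.columns[1:], start=1):
    print(name, round(variance_inflation_factor(X_const.values, i), 3))
```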
From Model 1 to Model 3 in Table 4, it can be seen that green human capital (standard β = 0.300, p < 0.001), green structure capital (standard β = 0.218, p < 0.001), and green relational capital (standard β = 0.290, p < 0.001) had a significant positive effect on sustainable competitive advantage. H1 was supported. From Model 4 it can be seen that green product innovation had a significant positive effect on sustainable competitive advantage (standard β = 0.195, p < 0.001), and the effect of green human capital on sustainable competitive advantage changed from 0.300 to 0.249 (p < 0.001). From Model 6 it can be seen that green product innovation had a significant positive effect on sustainable competitive advantage (standard β = 0.223, p < 0.001), and the effect of green structure capital on sustainable competitive advantage changed from 0.218 to 0.174 (p < 0.001). From Model 8 it can be seen that green product innovation had a significant positive effect on sustainable competitive advantage (standard β = 0.224, p < 0.001), and the effect of green relational capital on sustainable competitive advantage changed from 0.290 to 0.259 (p < 0.001). H2a was supported. From Model 5, Model 7, and Model 9 in Table 4, H2b was supported. We used the structural equation model (SEM) with the bootstrap method to analyze our models. SEM offered an acceptable representation of the data. These antecedents predicted green product innovation (R2 = 0.681), green process innovation (R2 = 0.562), and sustainable competitive advantage (R2 = 0.524) to a high extent. The results are shown in Figure 2. We also tested the moderation effect. Models 1 to 3 in Table 5 showed that, when the moderating variable environmental leadership entered the regression equation, the interaction terms of green human capital and the moderator had a positive effect on green product innovation (standard β = 0.254, p < 0.001). The interaction terms of green structure capital and the moderator had a positive effect on green product innovation (standard β = 0.122, p < 0.1). The interaction terms of green relational capital and the moderator had a positive effect on green product innovation (standard β = 0.131, p < 0.1). H3a was supported. Models 4 to 6 in Table 5 showed that when the moderating variable environmental leadership entered the regression equation, the interaction terms of green human capital and the moderator had a positive effect on green process innovation (standard β = 0.245, p < 0.001). The interaction terms of green structure capital and the moderator had a positive effect on green process innovation (standard β = 0.145, p < 0.1). The interaction terms of green relational capital and the moderator had a positive effect on green process innovation (standard β = 0.134, p < 0.1). H3b was supported. Table 6 showed that when the moderating variable green organizational identification entered the regression equation, the interaction terms of green human capital and the moderator had a positive effect on green product innovation (standard β = 0.255, p < 0.001). The interaction terms of green relational capital and the moderator had a positive effect on green product innovation (standard β = 0.275, p < 0.001). However, the moderation effect of green organizational identification on green structure capital and green product innovation was not significant. H4a was not supported.
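As a rough illustration of this kind of hierarchical mediation check, the sketch below compares the coefficient of an independent variable before and after a mediator enters the model. The variable names (ghc, gpi, sca) and simulated effect sizes are hypothetical stand-ins, not the study's scales or estimates, and the real analysis also included the control variables described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 341  # sample size reported in the study

# Simulated stand-ins: ghc -> gpi -> sca with a direct path as well.
ghc = rng.normal(3.5, 0.6, n)                        # green human capital
gpi = 0.6 * ghc + rng.normal(0, 0.5, n)              # green product innovation
sca = 0.3 * ghc + 0.4 * gpi + rng.normal(0, 0.5, n)  # sustainable competitive advantage
df = pd.DataFrame({"ghc": ghc, "gpi": gpi, "sca": sca})

m_total = smf.ols("sca ~ ghc", data=df).fit()           # model without the mediator
m_mediated = smf.ols("sca ~ ghc + gpi", data=df).fit()  # model with the mediator added

print("effect of ghc without mediator:", round(m_total.params["ghc"], 3))
print("effect of ghc with mediator:   ", round(m_mediated.params["ghc"], 3))
print("effect of gpi (mediator):      ", round(m_mediated.params["gpi"], 3))
```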
Table 6 showed that, when the moderating variable green organizational identification entered the regression equation, the interaction term of green human capital and the moderator had a positive effect on green process innovation (standard β = 0.123, p < 0.1). The interaction term of green structure capital and the moderator had a positive effect on green process innovation (standard β = 0.197, p < 0.001). The interaction term of green relational capital and the moderator had a positive effect on green process innovation (standard β = 0.267, p < 0.001). H4b was supported. Table 6 also showed that, when the moderating variable green dynamic capability entered the regression equation, the interaction term of green product innovation and the moderator had a positive effect on sustainable competitive advantage (standard β = 0.114, p < 0.1), and the interaction term of green process innovation and the moderator had a positive effect on sustainable competitive advantage (standard β = 0.173, p < 0.001). H5a and H5b were supported. Some of the moderating effects between green intellectual capital and green product innovation are shown in Figures 3-6. The moderating effect between green innovation and sustainable competitive advantage is shown in Figure 7. We tested the indirect effect with the SPSS (IBM, Armonk, NY, USA) bootstrapping macro for moderated mediation [90]. In order to test the moderated mediation effect when the three moderators exist simultaneously, we constructed three equations.
The study proposed an integrated moderated-mediation model in which the three moderating variables change simultaneously. The mediating effect was therefore examined in eight cases, with each moderator set one standard deviation above or below its mean. Table 7 shows the indirect effect under the three moderators when green human capital was the independent variable. From Table 7, we know that when environmental leadership (El), green organizational identification (Goi), and green dynamic capability (Gdc) were all low, the mediation through Gpi 1 (green product innovation) or Gpi 2 (green process innovation) was not significant; when El was high, the mediation through Gpi 1 or Gpi 2 was significant regardless of whether Goi or Gdc was high or low. The corresponding tables for the other two types of green intellectual capital as independent variables are not displayed. When the three moderators were all high, the mediation through Gpi 1 or Gpi 2 was significant. H6 was supported.
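Because the three equations themselves are not reproduced above, the sketch below only illustrates, under simplifying assumptions, how a bootstrapped conditional indirect effect at chosen moderator levels could be computed. It is not the SPSS macro the authors used, it moderates only the two paths shown, and all variable names (ghc, el, gpi, gdc, sca) are placeholders.

```python
# Bootstrapped conditional indirect effect (illustrative only; not the authors' macro).
# Indirect effect = a(el) * b(gdc): path a moderated by environmental leadership (el),
# path b moderated by green dynamic capability (gdc).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def conditional_indirect(df: pd.DataFrame, el_level: float, gdc_level: float,
                         n_boot: int = 2000, seed: int = 0):
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(df), len(df))          # resample with replacement
        s = df.iloc[idx]
        m_model = smf.ols("gpi ~ ghc * el", data=s).fit()        # mediator equation
        y_model = smf.ols("sca ~ gpi * gdc + ghc", data=s).fit() # outcome equation
        a = m_model.params["ghc"] + m_model.params["ghc:el"] * el_level
        b = y_model.params["gpi"] + y_model.params["gpi:gdc"] * gdc_level
        estimates.append(a * b)
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    return np.mean(estimates), (lo, hi)  # CI excluding zero suggests a significant indirect effect
```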
Managerial Implications of the Mediation Effect
Environmental sustainability has become much more important for enterprises' sustainable development, and corporate environmental management is necessary and urgent, but also challenging. NRBV theory holds that "green" is the key for enterprises to achieve their long-term goals [10]. Agricultural enterprises should pay attention to natural resource factors as they develop, and to the balance between enterprise behavior and the natural environment [8].
The main purpose of this study is to explore how agricultural enterprises can become greener by using green intellectual capital to carry out green innovation, establish sustainable competitive advantages, and realize a "win-win" between economic development and environmental protection. The study probed how agricultural enterprises can use different types of green intellectual capital and different types of green innovation to make better environmental management strategy choices.
Green intellectual capital is a key strategic resource for agricultural enterprises: it is valuable, scarce, difficult to imitate, and difficult to replace. Based on NRBV theory, the study shows that green innovation plays a mediating role between green intellectual capital and sustainable competitive advantage. Pollution provides strong evidence of inefficient use of resources, and green innovation can not only improve the utilization rate of resources but also reduce pollution [9]. Agricultural enterprises that are the first to carry out green innovation will be compensated with a product premium as pioneers and will enjoy a first-mover advantage [1]. Green innovation has "double externalities": the positive spillover effects of ordinary innovation and the externalities produced by reducing or eliminating negative effects on the external environment. The accumulation, application, and management of green intellectual capital can encourage agricultural enterprises to carry out green innovation, which not only saves production factors and reduces costs but also helps agricultural enterprises seize potential opportunities, take the lead in the market, and become more efficient and competitive.
When it comes to the three kinds of green intellectual capital and green innovation, first, in the process of green innovation, agricultural enterprises need employees to provide green human capital such as green technology and the knowledge, skills, experience, commitment, and creativity required for environmental management [30]. Applications of green human capital, such as building and sharing environmental protection knowledge, raising green innovation awareness, and enhancing green management ability, can encourage agricultural enterprises to carry out green product innovation and green process innovation [32].
Second, the environmental management system, green information technology system, environmental protection commitment, green culture, green logo, green brand, green corporate image, and other green structure capital have positive impacts on corporate green innovation. The existing environmental management system can enable agricultural enterprises to break through the original environmental standards and take the initiative to innovate. The green culture creates a good innovation atmosphere for green innovation. Green databases, green patents, green copyrights, green trademarks, and other green structure capital can also support and promote enterprise green product innovation and process innovation.
Third, agricultural enterprises can establish a long-term relationship of trust, commitment, and cooperation with suppliers, customers, partners, and investors by providing green products and services. The establishment of green cooperative relations between them will facilitate the sharing of green knowledge, accelerate the process of green innovation, and promote collaborative innovation. In particular, the establishment of cooperative relations with universities and scientific research institutes will be conducive to the development of green products and green technologies, and ultimately bring sustainable competitive advantages to agricultural enterprises.
In addition, the research results show that green human capital has a stronger impact on green product innovation, and managers should pay more attention to it. Because human capital is embedded in individual employees and owned by the employees themselves rather than the organization, it disappears when employees resign [91]. Enterprises should strive to retain employees with green innovation technology and innovation ability, and provide them with rewards. Managers could establish incentive systems to reward employees who make special contributions to the development of green ideas and environmental management suggestions. To promote green process innovation, a communication platform and knowledge-sharing mechanism could be set up to encourage employees to transform their personal environmental knowledge into organizational green intellectual capital, and then into organizational output.
Second, in terms of green structural capital, to be more effective in green innovation, agricultural enterprises could optimize their environmental management mechanisms and set up specialized environmental protection departments to take responsibility for green innovation. Agricultural enterprises should pay attention to the information asymmetry between managers and knowledge workers and encourage employees and the organization to form a two-way interactive mechanism through a reasonable incentive mechanism and performance appraisal method, so that green intellectual capital can break down barriers and flow freely within the enterprise. Agricultural enterprises could also introduce a green supply chain management system and establish a green corporate culture and a green agricultural products brand to sustain their competitive advantage for longer.
Third, in terms of green relational capital, more and more agricultural enterprises choose environmentally friendly suppliers to provide raw materials and semi-finished products and establish long-term green relationships with them. To prolong their competitive advantage, agricultural enterprises can be guided by customers' demand for green products, extend environmental management to the whole life cycle of products and services, and communicate and cooperate with suppliers, consumers, partners, communities, and scientific research institutions.
Previous studies did not examine green intellectual capital across its different dimensions, nor did they explore the dimensions' different effects on green innovation. Our study provides evidence on the distinct impacts of green intellectual capital on green product innovation and green process innovation. Enterprises should pay attention to the different dimensions of green intellectual capital in the enterprise value platform and make full use of green human capital, green structural capital, and green relational capital to implement green product innovation and green process innovation, creating greater value that supports agricultural enterprises in obtaining sustainable competitive advantages.
Managerial Implications of the Moderation Effect
The study proposed that the stronger an enterprise's environmental leadership and green organizational identification are, the more it can apply green intellectual capital to green innovation, and that the stronger its green dynamic capability is, the more successful its green innovation and the more lasting its competitive advantage will be. However, in the empirical test, we found that the moderation effect of green organizational identification between green structural capital and green product innovation is not significant. This may be because the infrastructure and process elements of green structural capital, such as green organization design, environmental management and knowledge management systems, operating processes, and control and incentive systems, are accumulated and modified by agricultural enterprises over long periods of production and operation, so green organizational identification has little influence on the relationship between green structural capital and green product innovation.
Although an enterprise may create an open, relaxed, and informal innovation culture for green product innovation, the process may also be influenced by other factors, such as a lack of resources, insufficient funds, path dependence, innovation inertia, and the difficulty agricultural enterprises face in correcting non-environmentally friendly behaviors. The existing environmental management system may also create path dependence and reduce enthusiasm for green innovation. Other influencing factors, such as a lack of top-level support for the green innovation strategy, insufficient innovation awareness among leaders and employees, low expectations of competitive advantage, and a lack of environmental knowledge and innovation skills among organization members, can also weaken the support that green structural capital provides for green innovation.
Unlike previous studies, our study proposed an integrated moderated-mediation model in which the three moderating variables change simultaneously, so we can clearly see the role of each variable. From the model, we found that environmental leadership plays the most critical moderating role. The leaders of agricultural enterprises should focus more on ecological values under environmental commitments, the application of environmental resources, and incentives for green innovation [53]. Environmental leadership affects the environmental behavior orientation inside and outside agricultural enterprises [54]. It positively influences the organization's values, commitments, and aspirations to deal with environmental issues, as well as its understanding and perception of environmental strategic behaviors. Companies could cultivate this charismatic leadership style to solve environmental problems through communication and cooperation between leaders and followers, and even the exchange of rights beyond their respective power boundaries [92].
Environmental leadership can motivate organization members to identify, and work hard to realize, the long-term vision of ecological and sustainable development shared by agricultural enterprises. Borck et al. (2008) believe that environmental leadership programs can improve corporate environmental performance by setting environmental goals [93]. Responsible leaders need to create organizational cultures that facilitate green behaviors among their employees [94]. Green human resource management can be implemented successfully if top management supports green innovation performance appraisal, recruitment, reward, selection, and training [95]. So, the leaders of agricultural corporates should stimulate employees' enthusiasm for environmental protection and green innovation, improve the quantity, quality, and accumulation speed of green intellectual capital of agricultural enterprises, and promote the implementation of green innovation.
The empirical results show that green organizational identification plays a critical moderating role in the whole model. On one hand, from the perspective of organizational employees, green organizational identification enables employees to enhance their awareness and identification of environmental responsibility, and to put forward innovative suggestions in the development of environmental products. On the other hand, from the perspective of organization leaders, improving the green organizational identification can promote the organization's green innovation strategy formulation, green system establishment, and development of green process innovation [55]. Higher green organizational identification can promote agricultural enterprises to constantly seek common environmental beliefs and goals, actively explore the connection between the latest green technologies and the needs of stakeholders, and actively carry out green innovation in product development and production to solve environmental problems [63].
However, in the integrated moderated-mediation model, the influence of green organizational identification is weaker than that of environmental leadership. A shared model is needed to give agricultural enterprises adaptive environmental management behavior. They could build a cognitive framework for environmental protection to enhance their sense of environmental identity and then guide the green practice of agricultural enterprises. Organizational identification has strong path dependence, which affects the cognitive level and cognitive context of agricultural enterprises and further affects their strategic layout and organizational behavior. Agricultural enterprises should break this path dependence, establish a common green organizational identification framework that is distinct from other organizations, and put green innovation into practice.
Through the empirical results, we can see that green dynamic capability has a positive moderation effect between green innovation and sustainable competitive advantage. Facing the complex and changeable external agricultural environment, there is great uncertainty about whether the competitive advantage brought by green product innovation and process innovation can be maintained. The "dynamic" capability enables agricultural enterprises to better adapt to the changing external environment, continuously digest and absorb technological innovation resources and dynamically occupy some unique resources, reduce the risk of green innovation, and improve the success rate of green product innovation and process innovation [23]. The green dynamic capability can even be regarded as the complementary assets of agricultural enterprises, which can positively promote value promotion and better apply the green intellectual capital. Green intellectual capital needs to be integrated and reorganized to improve innovation performance. Green dynamic ability can enhance the integration effect among various elements of intellectual capital, so that green intellectual capital can be better applied to agricultural enterprises' green innovation through full digestion, absorption, integration, and utilization. Therefore, in the process of building sustainable competitive advantages, organizations should improve their green dynamic ability.
Finally, the integrated moderated-mediation model shows that the synergistic effect of environmental leadership, green organizational identification, and green dynamic capability is particularly important, and agricultural enterprises should pay attention to it. When environmental leadership, green organizational identification, and green dynamic capability are all high, agricultural enterprises should constantly identify, evaluate, acquire, analyze, integrate, utilize, and share new environmental knowledge and information that is valuable for green innovation, and establish a formal communication network about green innovation within the organization. Agricultural enterprises should identify opportunities and threats related to environmental problems in a timely manner, adjust the application modes of the three types of green intellectual capital, and make innovation decisions and implement innovation strategies according to a green market orientation. By promoting the transformation of green intellectual capital into green innovation, agricultural enterprises can improve the effect of green innovation on their products and realize a "win-win" between the economy and the environment. In this way, enterprises can become greener and establish sustainable competitive advantages.
Conclusions
Through this research, we draw the following conclusions. First, the green human capital, green structural capital, and green relational capital of agricultural enterprises have a positive influence on sustainable competitive advantage, and green product innovation and green process innovation play a key mediating role in this effect. Agricultural enterprises can therefore establish a sustainable competitive advantage through green innovation.
Second, environmental leadership has a positive moderation effect between green intellectual capital and green product innovation or green process innovation. Green organizational identification has a positive moderation effect between green intellectual capital and green innovation, except for the relationship between green structural capital and green product innovation. The stronger an enterprise's environmental leadership and green organizational identification are, the more it can apply green intellectual capital to green innovation; the stronger its green dynamic capability is, the more successful its green innovation and the more lasting its competitive advantage will be.
Third, from the integrated moderated-mediation model, we know that when environmental leadership was low, no matter how high green organizational identification or green dynamic capability were, the mediating effect of green product innovation and green process innovation between green human capital and sustainable competitive advantage was not significant. The same holds for green structural capital and green innovation (green product innovation and green process innovation). Environmental leadership therefore plays the most important role for green innovation. For green structural capital, environmental leadership and green organizational identification both play important roles in green innovation.
This study helps agricultural enterprises carry out environmental management more effectively and conduct green innovation through the accumulation and management of green intellectual capital while obtaining a sustainable competitive advantage. Green innovation by agricultural enterprises can not only improve environmental performance but also establish a sustainable competitive advantage, making it a management behavior that is both "self-interested" and "altruistic". The research results on the environmental management path of agricultural enterprises can help them continuously improve the technical level of green agricultural products, generate more green agricultural products, adapt to the green agricultural products market, consolidate their market position, and gain sustainable competitive advantage. Ultimately, this will help to accelerate the green and sustainable development of agriculture.
Limitations and Further Research
Although we developed a framework to enhance agricultural corporate sustainable competitive advantages and provided some meaningful conclusions and insights into environmental management in the paper, there are certain limitations. Based on the NRBV theory, from the perspective of internal competence and organizational cognition, the research integrated environmental leadership, green organizational identification, and green dynamic capability into the overall framework.
Future studies could focus on external impacts, such as the effect of stakeholder environmental pressure on agricultural enterprises' green innovation behavior. Second, this study selected agricultural enterprises from the agricultural product-processing industry, such as food manufacturing, beverage manufacturing, tobacco processing, and textile manufacturing, as our samples. Further research could be extended to other manufacturing industries and compared with this study. Third, this study used cross-sectional questionnaire data to test the hypotheses, so we cannot demonstrate the dynamic change of environmental leadership, green organizational identification, and green dynamic capability at different stages. Future research could therefore adopt a longitudinal design to track the different factors and the sustainable development level of agricultural enterprises at different stages. We hope that the research results are useful for managers, researchers, practitioners, and policy makers, and serve as a reference for future research.
|
v3-fos-license
|
2021-08-02T00:05:34.148Z
|
2021-05-14T00:00:00.000
|
236563426
|
{
"extfieldsofstudy": [
"Materials Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.ijmsa.20211003.11.pdf",
"pdf_hash": "2a6ae8d2c9d352791a45bc38a6e8a6829550f2ee",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44385",
"s2fieldsofstudy": [
"Materials Science"
],
"sha1": "9b683dd023512d6eb3303963db113752f3afdcf7",
"year": 2021
}
|
pes2o/s2orc
|
Comparative Studies on Solubility and Thermo Dynamics Properties of Natural Rubber Filled with CB/CPKS and CB/APKS Fillers
In this research, comparative studies on the solubility and thermodynamic properties of natural rubber vulcanizates filled with blends of activated palm kernel shell and carbonized palm kernel shell were carried out. Palm Kernel Shell (PKS) was locally sourced, washed, and sun dried to remove accompanying dirt and moisture. The PKS was then pulverized to particle size, carbonized at 600°C for one hour (1 hr) using a Carbolite furnace, and chemically activated using 0.1 M H3PO4 and 0.1 M KOH solutions. The NR-filler loading concentrations of CB/APKS and CB/CPKS were compounded using a two-roll mill. The solubility tests were carried out using three different solvents: water, kerosene, and petrol. The solubility results obtained for CB/APKS and CB/CPKS showed no significant difference as the temperature varied when the samples were immersed in water. The solubility values observed for CB/APKS and CB/CPKS ranged from 1.06 g to 1.19 g and 1.03 g to 1.19 g across the samples, respectively. This shows that, since the filler is an organic substance, it has little or no affinity for water. In the case of kerosene and petrol, both are organic and the filler is an organic substance, which follows the statement 'like dissolves like'; as the temperature increases, the absorption of kerosene is lower than that of petrol. The results recorded for kerosene across the samples of CB/APKS and CB/CPKS ranged from 1.18 g to 4.37 g and 2.02 g to 4.79 g, while the results for petrol ranged from 2.25 g to 4.92 g and 2.51 g to 4.88 g, respectively. This may be because petrol is more volatile and flammable than kerosene. The activation energy results reflected the solvents' permeability, except for water, which showed contrary results. The activation energies obtained for the three solvents across CB/APKS and CB/CPKS were 5.55 kJ/mol for water, 9.48 kJ/mol for kerosene, and 13.61 kJ/mol for petrol, respectively. The result observed for water might be due to its nature as the universal solvent, being entirely different from other solvents in terms of reactivity and anomalous properties. This means polar solvents dissolve polar molecules while nonpolar solvents dissolve nonpolar molecules. This research shows that both CB/APKS and CB/CPKS possess great potential in rubber systems.
Introduction
Natural rubber exhibits the advantages of high elasticity, high strength, great toughness, and manufacturing versatility. A rubber band can be stretched to 9 or 10 times its original length and returns to its original condition as soon as the outside force is released. Similarly, a block of rubber can be compressed, and after the load is released the block regains its original shape and dimensions in a very short time. Considering the extent to which it can be deformed, the rapidity of its recovery, and the degree to which it recovers its original shape and dimensions, rubber is a unique material. Strength, toughness, and elasticity are essential properties of rubber [1,10]. The high strength and great toughness of rubber provide powerful elastic qualities in situations where most other elastic materials may fail. Due to these properties, rubber shows excellent resistance to cutting, tearing, and abrasion. Furthermore, this combination of useful physical properties is well maintained over a wide range of temperatures, from low temperatures (-45°C) to relatively high temperatures (120°C), which covers the most commonly encountered climatic conditions. Rubber is also relatively inert and resistant to the deteriorating effects of the atmosphere and many chemicals. Therefore, it has a relatively long and useful life under a wide variety of conditions. When vulcanized, natural rubber possesses unique properties such as high tensile strength, comparatively low elongation, hardness, and abrasion resistance, which are useful in the manufacture of various products. The main use of natural rubber is in automobile tyres. It is also used in hoses, footwear, battery boxes, balloons, toys, and many other products [1,10].
The use of natural fibres as reinforcements or fillers in rubber systems has gained extra attention in recent years. Many studies have been carried out on the utilization of natural fillers such as sago, sisal, short silk fibre, oil palm, empty fruit bunch, rice husk ash, corncob, jute fibre, rubber wood powder, hemp, kenaf, and cellulosic fibres as reinforcement materials [21]. The presence of solvents in polymers upon blending is significant because most polymers show a reduction in their properties after swelling in a solvent. The effects of these solvents are believed to be due to localized plasticization that allows the development of cracks at reduced stress [10]. Polymers for commercial applications should be chemically resistant and retain their mechanical integrity and dimensional stability on contact with solvents [9]. Numerous literature sources have provided excellent reports on the sorption processes as well as the mechanical properties of elastomer/thermoplastic blends. Polymers swell if they interact with solvents, and the degree of this interaction is determined by the crosslink density. It has been reported that the degree of swelling can be measured or related to the thermodynamic properties of the system [11]. Considerable interest has been focused on the absorption and diffusion of organic solvents because their ability to permeate at different rates enhances the separation of the components of a liquid mixture through a polymeric membrane [7].
The physico-mechanical, solubility, and thermodynamic properties of natural rubber-neoprene blends have been studied using a variety of solvents. The swelling results revealed that blends with higher neoprene content showed better resistance to petrol (PMS), kerosene (DPK), and hexane than blends with lower neoprene content. The order of increasing permeability of the solvents, regardless of sample composition, was kerosene > hexane > petrol. The thermodynamic studies showed the sensitivity of the reaction to temperature, as higher mass-uptake values of the blends were recorded as the temperature was increased in the order 30°C, 50°C, and 70°C. The activation energy of the swelling process was in reverse order of the permeability of the solvents: the solvent with the least permeability (petrol) had the highest activation energies in all the selected blends. [2] investigated the equilibrium sorption properties of palm kernel husk and N330 filled natural rubber vulcanizates as a function of filler volume fraction. The results showed a decrease in sorption with increasing filler loading, attributed to each filler particle behaving as an obstacle to the diffusing molecules. As the concentration of filler in the rubber matrix increases, more and more obstacles are created for the diffusing molecules, which ultimately reduces the amount of penetrant solvent. The effect of groundnut shell filler carbonizing temperature on the mechanical properties of natural rubber composites was also studied [3]. The authors found that the tensile strength, modulus, hardness, and abrasion resistance increased with increasing filler loading, while other properties such as compression set, flexural fatigue, and elongation decreased with increasing filler loading. The percentage swelling in benzene, toluene, and xylene was found to decrease with increased carbonization. [16] studied the physico-mechanical effects of surface-modified sorghum stalk powder on reinforced natural rubber and found that the filler reduces the water absorption resistance, in agreement with Ragumathen et al. (2011). In this study, Carbonized Palm Kernel Shells (CPKS) and Activated Palm Kernel Shells (APKS) were considered as reinforcing fillers in rubber. The CPKS and APKS were blended with Carbon Black (CB) and used as fillers in Natural Rubber (NR) compounding. The aim of this research is to study the solubility of CPKS- and APKS-filled NR vulcanizates in some common solvents, as well as to determine the rapidity of these processes using thermodynamic parameters.
Materials
The equipment and apparatus used for this study include: weighing balance RS232, model WT2203GH, Saumya Two roll mill (DTRM-50) for compounding rubber, Saumya Compression moulding machine 50TONS (
Carbonization
Palm Kernel Shells (PKS) were obtained from Apomu, Osun State, Nigeria and washed to remove accompanying dirt, thereafter, sun dried for 2 days. The PKS was pulverized to particulate size, weighed and recorded. Carbonization was done using a modified method of Emmanuel et al., 2017. The dried sample was then carbonized for 1 hour at 500-600°C using the muffle furnace. The sample was removed from the furnace and placed in a bowl containing water for quenching and cooling. Then, the shell was drained, dried, weighed and recorded.
Chemical Activation
The palm kernel shell/carbonized palm kernel shell (PKS/CPKS) particles were activated using a modified method of Emmanuel et al., 2017. The sample was soaked in 0.1 M H3PO4 for 24 hours. The PKS/CPKS particles were then dried in an oven and the initial mass was recorded. The activated sample was washed with distilled water and 0.1 M KOH to neutralize the activated material to pH 7, and finally sun dried for 2-3 hours followed by oven drying for 1-2 hours at about 170°C. The activated/carbonized palm kernel shell particles were weighed and the mass recorded.
Formulation for Compounding
The formulation used for compounding in this research is presented in Tables 1 & 2; measurements were carried out in parts per hundred of rubber (pphr).
Compounding, Mastication and Mixing
The compounding of the polymer was carried out using the two-roll mill (DTRM-150). Mastication of the rubber was carried out first, with the rubber milled continuously to make it more elastic and soft for easy incorporation of ingredients and for the shaping process. The rolls of the two-roll mill run at a friction ratio of 1:1.25, with a nip setting of 0.055-0.008 inch, at a temperature of 70°C and a speed of 24 rpm.
Swelling Test
This test was done to determine the extent of solvent penetration into the blends. The solvents used were water, kerosene, and petrol. 1.0 g of each sample was weighed and immersed in 20 ml of water for 1, 2, and 3 hours, respectively, and the weight of the sample was taken after each time interval. The same procedure was used for kerosene and petrol. Results were obtained in triplicate for each sample per solvent, and the average value was taken and recorded [19,20].
Sorption
All vulcanizate samples were immersed in water, kerosene, and petrol at temperatures of 35°C, 45°C, and 55°C for 1, 2, and 3 hours, respectively, and the mass uptake was recorded. The percentage sorption was calculated using the relation in [19].
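The sorption relation itself is not reproduced above; the sketch below assumes the commonly used gravimetric form, percentage sorption = 100*(swollen mass - initial mass)/initial mass, which should be checked against reference [19].

```python
# Percentage sorption from gravimetric data (assumed standard form; verify against ref. [19])
def percentage_sorption(initial_mass_g: float, swollen_mass_g: float) -> float:
    """Mass uptake of solvent expressed as a percentage of the dry sample mass."""
    return 100.0 * (swollen_mass_g - initial_mass_g) / initial_mass_g

# Example: a 1.0 g vulcanizate weighing 2.16 g after 1 h of immersion
# print(percentage_sorption(1.0, 2.16))  # about 116 % mass uptake
```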
Activation Energy of the Swelling Process
The activation energy is the minimum energy required for a reaction to proceed. In determining the activation energy of the swelling process, all samples of both CB/CPKS and CB/APKS were immersed in water, kerosene, and petrol at 35°C, 45°C, and 55°C and their mass-uptake readings were taken. The natural logarithm of the percentage sorption was plotted against the reciprocal of temperature for each sample, and the slopes of the graphs were substituted into the Arrhenius relation, K = A e^(-Ea/RT), to determine the activation energy (Ea), where R is the molar gas constant, 8.314 J mol^-1 K^-1 [1].
Arrhenius relation: K = A e^(-Ea/RT), where Ea is the activation energy, R is the molar gas constant (8.314 J mol^-1 K^-1), and T is the thermodynamic temperature in kelvin (K).
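A short sketch of the slope-based calculation just described: fit ln(percentage sorption) against 1/T and convert the slope to an activation energy. The sorption values in the usage comment are placeholders, not data from this study.

```python
# Activation energy from an Arrhenius plot: ln(S) = ln(A) - Ea/(R*T)
import numpy as np

R = 8.314  # molar gas constant, J mol^-1 K^-1

def activation_energy(temperatures_c, sorption_percent):
    """Fit ln(sorption) vs 1/T (in K) and return Ea in kJ/mol from the slope."""
    T = np.asarray(temperatures_c, dtype=float) + 273.15
    y = np.log(np.asarray(sorption_percent, dtype=float))
    slope, _intercept = np.polyfit(1.0 / T, y, 1)
    return -slope * R / 1000.0  # slope = -Ea/R, so Ea = -slope*R, converted to kJ/mol

# Example with placeholder sorption values at the three test temperatures:
# print(activation_energy([35, 45, 55], [116, 186, 242]))
```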
Discussion
Solubility is the maximum amount of a substance that will dissolve in a given amount of solvent at a specific temperature, and temperature is one of the factors that affect the solubility of both solids and gases. The mass uptakes of the CB/APKS blends at 35°C, 45°C, and 55°C at different time intervals are presented in Table 3. The measurements were carried out using three different solvents: water, kerosene, and petrol. It was observed that the majority of the CB/APKS blends showed the same sorption pattern from 1 to 3 hours at 35°C, 45°C, and 55°C when immersed in water. The permeability of most of the blends from sample 1 to 7 increased from 1 to 2 hours, after which it either fell or remained stable after 3 hours of immersion; this trend was observed across samples 1 to 7. The sorption values of sample 1 at 35°C increased from 1.06 g to 1.09 g after 2 hours, after which they remained stable at 1.09 g after 3 hours. At 45°C the sorption value decreased from 1.19 g to 1.07 g after 2 hours, after which it increased to 1.08 g after 3 hours. At 55°C the sorption values increased from 1.05 g to 1.06 g after 2 hours, after which a stable value of 1.06 g was observed after 3 hours. Sorption values of 1.08 g and 1.07 g were recorded for sample 7 at 35°C and 55°C over the time intervals, respectively, while an increase in sorption value from 1.06 g to 1.09 g was observed at 45°C after 3 hours. It was also observed that the majority of the samples tended to reach equilibrium sorption at 2 and 3 hours at 45°C and 55°C. This may be because the permeability reached its maximum and the blends could no longer tolerate the absorption of water. After 3 hours, the sorption of the majority of the blends decreased as the temperature was increased (Figure 1). This was seen in sample 1, which decreased from 1.09 g to 1.06 g at 55°C. The same trend was observed for samples 3, 4, and 6, which decreased from 1.16 g to 1.05 g, 1.09 g to 1.07 g, and 1.12 g to 1.08 g at 55°C, respectively. A maximum sorption of 1.08 g was recorded for sample 2, while an increase from 1.07 g to 1.08 g was recorded for sample 5 and a decrease from 1.08 g to 1.07 g for sample 7 at 55°C [3,5,14,6,8]. For kerosene, the blends across samples 1-7 showed an increase in permeability at the three experimental temperatures (Table 3). The sorption values increased as the time and temperature increased across the seven samples. Sample 1 at 35°C showed an increase in sorption value from 2.16 g to 2.89 g after 3 hours, from 2.86 g to 3.34 g at 45°C after 3 hours, and from 3.42 g to 3.85 g at 55°C after 3 hours (Figure 2). The same trend was observed across the other samples. This observation might be due to the nature of kerosene as a solvent with a high hydrocarbon content and greater compatibility, facilitating its ability to dissolve or penetrate the blends, which also have a high hydrocarbon content due to the presence of natural rubber. It could also be that the average kinetic energy of the solvent molecules increased with temperature, enabling the solvent molecules to permeate the blends better [12,19].
For petrol, an appreciable increase was observed from 1 to 3 hours at 35°C for all seven samples (Table 3 and Figure 3). The sorption of sample 2 increased from 2.62 g to 4.73 g after 3 hours, and that of sample 3 from 2.37 g to 4.56 g after 3 hours; this trend was observed across the seven samples. This observation might also be due to the non-polar nature of petrol, which allows it to penetrate the blends, which are also essentially non-polar because of their organic components. However, at 45°C and 55°C the sorption decreased from 1 to 3 hours. The sorption of sample 1 at 45°C decreased from 4.39 g to 4.21 g after 3 hours, and at 55°C it decreased from 3.95 g to 3.71 g after 3 hours. The sorption of sample 7 changed from 3.36 g to 2.94 g and from 2.35 g to 2.46 g after 3 hours at 45°C and 55°C, respectively. On the other hand, the sorption values at 45°C and 55°C were slightly greater than those at 35°C. This might be due to the effect of temperature on the permeability of the solvent, arising from the greater mobility or kinetic energy of the solvent at elevated temperature [5,6,13,14]. The mass uptakes of the CB/CPKS blends at 35°C, 45°C, and 55°C at different time intervals are presented in Table 4; these results were also obtained using the three solvents water, kerosene, and petrol [15].
For the majority of the blends immersed in water at 35°C, the sorption tended to increase as the CPKS content increased and the CB content decreased. As the CPKS content increased from sample A to D, the sorption increased from 1.07 g to 1.19 g. A decrease was only observed at higher CPKS compositions, and this might be due to the lower content of CB in those blends, suggesting that a higher CB loading might have better reinforcing and strength-imparting properties than CPKS. However, the sorption values of most of the blends either decreased from 1 to 3 hours or remained stable after an observable increase or decrease. This might be because the blends no longer had capacity to absorb the solvent, so sorption was at its maximum. At 45°C and 55°C, most blends across samples A to F showed stable sorption values after 3 hours, indicating a reduction in the absorption capacity of the blends (Figure 4) [18,19]. The sorption values across samples A to F increased from 1 to 3 hours when the samples were immersed in kerosene (Table 4 and Figure 5). Sample A at 35°C increased from 2.02 g to 2.87 g after 3 hours, sample C from 2.30 g to 3.49 g after 3 hours, and sample F from 2.21 g to 3.20 g after 3 hours. The same trend was observed at 45°C and 55°C for most of the blends. This observation may be a result of the nonpolar solvent dissolving nonpolar molecules; kerosene, being nonpolar, readily penetrates the blends [17,18]. The sorption of the samples immersed in petrol at 35°C showed an appreciable increase from 1 to 3 hours (Table 4 and Figure 6). This trend was observed for all the blends; for example, the sorption of sample A increased from 2.51 g to 4.38 g after 3 hours, sample D from 2.79 g to 4.53 g after 3 hours, and sample F from 2.66 g to 4.85 g after 3 hours. This observation may be a result of non-polar solvents dissolving non-polar molecules. However, the sorption of most blends decreased from 1 to 3 hours at 45°C and 55°C. The sorption of sample A decreased from 3.67 g to 3.10 g at 45°C and from 3.15 g to 2.85 g at 55°C after 3 hours; sample C from 4.22 g to 3.84 g at 45°C and from 3.51 g to 3.25 g at 55°C after 3 hours; and sample F from 4.24 g to 3.07 g at 45°C and from 3.23 g to 3.03 g at 55°C after 3 hours. This could be due to the effect of temperature on the permeability of the solvent, because at a given temperature the activation energy depends on the nature of the chemical transformation that takes place and not on the relative energy state of the reactants and products [8,9]. Overall, therefore, the solubility of CB/APKS in water showed no significant difference as the temperature was varied. This shows that, since the filler is an organic substance, it has little or no affinity for water, with a highest absorption of 1.16 g after 3 hours (sample 3). In the case of kerosene and petrol, both are organic solvents and the filler is an organic substance, which follows the statement that 'like dissolves like'. As the temperature increases, the absorption of kerosene is lower than that of petrol, which is consistent with petrol being more volatile and flammable than kerosene, as both are non-polar solvents [1,2,5,8,18].
In the case of CB/CPKS, there was no significant solubility in water, but petrol was absorbed better than kerosene, which may be due to its volatility and flammability. Also, an increase in temperature allows the filler particles to become more mobile because of their increased kinetic energy, which makes the solvent molecules interact more with the filler particles, as observed for petrol and kerosene. The low solubility of the fillers in the different solvents may therefore be due to the low reactive surface of the vulcanizates filled with the bio-fillers used [15,17,18]. The degree of crosslinking, the filler dispersion, the nature of the solvent, and the type of fillers used should also be considered [13,5,9].
Generally, petrol, being a mixture of hydrocarbons with a lower molecular weight than kerosene, may be expected to diffuse faster and be accommodated in the rubber matrix with fewer hindrances. The decrease in sorption with increasing filler loading may arise from filler particles behaving as obstacles to the diffusing molecules. As the filler loading in the rubber matrix increases, more and more obstacles are created for the diffusing molecules, thus reducing the amount of penetrant solvent. References [1,8,10] explain why higher sorption values were obtained for low-molecular-weight hydrocarbons.
Activation Energy
The activation energy, being the minimum energy required for a chemical reaction to occur, implies that the lower the activation energy, the easier it is for the reactant particles to overcome the energy barrier and form products, and vice versa. In this context, the permeability of the solvent is inversely related to the activation energy for most blends; that is, the better the solvent permeates the blends, the lower the activation energy, and vice versa. For samples A-F, the solvent that permeated the blends most was kerosene, followed by petrol and water. The activation energy results reflected the solvents' permeability, except for water, which showed a different pattern. The result observed for water might be due to its polar nature as a solvent and the wide difference in solubility parameters between water and the majority of the ingredients in the vulcanizates. The activation energies of sample A for kerosene, petrol, and water were 23.99 kJ/mol, 25.06 kJ/mol, and 11.96 kJ/mol; of sample C, 9.88 kJ/mol, 22.63 kJ/mol, and 11.96 kJ/mol; and of sample F, 14.53 kJ/mol, 26.61 kJ/mol, and 5.55 kJ/mol, respectively (Table 5). The permeability in kerosene and petrol is a result of nonpolar solvents dissolving nonpolar molecules [1,2,4,8,10,14,18].
A similar explanation can be given for samples 1-7 (Table 6). The activation energies for kerosene, petrol, and water were 17.07 kJ/mol, 13.61 kJ/mol, and 16.85 kJ/mol for sample 1, respectively, with petrol being the solvent that permeated the blend most for sample 1. The activation energies for kerosene, petrol, and water for sample 4 were 23.10 kJ/mol, 32.83 kJ/mol, and 10.44 kJ/mol, respectively, with petrol having the highest activation energy. The results recorded for sample 7 were 9.48 kJ/mol, 47.18 kJ/mol, and 5.55 kJ/mol for kerosene, petrol, and water, respectively. The same trend was observed for sample 5. The activation energy results for both CB/APKS and CB/CPKS may be due to the aggregation of carbon chains in the organic compounds as the filler content increases, which reduces ignition and brings about an increase in modulus and tensile strength [4], making the interaction of petrol and kerosene with the samples more difficult as the composition varies [1,2,4,8,10,14,18].
Conclusion
The solubility and thermodynamic properties of CB/APKS and CB/CPKS filled NR blends were investigated. The study showed that the blend loading composition and the nature of the organic molecule played a significant role in determining the mass uptake. Since the filler is an organic substance, it has little or no affinity for water. In the case of kerosene and petrol, both are organic and the filler is an organic substance, which follows the statement 'like dissolves like'. As the temperature increases, the absorption of kerosene is lower than that of petrol. The activation energy results reflected the solvents' permeability, except for water, which showed contrary results. The result observed for water might be due to its nature as the universal solvent, being entirely different from other solvents in terms of reactivity and anomalous properties. This means polar solvents dissolve polar molecules while nonpolar solvents dissolve nonpolar molecules. This research shows that both CB/APKS and CB/CPKS possess great potential in rubber science and technology.
|
v3-fos-license
|
2021-04-14T13:36:27.359Z
|
2021-04-13T00:00:00.000
|
233224969
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcgeriatr.biomedcentral.com/track/pdf/10.1186/s12877-021-02186-x",
"pdf_hash": "377cf462f495a43fa0f8a8a375d29da22ab02f6d",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44388",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"sha1": "cc03de084a49987a5c79a1951d4a5eb187275449",
"year": 2021
}
|
pes2o/s2orc
|
Convergent and discriminative validity of the Frail-VIG index with the EQ-5D-3L in people cared for in primary health care
Background The Frail-VIG frailty index has been developed recently. It is an instrument with a multidimensional approach and a pragmatic purpose that allows rapid and efficient assessment of the degree of frailty in the context of clinical practice. Our aim was to investigate the convergent and discriminative validity of the Frail-VIG frailty index with regard to EQ-5D-3L value. Methods We carried out a cross-sectional study in two Primary Health Care (PHC) centres of the Catalan Institute of Health (Institut Català de la Salut), Barcelona (Spain) from February 2017 to January 2019. Participants in the study were all people included under a home care programme during the study period. No exclusion criteria were applied. We used the EQ-5D-3L to measure Health-Related Quality of Life (HRQoL) and the Frail-VIG index to measure frailty. Trained PHC nurses administered both instruments during face-to-face assessments in a participant’s home during usual care. The relationships between both instruments were examined using Pearson’s correlation coefficient and multiple linear regression analyses. Results Four hundred and twelve participants were included in this study. Frail-VIG score and EQ-5D-3L value were negatively correlated (r = − 0.510; P < 0.001). Non-frail people reported a substantially better HRQoL than people with moderate and severe frailty. EQ-5D-3L value declined significantly as the Frail-VIG index score increased. Conclusions Frail-VIG index demonstrated a convergent validity with the EQ-5D-3L value. Its discriminative validity was optimal, as their scores showed an excellent capacity to differentiate between people with better and worse HRQoL. These findings provide additional pieces of evidence for construct validity of the Frail-VIG index.
Background
Validity is defined as "the degree to which an instrument truly measures the construct(s) it purports to measures" [1]. Validation is a continuous process and different forms of validation can be applied. Criterion validity and construct validity are two types of validity. Criterion validity is applicable when there is the gold standard for the construct that is measured, it refers to the degree to which the scores of a measurement instrument are an adequate reflection of a gold standard [1,2]. By contrast, construct validity is applicable when there is no gold standard, it refers to the degree to which the scores of a measurement instrument are consistent with the available knowledge about the construct [1,2].
On the other hand, frailty is defined as "a clinical state in which there is an increase in an individual's vulnerability to develop negative health-related events (including disability, hospitalizations, institutionalizations, and death) when exposed to endogenous or exogenous stressors" [3]. It is a complex and multidimensional concept for which there are numerous and multiple operational definitions. This fact has contributed to the lack of an accepted gold standard [3,4]. As a result, most frailty measurement instruments assess their validity by analysing the degree of consistency of their scores with different hypotheses about their relationship with other instruments or the differences between relevant groups. In other words, hypothesis testing for construct validity studies is carried out because of the difficulty in assessing criterion validity.
A new frailty index, the Frail-VIG index, has been developed recently [5,6]. It is an instrument with a multidimensional approach and a pragmatic purpose that allows rapid and efficient assessment of the degree of frailty in the context of clinical practice. This measurement instrument has been shown to have an optimal capacity to predict two-year mortality (Area Under the Curve 0.85) [6]. The relationship of its scores with those of the Clinical Frailty Scale [7] has been evaluated in a cross-sectional study, and a strong positive correlation (r = 0.706) has been established [8]. All of these studies have been conducted in an inpatient hospital setting, and there have been no studies in primary health care (PHC) settings.
Previous research suggests an association between frailty and worse quality of life, but its findings are mixed and inconsistent. However, recent systematic reviews show a consistent negative association between frailty and quality of life among community-dwelling people [9,10]. Besides, the validity of a measuring instrument does not reside in the instrument itself but in how it is used, and hence depends on its appropriateness to the target population and the specific context of administration [2]. Therefore, a good approach to further developing evidence of the validity of the Frail-VIG index would be to analyse the relationship of its scores with those of another instrument that measures quality of life, with both instruments administered in the context of PHC.
Consequently, we carried out this study in a PHC setting to investigate the convergent and discriminative validity of the Frail-VIG index with regard to health-related quality of life (HRQoL) measured by the EQ-5D three-level version (EQ-5D-3L). Concerning convergent validity, we hypothesised that the relationship between the scores of the two instruments was moderate to strong and negative. With regard to discriminative validity, we hypothesised that non-frail people would have higher scores on HRQoL than frail people.
Study design
We conducted a cross-sectional study on measurement properties.
Setting and participants
The study was carried out in two PHC centres of the Catalan Institute of Health (Institut Català de la Salut), Barcelona (Spain) from February 2017 to January 2019. In these centres, people who cannot visit the centre for PHC services are included under a home care programme and are cared at-home by PHC centre's professionals. Participants in the study were all people included under a home care program during the study period. No exclusion criteria were applied.
Variables and data measurements
We used the EQ-5D-3L to measure HRQoL, which is one of the most widely used measurement instruments [11][12][13]. It is a generic measurement instrument because it measures HRQoL in a way that can be used across different types of patients, health conditions, and treatments. This instrument comprises two parts. The first part consists of five dimensions: mobility, self-care, usual activities, pain/discomfort, and anxiety/depression. Each dimension has three levels: no problems, some problems, and extreme problems. A unique health status is defined by combining one level from each of the five dimensions, resulting in a five-digit number, with 11111 reflecting the best possible health status and 33333 the worst. The second part of the instrument is the EQ VAS, which comprises a visual analogue scale from 0 (the worst health imaginable) to 100 (the best health imaginable). The EQ-5D-3L is designed for self-completion by respondents, but several other modes of administration (interviewer administered, face-to-face interview, or telephone interview) are also possible. Existing research has established that self-completion and assisted completion produce equivalent scores overall, and therefore both methods can be used [14,15]. For the present study, we used the Spanish face-to-face interview version, as a large majority of the participants were unable to read and write [16].
Frailty was measured using the original Spanish version of the Frail-VIG index. It is composed of 22 items that evaluate 25 deficits based on the comprehensive geriatric assessment [5,6]. It is constructed using only variables recorded during the usual clinical evaluation process. The value of the index is obtained by dividing the sum of the identified deficits by 25, the total number of potential deficits, so the greater the number of deficits present, the higher the index score. Likewise, different index cut-off points have been established that distinguish between four degrees of frailty [6].
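As a minimal illustration of the deficit-accumulation arithmetic described above, the sketch below divides the number of identified deficits by 25. The category cut-offs are invented placeholders, since the actual thresholds are given in reference [6] and are not reproduced here.

```python
# Frail-VIG style deficit-accumulation score: identified deficits / 25 potential deficits.
# Category cut-offs below are illustrative placeholders only (see ref. [6] for the real values).
TOTAL_DEFICITS = 25
EXAMPLE_CUTOFFS = [(0.20, "no frailty"), (0.35, "mild frailty"),
                   (0.50, "moderate frailty")]  # hypothetical thresholds

def frail_vig_score(identified_deficits: int) -> float:
    return identified_deficits / TOTAL_DEFICITS

def frailty_category(score: float) -> str:
    for upper, label in EXAMPLE_CUTOFFS:
        if score < upper:
            return label
    return "severe frailty"

# print(frail_vig_score(9), frailty_category(frail_vig_score(9)))
# 0.36 -> "moderate frailty" under these placeholder cut-offs
```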
Researchers developed an instruction manual for the administration of instruments. Trained PHC nurses administered both instruments during face-to-face assessments in a participant's home during usual care. These interviews had an average duration of 30 min. A pilot test with 20 participants was also carried out to detect possible problems and to introduce improvement strategies. After this pilot test, no changes in the procedure were necessary.
Statistical methods
We applied a scoring algorithm based on the Spanish population to convert EQ-5D-3L states into a single summary value [13,17]. This value is "attached to an EQ-5D profile according to a set of weights that reflect, on average, people's preferences about how good or bad the state is" [18] and ranges from 1 (full health) to 0 (a state as bad as being dead), although negative values are possible, corresponding to health states rated as worse than death. This value is often used in economic evaluations, but it can also be used to describe the health of a population or the severity of disease among patients [19]. We calculated central tendency and dispersion measures for the quantitative variables. For categorical variables, we estimated absolute and relative frequencies. The relationships between the Frail-VIG index and the EQ-5D-3L value were examined using Pearson's correlation coefficient and multiple linear regression analyses. Correlation coefficients of ≤0.29 were considered weak, 0.30-0.49 low, 0.50-0.69 moderate, and ≥0.70 strong [20]. To examine whether non-frail people had higher scores on HRQoL than frail people, a one-way ANOVA was conducted, with frailty status as the independent variable and the EQ-5D-3L value as the dependent variable. We used the statistical software IBM SPSS Statistics version 24 for all analyses.

(Table 2 footnote: each digit of a profile corresponds to one of the five EQ-5D-3L dimensions, from left to right: Mobility, Self-Care, Usual Activities, Pain and Discomfort, and Anxiety and Depression; level 1, no problems; level 2, moderate problems; level 3, extreme problems.)
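To make the valuation and correlation steps concrete, the sketch below applies a generic additive disutility scheme to EQ-5D-3L profiles and correlates the resulting values with Frail-VIG scores. The decrement weights are invented placeholders, not the Spanish value set of [13,17], and the variable names are hypothetical.

```python
# EQ-5D-3L valuation and correlation sketch (placeholder weights, not the Spanish value set)
import numpy as np
from scipy.stats import pearsonr

# Invented illustrative decrements per dimension for levels 2 and 3 (order: MO, SC, UA, PD, AD)
LEVEL2 = {"MO": 0.10, "SC": 0.10, "UA": 0.07, "PD": 0.08, "AD": 0.07}   # placeholders
LEVEL3 = {"MO": 0.30, "SC": 0.25, "UA": 0.20, "PD": 0.25, "AD": 0.22}   # placeholders

def eq5d_value(profile: str) -> float:
    """Map a 5-digit profile such as '11223' to a single summary value."""
    dims = ["MO", "SC", "UA", "PD", "AD"]
    value = 1.0
    for dim, level in zip(dims, profile):
        if level == "2":
            value -= LEVEL2[dim]
        elif level == "3":
            value -= LEVEL3[dim]
    return value

# Hypothetical usage with one profile and one Frail-VIG score per participant:
# values = np.array([eq5d_value(p) for p in profiles])
# r, p = pearsonr(frail_vig_scores, values)  # a moderate negative r was observed in this study
```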
Results
Four hundred and twelve participants were included in this study. Table 1 shows their general characteristics and frailty status according to the Frail-VIG index scores. Table 2 describes the 10 most frequent EQ-5D-3L profiles according to frailty status. The worst of these profiles was most prevalent among people with severe frailty, while the best was more frequent among non-frail people.
Pearson's correlation coefficient between the Frail-VIG index and the EQ-5D-3L value was negative and moderate (r = −0.510; P < 0.001). After adjusting for age and sex, the multiple linear regression model revealed that the Frail-VIG index was independently correlated with the EQ-5D-3L value (B = −0.945; 95% Confidence Interval, −1.098 to −0.791; R² = 0.287). As shown in Table 1, non-frail people reported a substantially better HRQoL than people with moderate and severe frailty. Likewise, closer inspection of Table 3 shows that the EQ-5D-3L value declined significantly as the Frail-VIG index score increased.
Discussion
In this study, involving people cared for at home by PHC professionals, we found that the Frail-VIG index demonstrated convergent validity with the EQ-5D-3L value.
Furthermore, its discriminative validity was optimal, as its scores showed an excellent capacity to differentiate between people with better and worse HRQoL.
Consistent with the literature, this research found that HRQoL and frailty were negatively associated among community-dwelling older people [9,10]. Studies on other cumulative deficit models of frailty have also found a negative relationship between the frailty state and the scores of instruments measuring HRQoL, well-being and life satisfaction [9,10,[21][22][23]. However, the results of this study show that not only is the frailty state associated with a worse HRQoL but also that there is a linear association between both indexes (Frail-VIG index and EQ-5D-3L value). Hypothesis testing is an ongoing process, so the more hypotheses are tested the more evidence is generated for construct validity [24]. Therefore, this research supports evidence from previous studies carried out in the hospital setting on the construct validity of the Frail-VIG index [6,8].
This study has several strengths. Most validation studies of frailty measurement instruments in community-dwelling people focus on demonstrating their predictive potential for adverse outcomes or resource use, such as disability, institutionalisation or hospital admissions [25][26][27]. In contrast, studies that analyse their relationship to more positive outcomes such as HRQoL are less common [10,28]. Likewise, the use of the Frail-VIG index in primary care has been poorly studied, and this is one of the first studies to analyse its construct validity in this care setting. However, some limitations exist. The representation of non-frail people is very low (12 people), probably because the study population comprised people in a home-care programme. A greater representation of this group might have influenced the discriminative validity observed for the Frail-VIG index. In addition, we used the EQ-5D values for this assessment of the measurement properties of the Frail-VIG index instead of other values such as the EQ VAS. (Table 3 caption: EQ-5D-3L value, from "0", "a state as bad as being dead", to "1", "full health", for the total study population and the seven groups based on their Frail-VIG score, from "0", "absence of frailty", to "1", "severe frailty".) The EQ VAS can be considered a measure closer to the patient's perspective than the EQ-5D values [29]. Nevertheless, as Devlin and Parkin point out [19], being able to summarise and represent a health profile with a single value has important advantages, including simplifying statistical analyses. On the other hand, both the EQ-5D values and the EQ VAS are able to discriminate between the quality of life of most groups of individuals with different sociodemographic factors, and those with or without clinical conditions [30]. Furthermore, some studies [31] and experts in the field of psychometrics [32] report that older people experience difficulties in understanding and completing direct estimation methods, such as the visual analogue scale. For all these reasons, we chose the EQ-5D values to carry out this psychometric study.

People living with frailty risk experiencing a decline in their quality of life [9,10,33]. The findings of this study suggest that interventions aimed at decreasing frailty could have the added benefit of improving HRQoL. PHC professionals are naturally positioned to identify frailty early and to implement interventions that prevent related adverse effects in the most vulnerable people [4]. Moreover, the assessment of frailty in PHC settings requires tools that are not time consuming as well as valid and reliable, which is why the Frail-VIG index could provide a useful and appropriate tool for this care setting [34].
Conclusions
This study has identified a moderate negative correlation between the Frail-VIG index and the EQ-5D-3L values. It has also shown that the Frail-VIG index was able to discriminate significantly between home-dwelling older people according to their HRQoL. These findings provide additional evidence for the construct validity of the Frail-VIG index. Further research is needed on this new measurement instrument to determine its suitability for screening and preventing adverse effects of frailty in PHC settings.
|
v3-fos-license
|
2018-04-03T00:16:58.089Z
|
2012-03-25T00:00:00.000
|
4687952
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://josr-online.biomedcentral.com/track/pdf/10.1186/1749-799X-7-13",
"pdf_hash": "e02d6245920dd81bb922911f1252f2406d677ff7",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44389",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"sha1": "89444184ee052695cfdd5bab3ab7bb205a0489a9",
"year": 2012
}
|
pes2o/s2orc
|
Predictors of excellent early outcome after total hip arthroplasty
Background Not all patients gain the same degree of improvement from total hip replacement and the reasons for this are not clear. Many investigators have assessed predictors of general outcome after hip surgery. This study is unique in its quest for the predictors of the best possible early outcome. Methods We prospectively collected data on 1318 total hip replacements. Prior to surgery patient characteristics, demographics and co-morbidities were documented. Hip function and general health was assessed using the Harris Hip score (HHS) and the Short-Form 36 respectively. The HHS was repeated at three years. We took a maximal HHS of 100 to represent an excellent outcome (102 patients). Multiple logistic regression analysis was used to identify independent predictors of excellent outcome. Results The two strongest predictive factors in achieving an excellent result were young age and a high pre-operative HHS (p = 0.001). Conclusions It was the young and those less disabled from their arthritis that excelled at three years. When making a decision about the timing of hip arthroplasty surgery it is important to take into account the age and pre-operative function of the patient. Whether these patients continue to excel however will be the basis of future research.
Introduction
Total hip arthroplasty (THA) has been shown to provide both significant improvements in the quality of life to patients with hip arthritis [1] but also an excellent cost per Quality-Adjusted Life Year (QALY) gain of half (€6710) that seen in total knee arthroplasty (€13995) [2]. Not all patients however gain the same degree of improvement and the reasons for this are not clear. Many investigators have assessed predictors of outcome after hip surgery [3][4][5][6][7]. This prospective study is unique in its quest for the predictors of the best possible early outcome.
Materials and methods
Between 1998 and 2004 a dedicated audit nurse collected data prospectively on 1318 consecutive unilateral THA. Ethics committee approval was obtained.
Data collected pre-operatively included patient age, sex, body mass index (BMI), smoking status, medical co-morbidities (presence of hypertension, coronary heart disease and diabetes), any use of non-steroidal antiinflammatory drugs (NSAIDS) or aspirin, ASA grade (American Society of Anaesthesiologists), pre-operative haemoglobin (Hb) and level of social deprivation (based on the patient's home post-code).
All of the operations were primary procedures and involved cemented acetabular and cemented femoral prostheses. All patients received prophylactic intravenous cephalosporins and the surgery was conducted in a theatre with laminar flow. They were all performed, or supervised, by a consultant orthopaedic surgeon using the approach most familiar to them. Cementing technique, rehabilitation and follow up were identical for each patient.
Outcome was assessed using two different assessment measures. The first was a joint specific measure -The Harris Hip Score (HHS) and the second was a general health questionnaire -Short Form 36 (SF-36).
The HHS is an extended hip function evaluation, which assesses the patient's perception of pain, function, ability to undertake activities and range of hip motion. The score ranges from 0 to 100, with higher scores indicating increased perceived success and satisfaction [8]. We chose a post-operative HHS of 100 to indicate a patient's perception of excellent outcome.
The SF-36 is a 36-item questionnaire that produces scores in eight domains relating to the patient's quality of life. These are physical functioning, role limitation due to physical problems, bodily pain, general health perception, emotional vitality, social functioning, role limitation due to emotional problems and mental health.
Data was collected pre-operatively and at three years of follow-up. Previous work has shown that HHSs plateau post-total hip replacement at around 18 months [1]. At 3 years therefore we would not expect our patients to see much more in the way of improvement.
Statistics
All data was held in a regional arthroplasty database and recorded in Microsoft Excel format. Data was transferred to SPSS statistical software where the association between a HHS of 100 was tested by chi-squared or t-tests for each factor separately. For factors that gave significant results in these analyses, multiple logistic regression was then used to test for the effect of each factor adjusted for the others. A p value of < 0.05 was considered significant and < 0.001 highly significant.
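As a hedged illustration of the two-stage analysis described above (univariate screening followed by multiple logistic regression), the sketch below uses scipy and statsmodels with hypothetical column names; it is not the authors' SPSS code.

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("tha_outcomes.csv")                  # hypothetical dataset, one row per THA
df["excellent"] = (df["hhs_3yr"] == 100).astype(int)  # excellent outcome = HHS of 100 at 3 years

# Univariate screening: chi-squared for categorical factors, t-test for continuous ones.
table = pd.crosstab(df["sex"], df["excellent"])
chi2, p_sex, _, _ = stats.chi2_contingency(table)

t_age, p_age = stats.ttest_ind(df.loc[df["excellent"] == 1, "age"],
                               df.loc[df["excellent"] == 0, "age"])

# Factors significant on screening enter a multiple logistic regression.
logit = smf.logit("excellent ~ age + preop_hhs + bmi + C(sex) + C(asa_grade)", data=df).fit()
print(logit.summary())
```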
Results
We reviewed 1682 unilateral THAs performed within the six-year recording period. Data was incomplete for 364 patients (111 patients died before the three-year follow up and 253 did not have complete data). This left 1318 patients to enter analysis. We defined an excellent outcome as a patient having a maximum HHS of 100. In our study 102 patients (7.7%) had a HHS of 100 at three years. The average age of all the study patients was 68.5 (SD 9.9) years. The average age for the patients with a HHS of 100 was 62.0 (SD 9.9).
Highly significant independent predictors (p values < 0.001) of a HHS of 100 were: male sex, young age, low ASA grade, low body mass index, high pre-operative HHS, low deprivation levels and the absence of a history of hypertension or coronary disease. All but 2 of the 8 SF-36 variables (Role Emotional and Mental Health) were highly significant (p < 0.001) (Tables 1, 2, 3).
Multiple logistic regression analysis identified a young age (p < 0.001) and a high pre-operative HHS (p = 0.001) as the two most significant associations with an excellent outcome (Table 4).
Discussion
The British Orthopaedic Association state the indications for THA are severe pain and disability, with accompanying radiological changes at the hip in patients where nonoperative treatment has failed or is futile [9]. It has previously been tradition for arthroplasty surgery to be delayed for as long as the patient can tolerate. This is probably a consequence of the paucity of historical longterm follow-up for joint replacements. More recent research has questioned this belief with younger patients appearing to achieve better outcomes than their aged counterparts [3,10]. Fortin et al [11] suggested performing arthroplasty surgery earlier in the course of functional decline may be associated with better outcome. Lingard et al [12] demonstrated marked functional limitation, severe pain and a low mental health score before total knee arthroplasty were predictors of worse outcome. Patients with poor pre-operative walking distance are less likely to gain the same benefits from THA [13].
Of the 1318 patients enrolled in this study the two most powerful predictors of an excellent outcome at three years (HHS of 100) were a high pre-operative HHS and a young age at the time of surgery.
The HHS is an extended hip function evaluation, which assesses the patient's perception of pain, function, ability to undertake activities and range of hip motion. The score ranges from 0 to 100, with higher scores indicating perceived success and satisfaction. Marchetti et al [14] suggested that a HHS of 90-100 indicates an excellent result, 80-90 a good result, 70-80 a fair result and less than 70 a poor result. The HHS was initially designed to assess the outcome of arthroplasty on traumatic arthritis after hip dislocation and acetabular fracture [8]. It has subsequently been shown to be both a sensitive and specific marker of hip function. It is more responsive than walking speed, pain and sub-scales of function of the SF-36 in patients with OA [15]. Soderman and Malchau [16] confirmed the HHS as having high validity and reliability when compared with other outcome scoring systems (Western Ontario and McMaster University Osteoarthritis Index (WOMAC) and the Medical Outcomes Study 36-Item Short-Form Health Survey questionnaires). A weakness of the HHS however is that it assumes a concordance of the views between the clinician and patient. Rothwell et al [17] illustrated how patients and clinicians can differ in their subjective importance of different elements in quality of life assessment.
Whether a maximum score of 100 out of 100 truly represents excellence is debatable. Johanson et al [4] noted final outcome score assessments do not take into account clinical improvement from the base line. This could be seen as a weakness in this study.
A standard was required; hence the arbitrary figure of 100 was selected. This produced 7.7% (n = 102) of patients who reported the best possible score from surgery. This itself is of significance when related to the patient's consent process. Only one patient in thirteen will express no complaints whatsoever at the three-year follow-up. As improvement in patient satisfaction is rare beyond eighteen months [1] any grievances are liable to remain.
It was the younger patients and those less disabled from their arthritis who excelled in this study. This is invaluable information to use during the consent process. At three years of follow-up, patients can expect the best possible result from their hip arthroplasty when they are relatively young and less disabled from their arthritis. This would imply that surgery earlier in the disease may give better early results. What is not clear, however, is the long-term result of hip arthroplasty at a young age. Hilmarsson et al [18] demonstrated in the Swedish hip registry a 10-year survivorship of only 64-67% for hip replacements in patients under 55 years. Callaghan [19] saw a 29% revision rate at 20-25 years after THR in patients less than 50 years old. This would imply that although young patients may get an excellent early result, the overall lifespan of the replacement is likely to be shorter. The increased levels of activity and the subsequent wear seen in the younger age group may explain this. In total knee replacements, however, the converse is true. In a prospective study of 622 knees, Brenkel and Elson [20] demonstrated a young age as an independent predictor of pain from a knee replacement at five years. The authors speculated this could have been due to the development of a pain syndrome secondary to multiple previous operations. This is an entity not normally seen in hip arthroplasty surgery.
The overall improvement for hip replacements in the young may not be as great. In a health-status questionnaire study, MacWilliam et al [21] demonstrated that for each 10-point increase in the preoperative score, patients could expect at least a 6-point decrease in postoperative improvement.
In summary, when making a decision about the timing of hip arthroplasty surgery it is important to consider the age and pre-operative function of the patient. These are strong predictive factors in achieving an excellent early result at three years. Whether these patients continue to excel, however, is not known and will be the foundation of future research.
Authors' contributions GS wrote the manuscript. SJ collated the data, set up the data analysis and helped draft the manuscript. JAB reviewed the manuscript and was involved throughout with patient recruitment. ED conceived the original idea, was involved in patient recruitment and proof read the manuscript. IJB set up the data collection, participated throughout in patient recruitment and proof read the manuscript.
Competing interests
The authors declare that they have no competing interests.
|
v3-fos-license
|
2021-05-16T15:38:27.354Z
|
2021-04-29T00:00:00.000
|
234685322
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://essd.copernicus.org/articles/13/4693/2021/essd-13-4693-2021.pdf",
"pdf_hash": "4014a48a69c09b79c1be9a960a46263820c482e4",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44390",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "1fcf03a8bda3d45edffa2176decc43d6fd2120a7",
"year": 2021
}
|
pes2o/s2orc
|
Comment on essd-2021-16
Fay and coauthors aim to improve the global net air-sea CO2 flux estimate and ease model-data comparisons by making a diversity of pCO2 data products (n=6) with methodological differences more consistent and releasing the results as a new data product: SeaFlux. Their approach involves relying on a climatological pCO2 data product to spatially extrapolate estimates from other pCO2 data products with more limited ocean coverage. After extrapolating all pCO2 data products to the same ocean mask, the authors calculate the net air-sea flux using three wind speed products, while accounting for gas exchange coefficient sensitivities to the individual wind speed products. The authors find that the flux estimate discrepancies between these products can be reduced most by simply using a consistent ocean domain for the pCO2 data products.
General Comments:
The paper is clearly written, the findings are important, and the data product will simplify model-data comparisons. However, a justification for the extrapolation approach is not provided, and a few simple analyses are required to verify that the approach is "a step forward from" existing methods. Reviewers 1 and 2 have already outlined several concerns; therefore, I will keep this brief and focus on specific suggestions and technical corrections for the authors to address.
Specific Comments:
My primary concern is with the pCO2 scaling approach. It is not clear why the MPI-ULB-SOMFFN climatology was used for gap filling rather than the time-evolving JENA-MLS data product. As noted by Reviewer 2, the authors could test their scaling method by using JENA-MLS as the reference data product to see if they achieve similar results.
It is also unclear why the authors use an ensemble mean scaling factor when individual scaling factors for each data product may be more appropriate, as it would allow more data to be used (i.e., a consistent mask wouldn't be required) to determine the scaling factor for most products. I can understand the desire to maintain consistency in the data extrapolation between products, but it's not clear that this approach makes more sense than creating individual scaling factors for the data extrapolations. More information is needed to explain why this decision was made.
As a sensitivity test, the authors could apply their methodology using JENA-MLS to scale MPI-ULB-SOMFFN (and vice versa) directly AND using an ensemble mean, to see which yields a better result. They could also do this using (1) the common missing data mask as well as (2) each missing data mask from the four other data products to evaluate whether the resulting extrapolation bias is sensitive to the extrapolation area. The authors could also apply the linear-scaling approach used in the Global Carbon Budget (GCP) to MPI-ULB-SOMFFN and JENA-MLS (using the missing data masks from the four other products) to quantify the resulting extrapolation biases and determine whether their approach is indeed more accurate than the GCP method.
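To make the suggested sensitivity test concrete, here is a rough numpy sketch of one way it could be set up: one pCO2 field is scaled to fill a masked region using either an ensemble-mean or an individual scaling factor, and the extrapolation bias is evaluated against the withheld values. The synthetic arrays and the simple ratio-based scaling are assumptions for illustration; they are not the SeaFlux algorithm.

```python
import numpy as np

def scaled_fill(target, reference, mask, scale):
    """Fill masked cells of `target` with `scale * reference`; keep observed cells."""
    filled = target.copy()
    filled[mask] = scale * reference[mask]
    return filled

def extrapolation_bias(truth, filled, mask):
    """Mean bias over the cells that were filled by extrapolation."""
    return np.nanmean(filled[mask] - truth[mask])

rng = np.random.default_rng(0)
truth = 370 + 10 * rng.standard_normal((180, 360))         # synthetic "complete" pCO2 field
reference = truth + 5 * rng.standard_normal(truth.shape)   # stand-in gap-filling climatology
mask = np.zeros(truth.shape, dtype=bool)
mask[:30, :] = True                                        # pretend the high latitudes are missing

# Individual scaling factor: ratio of target to reference over the commonly observed area.
scale_individual = np.nanmean(truth[~mask]) / np.nanmean(reference[~mask])
# Ensemble-mean scaling factor: placeholder value standing in for an average over products.
scale_ensemble = 1.0

for name, s in [("individual", scale_individual), ("ensemble-mean", scale_ensemble)]:
    filled = scaled_fill(truth, reference, mask, s)
    print(name, "bias:", round(extrapolation_bias(truth, filled, mask), 3))
```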
The suggested analyses may help clarify which approach is best for achieving data product comparability.
Technical Corrections:
Title: The 2 in pCO2 should be subscripted.
Line 36: pCO2 is not yet defined.
Line 37: Add "modern" before "global mean uptake".
Line 43: "variations" should be "variation." It seems that the atmospheric pCO2 growth rate is the largest driving force governing the net exchange of CO2 across the air-sea interface unless you're talking about sub-annual or pre-industrial timescales. Please clarify.
Line 57: How about: "These differences in flux calculations introduce uncertainty in comparisons between the products and with Global Ocean Biogeochemistry Models (GOBM)."
Line 95: pCO2 was already defined.
Line 97: A "we" seems to be missing.
Line 100: Satellite SST and EN4 subsurface salinity data are used to calculate parameters required for the air-sea flux calculations. What depth are the EN4 salinity data from?
Line 108: Slightly awkward wording. What about: "Flux is defined as being positive when CO2 is released from the ocean to the atmosphere and negative when CO2 is absorbed by the ocean from the atmosphere."
Line 117: "…relationships between pCO2 and proxy variables are expected". The next sentence starting on this line doesn't seem to make sense. Maybe get rid of "in contrast."
Line 130: "net global(?) fluxes"?
Line 159: There seems to be a formatting issue. Additionally, it's not clear if you are talking about the original global flux for each model, or not.
Line 160: Is this because some products are missing the Arctic? That seems important to clarify.
Line 162: "…the final CO2 flux also depends on the…"
Line 82: Remove equal sign.
Figure 2 :
I would recommend converting this to a four-panel figure, with the inset graph having its own panel since there is space. Add a-d lettering to match the caption.
Table 1 :
It is not clear what "unfilled area listed" means. Should this be "Area coverage"?
Table 2 :
Should the 3rd row be titled "Mean Annual Global Flux"?
|
v3-fos-license
|
2020-07-23T09:02:56.281Z
|
2020-07-01T00:00:00.000
|
221108181
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1016/j.bjorl.2020.06.004",
"pdf_hash": "9027cc677437d6603704c0bf19571fc542ddc747",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44391",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "4e2c91ae6c9701b3e2e708ab9678b5fc0e48844f",
"year": 2020
}
|
pes2o/s2orc
|
Clinical and demographic characteristics of adenomatoid odontogenic tumors: analysis of 116 new cases from a single center
Introduction The adenomatoid odontogenic tumor is a relatively uncommon odontogenic neoplasm representing about 4.7% of all odontogenic tumors. Objective The aim of this study was to determine the demographic and clinical profile of adenomatoid odontogenic tumors in a Sri Lankan population. Methods Data were gathered from the cases received over a period of 38 years at the Department of Oral Pathology, Faculty of Dental Sciences, University of Peradeniya. Request forms, biopsy reports and the electronic database of the department were used to obtain relevant information. Demographic data including age, gender and location of the tumor were included in the analysis. Results Out of 116 cases of adenomatoid odontogenic tumor, the mean age was 21.02 ± 11.24 years. The tumor occurs more frequently in the second decade of life, is more prevalent in females, is most often associated with the maxilla, predominantly affects the anterior jaw bones and presents mostly on the right side of the jaw. The results of the present study showed a statistically significant relationship between site of occurrence (maxilla/mandible) and age (p < 0.005). The anterior/mid/posterior site of occurrence also showed a significant relationship with age (p ≤ 0.001). However, the side of occurrence (left or right) showed no statistically significant relationship with age (p > 0.05). Conclusion The adenomatoid odontogenic tumor occurs more frequently in the second decade of life with a significant female predominance, and the commonest site is the anterior maxilla. This study revealed a few differences in the demographic and clinical presentation of adenomatoid odontogenic tumors compared with some regions of the world.
Introduction
Odontogenic Tumors (OT) are uncommon lesions that originate from epithelial, ectomesenchymal and/or mesenchymal tissues of the tooth-forming apparatus. They constitute a heterogeneous group of lesions with diverse biological, clinical, and histopathological features ranging from benign lesions to malignant tumors. 1,2 The classification of odontogenic tumors is essentially based on the interactions between odontogenic ectomesenchyme and epithelium. 3 The literature indicates that odontogenic tumors show a geographic variation in their distribution and frequency. 3-7 The Adenomatoid Odontogenic Tumor (AOT) is classified under epithelial odontogenic tumors due to the histomorphologic resemblance of its components to those of the dental organ. AOT is a benign odontogenic lesion that has been regarded either as a non-invasive, non-aggressive neoplasm with slow but progressive growth or as a hamartomatous growth. AOT is generally considered to be an uncommon tumor. 8 The search for the first identifiable AOT case is challenging because many names have been used for this entity. Some early cases were grouped together with other superficially similar tumors, and this was further complicated as photomicrographic documentation was not available in that era. AOT was first described by Dreibaldt in 1907 as "pseudo-adenoameloblastoma". Harbitz et al. described it in 1915 as a "cystic adamantoma". 9 The first series of AOTs was reported by Stafne in 1948 under the title "epithelial tumors associated with developmental cysts of the maxilla". 10 Bernier and Tiecke published the first article to use the name "adenoameloblastoma". 11 Terminology used for AOT varies according to the literature. Miles from England reported it as a "Cystic complex composite odontome". 12 Further, Oehlers from Singapore described it as "an unusual pleomorphic adenoma-like tumor in the wall of a dentigerous cyst", 13 Lucus from London as a "tumor of enamel organ epithelium" (Lucus, 1957), a Japanese author as "adenomatoid ameloblastoma" 14 and Smith from the United States as "adenomatoid odontoma"; 15 these were other names used for the same entity. There were also AOT cases documented as "adenoameloblastoma", "ameloblastic adenomatoid tumor", "adamantinoma", "epithelioma adamantinum" and "teratomatous odontoma". Finally, in 1969 Philipsen and Birn proposed the widely accepted name "adenomatoid odontogenic tumor". 16 As with all other odontogenic tumors, the specific stimulus that triggers proliferation of the progenitor cells of AOT is unknown. AOT accounts for approximately 3%-7% of odontogenic tumors and is the fourth most frequent tumor among OTs. 8 The relative frequency (RF) of AOT in Sri Lanka has been reported as 8.6% in 1990 17 and 4.7% in another study. 17 Retrospective studies were conducted in Thailand, 18 China, 19 Mexico 5 and California, 20 revealing that the relative frequency of AOT was 5.3%, 2.1%, 7.1% and 1.7%, respectively.
Two-thirds of AOTs are diagnosed in the second decade of life and more than half of cases are found in teenagers 13-19 years of age. 2,17 de Matos et al. 21 in a retrospective review of 15 cases from Brazil revealed a lower mean age of 16.2 years, and studies from California 20 and China 22 reported mean ages of 20.2 years and 22.6 years, respectively.
The tumor is diagnosed more frequently in women, and additional recent studies have revealed a strong female predilection as well. 20,23-25 Further, some other researchers noted that the AOT is more common in blacks. 26 Although the most common site is the anterior maxilla, 7,27,28 there are a few studies which showed a slight mandibular predilection. 29,30 The tooth predominantly associated with AOT is the maxillary canine, 21,27 but some studies have revealed a rare involvement of un-erupted molars. 31 Although AOT is an asymptomatic tumor, patients may be aware of a painless gingival swelling or an area of jaw enlargement which is slowly growing and often associated with an unerupted tooth. 32 The presence of calcifications gives AOTs a mixed radiodense appearance, in addition to the usual appearance of a well-circumscribed unilocular radiolucency.
Histologically it is composed of spindle-shaped epithelial cells that form sheets, strands or whorled masses (rosette-like) in a scant fibrous stroma surrounded by a fibrous capsule. Central spaces of the duct-like structures are lined by a layer of columnar or cuboidal epithelial cells that show reversed polarity, suggesting secretory activity. Foci of calcifications may be seen scattered throughout the tumor, and some AOTs contain larger areas of matrix material or calcification which have been interpreted as dentinoid or cementum. 26 Complete surgical excision with enucleation is the treatment of choice; recurrence of AOT is extremely rare and hence very few recurrent cases have been reported. 33 Several studies from different places around the world have been carried out to determine the demographic and clinical profile of AOT according to age, sex, site, extent of tumor and associated impacted teeth. However, in Sri Lanka there are no such studies related to AOT itself. The current study involved 116 cases of AOTs that need to be added to the literature. Therefore, the objective of the present study was to analyze one of the largest series of AOTs from a single center, collected over a period of thirty-eight years, and compare it with the existing literature.
Methods
This study was a retrospective analytical study. The cases which were diagnosed as AOT with their demographic and clinical characteristics (age, sex, location of tumor) from January 1980 − 31st December 2018 were retrieved from the archives of the Department of Oral Pathology, Faculty of Dental sciences, University of Peradeniya, Sri Lanka. AOTs with inadequate data were excluded and cases with multiple biopsies were considered as a single case.
Ethical clearance was obtained from the ethics review committee of the Faculty of Dental sciences, University of Peradeniya. (ERC/FDS/UOP/1/2018/08). Details that are not in the database were retrieved from patients' request forms which are under Oral Pathology. Histopathologically all cases were evaluated by two pathologists. The cases with unusual features were recorded separately.
Collected data were entered into a Microsoft Excel worksheet. Gathered details were grouped according to age categories to identify frequencies, and the second-decade group was analyzed separately. Distribution within the jaw bones was also evaluated. Data were analyzed using the statistical software SPSS 25 (Statistical Package for Social Sciences 25). The Chi-Square test was used to determine associations. Each variable, in different combinations, was analyzed to identify whether there was any significant relationship. The level of significance was set at p < 0.05 throughout the study.
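A minimal sketch of the association test described above, assuming a data frame with hypothetical columns `site` (maxilla/mandible) and `age_group`; SPSS was used in the study itself.

```python
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("aot_cases.csv")          # hypothetical export of the case records

# Cross-tabulate site of occurrence against age group and test the association.
table = pd.crosstab(df["site"], df["age_group"])
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")  # significant if p < 0.05
```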
Results
A total of 116 cases of AOTs were identified. Age ranged from 5 to 77 years with the mean age of 21.02 years (21.02 ± 11.24) and a median of 18 years. There is a slight difference of the mean ages of females and males (21.14 ± 10.91 years and 20.82 ± 11.91 years).
During the second decade of life the incidence of AOTs is 69.8% which is the highest and followed by 21−30 years age group. From the total sample the peak incidence of AOT (12.1%) was found in the 15 years of age group followed by 18 years of age group (11.2%) and 16 years of age group (9.5%) (Fig. 1a). However, among the 11−20 age group 17.3% were 15 years and 16% of them were 18 years followed by 13.6% who were 16 years old (Fig. 1b).
There were 44 (37.9%) males and 72 (62.1%) females. The male: female ratio for all age groups was 1:1.6. There was a slight female predilection.
The maxilla was the commonest site for AOT consisting of 78 cases (67.2%), while 38 (32.8%) cases were reported in the mandible, giving a maxilla-to-mandible ratio of 2.1:1. Out of 116 cases a precise location was identified in 111 cases. The anterior region of both jaws was more frequently affected (81.1%) followed by the middle region (18.0%) and 1 case (0.9%) in the posterior region. In all, 113 cases had information whether the tumour was on right or left. The right side is more frequently affected 60 (53.1%) than the left side 53 (46.9%), giving a ratio of right: left 1.1:1 (Fig. 2). There were two cases (1.8%) in the anterior region without exact site being cited. Therefore, it was not included in Fig. 2.
The maxilla is the site of predilection in the first, second and third decade of life (maxilla: mandible ratio 2:1, 3.5:1 and 1.3:1 respectively). However, findings in the 30 years and over age group cohort, indicated that the mandible was frequently involved. The maxilla: mandible ratio for 31−40 age group was 1:1.75, and for the 41−50 age group was 1:3 and for > 50 years of age, all 3 cases were in the mandible.
According to the analysis of gender distribution and side predilection (left/right), the results indicate the ratio for left side of the jaw for male:female was 1:1.1 and right side of the jaw bone was 1:2.3. Distribution of cases between left and right side of the jawbone is further analyzed according to different decades of life. It shows that right side is the site of predilection in the first, second, third and fourth decade of life (left: right ratio 1:2, 1:1.2, 1:4, and 1:1.5 respectively). However, in the fifth decade of life it shows an equal prevalence for the left and right side of jaw bones. For the patients over 50 years of age, the right side appeared to be the most frequently involved site. Left-to-right ratio for > 50 years age group was 2:1 (Table 1).
There was a statistically significant relationship between site of occurrence (maxilla/mandible) and age (p < 0.005). The anterior/mid/posterior location also showed a significant relationship with age (p ≤ 0.001). However, side of occurrence (left/right) with age, and site of occurrence with gender, were not statistically significant. The majority of the cases showed characteristic features of AOTs (Fig. 3a), with one case presenting as an unusually large lesion (Fig. 3b−c).
Discussion
The adenomatoid odontogenic tumor is a benign odontogenic lesion that has been regarded either as a non-invasive, non-aggressive neoplasm or as a developmental hamartomatous growth. 26,34 Several studies from different places around the world have been carried out to determine the demographic and clinical profile of AOT. According to previous studies, the demographic and clinical presentation of AOT does not differ significantly from one country to another. 2,17 However, in Sri Lanka there is no updated information available regarding the demographic and clinical profile of AOT. Therefore, this study was undertaken to analyze the demographic characteristics and clinical profile of AOTs in Sri Lanka. The adenomatoid odontogenic tumor is not a common OT; therefore, we have only 116 cases on record at the Department of Oral Pathology, Faculty of Dental Sciences, University of Peradeniya, over the past 38 years. However, this is the largest sample from a single institution so far in the literature. The results of our study are on par with most studies around the world.
It has been generally accepted that the relative frequency of AOT corresponds to 2.2%−8.7% of all odontogenic tumours. 8,31 However, in a worldwide collaborative retrospective study, the relative frequency of AOT ranged from 0.6% to 38.5%. 34 The relative frequency of AOT in Sri Lanka was reported as 8.6% in 1990. 17 However, more recent studies from Sri Lanka reported a lower relative frequency of 4.7% of all OTs. 2 Likewise, some reports from China suggested a higher relative frequency (8.3%) of AOT, 22 although a recent retrospective review of 1309 cases from China revealed a lower relative frequency of 2.1%. 19 In comparison with other Asian countries, our relative frequency was lower than that reported in Thailand (5.3%). 18 Studies from Malaysia and China, 19,29 California, 20 Nigeria, 25 Brazil, 21 and Mexico 5 found the relative occurrence of AOT among all OTs to be 0.3%, 2.1%, 1.7%, 4.5%, 5.4% and 7.1%, respectively.
Similar to the present study, retrospective studies with large case series revealed a female predominance for AOT, with a global female-to-male ratio of 1.9:1. 8,34 However, the female-to-male ratio of 1.6:1 obtained in this study did not reflect the marked female preponderance reported in previous studies in Asia. A recent study from Sri Lanka revealed a female-to-male ratio of 2:1, 2 and Toida et al. 35 in Japan reported a female-to-male ratio of 3.0:1. In contrast, Swasdison et al. 18 in a retrospective review of 67 cases in a Thai population showed a female-to-male ratio of 1.8:1, which was closely similar to our findings. Furthermore, Arotiba et al. 27 in a previous study from Nigeria and de Matos et al. 21 in a review of 15 cases from Brazil showed a female-to-male ratio of 1.4:1, which was more in line with our findings.
With respect to age distribution, it has been reported that more than two-thirds of AOTs are diagnosed in young patients, especially those in the second decade of life, and more than 80% are found before the age of 30 years. 8,27,34 Our study findings are in agreement with these reports.
The mean age of the patient at the time of diagnosis was 21.02 years: 21.14 years for females and 20.82 years for males in the current study. Our previous studies showed a lower mean age (17.6 years and 18 years, respectively), probably due to the smaller number of cases compared with the present study. 2,17 However, Swasdison et al. 18 in a retrospective review of 67 cases from Thailand indicated a mean age of 21.1 years (21.4 years for males and 20.9 years for females). The current results are more in parallel with this study. Mean ages from studies by Adisa et al. 25 from Nigeria and Lu et al. 22 from China were 20.4 years and 22.6 years, respectively. In addition, a study comprising 1088 cases of OTs from Northern California reported 20.2 years as the mean age. 20 The findings of the above studies were more in keeping with the findings of the current study. Furthermore, Ochsenius et al. 7 in a retrospective review from Chile revealed a similar mean age (21.03 years) compared to the current study findings. Leon et al. 36 in a clinicopathological and immunohistochemical study of 39 cases of AOT from three oral diagnostic services (Brazil, Mexico and Guatemala) and de Matos et al. 21 in a retrospective review of 15 cases from Brazil showed lower mean ages of 16 years and 16.2 years, respectively. Arotiba et al. 27 in a retrospective review of 57 cases in a Black African population reported a mean age of 17 years. However, the current study findings are not compatible with these latter findings.
AOT occurs predominantly in the maxilla (67.2%) compared to the mandible (32.8%), with a maxilla-to-mandible ratio of 2.1:1 (female-to-male ratio of 1.4:1 for the maxilla and 2.1:1 for the mandible). The anterior part of the jaw (81.1%) was much more affected than the mid (18.0%) and posterior (0.9%) regions. Furthermore, the right side (53.1%) was affected slightly more than the left side (46.9%). The maxilla-to-mandible ratio of AOT for Sri Lanka was previously reported as 2.3:1, in parallel with the findings of the current study. 2 Similar studies carried out worldwide revealed a maxillary predilection, with maxilla-to-mandible ratios of 1.25:1 (25), 1.8:1 (36), 1.9:1 (18) and 2:1. 31 A predilection for the anterior region of the jaw was revealed in some studies. 18,25 In addition to a retrospective review of 15 cases from Brazil, a similar study from Chile also revealed an anterior jaw predilection. 7,21 The current study results were more in line with those studies.
Although retrospective studies from Malaysia and Brazil showed a slight mandibular predilection, 29,30 our results and most other studies contradict those findings. Some older studies from Nigeria suggested a mandibular predilection for AOT. 37 However, more recent studies from Nigeria by Arotiba et al. 27 and Effiom et al. 28 agree with an anterior maxillary preponderance.
Conclusion
The present study observed that AOT occurs more frequently in the second decade of life, is more prevalent in females, is most often associated with the maxilla, predominantly affects the anterior jaw bones and presents mostly on the right side. The results showed a statistically significant relationship between site of occurrence (maxilla/mandible) and age (p < 0.005). The anterior/mid/posterior location also showed a significant relationship with age (p ≤ 0.001). However, side of occurrence (left/right) with age (p > 0.05) was not statistically significant, and there was no statistically significant relationship between site of occurrence and gender. In addition, this study revealed a few differences in the demographic and clinical presentation of AOT from region to region.
Data availability statement
Data used for the analysis can be provided whenever requested. Raw data can also be provided on request.
|
v3-fos-license
|
2022-07-09T15:24:54.089Z
|
2022-07-01T00:00:00.000
|
250370554
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1420-3049/27/14/4335/pdf?version=1657180917",
"pdf_hash": "45686e2c2bf114e8de8b85b6197760c556957702",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44392",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "ebc2f79b0f76be28932505ea79e00e4fe745431e",
"year": 2022
}
|
pes2o/s2orc
|
Rare Earth Elements and Bioavailability in Northern and Southern Central Red Sea Mangroves, Saudi Arabia
Different hypotheses have been tested about the fractionation and bioavailability of rare earth elements (REE) in mangrove ecosystems. Rare earth elements and bioavailability in the mangrove ecosystem have been of significant concern and are recognized globally as emerging pollutants. Bioavailability and fractionation of rare earth elements were assessed in Jazan and AlWajah mangrove ecosystems. Comparisons between rare earth elements, multi-elemental ratios, geo-accumulation index (Igeo), and bio-concentration factor (BCF) for the two mangroves and the influence of sediment grain size types on concentrations of rare earth elements were carried out. A substantial difference in mean concentrations (mg/kg) of REE (La, Ce, Pr, Nd, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, and Lu) was established, except for mean concentrations of Eu, Gd, Tb, Tm, and Lu. In addition, concentrations of REEs were higher in the Jazan mangrove ecosystem. However, REE composition in the two mangroves was dominated by the lighter REE (LREE and MREE), and formed the major contribution to the total sum of REE at 10.2–78.4%, which was greater than the HREE contribution of 11.3–12.9%. The Post Archean Australian Shale (PAAS) normalized values revealed that lighter REE (LREE and MREE) were steadily enriched above heavy REE. More so, low and negative values of R(H/M) were recorded in the Al Wajah mangrove, indicating higher HREE depletion there. The values of BCF for REEs were less than 1 for all the REEs determined; the recorded BCF for Lu (0.33) and Tm (0.32) were the highest, while the lowest BCF recorded was for Nd (0.09). There is a need for periodic monitoring of REE concentrations in the mangroves to keep track of the sources of this metal contamination and develop conservation and control strategies for these important ecosystems.
Introduction
The Red Sea is a channel that forms a linkage between the Mediterranean Sea (north) and the Indian Ocean (south). The sea is a marine biodiversity hotspot with a high abundance of coral reefs, mangroves, and sea grass [1,2]. In aquatic ecosystems such as the Red Sea, suspended sediments and particulate matter may account for almost 90% of metal burden [3].
Rare earth elements (REEs) are a collection of seventeen chemical elements in the periodic table and are generally trivalent elements, excluding Ce and Eu, which tend to exist as Ce (IV) and Eu (II). REEs starting from La and ending with Sm are considered light rare earth elements (LREEs), while those ranging between Gd and Lu are considered heavy rare earth elements (HREEs) [4]. The light rare earth elements (LREEs) and heavy rare earth elements (HREEs) have analogous geochemical behaviors. They give a better understanding of complex procedures of a geochemical nature that single proxies cannot readily discriminate due to their coherent and expectable characteristics [4,5].
REEs do not occur in pure metal form even though they occur in nature, although Promethium, the rarest, only occurs in trace quantities in natural materials as it has no stable or long-term isotopes [6]. Globally, REEs are recognized as emerging micro-pollutants in aquatic ecosystems [7,8]. Modern technologies, on the other hand, utilize REE for its unique physicochemical properties in high-tech applications [9]. For example, AgInSe 2 (AIS) is one of the most attractive materials in thin film solar cell applications because of its high optical absorption coefficient [10]. As a result, it is unlikely that REE would spread naturally in most coastal habitats such as mangroves because they have already been impacted by anthropogenic activities [8,11,12].
Mangroves are important intertidal coastal systems that provide multiple ecological functions. They regulate material exchange at the interfaces between land, atmosphere, and marine ecosystems [13,14]. Mangrove ecosystems are dynamic in nature, often subjected to rapid changes in physicochemical properties such as water content, pH, salinity, texture, and redox conditions due to tidal flushing, and the associated flooding could influence metal contamination. Flooding may develop redox cycles in the aquatic environment, with alternating periods of oxidizing and reducing conditions [12,[15][16][17]. Therefore, the application of REE can be useful in tracing the channels and processes in which these elements are involved, particularly in contaminated environments such as those found in the Jazan and AlWajah mangrove ecosystems and their biota.
Notably, there are few or no studies providing a comprehensive investigation of the bioavailability of rare earth elements in the mangrove ecosystems of Jazan and AlWajah in the northern and southern central Red Sea. As a result, this study will open the way for periodic monitoring of REE concentrations in mangrove ecosystems, as well as tracking the sources of metal contamination, allowing for the development of policies for the control and conservation of these important ecosystems.
REE Composition in Sediment and Influence of Grain Sizes
In this study, the results for REE composition in sediment showed significant variation (t-test, p < 0.05) between the two mangrove ecosystems investigated (Table 1). A substantial difference in mean concentrations (mg/kg) of REEs (La, Ce, Pr, Nd, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, and Lu) was also recorded; except for the mean concentrations of Eu, Gd, Tb, Tm, and Lu, significantly higher concentrations of REEs were recorded in sediment samples collected from the Jazan mangrove ecosystem (Table 1). Generally, the concentrations of REEs were lowest in the Al Wajah mangrove ecosystem. The sum of REEs (∑REE = 112.54 ± 10.48 mg/kg) recorded at the Jazan mangrove was about 1.4 times that of Al Wajah (∑REE = 78.47 ± 7.89 mg/kg). Nevertheless, the REE composition in the two mangroves was dominated by the lighter REEs (LREE and MREE), which formed the major contribution to the total sum of REEs at 10.2-78.4%, greater than the HREE contribution of 11.3-12.9%. In addition, the sum of LREEs (La, Ce, Pr, Nd) was about seven- and eight-fold that of the MREEs (Sm, Eu, Gd) and HREEs (Tb, Dy, Ho, Er, Tm, Yb, Lu), respectively (Table 1). The principal component analysis biplot revealed the influence of sediment grain size types on REE concentrations in sediment and the sites' contributions to the total variation (Figure 1A). The relationship revealed by the PCA was based on component 1 (52.3%) and component 2 (16.7%), together accounting for 69.0% of the total variation (Figure 1A). The coarse sediment grain size and the clay-silt grain size showed a positive correlation with REEs (Figure 1A,B). The PCA loadings further confirm the positive relationship of the coarse sediment (r = 0.04) and clay-silt sediment (r = 0.58) fractions with the REEs (Figure 1B). In addition, the clay-silt sediment (r = 0.58) has more influence on the concentrations of REEs in the sediment than the other grain sizes (Figure 1B). Relationship analysis based on site revealed that the Jazan mangrove ecosystem is more influenced by rare earth elements than the Al Wajah mangrove ecosystem.
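A hedged sketch of how a biplot-style PCA relating REE concentrations to sediment grain-size fractions could be reproduced in Python; the column names are assumptions, and the study's own analysis may have used different software or preprocessing.

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("mangrove_sediment.csv")   # hypothetical table: REEs + grain-size fractions per sample
features = ["La", "Ce", "Pr", "Nd", "Sm", "Eu", "Gd", "Tb", "Dy", "Ho",
            "Er", "Tm", "Yb", "Lu", "coarse_fraction", "clay_silt_fraction"]

X = StandardScaler().fit_transform(df[features])
pca = PCA(n_components=2)
scores = pca.fit_transform(X)

print("explained variance:", pca.explained_variance_ratio_)  # e.g. ~0.52 and ~0.17 in the study
loadings = pd.DataFrame(pca.components_.T, index=features, columns=["PC1", "PC2"])
print(loadings)                                               # loadings indicate each variable's influence
```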
Fractionation of REE and Sediment Quality Index
The Post-Archean Australian Shale (PAAS) (Taylor and McLennan, 1985) normalized REE patterns plotted for the sediments of the two mangrove ecosystems provide a better understanding of the pattern of REE accumulation in this study (Figure 2). The results reveal relative ∑REE enrichment and comparative trends of REE fractionation. The fraction (La/Yb)n was higher (0.49) in the Al Wajah mangrove than in the Jazan mangrove (0.41), with an average of 0.45 ± 0.04 for the two mangroves (Table 1). For the fractions using (Sm/La)n, a significantly higher value (2.17) was revealed at Jazan, while the lowest value was recorded in the Al Wajah sediment samples. The average value for the (Sm/La)n fraction was 2.07, the highest proportion compared with the other fractions, revealing significant LREE and MREE accumulation (Figure 2; Table 1). The Al Wajah mangrove had the lowest value (1.03) of (Yb/Sm)n, while a significantly higher value (1.13) was recorded for sediment sampled from the Jazan mangrove ecosystem. The average for (Yb/Sm)n in the two mangrove ecosystems was 1.08.
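The sketch below shows one common way to compute the shale-normalized ratios and the Ce and Eu anomalies discussed here. The PAAS reference values are approximate figures commonly quoted from Taylor and McLennan (1985), the sample concentrations are hypothetical, and the geometric-mean anomaly formulas are a standard convention that may differ in detail from the calculation used in this study.

```python
import math

# Approximate PAAS reference concentrations (mg/kg), after Taylor and McLennan (1985).
PAAS = {"La": 38.2, "Ce": 79.6, "Pr": 8.83, "Nd": 33.9, "Sm": 5.55, "Eu": 1.08,
        "Gd": 4.66, "Tb": 0.77, "Dy": 4.68, "Ho": 0.99, "Er": 2.85, "Tm": 0.40,
        "Yb": 2.82, "Lu": 0.43}

def normalize(sample: dict) -> dict:
    """Shale-normalize measured concentrations: element_n = sample / PAAS."""
    return {el: sample[el] / PAAS[el] for el in sample}

def fractionation_ratios(n: dict) -> dict:
    return {"(La/Yb)n": n["La"] / n["Yb"],
            "(Sm/La)n": n["Sm"] / n["La"],
            "(Yb/Sm)n": n["Yb"] / n["Sm"]}

def anomalies(n: dict) -> dict:
    """Geometric-mean convention: Ce/Ce* = Ce_n / sqrt(La_n * Pr_n); Eu/Eu* = Eu_n / sqrt(Sm_n * Gd_n)."""
    return {"Ce/Ce*": n["Ce"] / math.sqrt(n["La"] * n["Pr"]),
            "Eu/Eu*": n["Eu"] / math.sqrt(n["Sm"] * n["Gd"])}

# Hypothetical sediment sample (mg/kg), for illustration only.
sample = {"La": 20.0, "Ce": 41.0, "Pr": 4.6, "Nd": 18.0, "Sm": 3.6, "Eu": 0.9,
          "Gd": 3.1, "Tb": 0.4, "Dy": 2.4, "Ho": 0.5, "Er": 1.4, "Tm": 0.2,
          "Yb": 1.3, "Lu": 0.2}
n = normalize(sample)
print(fractionation_ratios(n))
print(anomalies(n))
```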
The multi-elemental ratios R(M/L) and R(H/M) indicate positive values corresponding to patterns of fractionation with average MREE enrichment and average HREE depletion. This was supported by the positive values for R(M/L), the very low positive value for R(H/M) in Jazan, and the negative value at the Al Wajah mangrove, giving a range running from negative to positive (Table 1). There was more enrichment of MREEs at Jazan, with the highest value of R(M/L) (0.28). The low values of R(H/M), and the negative value for Al Wajah, indicate HREE depletion, with even more depletion at the Al Wajah mangrove. There exists a significant difference (t-test; p < 0.05) in the multi-elemental ratios (R(M/L) and R(H/M)) between the two mangrove ecosystems. Ce and Eu anomalies in the two mangroves were quantified by comparing the measured shale-normalized concentrations with the values expected from the neighbouring elements. The result revealed a small negative anomaly for Ce, as the average value was almost equal to one. Consistent with the average, the Ce anomaly values in Al Wajah (0.97) and Jazan (0.86) are small and negative, with that of Jazan being slightly lower. In contrast, the Eu anomaly is small and positive (1.63), with slightly lower (1.27) and higher (1.98) Eu anomaly values at the Al Wajah and Jazan mangroves, respectively (Figure 2; Table 1).
The sediment quality index used in this study was the geo-accumulation index (Igeo), which has seven classes of enrichment as described by Muller (1969). Using the Igeo, sediments from the two mangroves were strongly to extremely contaminated (4 ≤ Igeo ≤ 5) with La, Ce, Pr, Nd, Sm, and Gd, and moderately to strongly contaminated (1 ≤ Igeo ≤ 3) with Dy, Er, and Yb (Figure 3). In addition, the Igeo revealed that the sediment was not contaminated (Igeo < 0) with Eu, Tb, Ho, Tm, or Lu, which had negative Igeo values (Figure 3).
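A short sketch of the geo-accumulation index calculation in the form introduced by Müller (1969), Igeo = log2(Cn / (1.5 × Bn)); the example concentrations and the choice of a PAAS-style background value are assumptions for illustration, not the study's own inputs.

```python
import math

def igeo(concentration: float, background: float) -> float:
    """Geo-accumulation index: Igeo = log2(Cn / (1.5 * Bn))."""
    return math.log2(concentration / (1.5 * background))

def igeo_class(value: float) -> str:
    """Müller's seven classes, from unpolluted (Igeo <= 0) to extremely polluted (Igeo > 5)."""
    bounds = [(0, "unpolluted"), (1, "unpolluted to moderately polluted"),
              (2, "moderately polluted"), (3, "moderately to strongly polluted"),
              (4, "strongly polluted"), (5, "strongly to extremely polluted")]
    for upper, label in bounds:
        if value <= upper:
            return label
    return "extremely polluted"

# Example with hypothetical La values: measured 95 mg/kg against a ~38 mg/kg background.
value = igeo(95.0, 38.2)
print(round(value, 2), igeo_class(value))
```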
REEs in Mangrove Avicennia marina and Bio-Concentration Factor (BCF)
The REE concentrations in A. marina leaves did not differ significantly (t-test; p > 0.05) between the two mangrove ecosystems assessed (Table 1). However, a similar pattern of REE distribution in A. marina leaves and mangrove sediment was established, with higher concentrations in Jazan mangrove leaves. The higher sum total of REEs in A. marina leaves, from the Jazan mangrove, was about 1/18 of the total sum in sediment, while the lower value, for A. marina leaves in the Al Wajah mangrove, was about 1/20 of the sum total in sediment (Table 1). This indicates that the ∑REE in the Al Wajah and Jazan mangrove sediments were 20- and 18-fold that of the ∑REE in their A. marina leaves, respectively. This is supported by the distribution of REEs in sediment and leaves presented in Figure 4, with higher distributions in sediment than in leaves in both the Jazan and Al Wajah mangrove ecosystems. For specific REEs in A. marina leaves and sediment, the heat map shows that all the concentrations of the 14 REEs determined in A. marina leaves are associated (0.0-0.5) with the concentrations in the sediment (Figure 5). However, the BCF values are less than 1 for all the REEs determined; the recorded BCF values for Lu (0.33) and Tm (0.32) were the highest, while the lowest BCF recorded was for Nd (0.09) (Figure 6).
Influence of Sediment Grain Size on REE Concentrations and Fractionation
Studies indicate that sediment grain size type is a vital factor for REE accumulation in sediments from the Al Wajah and Jazan mangrove ecosystems; this was supported by the results of the data analysis for the two mangroves. Aquatic ecosystems encompass a combination of diverse products of physicochemical processes operating in various parts of drainage basins [18]. This could be linked to, or support, the influence of grain size types on element concentrations and distribution, which has also been reported previously to partly reflect the impacts of the hydrodynamic environment [19]. It is important to note that the transportation, resuspension, and deposition of allochthonous sediments into an aquatic environment such as a mangrove ecosystem could be influenced by hydrodynamics [19,20]. However, the fractionation of REEs is largely due to mineralogical controls and the diversity of detrital sediment minerals composed of different grain size types, triggered by the complex nature of drainage weathering patterns and hydrodynamic sorting [21,22]. In other studies, different minerals were reported to carry specific REE characteristics due to different grain size types, and differentiation in sizes during transportation and sediment deposition caused some level of differentiation in mineralogy [18,21].
Fine-grained particles have a large surface area, a property that can increase element sorption [23,24]. As such, it is reasonable to hypothesize a positive correlation between clay-silt sediment particles and REE concentrations, which helps to explain the correlation between sediment grain size types and REE concentrations observed in this study [25,26]. Variations in metal concentrations and distribution and their relationship with sediment grain size types have been reported elsewhere, and notably, these studies demonstrated that fine-grained sediment in mangroves possesses the capacity for adsorption of REEs [27,28].
The Jazan mangrove is a coastal area open to the public with substantial industrial and other anthropogenic activities, and this, together with changes in sediment nature (grain size), could be a major reason for the higher concentrations of REEs [12,29]. Notably, the ∑REE in Jazan mangrove sediment was almost two-fold the ∑REE determined in Al Wajah mangrove sediment, which is a natural reserve area with rare or few anthropogenic activities. Elsewhere, in an assessment of REEs in six mangrove ecosystems in the central Red Sea [19], the average ∑REE (42.56 mg/kg) was lower than (about one-third of) the ∑REE (112.54 mg/kg) recorded in the Jazan mangrove in this study. In another study on the Egyptian coast of the Red Sea [30], the reported ∑REE (47.55 mg/kg) was about half of the ∑REE (112.54 mg/kg) in the Jazan mangrove ecosystem. The depletion of HREEs relative to the lighter ones observed in this study is similar to previous observations elsewhere on the assessment of REEs in mangroves and mangrove soil profiles [11,24,31]. HREE depletion in benthic sediment could be a result of their greater tendency, compared with the lighter REEs (LREEs and MREEs), to form stable soluble carbonates and organic complexes with ligands [32]. HREEs also tend to be less reactive than LREEs and MREEs and are more closely linked with solid-state phases because of their more pronounced complexation with ligands located on the surfaces of colloids and particles [19,33]. The removal and/or preferential behaviour of LREEs could also be based on this phenomenon [11].
REE Fractionation and Sediment Quality Index (Igeo)
The normalization of REE concentrations in marine ecosystem sediments to the Post-Archean Australian Shale (PAAS) [34] is widely used and is vital for revealing the comparative fractionation and sources of REEs, and it enables an easy comparison of findings across ecosystems. The results of REE fractionation are widely utilized as tracers for determining the effect of contamination sources and mangrove sediment on flora and fauna diversity, and for determining the chemistry of the environment [8,19].
The use of REE fractionation in the determination of detailed LREE, MREE, and HREE enrichment is vital in the assessment of REEs in an ecosystem, even though a direct computation using the sum of the concentrations is made initially. It is important to note that the (La/Yb)n, (Sm/La)n, and (Yb/Sm)n fractions of Post-Archean Australian Shale (PAAS) normalized values were used as a model for REE fractionation and represent LREEs, MREEs, and HREEs, respectively [12,24]. Higher PAAS-normalized (Sm/La)n fractions were recorded in Al Wajah (1.96) and Jazan (2.17) compared with (Yb/Sm)n, and the average (Sm/La)n (2.07) being higher than the average (Yb/Sm)n (1.08) is an indication of the predominance of the lighter REEs (LREEs and MREEs) over HREEs in the sediments; more precisely, the levels of the lighter REEs (LREEs and MREEs) at Jazan are higher than those of the Al Wajah mangrove. However, the higher (La/Yb)n fraction at Al Wajah (0.49) than at Jazan (0.41) is an indication of the predominance of LREEs at Al Wajah, while MREEs and HREEs were predominant at the Jazan mangrove, as reflected by its higher values of (Sm/La)n (2.17) and (Yb/Sm)n (1.13).
Fractionation of REEs has also been defined by some authors using a scale of 1; a ratio equal to 1 was stated to indicate no fractionation, whereas a ratio < 1 indicates depletion, and a ratio > 1 implies enrichment of REEs [12,35]. In support of the aforementioned, multi-elemental ratios R(M/L) and R(H/M) with positive and negative values signify fractionation patterns enriched with the lighter rare earth elements (MREEs and LREEs) and depleted in HREEs [36,37]. In another study, in the Pichavaram mangrove ecosystem, similar findings were reported on REE fractionation, with higher concentrations of the lighter REEs being linked to the clay-silt sediment composition of the mangrove ecosystem [37,38]. This was supported by the positive correlation (r = 0.58) between the clay-silt grain size type and the lighter REEs. Nonetheless, it is critical to note that deposition of sediment in a high-sediment regime and rapid burial could reduce the time frame of exposure of the sediment to dissolved REEs. This restricts the adsorption capacity of the sediment and raises the possibility of REE depletion, leading to dissimilarities in the concentrations of REEs in mangrove sediment [11,37].
The geo-accumulation index (Igeo) is an important sediment quality index used for determining the extent of metal contamination and the role of human activities in sediment metal accumulation [29,39,40]. Anthropogenic sources such as industrialization, and chemicals or substances from anthropogenic activities such as pesticides and fertilizers contained in agricultural waste, could be the key reason for the strong to extreme contamination (4 ≤ Igeo ≤ 5) with certain REEs determined in this study [41][42][43][44]. Notably, REEs are often used as fertilizers, applied directly at the plant/sediment interface for the purposes of growth, yield, and quality improvement. However, this usage of REEs could increase their concentration and sediment or soil contamination [9].
The use of various applications in several technologies for the production of different materials and finished products involves the exploitation of REEs; this has led to the contamination of pristine ecosystems as a result of poor methods of industrial effluent disposal [9,19]. Dy, Er, and Yb might have originated primarily from anthropogenic activities and crustal material, given their moderate to strong contamination (1 ≤ Igeo ≤ 3) of the sediment. Conversely, the negative Igeo values (Igeo < 0) for Eu, Tb, Ho, Tm, and Lu suggest that the sources of these REEs are local and natural [19,45]. Similarly, in China, high REE contamination was linked to anthropogenic activities such as industrial activities, with iron and steel smelting as the major activities [29,46].
Bio-Concentration Factor (BCF) and REE in Avicennia marina
Bioaccumulation of metals in plants due to the interaction between plants and sediment is commonly evaluated using the Bio-Concentration Factor (BCF) [19,24]. The BCF values for all the REEs determined in this study were less than 1 (BCF < 1); this signifies either hypo-accumulation of the REEs by the A. marina mangrove or the role of an effective mechanism involved in the detoxification or exclusion of chemicals in A. marina [19,47]. Elsewhere, in the islands of the Indian Sundarban and in some mangrove stands in the central Red Sea, the BCFs reported were in line with our findings, with BCFs for REEs less than 1 (BCF < 1). However, the highest BCF (0.33) determined in this present study was almost three-fold the highest BCF (0.10) established in the Indian Sundarban [12], and slightly (0.01) greater than the highest BCF previously determined in six mangroves from the central Red Sea [19].
The composition of bioavailability forms of sediment REEs and the presence of an efficient mechanism or capacity for REE uptake in the plant can significantly affect the phytoextraction of REEs [19,48]. The major reason for the substantial dissimilarity in REE composition in A. marina leaves could be due to the bioavailable REE content in the two mangroves investigated. Previous studies have established a positive correlation between increased REE phytoextraction, the concentration of REE in sediment, and environmental variations due to anthropogenic influence or weathering of chemical nature, with the tendency of affecting REE sequestration [19,49].
Study Area
The Red Sea coast of Saudi Arabia encompasses an area of mangroves of approximately 135 km², and the mangroves are distributed up to the northern boundary at 28.207302° N [50]. An arid environment with high temperatures and sparse rainfall is associated with the central Red Sea. In the central Red Sea of the Kingdom of Saudi Arabia, some mangrove habitats appear as a narrow fringe supporting halophytes. These mangroves can at times be flooded [26,51]. The abundance of mangroves and the level of anthropogenic activities were used as criteria in the selection of the mangrove ecosystems investigated. Two mangrove ecosystems, in Jazan and Al Wajah (Figure 7), were selected to accomplish our objectives. In the Jazan mangrove (16°
Sampling and Determination of REE
Thirty (30) samples in each mangrove ecosystem (a total of 60 samples) were collected from two mangrove ecosystems in the northern and southern central Red Sea, Saudi Arabia. A. marina leaves and mangrove surface sediments from 0-20 cm depth were sampled twice monthly from May 2020 to April 2021 from the Jazan and Al Wajah mangrove ecosystems. At the time of sample collection, the water depth varied from 1 to 11 m. For each ecological unit, 15 sediment samples and leaf samples each from 15 mangrove trees were collected in replicate. A Van Veen grab (250 cm²) was utilized in the collection of sediment samples, which were kept in zip lock bags inside an icebox to be transferred to the laboratory for further analyses. For sediment samples, 0.4 g was weighed into a digestion vessel of 50 mL capacity, and 8 mL of HNO3:HCl (1:1) was added for acid digestion. An Anton-Paar PE Multiwave 3000 microwave oven was then used for the digestion of samples at 200 °C for 1 h [52]. Digested samples were kept in a volumetric flask at room temperature, topped up to 50 mL with Ultrapure Millipore Q water, shaken, and allowed to sit overnight. Filtration of the solution was done using a GF/F filter (Whatman), and the filtrate was then analyzed for concentrations of rare earth elements (La, Ce, Pr, Nd, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, and Lu) using an Agilent 7700× dual pump Inductively Coupled Plasma-Mass Spectrometer (ICP-MS) [53].
Leaf samples were cleaned using deionized water. Both leaf and sediment samples were dried in an oven at 40-45 °C for 48 h and then crushed into powder form with an agate mortar and pestle and sieved through a 53 µm nylon mesh. The leaf samples were acid digested in HNO3:H2O2 (3:1) at 180 °C for 45 min using 0.2 g of the sieved material. Calibration curves were constructed by analyzing standard mixture solutions comprising the 14 elements at concentrations of 0.5, 1, 5, 10, 20, 50, and 100 µg/L, with linear fitting coefficients of 0.999. Quality control of the analytical method was assessed using the standard reference materials GSS-1 and GSV-2 for sediments and leaves, respectively. To confirm repeatability and sensitivity, solutions of known concentrations used as standards were placed into the sample sequence after every eight samples. The percentage recoveries of REEs, reflecting the accuracy of the analytical method, ranged from 92.68-103.21% and 81.82-116.67% for sediment and leaves, respectively (Table S1, Supplementary Materials). Analytical precision and accuracy were accepted when the standard deviation was <5% for the rare earth elements across replicate measurements of the samples and standards.
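As a rough illustration of the calibration and recovery checks described above, the following R sketch (R being the software named in the Data Analyses section) fits a linear calibration curve for one element and computes a percentage recovery. The signal intensities, the unknown sample, and the certified reference value are hypothetical placeholders, not the study's data.

```r
# Illustrative sketch (not the authors' script): linear ICP-MS calibration
# for one element and a percentage-recovery check against a reference value.
std_conc   <- c(0.5, 1, 5, 10, 20, 50, 100)                  # standards, ug/L
std_signal <- c(410, 820, 4100, 8150, 16300, 40900, 81500)   # hypothetical counts

cal <- lm(std_signal ~ std_conc)      # linear calibration curve
summary(cal)$r.squared                # should be on the order of 0.999

# Quantify a hypothetical unknown digest from its signal
unknown_signal <- 12500
coefs <- coef(cal)
unknown_conc <- (unknown_signal - coefs[1]) / coefs[2]

# Percentage recovery against a (hypothetical) certified reference value
certified <- 15.2
recovery  <- 100 * unknown_conc / certified
recovery
```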
Grain Size Analysis in Sediment
The total weight of the oven-dried sediment was measured. Fragmentation of solidified aggregates was achieved by soaking the dried sediments in distilled water for 24 h. The sediments were washed and separated into fractions of gravel (>2 mm), coarse grain (0.063-2 mm), and mud (clay and silt, <0.063 mm) after passing through 0.063 mm and 2 mm sieves. The percentages of the sediment grain sizes relative to the total weight were computed after the fractions were dried at 40 °C and weighed [26,38,54].
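The percentage calculation is simple arithmetic; a minimal R sketch is given below, with hypothetical fraction weights standing in for measured values.

```r
# Minimal sketch of the grain-size percentage calculation described above;
# the weights are hypothetical examples, not measured data.
total_dry <- 50.0        # total oven-dried sediment weight (g)
gravel    <- 2.1         # > 2 mm fraction (g)
coarse    <- 31.4        # 0.063-2 mm fraction (g)
mud       <- 16.5        # clay + silt, < 0.063 mm fraction (g)

fractions <- c(gravel = gravel, coarse = coarse, mud = mud)
percent   <- 100 * fractions / total_dry
round(percent, 1)
```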
Sediment Quality Index and Bio-Concentration Factor (BCF)
The geo-accumulation index, based on seven enrichment classes (Table S2, Supplementary Materials) [55], was utilized as the sediment quality index and was used to measure REE contamination levels in the sediment of the two mangroves investigated, using the following formula:

Igeo = log2[Cn / (1.5 × Bn)] (1)

where Cn = concentration of a particular REE in the sediment, Bn = geochemical background level of that REE obtained for sedimentary rocks (shales) as described by Turekian and Wedepohl [56], and 1.5 = correction factor [57] to reduce the effect of variations as a result of sediment lithology.
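A hedged R sketch of the Igeo calculation and its seven-class interpretation is given below; the concentration and background values are hypothetical, and the class boundaries follow the Muller scheme cited above.

```r
# Sketch of the Igeo formula and class assignment; Cn and Bn below are
# hypothetical values (actual backgrounds should come from Turekian and
# Wedepohl, as cited in the text).
igeo <- function(Cn, Bn) log2(Cn / (1.5 * Bn))

igeo_class <- function(I) {
  breaks <- c(-Inf, 0, 1, 2, 3, 4, 5, Inf)
  labels <- c("uncontaminated",
              "uncontaminated to moderately contaminated",
              "moderately contaminated",
              "moderately to strongly contaminated",
              "strongly contaminated",
              "strongly to extremely contaminated",
              "extremely contaminated")
  as.character(cut(I, breaks = breaks, labels = labels))
}

I_la <- igeo(Cn = 35.0, Bn = 0.92)   # hypothetical La concentration/background
igeo_class(I_la)
```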
The bioavailability of REEs in A. marina was determined using the bio-concentration factor (BCF), to reveal the efficiency of the mangrove in accumulating REEs, using the following formula:

BCF = Cleaf / Csediment (2)

where Cleaf and Csediment = concentrations of a given REE in leaves and sediment, respectively.
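The BCF calculation can be sketched in R as follows; the leaf and sediment concentrations are hypothetical examples rather than the reported means.

```r
# Simple sketch of the BCF calculation for a few REEs; the two vectors hold
# hypothetical mean concentrations (mg/kg) in leaves and sediment.
c_leaf     <- c(La = 0.9, Ce = 1.8, Nd = 0.8, Lu = 0.02)
c_sediment <- c(La = 9.5, Ce = 19.6, Nd = 9.1, Lu = 0.06)

bcf <- c_leaf / c_sediment
round(bcf, 2)    # values < 1 would indicate hypo-accumulation
```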
Data Analyses
The Student's t-test was used for comparison between mean REE concentrations in sediments, leaves, BCF, Igeo, and sediment grain sizes of the two mangrove ecosystems. Principal component analysis (PCA) biplot and loadings were used to determine the relationship between sediment grain sizes and REE concentrations in sediments, while a heat map was used to determine the relationship between REEs in sediment and A. marina leaves. The data were analyzed using R for Windows (v. 4.0.3).
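A minimal R sketch of this workflow (t-test, PCA biplot, heat map) is given below using simulated data; the column names and values are assumptions for illustration only and do not reproduce the field measurements.

```r
# Sketch of the statistical workflow described above, on simulated data.
set.seed(1)
ree <- data.frame(site = rep(c("Jazan", "AlWajah"), each = 15),
                  La = c(rnorm(15, 12, 2), rnorm(15, 7, 2)),
                  Ce = c(rnorm(15, 25, 4), rnorm(15, 14, 3)),
                  mud_pct = c(rnorm(15, 60, 8), rnorm(15, 45, 8)))

# Student's t-test between the two mangroves for one element
t.test(La ~ site, data = ree)

# PCA biplot of REE concentrations and grain size
pca <- prcomp(ree[, c("La", "Ce", "mud_pct")], scale. = TRUE)
biplot(pca)

# Heat map of the (scaled) variables across samples
heatmap(scale(as.matrix(ree[, c("La", "Ce", "mud_pct")])))
```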
The pattern of distribution and REE bioavailability were categorized using fraction ratios ((La/Yb)n, (Sm/La)n, and (Yb/Sm)n) after the REE concentrations were normalized (n) to the Post-Archean Australian Shale (PAAS) [34]. The multi-elemental ratios were calculated as described by Duvert et al. [36] and Noack et al. [58] using the formulas below:

R(M/L) = log(MREEn / LREEn) = log{[(Gdn + Tbn + Dyn)/3] / [(Lan + Prn + Ndn)/3]} (3)

R(H/M) = log(HREEn / MREEn) = log{[(Tmn + Ybn + Lun)/3] / [(Gdn + Tbn + Dyn)/3]} (4)

where R(M/L) = ratio between medium and light REEs, R(H/M) = ratio between heavy and medium REEs, and n refers to PAAS-normalized concentrations.
The non-inclusion of Ce and Eu in the formulas is because of their potential to exhibit a different oxidation state. However, a geometric method was used to compute the Ce and Eu anomalies, achieved by assuming that the closest neighbouring elements behave linearly on log-linear plots [36,59]. The formulas below were used to compute the anomalies:

δCe = 2Cen / (Lan + Prn) (5)

δEu = 2Eun / (Smn + Gdn) (6)

where δCe and δEu are the measures of the Ce and Eu anomalies, and n refers to PAAS-normalized concentrations.
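The normalization, ratio, and anomaly calculations in equations (3)-(6) can be sketched in R as follows; the sample concentrations are hypothetical, and the PAAS values shown are commonly tabulated figures that should be checked against the cited source.

```r
# Illustrative sketch of the PAAS normalization, multi-elemental ratios,
# and Ce/Eu anomalies. 'paas' holds commonly tabulated PAAS values (mg/kg);
# 'samp' is a hypothetical sediment sample.
paas <- c(La = 38.2, Ce = 79.6, Pr = 8.83, Nd = 33.9, Sm = 5.55,
          Eu = 1.08, Gd = 4.66, Tb = 0.774, Dy = 4.68, Tm = 0.405,
          Yb = 2.82, Lu = 0.433)
samp <- c(La = 11.9, Ce = 24.8, Pr = 2.8, Nd = 10.6, Sm = 2.3,
          Eu = 0.6, Gd = 2.1, Tb = 0.3, Dy = 1.7, Tm = 0.1,
          Yb = 0.8, Lu = 0.1)
n <- samp / paas[names(samp)]              # PAAS-normalized values

la_yb <- n["La"] / n["Yb"]                 # (La/Yb)n
sm_la <- n["Sm"] / n["La"]                 # (Sm/La)n
yb_sm <- n["Yb"] / n["Sm"]                 # (Yb/Sm)n

r_ml <- log10(mean(n[c("Gd", "Tb", "Dy")]) / mean(n[c("La", "Pr", "Nd")]))
r_hm <- log10(mean(n[c("Tm", "Yb", "Lu")]) / mean(n[c("Gd", "Tb", "Dy")]))

d_ce <- 2 * n["Ce"] / (n["La"] + n["Pr"])  # delta Ce
d_eu <- 2 * n["Eu"] / (n["Sm"] + n["Gd"])  # delta Eu
```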
Conclusions
Fractionation causes a significant enrichment of the lighter REEs over HREEs in the Al Wajah and Jazan mangroves. This is supported by the positive and negative multi-elemental ratios R(M/L) and R(H/M), and the enrichment of the lighter REEs over the HREEs is attributed to HREEs having a greater tendency to be less reactive than LREEs and MREEs, and to the preferential retention of the lighter REEs. However, the higher (La/Yb)n fraction at Al Wajah (0.49) than at Jazan (0.41) is an indication of the predominance of LREEs at Al Wajah, while MREEs and HREEs were predominant at the Jazan mangrove, as reflected by its higher values of (Sm/La)n (2.17) and (Yb/Sm)n (1.13). In addition, the Eu anomalies were positive for the two mangroves investigated, possibly as a result of dominant reducing conditions in the mangrove sediments.
The REE concentrations in the Al Wajah and Jazan mangrove ecosystems differed significantly, with higher concentrations in the Jazan mangrove ecosystem. Different anthropogenic impacts in these two mangroves could be the key reason for the differences recorded. The clay-silt sediment grain size type influences the increase in REE concentrations; however, the BCF reveals the hypo-accumulation potential of REEs by A. marina, with similarity in the REE distribution patterns in sediment and A. marina. In addition, using the Igeo scale, based on seven classes of contamination, the mangrove sediments were not significantly different and were strongly to extremely contaminated with La, Ce, Pr, Nd, Sm, and Gd, and moderately to strongly contaminated with Dy, Er, and Yb, but not contaminated (Igeo < 0) with Eu, Tb, Ho, Tm, or Lu, showing negative Igeo values. In addition, the BCFs for all the REEs determined in this study signify hypo-accumulation of the REEs by the A. marina mangrove or the role of an effective mechanism involved in the detoxification or exclusion of chemicals in A. marina. There is a dire need for periodic monitoring of REE concentrations in the mangroves investigated, especially the Jazan mangrove ecosystem. This is to keep track of the sources of these metal contaminants and to develop strategies for the control and conservation of these important ecosystems.
Supplementary Materials: The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/molecules27144335/s1. Table S1: Analytical results achieved on certified reference materials for sediment and leaves; Table S2: Classification of Sediment quality (Geo-Accumulation Index).
|
v3-fos-license
|
2017-09-20T09:25:43.907Z
|
2011-11-01T00:00:00.000
|
9557192
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/F82E50552E66BB8ECA4F9631D0C00D89/S175832090000086Xa.pdf/div-class-title-systematic-review-of-workplace-based-assessments-in-psychiatry-surgical-dissection-and-recommendations-for-improvement-div.pdf",
"pdf_hash": "19f1f33777d70a812b3764f5aa5e14ad333b37e1",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44396",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"sha1": "b74640409bae8c13f882d404cfb409d7647996b7",
"year": 2011
}
|
pes2o/s2orc
|
Systematic review of workplace-based assessments in psychiatry: surgical dissection and recommendations for improvement
Recent surveys have highlighted widespread criticisms of the use of workplace-based assessments (WPBAs) in psychiatric training. We describe our systematic review of psychiatric WPBAs, including a ‘surgical dissection’ of their format and process. From our review, we identified seven overarching WPBA themes, and have drawn on these to make further recommendations to strengthen the wider acceptability of WPBAs in psychiatric settings. We hope this will encourage further debate on ways of improving these tools, rather than them becoming side-lined as ‘top-down’ tick-box exercises.
The past 5 years have seen major changes to UK psychiatric training, with a move towards competency-based curricula. Many of these changes have occurred within the context of wider national reforms to postgraduate medical education, in response to Unfinished Business,1 Tomorrow's Doctors,2 Modernising Medical Careers,3 The Gold Guide to Postgraduate Specialist Training,4 The Core Curriculum Guide,5 and the establishment of the Postgraduate Medical Education and Training Board (PMETB). By aiming to explore all levels of Miller's pyramid of competencies,6 including what the trainee actually does in daily clinical practice, workplace-based assessments (WPBAs) have a central role in monitoring the progress of trainees. They also offer a number of other advantages compared with traditional methods of trainee assessment, including: the opportunity to observe real-life 'long case' assessments of patients;7 time for immediate trainee debriefing, highlighting their strengths and weaknesses; opportunities for sampling a wide range of clinical scenarios; assessments of a trainee's teaching skills to peers in journal clubs and case presentations; and encouragement of formative learning.8 Between a half and two-thirds of trainees feel that WPBAs do not accurately reflect their progress, have questionable validity and reliability, and have no real beneficial effects on supervision, training, clinical practice and confidence.9-11 Although the College's core training WPBA guide12 provides greater guidance in mapping scenarios against levels of performance, wider concerns about these tools remain.
Within the context of the current evidence base, drivers to change within postgraduate medical education, and likely future WPBA changes, how can we improve WPBAs to make them more meaningful within psychiatric settings? In order to address this question, we set about to: systematically review the existing evidence relating to WPBAs; record the salient WPBA themes identified from articles meeting the inclusion criteria for our review, including any points made by the authors of those papers; and draw on our systematic review findings to make recommendations for change, to improve WPBAs in psychiatry.
Method
We systematically reviewed the literature with the objective of searching for and reviewing the evidence base of WPBAs, primarily focusing on their use within psychiatric settings. We initially searched the four electronic bibliographic databases MEDLINE, EMBASE, CINAHL and PsycINFO (all from 1981 to 2010). Our search terms (English language only) in the title and abstract were: 'workplace-based assessment'; '360 degree feedback' OR 'multisource feedback' OR 'mini peer assessment tool' OR 'mini PAT' OR 'mini assessed clinical encounter' OR 'mini ACE' OR 'case based discussion' OR 'assessment of clinical expertise'; and 'psychiatry'. Results from these searches were combined with AND and duplicate searches removed. Using this search strategy we found 332 articles, which were independently screened by two authors (K.S. and S.P.) on the basis of title, abstract and (where necessary) full texts, and those articles clearly not relevant were excluded (e.g. those where WPBAs only occur as a citation, but without further expansion). We evaluated the remaining articles for quality using our modified version of the Best Evidence Medical Education (BEME) guide13 (Box 1), which was modified to take into account the relatively few published quantitative comparative studies evaluating WPBAs in psychiatry. Only those articles which met five or more of the quality indicators from our modified guide were included, with any discrepancies in the modified BEME score reviewed by one of the consultant authors (A.N.). Using this method for assessing quality, 14 articles satisfied our inclusion criteria: 12 full text articles, including 1 invited commentary14 in response to 2 useful WPBA surveys;10,11 and 2 published correspondences, 1 with valid statistical information from a regional WPBA survey,9 and 1 by the Dean of the Royal College of Psychiatrists15 in response to a regional WPBA survey.10 In order to widen our search for further psychiatry-specific papers we also conducted focused database searches of the Royal College of Psychiatrists journals The Psychiatrist (previously Psychiatric Bulletin), Advances in Psychiatric Treatment and the British Journal of Psychiatry (all from January 2000 to June 2010). We used the same search strategy (but excluding 'psychiatry' as a search term) as per our electronic database review, with the search being applied in all fields and for all types of published articles within the paper journals. We found 243 articles, which were independently screened as above (including removal of duplicate articles identified from our previous search), and those articles clearly not relevant were excluded. After applying our inclusion criteria (modified BEME score ≥5), we found three additional articles: two full text articles and one published correspondence16 highlighting important issues in response to a regional WPBA survey.10 An additional 6 articles were found to meet our inclusion criteria (modified BEME score ≥5) by searching the reference lists of the 17 articles so far included in our systematic review. To be fully inclusive of all national guidelines relating to WPBAs and postgraduate training, we also conducted an electronic website search (all from 2000 to 2010) of: the Royal College of Psychiatrists; PMETB; Department of Health; Modernising Medical Careers; Medical Education England; and the General Medical Council (GMC). We found a further ten articles that met our inclusion criteria using this final search method.
Overall, 33 articles met our inclusion criteria for this review (Box 2). Two pairs of authors independently extracted information relating to WPBAs from these articles, using a predesigned and piloted template. This included recording: any salient WPBA themes highlighted in each article; any findings (from formal studies, surveys or other methods) and recommendations from the articles' authors; and any areas of concern for using WPBAs in psychiatric settings. The accuracy of information extraction was double-checked by the corresponding pair of authors, with inconsistencies resolved by review by one of the consultant authors.
Results
Seven overarching themes relating to WPBAs were identified from the articles included in our systematic review, with some articles covering more than one theme.
Assessment of psychiatric competencies
Workplace-based assessments were not specifically designed to assess postgraduate psychiatric competencies in the first place.7,18 Therefore, concerns expressed by trainees and trainers9-11 should hardly be surprising. Psychiatry remains unique in placing greater emphasis on working in partnership, promoting a longer-term 'recovery model', negotiating meaningful goals that are achievable by the patient, promoting safety and positive risk-taking and challenging inequalities. Demonstration of a trainee's ability to devise diagnostic formulations, alongside awareness of differentials and classification systems, remain important core curricula skills.18 Likert-type assessment scales fail to capture the complexities of these competencies, consistent with evidence that a checklist approach is not appropriate for assessing higher-level trainees.19 This can lead to complex psychiatric skills, that take time to master, being reduced to isolated box-ticking competencies.10 Assessors are less likely to correctly administer a tool that significantly limits their freedom to employ their professional judgement.27 Although the current WPBA forms do allow for some 'free text' comments by assessors, they remain secondary to lists of Likert-scale scores, limiting the opportunity for trainees to gain specific advice on how to improve their performance. Workplace-based assessments must provide relevant feedback to trainees,4 which is a powerful instrument in furthering their personal and professional development.24 However, even the best-intended feedback may be unhelpful if it is not descriptive or specific enough.28 Assessing performance against an agreed exit standard for all trainees, including those who have just started in their post, remains problematic. Some assessors judge where trainees should be at their current stage of training, whereas others score against the 'end-point' of their training (which is what these tools were originally intended to do).36 Although WPBA assessor training has the potential to improve standardisation, wider concerns exist if assessors and trainees regard this part of the system to be inherently unfair.11,14 How many other professions assess their workforce according to what they should be able to do at the end of the year? Medical trainees are generally competitive, and view low scores as 'failure'.36
Core v. higher specialist training assessments
The European Working Time Directive has resulted in concerns about the impact of restricting the range of clinical experiences trainees are exposed to during their training.33 Some aspects of the medical curriculum require more rigorous assessment than others, such that a trainee needs to be assessed carrying them out a number of times.36 Within psychiatry, these include risk assessments, the ability to formulate complex diagnostic situations and explaining specific treatments. Concerns continue to be raised by the low success rate of trainees at the MRCPsych Clinical Assessment of Skills and Competencies (CASC) examinations,16 where all subspecialty scenarios are assessed. There is a lack of established standards for various training grades,10 especially for those in higher (ST4-6) specialist training.
Formative v. summative assessments
Workplace-based assessments should act as supportive learning tools for trainees, primarily designed to assess their readiness for progression to summative tests.36 However, this is where the paradox lies for both trainers and trainees. If WPBAs are primarily tools for giving formative feedback,5 why are they given significant weight during the Annual Review of Competence Progression (ARCP)? Non-fulfilment of WPBA requirements can result in trainees being failed at their ARCP.10 Alternatively, given that WPBAs assess up to the highest level of Miller's pyramid of competencies,6 how can ARCP panels accurately gauge a trainee's progress in the clinical workplace without WPBAs contributing evidence to inform summative decisions?5 By making a clear divide between 'formative' and 'summative' assessment roles, we can end up under- or overestimating the role of WPBAs in monitoring the progress of trainees.
The role of non-medical assessors
Compared with other medical specialties, psychiatry places greater importance on multidisciplinary team working. Although assessments by non-medical members of teams remain vital to providing a broad psychosocial perspective, devolving trainee assessment to generic mental health workers carries significant concerns.10 Trainees perceive non-medical assessors to have significantly less knowledge of WPBAs, and to be less likely to assess them accurately.10 Limited evidence exists for using assessors who are not senior medical clinicians to assess experienced postgraduate trainees.21 Assessors need to be able to demonstrate their competence in using WPBAs in psychiatry, including their ability to give feedback.3,4
360-degree feedback assessments
The concept of 360-degree feedback originated within industrial organisations that wished to improve the leadership qualities of their workforce via self-awareness and positive behaviour change. The mini-Peer Assessment Tool (mini-PAT), a shortened version of the Sheffield Peer Review Assessment Tool (SPRAT), was introduced by PMETB as one of its multisource feedback tools. Whereas some feasibility data are available for the SPRAT, the mini-PAT currently lacks robust data regarding its validity and reliability.23 Concerns exist that the validity of multisource feedback tools can be limited by systematic bias from the wide range of assessors, resulting in problems with unregulated self-selection of assessors by trainees.17 For core trainees, in the context of their predominant shift system of on-call work and shorter posts, how many team colleagues are in a position to accurately comment on their performance?29 It can be difficult for team members to confidentially mention their concerns without being easily recognised.
Quality assurance
Standardising WPBA judgements by trainers remains problematic.9,11 Even the Assessed Clinical Encounter (ACE), one of the tools that trainees find to be most useful,11 has weak interrater reliability.21 Although assessor training can reduce the potential 'hawk' or 'dove' effects of individual assessors, there are wider problems. These include the fears of trainers damaging their trainer-trainee relationship by giving low scores,24 the 'acquaintance effect' whereby trainees who are well known to the rater are scored higher,21 and the fact that more senior staff tend to give lower but more accurate ratings compared with less senior staff.22 We have all come across the core trainee with a portfolio of perfect WPBA scores, baffled by their failure to pass the MRCPsych CASC exam.15,16
Consultant supervision
The '1 hour per week' supervision has been the cornerstone of the delivery of direct teaching of psychiatric trainees by their supervisors. For supervisors, the time required to deliver this is estimated to be 0.25 programmed activities per week per trainee, which should be incorporated into their job plans.34 Supervision and WPBAs can have complementary functions, but the increased emphasis on assessment of competencies may displace other educational needs that would have previously been covered within the 'protected hour'.20 Quality of supervision remains the most important factor in determining overall trainee satisfaction.26
Discussion
In this systematic review, we identified and examined the available current evidence relating to WPBAs and their use within psychiatry. Seven overarching WPBA themes were identified from the reviewed literature, which we have grouped into three areas (Box 3) that pose the greatest challenge in terms of the wider acceptability of these tools within psychiatric settings. Our discussion draws on the review findings covering these three grouped areas, making further recommendations for change to help improve the use of WPBAs within psychiatric training.
Structure and format of WPBAs in psychiatry
Our review identified widespread concerns regarding the use of multiple Likert-type assessments to assess complex psychiatric skills. The alternative, global marking schemes, can increase the 'halo effect'; however, they allow expert assessors to 'weight' the components of the task in a more situation-specific way.31 There is emerging evidence that narrative information enriches the assessment process,35,36 and can improve the aspiration towards 'excellence' rather than just 'competence'. This would also allow supervisors to factor in components such as their 'professional trust' of the trainee.7,30 Some specialties have already designed their own specific WPBA tools (for example the Assessment of Psychotherapy Expertise). The concept of a marking system where scores progressively improve with time is not adopted in most medical schools across the world, and remains problematic.11,14 Our recommendations for change include the following.
1 We support plans to replace the Likert-scale scoring system with a single global marking scheme, as employed in the MRCPsych CASC examinations. However, allowing greater 'free text' space, rather than MRCPsych tick-box lists of 'areas for development', would encourage assessors to provide more personalised and context-specific feedback to trainees. Examples would include commenting on trainees' awareness of resource management and National Institute for Health and Clinical Excellence guidelines, their 'higher order' thinking in assessing diagnostic uncertainties, and their skills at imparting therapeutic optimism to patients.

2 Trainees would be better informed of their progress if they were predominantly assessed according to their current level of training, with the global exit standard score appearing at the bottom of the WPBA form. This would reduce the risks of trainees clustering their assessments at the end of their post, when their scores may be highest.
WPBAs and psychiatric training
Acquiring competencies in one branch of psychiatry may not generalise to acquiring skills in another context.27 For example, a trainee may be able to elicit a history of dementia in a generic setting, but may have more difficulty doing so within an intellectual disability context. This problem may be exacerbated by the impact of the European Working Time Directive33 and New Ways of Working,32 and is reflected by low MRCPsych CASC pass rates.
Although WPBAs have an important formative assessment role for trainees, they should retain a summative role for ARCP purposes, alongside other portfolio documentation. This dual formative-summative role should be made clear to trainees at the outset.25 Our systematic review has highlighted the extremely limited evidence base for using non-medical assessors, particularly for more experienced trainees. This is consistent with our findings that non-medical assessors score trainees more generously, limiting the usefulness of WPBA tools.
Predictive validity studies of multisource feedback tools, correlating them with clinical and examination performance, are necessary to establish their longer-term credibility.27 We have found that mini-PATs have consistently low response rates from multidisciplinary team members. Allowing core trainees to nominate assessors from previous posts within the last year may help to resolve some of these difficulties. However, we question the validity of the mini-PAT in being able to pick up core trainees in difficulty, as they are more likely to be identified by their supervisors through direct observation, or via formal or informal feedback from those working with them. Our recommendations for change include the following: 1 Training programme directors should ensure that all core trainees continue to get opportunities to do a post
Conclusions
Within the context of changes to the delivery of postgraduate medical education, WPBAs play a central role in the assessment of trainees. Although many of the principles underpinning them were sound, there have been significant limitations in their effectiveness when applied to routine clinical practice. We hope our systematic review and recommendations offer practical ways of improving the structure and administration of these tools so that they become more meaningful longer-term assessments. It also offers opportunities for widening the currently limited evidence base on the use of WPBAs in psychiatry, via piloting of some of our recommendations before any general implementation. Areas for College research could include comparative studies looking at the correlation of original and amended WPBA outcomes with MRCPsych CASC scores (including those in subspecialty stations) for core trainees, and with the recently validated psychiatric patient satisfaction (PatSat) scale for all trainees.38 As competency-based curricula and their associated assessments develop internationally, there would also be scope for other countries to pilot original and amended UK WPBA formats before any general implementation.
Limitations
We used a modified BEME guide for selecting eligible articles, as we found few published comparative descriptive or observational studies of the impact of WPBAs. We set a relatively low threshold number for our modified BEME guide, to ensure we considered the broadest range of currently published evidence. As we did not include e-letters, unpublished articles or informal evidence from conferences, there is a risk of publication bias. There could also be a risk of selective outcome recording (reporting bias), although we tried to minimise this by using 'hierarchical' internal consultant peer review of the information extracted from the articles included in our review. We did not assess for bias in each included article, and not all papers meeting the inclusion criteria in our review were used in constructing our recommendations for improving WPBAs. In designing our recommendations, we did not survey psychiatric trainees, but did incorporate triangulation methodology, utilising the experiences of training programme directors involved in the ARCP process.
As the use of WPBAs is still in its infancy, it may be too early to establish clear conclusions as to their efficacy. There may also be future changes to their implementation due to wider financial constraints.
Dr Kanchan Sugand and Dr Swapnil Palod are ST5 specialty registrars in psychiatry on the St George's ST4-6 Training Scheme, London. Dr Kalu Olua is a ST6 specialty registrar, and Dr Satyajit Saha is a ST4 specialty registrar, in psychiatry on the St George's ST4-6 Training Scheme. Dr Asim Naeem is a consultant psychiatrist and honorary senior lecturer at South West London & St George's Mental Health NHS Trust/St George's University of London, and is also the Higher Training Programme Director (Psychiatry of Learning Disability) for the St George's ST4-6 Scheme. Dr Samina Matin and Dr Mary Howlett are consultant psychiatrists at South West London & St George's Mental Health NHS Trust, and are also the Core Training Programme Directors for the St George's Scheme.
Box 1 Modified version of the Best Evidence Medical Education guide: psychiatry-specific articles; peer-reviewed (or equivalent) publications; published national guidelines; study subject group appropriate for the study being carried out; studies with data collection methods that were reliable and valid for the research question and context; data-set complete, with an acceptable questionnaire response rate; appropriate methods of analysis (statistical or other); article recommendations reproducible and applicable for workplace-based assessments in psychiatry; article data or discussions justify a clear set of conclusions; UK published articles. Adapted from Buckley et al.13

It remains unclear how much, if any, of the supervision 'hour' is devoted to completing WPBAs, and whether there is individual variation among supervisors. Medical Education England has emphasised the need for commissioner levers to be strengthened to incentivise training.37 Our recommendations for change include the following: 1 It should be mandatory for each trainee to have one to two 'external' direct clinical contact WPBAs in each post. Within our training scheme, we have introduced a system of 'paired' supervisors, where core trainees have a number of assessments by a consultant psychiatrist who is not directly supervising them. Alternatively, 'external' assessor pools can be built up from neighbouring training schemes in a reciprocal way, although this has resource implications. 2 Time for additional training responsibilities, such as WPBA assessments, must be maintained within consultants' job plans.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2003-06-01T00:00:00.000
|
7096089
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://erj.ersjournals.com/content/21/6/944.full.pdf",
"pdf_hash": "63d427055cad27ec4e5d9c934ecaf4f3e1720090",
"pdf_src": "CiteSeerX",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44398",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "63d427055cad27ec4e5d9c934ecaf4f3e1720090",
"year": 2003
}
|
pes2o/s2orc
|
Evaluation of a quantitative real-time PCR for the detection of respiratory syncytial virus in pulmonary diseases
Respiratory syncytial virus (RSV) is known to cause acute lower respiratory tract infections (ARI) in young children and is involved in exacerbation of chronic obstructive pulmonary disease (COPD) in adults. The role of RSV in stable COPD and the viral load in different respiratory diseases has not been investigated to date. The present authors established and evaluated a quantitative TaqMan® real-time polymerase chain reaction assay specific for RSV subgroup A. Absolute quantification for the determination of viral load input was achieved using a control plasmid. The assay allowed for a quantification over a >6-log range and a detection limit of <10 RSV copies per reaction mixture. The assay was 30 times more sensitive than conventional nested polymerase chain reaction assays. Interassay sd was 1.3 and coefficient of variation 4.7% on average. Clinical specimens from infants with ARI (n=62) and elderly adults with COPD (n=125) were compared for viral loads. A total of 47% RSV-positive samples were found in the ARI study group and 28% in the COPD study group. The viral load of both study groups was found to differ significantly. In the ARI study group the viral load was increased almost 2000-fold, suggesting acute infection in this group and former or latent infection in the COPD group. Respiratory syncytial virus-A specific TaqMan® real-time polymerase chain reaction assay is a sensitive and rapid method for the determination of viral load in clinical samples. It enables differential statements concerning the involvement of respiratory syncytial virus in acute lower respiratory tract infections and chronic obstructive pulmonary disease to be achieved.
Human respiratory syncytial virus (RSV) is one of the most important and frequent viruses for respiratory tract infections. Worldwide RSV causes severe lower tract infections like bronchiolitis or pneumonia in infants and young children [1] and is a common cause for hospitalisation. An association between primary RSV infection and chronic abnormalities of pulmonary function, especially childhood asthma, can be suggested as a result of a long-term prospective study in children after RSV infection [2]. RSV is also common in adults, but usually causes mild upper respiratory tract disease. However in certain adult populations, for instance elderly adults and/or adults with chronic obstructive pulmonary disease (COPD), RSV can cause serious lower respiratory tract infections [3,4].
RSV is an enveloped ribonucleic acid virus of the genus Pneumovirus within the family Paramyxoviridae. Two antigenically distinct subgroups, group A and B, are known [5]. Epidemiologic studies have shown that there are three types of RSV epidemics, those in which group A or group B viruses were dominant and those in which both groups circulate concurrently [6]. In children subgroup A strains were detected at least three times as often as subgroup B in most years [7]. Moreover it has been shown that the course of disease for infections with RSV strain A is usually more severe [8,9].
A diagnostic method used increasingly for the detection of viral pathogens, as causes for infections, is the polymerase chain reaction (PCR). Compared with the traditionally used standard laboratory methods, viral culture or enzyme-linked immunosorbent assays, PCR not only achieved higher specificity and sensitivity [10][11][12], but it also facilitated practical performance, particularly in screening large numbers of samples [11,13]. There are many different PCR techniques presently in use. Until recently PCR was used mainly in a qualitative fashion by visualising the amount of amplified target molecules at the end of the reaction (conventional PCR). Sensitivity of such assays can be increased with a nested PCR (nePCR), where samples with very few starting viral genomes are analysed, but this requires a second time-consuming PCR run and increases the risk of cross-contamination. Quantitative real-time PCR (qPCR) based on the 5'-3' exonuclease activity of Thermus aquaticus (Taq) polymerase on the other hand has shown increased sensitivity and specificity [14][15][16]. Due to the measurement of the amplification product in the exponential phase of the reaction, differences in the amount of starting viral molecules can be detected [15,17] and the initial concentration can be measured in a definite volume (viral load). The additional information of viral load can be a useful diagnostic tool to predict virus-associated diseases, assess disease status, identify different states of viral infection or monitor the efficacy of antiviral therapy [18][19][20][21][22][23]. The results of qPCR experiments, which usually need <2 h, can be analysed directly without any post-PCR steps.
The aim of the current study from members of the Clinical Research Group "Viral infections in acute and chronic respiratory disease of children and adults" was to evaluate a sensitive and rapid RSV-A specific qPCR assay for the determination of viral load in different respiratory diseases. Specimens of hospitalised children with acute respiratory tract infection (ARI) and of adults with COPD were analysed for the presence and quantity of RSV-A. The second aim of the present study was to determine whether RSV-A could be found in COPD patients with and without signs of acute exacerbation by using a highly sensitive method.
Materials and methods
Virus stock and culture

RSV subgroup A Long strain (ATCC number VR-26) and HEp-2 host cells (ATCC number CCL-23) were kindly provided by the Institute of Medical Microbiology and Virology (Ruhr-University Bochum, Germany). After incubation for 72 h the virus supernatant was titrated by serial dilution and the 50% tissue culture infective dose (TCID50) was calculated using the Kärber formula. RSV stock was divided into aliquots and stored at -70°C.
Patients and respiratory specimens
Clinical specimens were obtained from two study groups. In the first study group nasopharyngeal secretions were collected between January and June 2001 from 62 consecutive children hospitalised for ARI by suction of both nostrils. All patients aged up to 36 months and presenting with primary (n=54) or nosocomial (n=8) infection of lower airway were included. Diagnoses were as follows: spasmodic croup, bronchitis, wheezing bronchitis, bronchiolitis and pneumonia. Exclusion criteria consisted of existing contraindications of nasal suctioning and immunosuppressive therapy. Patients with primary or secondary immunodeficiency or congenital heart disease were not included. Nine of the 29 RSV-positive patients (31%) and five of the 33 controls (15%) were born prematurely (<35 weeks gestational age). All specimens were obtained by a current method [24] at the Children's Hospital Bochum (Bochum, Germany) and stored at -70°C.
The second group consisted of 125 hospitalised elderly patients (median age 69 yrs, range 43-81 yrs) with COPD. A total of 79 of these elderly patients were hospitalised for acute exacerbation of COPD (AE-COPD) characterised by the following: worsening in dyspnoea, cough and expectoration. The remaining 46 elderly patients had stable COPD and had been hospitalised for other medical reasons. These patients had suffered no exacerbation within the last 30 days prior to their hospitalisation and had undergone no changes in their therapy within the last 14 days (including inhaled and oral medication). Both COPD groups had moderate disease (Global Initiative for Chronic Obstructive Lung Disease: GOLD II) [25]. The most frequent comorbidities in the AE-COPD group were as follows: hypertension (45/79, 57%), coronary artery disease (23/79, 29%), hyperlipoproteinaemia (20/79, 25%) and diabetes (18/79, 23%). In the stable COPD patients the most prevalent comorbidities were: hypertension (27/46, 59%), hyperlipoproteinaemia (16/46, 35%), coronary artery disease (10/46, 22%) and diabetes (7/46, 15%); none of these patients were immunodeficient. Nasal lavage fluid and induced sputum were collected between October 1999 and October 2001 at the Dept of Internal Medicine (University Hospital Bergmannsheil, Bochum, Germany). All specimens were obtained by the method described by ROHDE et al. [26]. Samples were subjected to centrifugation and both cells and cell-free supernatants were frozen at -70°C.
Ribonucleic acid extraction and reverse transcription
The RNA isolation procedure was carried out immediately after thawing using 250 µL of cells (RNeasy Mini Kit; QIAGEN, Hilden, Germany) and 1000 µL of cell-free supernatant (QIAamp DNA Blood Mini Kit; QIAGEN) according to the manufacturer's instructions. RSV-infected HEp-2 cells and RSV stock supernatant used as a control were processed in the same way as cell samples and cell-free supernatants. The final volumes of isolated RNA were 150 µL for cell samples and 100 µL for cell-free supernatants. All samples were aliquoted and stored at -70°C. Complementary deoxyribonucleic acid (cDNA) was synthesised with 2.5 mM random hexamers using 10 µL extracted RNA in a final volume of 50 µL using the TaqMan® Reverse Transcription Reagents kit (Applied Biosystems, Foster City, CA, USA). The cDNA samples were stored at -20°C.
Design of primers and probe
Forward and reverse primer and probe sequences were designed with Primer Express software (Primer Express™ Version 1.0; Applied Biosystems, USA) with regard to the general rules of primer and probe design. The present primer and probe system (table 1) is located within an RSV genome region encoding the F1 subunit of the fusion protein, and the predicted amplicon length is 76 base pairs (bp). A search alignment was done confirming the specificity of the primers and probe for the RSV-A Long strain. The probe contained the fluorescent reporter dye 6-carboxyfluorescein (FAM) at the 5'-end and the fluorescent quencher dye 6-carboxytetramethylrhodamine (TAMRA) at the 3'-end. Primers and probe (Applied Biosystems, Darmstadt, Germany) were aliquoted to avoid loss of stability through freezing and thawing, and stored at -20°C.
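As a rough illustration of the kind of primer checks implied by these design rules, the R sketch below computes the length, GC content and a crude Wallace-rule melting-temperature estimate for a primer; the sequence shown is an invented placeholder, not the primer reported in table 1.

```r
# Hedged sketch: basic primer checks (length, GC content, rough Wallace-rule
# Tm = 2*(A+T) + 4*(G+C)) for a hypothetical primer sequence.
primer <- "AGGTGCAGTTACAGGAAACC"      # invented placeholder sequence
bases  <- strsplit(primer, "")[[1]]

n_gc <- sum(bases %in% c("G", "C"))
n_at <- sum(bases %in% c("A", "T"))

nchar(primer)                 # primer length
100 * n_gc / nchar(primer)    # GC content (%)
2 * n_at + 4 * n_gc           # rough Tm estimate (degrees C)
```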
Quantitative real-time polymerase chain reaction
PCR reactions were carried out as reverse transcriptase (RT)-qPCR with 5 µL cDNA in a final volume of 25 µL. Negative controls were carried out with water instead of RNA. PCR runs were performed according to the standard TaqMan® PCR profile. Amplification of target DNA and detection of PCR products were performed with a GeneAmp® 5700 Sequence Detection System (Applied Biosystems, USA). Amplification of the target sequence was detected by an increase of fluorescence above a baseline with little or no change in fluorescence. In order to analyse data, the reporter (FAM) fluorescence was automatically normalised to a passive reference to avoid the measurement of non-PCR-related fluorescence. A threshold was set above the baseline, and the threshold cycle value (Ct) was defined as the cycle number at which the fluorescence passes the fixed threshold and a statistically significant increase in fluorescence is first detected.
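To make the threshold-cycle definition above concrete, the following Python sketch applies the same logic (estimate the baseline, set a threshold above it, report the first cycle at which fluorescence crosses it) to an invented amplification curve; the baseline window, threshold rule and fluorescence values are hypothetical illustrations, not the settings used by the sequence-detection software.

```python
import numpy as np

def threshold_cycle(fluorescence, baseline_cycles=15, sd_multiplier=10):
    """Return the fractional cycle at which fluorescence first crosses a
    threshold set above the baseline noise (hypothetical illustration)."""
    f = np.asarray(fluorescence, dtype=float)
    baseline = f[:baseline_cycles]
    threshold = baseline.mean() + sd_multiplier * baseline.std()
    above = np.where(f > threshold)[0]
    if above.size == 0:
        return None                      # no amplification detected
    i = max(int(above[0]), 1)
    # linear interpolation between the two cycles bracketing the crossing
    return i + (threshold - f[i - 1]) / (f[i] - f[i - 1])

# toy sigmoidal amplification curve over 40 cycles (cycle 1 ... 40)
cycles = np.arange(1, 41)
curve = 0.02 + 1.0 / (1.0 + np.exp(-(cycles - 27) / 1.5))
print(round(threshold_cycle(curve), 2))
```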
Nested polymerase chain reaction
The nePCR was carried out using RSV subgroup A specific primer sequences as described by ROHWEDDER et al. [27] and QIAGEN Taq Polymerase Protocol (QIAGEN). All nePCR reactions were performed as RT-PCR.
Hexaplex® Multiplex reverse transcriptase polymerase chain reaction
All multiplex RT-PCR reactions were carried out at the Institute of Medical Microbiology and Virology, Ruhr-University (Bochum, Germany) using the Hexaplex® Multiplex RT-PCR system (Prodesse, Waukesha, WI, USA).
Enzyme immunoassay
Enzyme immunoassay detection of RSV was performed with the ABBOTT TestPack RSV (Abbott, Wiesbaden, Germany) and carried out at the Children's Hospital of the Ruhr-University (Bochum, Germany).
Exclusion of cross reactivity
Respiratory tract specimens positive for Rhinovirus, Influenza A or Parainfluenza 3, or positive for multiple infections with these viruses and RSV (analysis by nePCR or multiplex PCR) were chosen to study qPCR specificity.
Quantitative real-time polymerase chain reaction product analysis and construction of a standard
The predicted length of the qPCR amplification product was confirmed by a single clear band of 76 bp on an ethidium bromide-stained 4% NuSieve® agarose gel (FMC BioProducts, Rockland, ME, USA) under ultraviolet illumination. Additionally, PCR products were cloned with the TOPO TA Cloning® Kit for Sequencing (Invitrogen, Karlsruhe, Germany) into the vector pCR4-TOPO and transformed into an appropriate bacterial strain (TOP10). The obtained clones were evaluated by qPCR to confirm the presence of a plasmid containing the RSV insert. Isolation of plasmid DNA was carried out with the QIAprep Spin Miniprep Kit (QIAGEN), and the concentration was calculated from optical density measurements at 260 nm in a Beckman DU-70 spectrophotometer. Plasmid DNA was sequenced with the BigDye Terminator Cycle Sequencing Ready Reaction Kit (Applied Biosystems, USA) on an ABI PRISM™ 310 Genetic Analyzer (Applied Biosystems, USA).
Absolute quantification
For standardisation of the assay, the plasmid DNA (pCR4-T1/6) isolated from the positive-prescreened clone T1/6 was used. The concentration was calculated accurately from optical density measurements, and copy numbers of plasmid DNA were calculated by multiplying the product of concentration (pM·mg⁻¹) and dilution (mg·mL⁻¹) with the Avogadro constant. Serial 10-fold dilutions of known quantities, ranging from 10 pg·mL⁻¹ to 1×10⁻⁴ pg·mL⁻¹ and corresponding to 4.8×10⁶-48 RSV copies·mL⁻¹, were made, aliquoted and stored at -20°C. The results of 22 measurements of each standard concentration were used for the design of a general standard curve for mean Ct values in order to calculate copy numbers in samples of different study groups when means had to be compared (see Results section).
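The copy-number rule described above (concentration times dilution times the Avogadro constant) can be illustrated with a short Python sketch; the input values are placeholders chosen only so that the result lands near the top standard concentration mentioned in the text, and they are not the study's actual measurements.

```python
AVOGADRO = 6.022e23   # molecules per mole

def copies_per_ml(conc_pmol_per_mg, dilution_mg_per_ml):
    """Copy number per mL following the rule quoted in the text:
    (concentration per unit mass) x (mass per volume) x Avogadro's constant."""
    picomoles_per_ml = conc_pmol_per_mg * dilution_mg_per_ml
    return picomoles_per_ml * 1e-12 * AVOGADRO

# placeholder inputs chosen only so that the result lands near the top
# standard concentration quoted in the text (~4.8 x 10^6 copies/mL)
print(f"{copies_per_ml(0.008, 0.001):.2e} copies/mL")
```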
Statistical analysis
The Applied Biosystems ABI PRISM™ Sequence Detection System was used to monitor the increase of the reporter fluorescence during PCR, and the entire process of calculating Ct values, preparing a standard curve and determining the starting copy numbers for unknown samples was performed by the software. Data were presented as an amplification plot, showing the fluorescence values plotted versus the cycle number, and as a standard curve, displaying the threshold cycle versus the logarithm of defined quantities (copy numbers) of the constructed plasmid standard samples. Copy numbers of unknown samples were automatically inferred from the regression line. Viral load values are given as copy numbers per PCR reaction mixture and, in the case of clinical specimens, as copy numbers per mL. To evaluate reproducibility, intra-assay and interassay SDs and coefficients of variation (CVs) were calculated for each standard concentration within and between individual PCR runs. To compare results obtained from different study groups, the Kolmogorov-Smirnov test for normality was used, and the Mann-Whitney rank sum test was used when normality failed. For all tests a significance level of 5% was chosen. The authors found a highly significant linear relationship between the log of the input target DNA copy numbers and Ct values (figure 2a), thus permitting interpolation of the input RSV concentration (viral load) of samples containing unknown quantities of RSV-RNA. Based on the standard curve, the limit of detection of viral RNA was eight RSV copies per reaction mixture (table 2).
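A minimal sketch of the standard-curve logic described above, assuming invented mean Ct values for the dilution series: Ct is regressed on the logarithm of the known copy numbers and unknown samples are interpolated from the regression line. The numbers below are illustrative, not the published standard curve.

```python
import numpy as np

# hypothetical mean Ct values for the ten-fold plasmid dilution series
copies  = np.array([4.8e6, 4.8e5, 4.8e4, 4.8e3, 4.8e2, 4.8e1])
mean_ct = np.array([17.1, 20.6, 24.0, 27.5, 30.9, 34.4])

# linear fit of Ct versus log10(copy number): Ct = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(copies), mean_ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1      # amplification efficiency implied by the slope

def copies_from_ct(ct):
    """Interpolate the starting copy number of an unknown sample from its Ct."""
    return 10 ** ((ct - intercept) / slope)

print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")
print(f"sample with Ct 26.2 -> {copies_from_ct(26.2):.0f} copies per reaction")
```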
Evaluation of respiratory syncytial virus subgroup
To determine the precision and reproducibility of the present assay, the Ct values obtained from 22 replicates of each standard dilution were analysed. The SDs ranged from 0.58-1.71 for intra-assay runs and from 0.56-1.95 for interassay runs, with higher values found for lower copy number dilutions (table 3). These results allowed the design of a general standard curve for mean Ct values, in order to calculate copy numbers in samples of different study groups (figure 2b). The higher SD value for the lowest concentration corresponds to the sensitivity of the assay, which was reduced to 77.3% in the range of 48 copies per reaction mixture. Although precision of quantification below this copy number may become slightly inaccurate, the authors' qPCR detected <48 RSV copies per reaction mixture (table 2). Intra-assay and interassay CVs were 4.15% (range 3.1-5.4%) and 4.68% (range 3.2-5.9%) on average and showed no significant difference.
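For completeness, this is how intra-assay and inter-assay coefficients of variation of the kind quoted above are typically computed from replicate Ct values; the replicate values below are hypothetical.

```python
import statistics

def cv_percent(replicate_ct):
    """Coefficient of variation (SD / mean) of replicate Ct values, in percent."""
    return 100.0 * statistics.stdev(replicate_ct) / statistics.mean(replicate_ct)

# hypothetical replicate Ct values for one standard concentration
within_run   = [26.1, 28.6, 27.9, 25.8, 28.3, 27.0]   # intra-assay replicates
between_runs = [26.4, 28.9, 27.7, 25.6, 28.8, 27.1]   # the same standard across separate runs

print(f"intra-assay CV: {cv_percent(within_run):.1f}%")
print(f"inter-assay CV: {cv_percent(between_runs):.1f}%")
```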
Respiratory syncytial virus detection by quantitative real-time polymerase chain reaction and nested polymerase chain reaction
The sensitivity of the qPCR primer and probe system was tested with cDNA generated from RSV stock. Tenfold serial dilutions of isolated RSV-RNA corresponding to a TCID50 of 1.2×10⁴ to 1.2×10⁻⁹ were used, reverse transcribed and analysed by qPCR. The authors found successful amplification of RSV-cDNA in the first ten dilutions, down to a TCID50 of 1.2×10⁻⁵. Ct values spread over the measuring range from Ct 15.21 to Ct 39.44, corresponding to computed copy numbers of 1.67×10⁷ copies per reaction mixture down to 8.43 copies per reaction mixture.
The results of the authors' assay were compared with those achieved by a well-established nePCR for RSV detection described by ROHWEDDER et al. [27]. Positive nePCR amplification results in a 326 bp PCR product that could be detected as a single band by agarose gel electrophoresis. Both PCR protocols were applied to the same RNA dilutions from RSV stock as described above. The results of both PCR methods corresponded for the first eight dilutions of RSV-RNA, but nePCR detection stopped at a TCID50 of 1.2×10⁻⁴, whereas qPCR was more sensitive by two logs (table 2).
Study of a respiratory syncytial virus-infected human cell line
RSV-infected HEp-2 cells were studied in order to examine the ability of the authors' assay to detect RSV-RNA specifically and sensitively in a complex mixture with human RNA. HEp-2 cells were infected with RSV at different multiplicities of infection (MOI 1×10⁻² to 1×10⁻⁶) with a constant incubation time of 2.5 h. RNA was isolated, reverse transcribed and the qPCR protocol was applied to all samples. The assay successfully detected different amounts of viral nucleic acid in RSV-infected HEp-2 cells, corresponding to the different multiplicities of infection (data not shown). Moreover, viral replication in HEp-2 cells could be demonstrated. Figure 3 shows the increase of RSV-RNA in infected HEp-2 cells (MOI 0.01) at six time points within 24 h of incubation. The authors' qPCR assay indicated that, at the end of the incubation time, the viral load of RSV-infected HEp-2 cells had increased 43-fold compared with the value measured after 2 h of incubation. Analysis of the qPCR amplification product by agarose gel electrophoresis showed a single clear band corresponding to the predicted length. Subsequent sequencing of the PCR product confirmed specificity for the RSV-A Long strain.
Cross-reactivity with other human respiratory viruses
In order to further analyse the specificity of the assay, the current authors investigated whether other human respiratory viruses (Rhinovirus, Parainfluenza-3 virus and Influenza-A virus) were detected by the qPCR protocol or inhibited RSV detection when multiple virus infections were present. For this purpose, patient samples with confirmed non-RSV infection were studied. Neither Rhinovirus-, Parainfluenza-3 virus- nor Influenza-A virus-positive specimens yielded any PCR amplification with the RSV-A specific qPCR primers and probe. Moreover, all tested specimens with confirmed multiple infection yielded a positive PCR result whenever RSV was present.
Study of clinical samples by respiratory syncytial virus-A specific quantitative real-time polymerase chain reaction
The qPCR detection of RSV in clinical samples was studied using nasopharyngeal secretions of infants with ARI (n=62).
A total of 29 (46.8%) of these 62 specimens (cell samples) were positive for RSV-A, and 33 samples did not show any amplification in the PCR reaction. Furthermore, the authors' qPCR assay provided information on the quantity of RSV nucleic acid in the ARI study group. The viral load found in these infant respiratory samples ranged between 5.4×10³ and 8.5×10⁸ copies·mL⁻¹ (median 1.2×10⁷ copies·mL⁻¹). Additionally, the qPCR results of the ARI group were compared with the results of two other diagnostic tests, the Hexaplex® Multiplex RT-PCR and an enzyme immunoassay test for rapid RSV antigen detection. Both tests detect RSV group A and group B, but only the multiplex PCR system can differentiate the two strains. To analyse the specificity and sensitivity of the qPCR assay, the results were compared with those specimens positive or negative for both the multiplex and the antigen test (n=43), defined as a standard. Specificity and sensitivity of the RSV-A specific qPCR were 95% and 91.3%, respectively.
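The specificity and sensitivity figures quoted above follow from a simple two-by-two comparison against the combined multiplex/antigen standard; the cell counts below are hypothetical values chosen to be consistent with n=43 and the reported percentages, since the actual counts are not given in the text.

```python
def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

# one hypothetical 2x2 table consistent with n = 43 and the reported figures;
# reference standard = concordant result of multiplex RT-PCR and antigen test
tp, fn = 21, 2    # qPCR positive / negative among reference-positive samples
tn, fp = 19, 1    # qPCR negative / positive among reference-negative samples

print(f"sensitivity: {sensitivity(tp, fn):.1%}")   # ~91.3%
print(f"specificity: {specificity(tn, fp):.1%}")   # ~95.0%
```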
In a second study group of elderly hospitalised COPD patients, RNA was extracted from 125 nasal lavage and sputum cell samples. A total of 35 samples (28%) were RSV-A positive (27.9% in the AE-COPD and 28.3% in the stable COPD subgroup). There was no significant difference between the viral load of positive nasal lavage samples and the viral load of positive sputum samples. The viral load in this second study group differed significantly from the viral load results obtained from the child study group (fig. 4). The median viral load of the respiratory specimens of the COPD study group was 6.1×10³ copies·mL⁻¹ (range 3.2×10³ to 1.5×10⁷ copies·mL⁻¹), reduced 1,967-fold compared to the ARI study group. The authors found as few as 15 RSV genomes (median) per reaction mixture in respiratory samples of the COPD study group, showing that the qPCR is able to detect even very low copy numbers of viral nucleic acid in clinical samples.
In addition the authors studied cell-free sample supernatants with the qPCR assay and compared the results with those of cell samples. In the ARI group positive and negative results of cell and cell-free samples corresponded in 100% (n=62/62) whereas in the COPD group correspondence was only 17% (n=82/125).
Discussion
In the present study, an RSV-A specific qPCR assay was evaluated and applied to clinical samples. RSV is the main cause of childhood viral ARI [28] and has been detected, among other respiratory viruses, mainly influenza A, rhinovirus, coronaviruses and parainfluenza viruses, in exacerbations of COPD [29][30][31][32]. For older adults it has been shown that the diagnosis of RSV with less sensitive tests can be difficult and is not always reliable, leading to the complementary use of different methods for RSV detection [33,34]. The main advantages of the RSV-A specific qPCR assay are high specificity and sensitivity as well as exact quantification and reproducibility. The detection of RSV-A was reliable both in respiratory samples of children and in those of elderly patients. Comparison of qPCR results with data obtained by nePCR revealed an almost 30-fold increase in sensitivity, corresponding to results of other studies [35]. Moreover, minimal assay variation, represented by a mean CV of 4% for standard plasmid DNA, was found compared with 14% reported for conventional PCR assays [36].
The importance of respiratory viruses in exacerbations of COPD has been shown in a number of studies using less sensitive methods [29,30] and in studies using PCR technology [26,37]. Moreover viral pathogens have been documented in COPD without worsening of respiratory status [26,29,37,38]. RSV-A was found in the present study at similar percentages in the COPD study groups with and without exacerbation (27.9 and 28.3%, respectively). The implication of RSV and other viruses in COPD has recently been studied by SEEMUNGAL et al. [37] by nePCR, whose results confirm a high number of RSV-positive specimens (23.5%) among stable COPD patients. In a previous study RSV could not be detected in this group using nePCR technology [26]. GREENBERG et al. [32] found RSV detection rates in documented respiratory tract viral infections of 11.4% for COPD and 9.9% for control patients by using viral serology. Due to the high sensitivity of the qPCR assay, very low viral load values can be detected, probably not included in studies using less sensitive methods.
However a comparison of results from diverse studies is only partly possible because of differing study design, study population, definition of illness and detection methods with different specificity and sensitivity.
The main goal of the present study was to quantify exactly the amount of viral particles in clinical samples and to compare different respiratory diseases by analysing viral loads. In recent years viral load assays have been established and successfully applied for several infectious diseases [18][19][20][21][22][23], even as a criterion for distinguishing between acute and latent infection [39]. The present study's results show that the RSV-A specific qPCR is able to detect and quantify a wide range of different viral loads in respiratory tract specimens. Because of the genetic heterogeneity of the two RSV subgroups, detection of RSV-B by an RSV-A specific qPCR is excluded. This limits the method to detection of RSV-A only, but since RSV-A is more prevalent [7][8][9] it does not limit the interpretation of the data presented here. Moreover, the assay outlined in this study allows a differentiated statement on the implication of RSV in different diseases. The authors found a significant difference between the mean number of RSV copies detected in children with ARI compared to COPD patients. In the child ARI group the authors found very high viral loads. These children suffered from a disease which was acute in its onset and required hospitalisation; the samples were collected and analysed at this very specific time point. Combining these findings and considering the additional diagnostic methods used, the authors have no doubt that these children suffered from acute infection with RSV-A. In the COPD group, however, the authors found two different clinical presentations: one subgroup suffered from an acute exacerbation whereas the other showed stable disease. Although one subgroup was exacerbated and needed hospitalisation, the authors found the same low viral loads in both groups without significant differences. As observed in the child study group, it would be expected that high viral loads are associated with acute infection. Therefore the authors hypothesised that low viral loads indicated a latent infection or a small amount of residual RSV genome from a previous infection. Given the low viral loads found in the AE-COPD group, the authors are convinced that RSV-A was not the cause of the exacerbation. From the present study's data it may be hypothesised that low viral loads of RSV-A may have facilitated infection with other respiratory pathogens. However, the present study was not designed to investigate latent RSV-A infection in COPD, as the analysis of several samples taken at different time points, in a prospective longitudinal study, would be needed.
The difference between the correspondence of results of cell and cell-free samples in the ARI (100%) and COPD groups (17%) may also indicate different roles of RSV in these diseases. High viral load in acute infection always seems to result in cell lysis, whereas in chronic infection, with low viral load, this may not be the case.
In summary, the authors' respiratory syncytial virus-A specific quantitative real-time polymerase chain reaction assay is a specific, sensitive, reliable and rapid method for the medical diagnosis and quantification of the respiratory syncytial virus genome in clinical samples. It has several advantages compared to traditional methods of viral detection and to other polymerase chain reaction technologies. The exact determination of viral load allows differentiated interpretation of results and further access to study the role of respiratory syncytial virus in respiratory tract diseases. As an example, the authors demonstrate that the different roles of respiratory syncytial virus infection in children compared to adults with chronic obstructive pulmonary disease can be detected by a quantitative real-time polymerase chain reaction assay. Further studies will show whether this is specific for respiratory syncytial virus infection or can also be found with other viruses.
Figure 4: RSV viral loads are shown for respiratory specimens taken from hospitalised children with acute respiratory tract infections (ARI; n=62) and from elderly chronic obstructive pulmonary disease (COPD) patients (n=125). A total of 29 and 22 specimens were found to be RSV-positive in the ARI and the COPD groups, respectively. The median of 1.2×10⁷ RSV genomes·mL⁻¹ found in the ARI group (range 8.5×10⁸ to 5.4×10³ RSV genomes·mL⁻¹) was significantly higher than the median of 6.1×10³ RSV genomes·mL⁻¹ found in the COPD group (range 1.5×10⁷ to 3.2×10³ RSV genomes·mL⁻¹). ***: p<0.001.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2011-08-07T00:00:00.000
|
18001596
|
{
"extfieldsofstudy": [
"Computer Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.4061/2011/697856",
"pdf_hash": "c24dc1479bee33a3df31c2cf2e4aa0974255bb7c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44399",
"s2fieldsofstudy": [
"Education",
"Psychology"
],
"sha1": "4b3996519fa40e2403bd52fa9048f14b94a31a0a",
"year": 2011
}
|
pes2o/s2orc
|
The Importance of Autonomous Regulation for Students' Successful Translation of Intentions into Behavior Change via Planning
Physical activity has a high prevention potential in adolescents. This study investigated the relations between physical activity and intention, autonomous regulation, and planning. We hypothesized that planning mediates the relationship between intention and behavior and that this mediation depends on the level of autonomous regulation. A stratified randomized sampling method was used to assemble a sample of N = 534 students from two schools in China. To test the hypothesis, autonomous regulation, intention, and physical activity were assessed at baseline, and planning and follow-up physical activity were assessed four weeks after the pretest. A moderated mediation model confirmed that planning mediated the intention-behavior relation, with the effect of planning being moderated by autonomous regulation. The results demonstrated that autonomous regulation facilitated the translation of intention into behavior change via planning. To promote physical activity among adolescents, interventions targeting planning and autonomous regulation might facilitate the successful translation of intentions into behavior change.
Introduction
There are many benefits from physical activity (PA) engagement. Regular physical activity participation can prevent premature mortality, coronary heart disease, and the prevalence of overweight and obesity, and it reduces the risk of type 2 diabetes, cardiovascular disease, and some types of cancer in adulthood [1][2][3]. Regular physical activity participation can also benefit psychological health by reducing depression and anxiety and increasing self-esteem and life satisfaction [4,5]. During childhood and adolescence it has short-term as well as long-term effects on health [6]. Some studies have revealed that the formation of exercise habits during adolescence is an important foundation for physical activity in older age [7,8].
Despite all these benefits of physical activity, many previous studies have shown that adolescence is marked by a steep decline in physical activity. At present the prevalence of physical inactivity is increasing, not just in western developed countries [9] but also in developing countries such as China [10]. Results from Sun et al. showed that only about one third of adolescents achieved the recommended daily amount of about one hour of regular participation. Thus, it is important to gain a better understanding of the factors that affect whether students are physically active.
In preventive medicine, intention has been widely used as a predictor of behavior change. Different theories, such as protection motivation theory [11], the theory of planned behavior [12], and the health action process approach [13], include intention as a main component in predicting behavior change. However, despite the widespread use of intention in the prediction of behavior change, high intention does not guarantee subsequent behavior change. Studies have shown that there is a gap between intention and behavior [14].
In order to bridge the intention-behavior gap, additional self-regulatory variables should be considered. Evidence shows that volitional factors, such as planning (implementation intentions) specifying when, where, and how to carry out the intention, are effective in the initiation and maintenance of the intended behavior [15]. Even though planning has been found to be a strong predictor of behavior, it can be expected that not everyone benefits to the same extent from the same planning intervention program. Planning's mediating effect might depend on other influential variables. Koestner et al. showed that the kind of motivation, that is, whether individuals are autonomously regulated, interacts with the effectiveness of planning [16,17]. Autonomous regulation is characterized by goals that reflect personal interests and values. In contrast, nonautonomous regulation is characterized by goals that reflect a feeling of being controlled by external pressures [6].
Autonomously regulated behaviors are those performed for the satisfaction gained from engaging in the activity itself. According to most theories, the primary satisfactions associated with autonomously regulated actions are experiences of competence and interest or enjoyment. By contrast, behaviors that are not autonomously regulated are performed in order to obtain rewards or outcomes that are separate from the behavior itself [18,19]. In previous studies, high autonomous regulation was a critical factor for exercise adherence, whereas low or no autonomous regulation resulted in poor adherence [6].
Physical activity participation among adolescents in the school environment differs from that among adults in worksite or clinical contexts. During their school years adolescents develop heightened autonomy and start making their own decisions about their behavior [20]. Students are motivated to perform physical activity more by enjoyment than by disease prevention. Studies show that enjoyment and autonomous regulation are important for physical activity adoption [6]. Autonomous regulation can also facilitate continued involvement in physical activity in later life [21]. Generally, more autonomous regulation significantly predicts more health behavior [22] and its predictors, such as intention and self-efficacy [6]. However, little is known about the role of autonomous regulation in concert with other predictors of behavior, such as intention and planning. Previous studies have tested different moderators of the intention-planning-behavior relation [23,24]. The question is whether autonomous regulation can serve as such a moderator of the intention-planning-behavior relation.
We hypothesized that the mediating effect of planning depends on autonomous regulation. Only if autonomous regulation is high does planning help to translate intentions into behavior. If one does not feel in control of one's own behavior (low autonomous regulation), then planning does not help one to become more physically active, even in the face of high intentions.
Participants and Procedures.
Adolescents from grades 7 to 12 were recruited by a stratified randomized sampling method from two high schools in the central region of China. The survey was conducted at two points in time within a 4-week period. A total of 693 adolescents participated in the baseline study and provided valid data on exercise intention, autonomous regulation, and physical activity at pretest. Four weeks later, posttest questionnaires were handed out to those students who had completed the baseline study. In total, 534 students completed the follow-up study and provided data on planning and physical activity. The final sample consisted of 534 participants with a mean age of 13.95 years (SD = 1.67); 52% were girls.
At the two dates of the survey, all the questionnaires (taking about 20 minutes to complete) were handed out by trained study assistants. Informed consent was obtained and the study was performed in accordance with the Helsinki Declaration [25].
Measurement.
The questionnaire packets contained assessments of intention, autonomous regulation, planning, behavior and sociodemographic information. All original materials were developed and validated in German and English. The questionnaires were translated into Chinese by a bilingual researcher.
Physical activity intention was assessed with one item worded "I intend to do physical activities for 30 min or longer at least three times per week, or accumulating at least 90 min per week on a regular basis." Responses were given on a seven-point bipolar Likert scale ranging from (−3) completely disagree to (+3) completely agree. In this study Cronbach's alpha was α = .83.
Autonomous regulation was measured with the Behavioral Regulation in Exercise Questionnaire (BREQ) [26], consisting of 16 items. Responses were given on a 7-point Likert scale ranging from (1) not at all true, through (4) somewhat true, to (7) completely true. The index of autonomous regulation for this scale was computed using the following equation to combine the subscale scores: Motivation Index = 2 × Intrinsic + Identified − Introjected − 2 × External [26]. Negative values reflect that one is not autonomously regulated for change, whereas positive values reflect that one is autonomously regulated to be active. In this study Cronbach's alpha was α = .88.
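A small sketch of the motivation-index computation described above; the subscale scores are hypothetical, and only the weighting follows the equation given in the text.

```python
def autonomous_regulation_index(intrinsic, identified, introjected, external):
    """Motivation index as defined in the text:
    2*Intrinsic + Identified - Introjected - 2*External.
    Positive values indicate autonomous regulation, negative values controlled regulation."""
    return 2 * intrinsic + identified - introjected - 2 * external

# hypothetical mean subscale scores on the 1-7 BREQ response scale
print(autonomous_regulation_index(intrinsic=5.5, identified=5.0,
                                  introjected=3.0, external=2.5))   # -> 8.0
```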
Planning was measured by eight items adapted to adolescents [27]. Example items were worded "I have made a detailed plan regarding when and where to engage in regular moderate or vigorous physical activity" and "I have made a detailed plan regarding what to do when running into bad weather or lack of sport resources". All items were scored on a four-point Likert scale ranging from (1) completely unable to (4) completely able. In this study Cronbach's alpha was α = .93.
Physical activity behavior in a usual week was measured using the 7-day PA recall questionnaire (IPAQ) adapted for Chinese adolescents [28]. Physical activity frequency, duration, and intensity were assessed; responses for frequency and duration were then multiplied to obtain an index of total physical activity per week. In this study test-retest reliability for IPAQ was r = .35, which is comparable to other studies conducted outside of China.
Data Analysis.
Attrition analysis showed that the original sample at T1 (N = 693) did not differ from the follow-up sample (N = 534) in terms of sex, age, intention, autonomous regulation, school or physical activity (all P > .05), showing that the 534 participants in the follow-up were a representative sample of the initial one. Pearson correlation analysis was conducted to examine the associations between intention, autonomous regulation, planning, and physical activity.
To test autonomous regulation's moderating effect on intention, planning, and posttest physical activity, a mediation model was specified with intention as the independent predicting variable, posttest physical activity as the dependent variable, and planning as a mediator between intention and physical activity.
Moderated mediation was expressed by an interaction between intention and the index of autonomous regulation (intention × autonomous regulation) affecting the mediation process [29]. The analyses were based on procedures recommended by Preacher et al. [30] using the MODMEDC macro (Version 2.1; Model 2). To avoid multicollinearity, variables were mean-centered as recommended by Aiken [31]. Missing data were imputed using the expectation maximization (EM) algorithm in SPSS [32]. A significance level of P < .05 was used throughout the analysis.
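In the spirit of the MODMEDC-based analysis described above, the moderated mediation can be sketched with two ordinary regressions: planning on intention, autonomous regulation and their interaction, and follow-up activity on baseline activity, intention and planning. The Python/statsmodels sketch below uses simulated, already mean-centered data, so the coefficients are placeholders and will not match the study's estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 534

# simulated, mean-centered questionnaire scores (placeholders for the real data)
intention = rng.normal(0, 1, n)
autonomy = rng.normal(0, 1, n)
pa_t1 = rng.normal(0, 1, n)
# planning depends on intention more strongly when autonomy is high (moderated first stage)
planning = 0.2 * intention + 0.2 * autonomy + 0.1 * intention * autonomy + rng.normal(0, 1, n)
pa_t2 = 0.25 * pa_t1 + 0.35 * planning + rng.normal(0, 1, n)

df = pd.DataFrame(dict(intention=intention, autonomy=autonomy,
                       planning=planning, pa_t1=pa_t1, pa_t2=pa_t2))

# stage 1: planning regressed on intention, autonomy and their interaction
m1 = smf.ols("planning ~ intention * autonomy", data=df).fit()
# stage 2: follow-up activity regressed on baseline activity, intention and planning
m2 = smf.ols("pa_t2 ~ pa_t1 + intention + planning", data=df).fit()

def indirect_effect(autonomy_level):
    """Conditional indirect effect of intention on T2 activity via planning."""
    a_path = m1.params["intention"] + m1.params["intention:autonomy"] * autonomy_level
    return a_path * m2.params["planning"]

for level in (-1.0, 0.0, 1.5):
    print(f"autonomy = {level:+.1f}: indirect effect = {indirect_effect(level):.3f}")
```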
Results
Correlation analysis showed that all variables were significantly interrelated (see Table 1). Autonomous regulation showed discriminant validity with all other variables (r < .25), supporting the inclusion of all variables in the subsequent analysis.
In the mediator model, T2 physical activity was significantly predicted by T1 intention, β = .25, P < .05. When T1 physical activity was included, intention accounted for 6% of the variance of physical activity change at T2. After T2 planning was included in the regression equation, T1 intention was no longer a significant predictor of T2 physical activity change (β = .05, P = .22). However, baseline physical activity (β = .23, P < .05) and T2 planning (β = .35, P < .05) acted as significant predictors of follow-up physical activity. Thus, planning fully mediated the path from intention to behavior change.
In the moderated mediation model, the moderator (T1 autonomous regulation) and the interaction variable (intention × autonomous regulation) were conjointly included into the regression equation. T1 intention significantly predicted T2 planning (β = .22, P < .05), together with autonomous regulation (β = .21, P < .05), and the interaction of intention and autonomous regulation (β = .11, P < .05). In total, 11% of the variance of T2 planning was explained. Baseline physical activity, T1 intention and T2 planning jointly accounted for 24% of the variance of T2 physical activity change (Figure 1).
The significant interaction effect supported the assumption of a moderated mediation: planning mediated the intention-behavior relation, and this mediation was moderated by autonomous regulation. Follow-up analysis tested how high autonomous regulation needed to be. Students required a value of at least 1.5 on the autonomous regulation index to translate their intention into behavior via planning (P < .05) (Figure 2).
Thus, the mediation effect of planning appeared to be conditional upon the value of the autonomous regulation index. Only if autonomous regulation was 1.5 or higher did planning mediate significantly between intention and subsequent behavior.
Discussion
This study aimed at shedding more light on the mechanisms underlying physical activity change processes in adolescents. The mediating effect of planning as well as the moderating effect of autonomous regulation was confirmed: planning mediated the relation between intention and behavior, and this mediating effect was moderated by one's level of autonomous regulation. Those who perceived higher levels of autonomous regulation were more likely to translate their intentions into behavior change. This is consistent with many previous studies, for example with findings by Beauchamp et al. [6] that autonomous regulation is associated with higher levels of regular physical activity intention and self-efficacy.
However, some limitations need to be mentioned. The study's four-week follow-up period is rather short. Furthermore, the study relied only on self-reported questionnaire measurements. Although this is a typical procedure for measuring physical activity in large samples [6], reporting bias might have occurred. In future studies, objective measures of physical activity (e.g., pedometers, heart rate monitors) could test the reliability of the results. Moreover, some of the measures consisted of single items only. Also, prospective research needs to test the revealed findings in experimental designs.
Despite those limitations, some implications can be drawn from this study. Firstly, the moderated mediation model extends the understanding of factors associated with adolescent physical activity promotion: the effectiveness of planning as a mediator between intention and behavior change has been identified in some previous studies [15,33]. However, planning was found not to benefit all participants equally [23,24]. In this study the moderated mediation model reveals that a planning intervention might be more effective among adolescents with high autonomous regulation than among those with low autonomous regulation, which is important for health behavior educators, who should take adolescents' current motivational status into consideration. Whereas previous studies have mainly shown the importance of high intentions [23,34] or high autonomous regulation [6], this study revealed a significant interaction of the two, that is, of intentions and autonomous regulation. Autonomous regulation in turn offers opportunities for intervention, especially when teaching adolescents [6].
For health behavior promotion among adolescents, interventions matched to the characteristics of adolescence might be more effective. If autonomous regulation is high, planning should be trained. However, if autonomous regulation is rather low, strategies to increase autonomous regulation are needed first. One such strategy is, for example, to provide training in transformational leadership to the students' teachers, which was done successfully by Beauchamp and colleagues [6].
Adolescence is a key phase for developing autonomy. Adolescents participate in physical activity motivated more by pleasure seeking and enjoyment (autonomous regulation) than by health gains and the prevention of diseases [35,36]. In planning interventions for adolescents, not only should planning be trained; adolescents should also receive help in increasing their autonomous regulation. This might be achieved by providing more choices and by increasing the experience of enjoyment and activity competence. With that, improved autonomous regulation might foster long-term behavior adoption and maintenance, and accordingly health. Thus, this might be an effective approach in preventive medicine. These findings are important because they add to the current knowledge of age-specific health promotion. Not only cognitive-rational factors are important to consider; affective factors such as autonomous regulation are also crucial in adolescents. Hopefully, this opens avenues for effective prevention strategies across the life-span.
|
v3-fos-license
|
2016-05-12T22:15:10.714Z
|
2015-11-23T00:00:00.000
|
14261168
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10552-015-0685-2.pdf",
"pdf_hash": "aeebc91cb3508890a2761c63e7e220951dc960d0",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44401",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "aeebc91cb3508890a2761c63e7e220951dc960d0",
"year": 2015
}
|
pes2o/s2orc
|
Family history of hematologic malignancies and risk of multiple myeloma: differences by race and clinical features
Purpose Multiple myeloma (MM) is the most common hematologic malignancy affecting Blacks in the USA, with standardized incidence rates that are twofold to threefold higher than Whites. The rationale for the disparity is unclear. Methods Using participants enrolled in the Molecular And Genetic Epidemiology study of myeloma (259 MM cases; 461 controls), we examined the risk of MM associated with family history of cancer, differences by race and among cases, defining clinical features. Risk estimates were calculated using odds ratios and corresponding 95% confidence intervals from logistic regression adjusted for confounders. Results Overall, MM risk in cases with relatives affected with any hematologic malignancy was significantly elevated compared to controls (OR 1.89, 95% CI 1.25–2.86). Myeloma risk associated with a family history of MM was higher than the risk associated with any hematologic malignancy (OR 3.75, 95% CI 1.75–8.05), and the effect was greater for Blacks (OR 20.9, 95% CI 2.59–168) than Whites (OR 2.04, 95% 0.83–5.04), among cases with early onset (≤60 years; OR 4.58, 95% CI 1.21–17.3) and with increasing numbers of affected relatives (p trend = 0.001). Overall, frequencies of end organ damage differed in cases with relatives affected with any hematologic malignancy and significantly more cases exhibited κ light chain restriction (OR 3.23, 95% CI 1.13–9.26). Conclusions The excess risk of MM observed in Blacks and the variation in clinical features observed in MM patients according to family history of hematologic malignancy may be attributed to a shared germline and environmental susceptibility. Electronic supplementary material The online version of this article (doi:10.1007/s10552-015-0685-2) contains supplementary material, which is available to authorized users.
Introduction
Multiple myeloma is a plasma cell malignancy characterized, in part, by prolonged survival and accumulation of clonal plasma cells in the bone marrow microenvironment, presence of monoclonal protein in serum, urine or both, and end organ damage [1]. Standardized incidence rates of MM are increasing, advancing it to the second most common hematologic malignancy and accounting for 1 % of all cancers in the USA [2]. Although the etiology of MM is unclear, it is preceded by an asymptomatic plasma cell dyscrasia known as Monoclonal Gammopathy of Undetermined Significance (MGUS) [3,4] that carries a risk of progression to frank MM of 1 % per year [5]. Other confirmed risk factors for MM include increasing age, male sex, Black race and a family history of cancer [6].
Multiple myeloma is the most common hematologic malignancy affecting Blacks in the USA, with standardized incidence rates that are twofold to threefold higher than Whites [7,8], and with an earlier age of onset [9]. Rationale for the observed disparity is unclear. However, evidence suggests a shared genetic predisposition.
Several lines of evidence support an inherited germline susceptibility. Familial clustering of MM in several case series [10][11][12][13], in addition to family aggregation [14,15], epidemiologic case-control [16,17], and registry-based [18,19] studies have consistently shown excess MM risk among first-degree relatives of patients with MM. In addition, in the only study published to date that included both Blacks and Whites, Brown et al. [20] showed that MM risk was significantly increased in Black MM patients with an affected first-degree relative, providing a possible rationale for the difference in incidence observed by race.
Familial aggregation of MM and the epidemiologic differences observed by race suggest a complex etiology, which may be influenced by shared genetic factors, environmental exposures, behaviors and underlying differences in tumor biology. We conducted a comprehensive investigation to expand upon the existing report to evaluate differences in the contribution of hematologic malignancies and solid tumors among relatives of Black and White patients with MM. To our knowledge this is the first study to include evaluations of MM-defining clinical features with family history of cancer, which may provide important insight into underlying differences in the clinical presentation of MM by race.
Study population
We included participants enrolled in the Molecular And Genetic Epidemiology (iMAGE) study of myeloma to characterize the contribution of family history of cancer on the risk of MM, differences by race and among cases only, the presence of defining clinical features. The iMAGE study was designed to evaluate the effects of biological, chemical, physical, social and genetic influences on the risk of MM and direct comparisons by self-reported Black and White race. Approvals from the appropriate Institutional Review Boards in accordance with the Declaration of Helsinki were obtained prior to study initiation, and informed consent was obtained from all individual participants included in the study.
Case definition
Eligible cases were recruited from the University of Alabama at Birmingham Hematology and Medical Oncology clinics (Birmingham, Alabama) and the Morehouse School of Medicine (Atlanta, Georgia). Patients with a diagnosis of MM were identified based on the ICD-9 classification (203) or International Classification of Diseases for Oncology third revision code 9732/3 and confirmed based on the revised and updated International Myeloma Working Group classification criteria for MM. Criteria include the cumulative presence of clonal bone marrow plasma cells ≥10% or biopsy-proven bony or extramedullary plasmacytoma and the presence of one or more MM-defining events including end organ damage (hypercalcemia, renal insufficiency, anemia, or lytic bone lesions or severe osteopenia or pathologic fractures attributed to a plasma cell proliferative disorder), or, in the absence of end organ damage, clonal bone marrow plasma cells ≥60%, serum free light chain (FLC) ratio ≥100, or more than one focal bone lesion (>5 mm) identified using magnetic resonance imaging (MRI) [21]. Each MM case was reviewed by an expert panel to ensure consistent case definitions and to minimize phenotype misclassification.
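The case definition above can be encoded as a simple rule; the sketch below is a rough paraphrase of the quoted thresholds, with hypothetical field names rather than the study's actual data structure.

```python
def meets_mm_definition(clonal_pc_pct, has_plasmacytoma, end_organ_damage,
                        flc_ratio, focal_mri_lesions_gt_5mm):
    """Rough encoding of the quoted case definition (updated IMWG criteria).

    clonal_pc_pct            : percent clonal bone marrow plasma cells
    has_plasmacytoma         : biopsy-proven bony or extramedullary plasmacytoma
    end_organ_damage         : any end organ damage attributed to the plasma cell disorder
    flc_ratio                : serum free light chain ratio
    focal_mri_lesions_gt_5mm : number of focal MRI bone lesions larger than 5 mm
    """
    clonal_disease = clonal_pc_pct >= 10 or has_plasmacytoma
    defining_event = (end_organ_damage
                      or clonal_pc_pct >= 60
                      or flc_ratio >= 100
                      or focal_mri_lesions_gt_5mm > 1)
    return clonal_disease and defining_event

# example: 15% clonal plasma cells with anemia attributed to myeloma
print(meets_mm_definition(15, False, True, 12, 0))   # True
```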
Control selection
Controls were sampled from an existing and updated population-based database established and maintained by the Survey Research Unit (University of Alabama at Birmingham). This database includes US Census and Centers for Disease Control population databases established from list-assisted random digit dialing methods and used previously for this and other large-scale population-based epidemiology studies [23,24]. Eligible controls were residents of Alabama and Georgia, 21 years of age or older, without a self-reported history of MGUS, smoldering myeloma (SMM), MM, or other cancer excluding non-melanoma cancers of the skin. One to two controls were randomly selected and frequency matched to cases on age (±5 years), sex, race (Black, White), and geography.
Definition of family history of cancer
Detailed information, including family history of cancer, sociodemographic features, smoking and alcohol use, medication use, as well as residential, lifetime occupational, medical, surgical, and reproductive histories, was obtained using a structured questionnaire administered by trained interviewers at the time of enrollment. We defined family history of cancer as a self-report of one or more first-degree (parent, sibling, child), second-degree (grandparent, aunt, uncle, niece, nephew), or third-degree (first cousin) relatives with any hematologic malignancy including MM, non-Hodgkin lymphoma [NHL; which included lymphoma not otherwise specified (NOS)], Hodgkin lymphoma (HL), leukemia, or any solid tumor (non-hematologic malignancy). Family history of any hematologic malignancy was defined using ICD-9 classifications including MM (203), NHL (202), HL (201), or leukemia (204-208). As a sensitivity assessment, MM was defined with and without self-reported affected relatives with bone cancer NOS, which was later excluded from the MM definition to minimize misclassification. We categorized affected relatives as first-degree and jointly as any relative. Family size was not collected.
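The ICD-9 groupings used for classifying relatives' hematologic malignancies can be written down directly; the helper below simply encodes the code ranges quoted in the text and is an illustrative sketch rather than the study's actual data-handling code.

```python
# ICD-9 groupings quoted in the text for relatives' hematologic malignancies
HEME_ICD9 = {
    "MM": {"203"},
    "NHL": {"202"},
    "HL": {"201"},
    "leukemia": {"204", "205", "206", "207", "208"},
}

def classify_heme(icd9_prefix):
    """Return the hematologic malignancy category for a 3-digit ICD-9 prefix, else None."""
    for category, codes in HEME_ICD9.items():
        if icd9_prefix in codes:
            return category
    return None

print(classify_heme("203"))   # MM
print(classify_heme("205"))   # leukemia
```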
Statistical analysis
We evaluated family history of cancer with MM risk overall and stratified by race, early age of onset (≤60 years, defined by the median) and sex of the MM case as well as of the affected relative to evaluate sex-linked germline susceptibility. Among cases only, we evaluated family history of cancer with the presence of defining clinical features. We estimated the risk of MM (case-control analysis) and the risk of family history of cancer in MM patients (case-only analysis) using the odds ratio (OR) and corresponding 95% confidence interval (CI) calculated from logistic regression adjusted for confounders including sex, age (continuous), level of education (≤high school graduate vs. some college, college graduate, or post-graduate education) and race (White, Black) in analyses not stratified by these variables. Other potential confounders were evaluated, including smoking status, alcohol consumption, and annual household income at the time of enrollment, but were excluded from final models because they were not substantially related to MM or family history of cancer. Tests for statistical significance of trend were conducted using multivariable logistic regression with an incremental increase in the number of affected relatives per category modeled as a continuous variable. The strength of linearity between clinical laboratory variables and a family history of any hematologic malignancy among MM cases was examined using regression coefficients and standard errors generated by linear regression adjusted for confounders. Statistical significance, based on multivariable logistic models, was calculated using the maximum likelihood χ² test, and differences between strata were determined using the Mantel-Haenszel χ² test for homogeneity. Individuals with missing data for family history of cancer variables or clinical features were excluded from analyses. A two-sided p value ≤0.05 was considered statistically significant. All analyses were conducted using SAS version 9.4 (Cary, NC).
An additional case withdrew participation and was terminated from the study. After initial eligibility screening, participation rates for controls were 80.8% (79.7% for Whites and 82.3% for Blacks). Enrolled controls later discovered to have MGUS (n = 1), be duplicates (n = 2), be related to a case (n = 4), report a shared residential area with a case or other enrolled control for 2 or more years (n = 32), or report a diagnosis of cancer, myelodysplastic syndrome (n = 7), HIV-1 infection (n = 4) or solid organ transplant (n = 2) were excluded, leaving a total of 259 cases and 461 controls available for analysis. Distributions of demographic characteristics of participants enrolled in the iMAGE study of myeloma are shown in Table 1. In the combined population, cases and controls did not differ substantially by race; however, modest, not clinically significant differences were observed by age and sex despite frequency matching on these factors, of which the latter is indicative of a disproportionately higher participation rate among female controls. Of the total 259 cases, the majority were male (54.8%) with a mean age of 60 years at the time of diagnosis. Black cases were significantly younger at diagnosis compared to White cases (mean age, 58 vs. 61 years; p = 0.005), and Black cases reported less education (p = 0.006), lower annual household income (p = 0.004) and fewer relatives affected with any cancer (p = 0.0002) than their White counterparts.
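The adjusted odds ratios reported in the following sections come from logistic regression as described in the statistical analysis section above; a minimal sketch of that computation on simulated data is shown below. Variable names, the simulated values and the model output are placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 720   # roughly the 259 cases plus 461 controls

# simulated analysis dataset; every column here is a placeholder
df = pd.DataFrame({
    "case": rng.integers(0, 2, n),          # 1 = MM case, 0 = control
    "fam_heme": rng.integers(0, 2, n),      # any relative with a hematologic malignancy
    "age": rng.normal(60, 10, n),
    "male": rng.integers(0, 2, n),
    "black": rng.integers(0, 2, n),
    "low_edu": rng.integers(0, 2, n),       # <= high school graduate
})

# adjusted logistic regression: case status on family history plus confounders
model = smf.logit("case ~ fam_heme + age + male + black + low_edu", data=df).fit(disp=0)
or_est = np.exp(model.params["fam_heme"])
ci_low, ci_high = np.exp(model.conf_int().loc["fam_heme"])
print(f"adjusted OR for family history: {or_est:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```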
The estimated risk of MM associated with a family history of cancer is shown in Table 2. In the combined population, the majority of participants reported a family history of cancer (79.9 %), including any solid tumor (74.3 %) and any of the combined four hematologic malignancies (NHL, HL, leukemia and MM; 16.4 %). Among controls with any relative affected with any hematologic malignancy, family history of leukemia was the most prevalent (n = 32; 7 %) followed by NHL (n = 20; 4 %), MM (n = 11; 2 %) and HL (n = 6; 1 %), consistent with the prevalence of these hematologic malignancies in the general US population.
In cases with any relative affected with any hematologic malignancy, the risk of MM was significantly elevated compared to controls (OR 1.89, 95% CI 1.25-2.86). The magnitude of this effect was greater in Blacks (OR 2.43, 95% CI 1.13-5.22) than in Whites (OR 1.77, 95% CI 1.08-2.91), although the difference in the magnitude of effect by race was not statistically significant (p = 0.532).
The risk of MM associated with a family history of MM was higher than the risk associated with any hematologic malignancy (OR 3.75, 95% CI 1.75-8.05), and this effect was greater for Blacks (OR 20.9, 95% CI 2.59-168) than Whites (OR 2.04, 95% CI 0.83-5.04). Although risk estimates are based on a small sample, these relationships were substantiated in an analysis restricted to participants who reported MM among first-degree relatives only (Blacks: OR 10.8, 95% CI 1.22-94.8; Whites: OR 1.19, 95% CI 0.28-5.16; data not shown). In contrast, increased risk of MM among cases with a family history of NHL, HL or leukemia (hematologic malignancy excluding MM) was present in Whites (OR 1.71, 95% CI 1.01-2.89), whereas no association was observed in Blacks.
Sample size precluded our ability to evaluate MM risk by race further stratified by sex or age. In the combined population, risks associated with a family history of MM were elevated among cases with two or more affected relatives with any cancer, any hematologic malignancy or MM (p trend ≤ 0.001) (Supplementary Table 1). In addition, the influence of a positive family history of myeloma had a greater magnitude of effect in patients with early age of onset (≤60 years of age; OR 4.58, 95% CI 1.21-17.3), although the difference by age strata was not statistically significant, and risk estimates were similarly elevated in males and females (Table 3).
The estimated risk of MM associated with a family history of solid tumors is shown in Table 2. In the combined population, the risk of MM was modestly elevated with a family history of any solid tumor (OR 1.55, 95% CI 1.06-2.27) and for the combined category of gynecologic cancers (OR 1.95, 95% CI 1.11-3.43). Affected relatives with a history of head and neck cancer were strongly associated with MM risk only in Blacks (OR 6.98, 95% CI 1.85-26.4), whereas the excess risk among those with a family history of genitourinary cancers (excluding prostate) was present only in Whites (OR 2.69, 95% CI 1.12-6.46), albeit findings may be limited by sample size. Although the risk of MM was modestly elevated with a family history of a variety of solid tumors, no single solid tumor type included in any of the combined solid tumor categories achieved a level of statistical significance.
Differences in the distribution of clinical features of MM cases with and without a family history of any hematologic malignancy are shown in Table 4. Of the 57 MM cases with a family history of hematologic malignancy, kappa (κ) light chain restriction was detected in 43 (78.2%) cases, compared to 115 (64.3%) MM cases without a family history of hematologic malignancy (p = 0.045). No notable difference in MM risk was observed for light chain MM (p = 0.616). However, in cases with heavy-chain MM, individuals with a family history of hematologic malignancies were more likely to exhibit IgG kappa MM, with a notable κ light chain restriction (OR 3.23, 95% CI 1.13-9.26; p = 0.029) after the heavy-chain isotype (IgG, IgA) was held constant. Of the diagnostic criteria for end organ damage, the presence of anemia and renal insufficiency attributed to MM was notably less frequent, consistent with a twofold reduction in risk of MM in cases with a family history of hematologic malignancy compared to those without, whereas hypercalcemia and lytic bone lesions were more frequent, albeit not significantly (p ≥ 0.230). We found no other notable differences in the distributions of clinical characteristics among MM cases with and without a family history of hematologic malignancies. Insufficient sample size precluded our ability to evaluate MM-defining clinical features stratified by race.
Discussion
MM is significantly more common in Blacks. However, our current understanding of MM is largely based on studies of patients of European origin. Thus, epidemiologic studies that include well-characterized MM patients from racially diverse populations are warranted to significantly improve our understanding of MM etiology and to provide a rationale for the differences observed in Black and White MM patients. To our knowledge, this is the first report of a comprehensive evaluation of the contribution of family history of hematologic malignancies and other cancers to the risk of MM, which included differences between Blacks and Whites and, among cases, the presence of MM-defining clinical features. We observed a 3.75-fold increased overall risk of MM among participants who reported a family history of MM, and the effect was notably greater, by an order of magnitude, in Blacks than Whites (ORs 21 and 2, respectively), albeit our sample was small. In an evaluation of clinical features in MM cases with and without a family history of hematologic malignancy, anemia and renal insufficiency attributed to MM were less common, whereas hypercalcemia and lytic bone lesions were more common, albeit not significantly. In addition, we found a significant proportion of κ light chain restricted disease.
The overall elevated risk of MM observed in our study is consistent with previous findings from case-control studies of patients with first-degree relatives with MM, yielding risk estimates ranging from twofold to sixfold [16,17]. Our risk estimates are also similar to estimates generated from large, registry-based studies, where family history data were verified, thereby providing support for the validity and generalizability of our findings despite the possibility of bias in recalling cancer diagnoses in family members, which may differ by case-control status [25]. In the largest study published to date, which included 37,838 first-degree relatives of 13,896 patients with MM diagnosed in Sweden between 1958 and 2005, the risk of MM was increased 2.1-fold in first-degree relatives with MM (95% CI 1.6-2.9) [19], and in the Swedish registry study preceding this, the risk of MM was increased 4.25-fold (95% CI 1.81-8.41) [18]. In addition, Camp et al. [26] confirm this association in 177,226 first-, second-, and third-degree relatives linked to 1,354 MM patients included in the Utah Surveillance, Epidemiology and End Results (SEER) cancer registry. Findings originating from these large, population-based registry studies yield precise estimates of association by virtue of providing sufficient statistical power; however, interpretations from registry-based studies thus far have been limited to persons of European origin.
Evidence for a stronger familial association of MM in Blacks observed in our study coincides with findings from the only population-based case-control study published to date, in which Brown et al. [20] report an elevated risk of MM in patients with affected first-degree relatives with MM of 17.4-fold (95% CI 2.4-348) in Blacks and 1.5-fold (95% CI 0.3-6.4) in Whites. Thus, despite the relatively small number of affected relatives with MM, the strength and consistency of findings from this study and ours suggest a familial predisposition to MM, which is greater for Blacks than Whites. Together, these observations suggest that the excess familial risk of MM contributes, at least in part, to the overall increased incidence of MM observed in Blacks. However, because the frequency of familial MM in the general US population is low in both racial populations, germline susceptibility appears to contribute to only a proportion of the overall risk, emphasizing that both genetic and environmental factors play an etiologic role in this common complex disease.
Our observation that coaggregation of hematologic malignancies (i.e., NHL, HL, leukemia) in families of patients with MM occurs only in Whites could suggest a common etiology of select lymphomas and leukemias in persons of European origin and, conversely, specificity for a germline susceptibility to MM in Blacks. Several lines of evidence support familial coaggregation of MM with these malignancies [16,17,19,26-28], suggesting a shared etiology (Table 3: risk estimates of multiple myeloma associated with family history of any cancer, any hematologic malignancy, and multiple myeloma, by early age of onset and sex). However, these studies have largely been restricted to populations of European origin. Positive evidence for a shared etiology with lymphoma and leukemia subtypes in Blacks has not been observed [20], perhaps due, in part, to the disproportionately lower incidence observed in this population. In our analysis of solid tumors in blood relatives of patients with MM, we provide evidence for familial coaggregation of any solid tumor with MM in the combined population, consistent with prior reports [16,19,26]. In addition, we found modest non-significant evidence for a shared etiology with select tumor types previously shown to co-occur in families of MM (i.e., prostate, malignant melanoma, genitourinary cancers) [26], with the co-occurrence of malignant melanoma and genitourinary cancers observed only in Whites. Our observation of familial aggregation of head and neck cancers in relatives of MM patients among Blacks has not been previously reported. Additional studies are required to confirm this preliminary finding and to investigate a biological basis for a possible shared etiology.
Additional support for an inherited germline susceptibility arises from several gnostic and agnostic gene association and sequencing studies, which have been used to significantly expand the repertoire of confirmed MM susceptibility loci [29][30][31]. Despite recent advances in gene discovery, it is unknown how these MM loci influence the increased risk observed in Blacks because prior analyses have been conducted exclusively in populations of European origin. Further evidence for a germline susceptibility points to the Major Histocompatibility Complex (MHC) as a genomic region with sufficient allelic variation by race to account for the higher incidence of MM observed in Blacks [32]; however, findings from genome-wide association studies have not confirmed these relationships.
To our knowledge, this is the first comprehensive case-control study used to evaluate the contribution of family history of any hematologic malignancy on the presence of defining clinical features and laboratory characteristics in MM patients. We hypothesized that MM patients with a stronger familial predisposition are more likely to present with clinical features and laboratory characteristics consistent with increased disease burden. Although we note a significantly younger age of MM onset and a modest non-significant increase in the presence of lytic bone lesions in MM patients with a familial coaggregation of hematologic malignancies, we did not observe differences in laboratory characteristics that are typically associated with disease burden, including M-protein, abnormal FLC ratio, percent clonal bone marrow plasma cells, and β2-microglobulin, nor did we observe differences by the presence of cumulative organ damage or ISS staging. Lack of association with laboratory characteristics typically associated with increased disease burden may reflect inadequate statistical power to detect modest effects. However, we did observe a significant proportion of cases with κ light chain restriction.
One of the hallmarks of MM is the clonal proliferation of malignant plasma cells, which produce M-protein and cause lytic bone lesions. Because IgG is the most common isotype and κ is the most common light chain constituting the M-protein, we acknowledge the possibility that our finding could be due to chance. However, we did not observe an over-representation of the IgG isotype in MM patients with a familial coaggregation of hematologic malignancies, suggesting that light chain restriction may have a stronger familial component, either by germline susceptibility or shared environment. Findings from a familial case series do not support a germline susceptibility to M-protein [33]. However, κ restriction in MM patients with a familial coaggregation of hematologic malignancies may reflect the impact of environmental exposures on a common genetic background capable of driving an antigen-dependent process. In this capacity, antigen may play a role in selecting and expanding B cells, which eventually promote the monoclonal expansion of plasma cells with a predominant κ light chain restriction. Evidence for an antigen-dependent process in the etiology of MM comes from findings that prior exposure to select pathogens and autoimmune disease are associated with MM risk [34-38]. Additional epidemiologic and molecular studies are warranted to confirm these findings and to elucidate the role of an antigen-dependent process in MM etiology.
This investigation was specifically designed to evaluate risk factors associated with MM and differences in well-characterized Black and White MM cases and matched controls. However, interpretation of our findings is not without limitation. Despite efforts to minimize the effect of recall bias by adjusting for factors related to the accuracy of self-reported family history and to the disparity in MM incidence by race (i.e., age, sex, race, education) [25,39], residual bias resulting from potential differences in case-control reporting may lead to an overestimation of risk. However, the consistency of our findings with previously published reports from population-based registry studies suggests that any potential bias was unremarkable. Although we do not anticipate differences in family size by case-control status or between Blacks and Whites in our region, we recognize the possibility that family size could influence the effect of family history of cancer on the risk of MM, because larger families provide more persons at risk for disease. Finally, sample size and the inability to systematically obtain and validate family history data precluded our ability to evaluate familial coaggregation of leukemia subtypes by race and to delineate relationships of MM-defining clinical features, family history of MM in first-degree relatives, and differences by race or other meaningful strata (e.g., early age of onset). Additional large, well-characterized and racially diverse populations, made available through multi-center cancer consortia, will be required to further delineate these relationships.
In summary, we confirm a positive association of familial risk of MM, which is greater in Blacks, and describe for the first time, variation in the presence of defining clinical features in MM patients according to family history of hematologic malignancy. Although we cannot exclude the possibility that our observed associations and patterns of inheritance might be due to chance, the consistency of our results supports a combined germline and environmental susceptibility. Our findings underscore the importance of further characterizing germline and somatic variation in addition to the mechanisms by which previous environmental exposures modify the genetic predisposition to disease [40][41][42] in similarly well-characterized racially diverse populations. Such characterizations may facilitate improvements in our ability to predict clinical progression, response to treatment and underlying biologic mechanisms.
|
v3-fos-license
|
2024-06-27T15:23:54.861Z
|
2024-06-25T00:00:00.000
|
270758249
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://iieta.org/download/file/fid/134708",
"pdf_hash": "4487d934be41d41a0952babe50ca83a2e2fe523b",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44403",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"sha1": "f2bf511cbfdc810e2cd6a659529f09555cd8831e",
"year": 2024
}
|
pes2o/s2orc
|
PLC-SCADA Automation of Inlet Wastewater Treatment Processes: Design, Implementation, and Evaluation
ABSTRACT
INTRODUCTION
Within the inflow region of the treatment plant, three fundamental operations occur. The untreated wastewater must be conveyed to the pump stations without interruption; consequently, the rotational velocity of the delivery pumps must be monitored continuously so that the system can switch to a backup pump in case of a malfunction. Simultaneously, the wastewater flow rate and level are measured to avoid overloading the succeeding stages of the plant. The conveying systems activate and deactivate as needed, ensuring the most efficient use of energy [1,2]. Preservation of the environment is one of the main benefits of wastewater treatment. Untreated wastewater may contain chemicals, nutrients, pathogens, and other dangerous materials, and its discharge into bodies of water can cause pollution, endangering ecosystems and aquatic life. Water quality is monitored to safeguard the ambient conditions for the organisms in the succeeding biological stage of the treatment plant. In addition to temperature and salt content, the pH value is an important measure that can indicate either excessive acidity or alkaline conditions [3], so that suitable actions can be taken to protect the biological phase of the facility. Aside from the specific process areas of a wastewater treatment facility that require intrinsically safe solutions, it is also crucial to monitor the water filtration equipment and the entire plant to protect against corrosive, dusty, dirty, and explosive environments; measures such as lightning protection and surge suppression should also be implemented. Together, these provide a diverse array of protective measures that ensure the safe and efficient operation of the wastewater treatment facility [4].
Intelligent wastewater systems may meet the freshwater needs of an Internet of Things (IoT) smart community by utilizing IoT sensors to identify and prevent combined sewage and chemical overflows. Freshwater is a rare and highly valued natural resource that is not readily accessible on a daily basis [5,6]. The IoT approach deploys sensing devices at various locations within the water environment to facilitate aquatic care. These sensors acquire and relay data to surveillance systems; the data may encompass water quality, fluctuations in temperature, variations in pressure, detection of water leakage, and identification of chemical leakage [7]. An intelligent IoT water sensor can monitor and assess the quality, pressure, and temperature of water, and such a sensor solution can regulate the movement of fluids within the treatment facility and might be employed by a water utility provider [8,9]. A wastewater treatment facility removes solid particles before discharging the liquid waste into the surrounding ecology, and it can incorporate a PLC with human-machine interface (HMI) capabilities to construct a water level monitoring system [10]. The injection of an adequate quantity of chemicals is a key aspect of boosting the overall efficacy of a typical treatment facility, with the dose adjusted to the measured water level. In addition, the conventional plant provides reliability through high-speed protective mechanisms, which efficiently mitigate system malfunctions [11]. The main challenges in wastewater treatment are enhancing the treatment processes based on data analysis of the parameters of flowing water, incoming wastewater, and the treated effluent; establishing the communications network; and programming the system scenario.
CONTRIBUTION OF THE PAPER
The contribution of the paper is as follows. The research proposes improving the biochemical treatment system for incoming wastewater units by designing algorithms with a scenario that differs from previous studies in the type of system, the program, and the algorithm used. First, a PLC S7-300 performs the control (monitoring and treatment), achieving good performance at low processing cost. Second, this research proposes creating new units that collect data from remote terminal units (RTUs) to form a larger smart control node, which helps in gathering diverse data. Third, a PC station obtains data from the RTUs and PLCs over the communications network using the TCP/IP protocol, so that further important processing can be performed in the main data center to provide high accuracy in real time and to send data back to the subunits.
This system can also achieve real-time operation on the order of milliseconds, because the design is implemented with one of Siemens' best-performing programs for data acquisition and processing speed.
LITERATURE SURVEY
Włodarczak et al. [12] describe the design of a flow meter utilizing a PLC.The implementation of a flow meter regulated by a PLC enhances the potential for managing and automating the movement of fluids.Furthermore, the educational capacity of employing basic automation through a PLC was taken into account.A fluid flow rate measurement device was designed, manufactured, and tested using a PLC controller.
According to Wang and Zhang [13], the objective is to enhance the intelligence of sewage detection during the treatment process, thereby increasing the precision and immediacy of monitoring data. Their study analyzes the PLC platform developed by the Allen-Bradley Company, with the primary objective of using a basic monitoring apparatus to measure the pH and COD values of treated sewage. By optimizing the architecture of the distributed control system (DCS) structure and the communication network level in the sewage treatment system, using a PLC as the platform, more effective monitoring of sewage was obtained.
Zhang et al. [14] introduced an application case based on IoT sensor control and supporting maintenance businesses.They introduced a method of bringing the IoT into the membrane bioreactor (MBR) system of industrial wastewater treatment, which can continuously and accurately monitor the nodes of the system.Salem et al. [15] developed an industrial IoT cloud-based model for real-time wastewater monitoring and controlling, which monitors the power of hydrogen and temperature parameters from the wastewater inlet that will be treated in the wastewater treatment plant, thereby avoiding impermissible industrial wastewater that the plant cannot handle.Morales and Lawagon [16] suggested an IoT system for monitoring the pH of wastewater in real-time using web browsers.This system allows for the regulation of waste disposal through web browsers.The authors demonstrated that the pH level fluctuated with temperature, but the variations were rather minimal.
The proposed design in this study, in comparison to previous studies, utilizes advanced programs from Siemens, which necessitate significant expertise and meticulous handling to construct an accurate scenario for the processing units.Previous studies predominantly employed systems that connect a single PLC.This study, however, demonstrates a network of PLCs integrated with a central control unit, capable of achieving real-time operation in milliseconds.This is in contrast to the PLC systems used in other studies, whose specifications will be clarified in the designed work.
PROGRAMMABLE LOGIC CONTROLLER (PLC)
A PLC is an industrial computing device employed to regulate a particular process, machinery system, or, occasionally, an entire assembly line. A PLC is a type of computer used in industrial settings that does not require a mouse, keyboard, or monitor. Given that it is an industrial computer, the logic program that manages the process is developed on a main computer [17] and subsequently transferred to the PLC over a cable, where it is stored in the PLC's memory. The logic program is created using a programming language such as ladder logic, statement list, or function block diagram, and it is designed to be easily understandable for anyone with expertise in the electrical or hardware domains. A PLC comprises diverse inputs and outputs: it monitors the condition of switches and sensors through its input terminals and, based on this status, issues commands to output devices via its output terminals.
Figure 1. PLC hardware configuration [18]. Figure 1 shows the internal and external circuits of the PLC S7-300. The SIMATIC S7-300 universal controllers offer a compact installation and are designed with a modular structure [18,19]. The system can be expanded either centrally or in a decentralized manner using a diverse selection of modules, depending on the specific purpose. This allows for efficient storage of spare components while minimizing costs. SIMATIC is renowned for its consistent performance and exceptional quality, is effortlessly scalable as the range of features increases, and is enhanced by a multitude of integrated functionalities [17].
SUPERVISORY CONTROL AND DATA ACQUISITION (SCADA)
Computers, networked data flows, and graphical user interfaces (GUIs) make up a control system architecture that enables high-level machine and process supervision. It also includes sensors and additional hardware, such as PLCs, that connect to the machinery or process plant [20]. The operator interfaces that facilitate the monitoring and execution of process directives, such as adjustments to controller set points, are managed by the SCADA computer system.
Ancillary processes, including controller computations or real-time control logic, are handled by modules that are coupled to field sensors and actuators. The SCADA concept shown in Figure 2 was devised as a versatile method for remote access to various local control modules, regardless of their manufacturer, and enabled access using standard automation protocols [21,22]. Large SCADA systems have evolved to closely resemble DCS in terms of functionality, employing various methods to interact with the plant. They possess the ability to manage extensive operations that encompass numerous locations, functioning effectively across both significant and limited distances. Despite worries regarding the vulnerability of SCADA systems to cyberwarfare and cyberterrorism attacks, they are widely utilized in industrial control systems [23].
DESIGN AND IMPLEMENTATION OF THE PROPOSED SYSTEM
When constructing the wastewater treatment lifting unit, it is crucial to take into account many functions, including precise monitoring of the water level and of the biochemical sensor signals, which are relayed through the sensors to the treatment unit (PLC). The system is equipped with processing algorithms that enable precise and vital decision-making for running the equipment in the given scenario, and a multitude of biological signals are evaluated and relayed to the same processor. These algorithms are organized sequentially as a ladder program within the CPU of the PLC S7 314 PN/DP, as illustrated by the programming parameters in Figure 3. The signals acquired from the sensors are analog signals (4-20 mA or more) that vary with the measured quantities, while the system's behaviors (run, trip, fault) are perceived by analyzing digital signals. This real-time implementation transforms a biological system into an electrical one, so that the necessary chemical treatments can be performed for the effective functioning of the inlet-station wastewater-process unit. After the programming of the system is completed, as shown in Figure 4, the system sends the specified signals over the communication network using the TCP/IP protocol to the central processing and monitoring unit. This enables the operators to become familiar with the system and its functioning, and to observe the presented data and surveillance on the user interfaces. Furthermore, the system database architecture displayed in Figure 5 offers numerous benefits for creating and implementing the system.
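To make the data exchange concrete, the sketch below mimics the kind of tag table shown in Figure 5 (address, tag name, data type) and packs one scan of analog and digital readings into a JSON telegram that the central PC station could receive over TCP/IP. Every address, tag name, and value here is an illustrative placeholder rather than the project's actual database.

```python
import json
import time

# Illustrative tag map in the spirit of Figure 5 (addresses and names are hypothetical).
TAG_MAP = {
    "IW64": {"tag": "WetWell_Level_m", "type": "analog"},   # 4-20 mA level sensor
    "IW66": {"tag": "Inlet_Flow_m3h",  "type": "analog"},   # flow meter
    "IW68": {"tag": "Inlet_pH",        "type": "analog"},   # pH probe
    "I0.0": {"tag": "Pump1_Running",   "type": "digital"},
    "I0.1": {"tag": "Pump1_Fault",     "type": "digital"},
}

def build_telegram(raw_values: dict) -> bytes:
    """Pack one scan of PLC/RTU readings into a JSON telegram for the PC station."""
    payload = {
        "station": "RTU_Inlet_01",       # hypothetical station name
        "timestamp": time.time(),
        "values": {TAG_MAP[addr]["tag"]: val for addr, val in raw_values.items()},
    }
    return json.dumps(payload).encode("utf-8")

# Example scan (values are made up).
print(build_telegram({"IW64": 2.35, "IW66": 118.0, "IW68": 7.2, "I0.0": 1, "I0.1": 0}))
```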
The RTU subunits are designed to form a larger smart control node that helps in collecting various data. Through this node, information is exchanged with the main PC station over the designed communications network and the TCP/IP protocol shown in Figure 6, so that further processing can conveniently take place in the main data center, which provides high accuracy in real time and sends data back to the subunits.
Specifications of the Siemens PLC, as explained in Figure 3: • Modular Design: The S7-300 is built in a modular form, which makes it easy for customers to upgrade. To configure a controller for specific requirements, any type of CPU can be chosen and I/O modules can be added for digital (ON/OFF) or analog (0-10 V, 4-20 mA, or more) signals.
• Memory Capacity: The size of the memory depends on the chosen CPU but provides adequate space for storing the logic programs as well as the data processing required in wastewater treatment, as explained in the Appendix (Figure A1). • Communication Protocols: The S7-300 supports several communication protocols, including PROFIBUS, which enables the S7-300 to connect with other devices such as sensors, HMIs, and other PLCs for easy control from a central system. • Rugged Construction: These PLCs are made for durability and industrial applications and are also suitable for the temperature and relative humidity conditions that exist in wastewater treatment plants.
Capabilities relevant to wastewater treatment: • Logic Control: With the S7-300, logic programs of different levels of complexity can be executed for various tasks in the WWTP. It can manage pump operation according to water levels, control the amount of chemicals to be released, and trigger alarms when readings are abnormal, as shown in the Appendix (Figure A2), which presents the symbols of the basic ladder algorithm. • Data Acquisition and Processing: It can record information produced by the sensors that monitor treatment parameters such as flow, pH, and DO. It can then analyze these data and make control decisions against pre-programmed values stored in its memory.
• Scalability: The design of the wastewater treatment plant must be modular; more I/O modules can be added, or the RISC CPU upgraded, if more software processing capability is needed.
RESULT AND DISCUSSION
The results of the design and implementation of a PLC to automate the introduction of wastewater into the treatment plant are based on a fully integrated automation gateway program, including the ladder program for the system. This achieved several tasks, including controlling the inlet based on various parameters, with data acquired in real time from sensors that monitor water levels, flow rates, and other essential factors. As can be seen in Figures 7(a) and 7(b), the online system process and the operating status of the ladder program show the status of each input to the processing unit (open, closed, failure). In addition, the GUI of the SCADA system shown in Figure 8 provides a visual representation of the state of the system in terms of motor operation and signals from the biochemical sensors, so operators can monitor critical parameters such as water levels, flow rates, valve positions, and other chemical signals in real time. The GUI also provides an alarm function to alert operators to critical events such as overflow or pump failure. Table 1 represents the transition from the biochemical signal to the proposed system's electrical signal.
Measurement: The sensor signal (typically a 4-20 mA current output) is not expressed directly in measurement units (for instance, degrees Celsius). The analog input module inside the PLC measures the current as a voltage level, and this value is then scaled to the actual measuring range of the sensor; the relationship between the two must be directly proportional. For example, 4 mA may correspond to a temperature of 0 degrees Celsius, while 20 mA corresponds to 100 degrees Celsius.
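As a minimal sketch of that proportional scaling (using the 0-100 °C span mentioned above; real ranges come from the sensor datasheet):

```python
def scale_4_20ma(current_ma: float, eng_lo: float, eng_hi: float) -> float:
    """Linearly map a 4-20 mA signal onto an engineering range."""
    if not 4.0 <= current_ma <= 20.0:
        raise ValueError("signal outside 4-20 mA: possible broken wire or sensor fault")
    return eng_lo + (current_ma - 4.0) * (eng_hi - eng_lo) / 16.0

print(scale_4_20ma(4.0, 0.0, 100.0))    # 0.0   -> 0 degC at 4 mA
print(scale_4_20ma(12.0, 0.0, 100.0))   # 50.0  -> mid-range
print(scale_4_20ma(20.0, 0.0, 100.0))   # 100.0 -> full scale at 20 mA
```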
Filtering: sensor signals are vulnerable to electrical interference, so successive readings can differ; filtering (for example, averaging several samples) is therefore applied to smooth the signal.
Analog-to-digital conversion (ADC): The voltage derived from the process signal is applied to the analog input module, and the ADC of the PLC converts it into digital form. This conversion partitions the voltage range into a given number of bits, e.g., 12 or 16 bits, so that every numerical value corresponds to a discrete voltage step, starting from the lowest voltage level of the range.
Data processing and use: After this conversion, the PLC program can obtain the digital representation of the sensor reading. With these data, it can perform the necessary calculations, comparisons, and basic logical operations. Depending on the values received from the sensor and the logic employed, the PLC can then manage associated outputs, such as switching pumps or valves ON/OFF and activating alarms.
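A minimal end-to-end sketch of the chain just described (raw counts, conversion to current, simple filtering, and an on/off decision with an alarm bit); the bit depth, level range, and set-points are invented for illustration and would come from the actual module and process data in a real project.

```python
def counts_to_ma(counts: int, bits: int = 12) -> float:
    """Convert raw ADC counts to a 4-20 mA reading, assuming the full ADC span
    is mapped onto the 4-20 mA range (generic example, not a specific module)."""
    full_scale = 2 ** bits - 1
    return 4.0 + 16.0 * counts / full_scale

def moving_average(samples):
    """Simple filter against electrical noise: average the last few samples."""
    return sum(samples) / len(samples)

def pump_command(level_m, running, start_at=3.0, stop_at=1.0):
    """Start/stop logic with hysteresis so the pump does not chatter around one set-point."""
    if level_m >= start_at:
        return True
    if level_m <= stop_at:
        return False
    return running            # between the set-points: keep the current state

# One illustrative scan: raw counts -> mA -> filtered level -> command and alarm bit.
raw_counts = [2050, 2070, 2060, 2055]                     # hypothetical samples
ma = moving_average([counts_to_ma(c) for c in raw_counts])
level_m = (ma - 4.0) / 16.0 * 5.0                         # assume 4-20 mA spans a 0-5 m wet well
pump_on = pump_command(level_m, running=False)
high_level_alarm = level_m > 4.5                          # invented alarm threshold
print(round(ma, 2), round(level_m, 2), pump_on, high_level_alarm)
```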
Integration of the PLC with the SCADA system: Compatibility: When the communication protocols of the SCADA system do not match those of a PLC, data sharing is compromised. Solution: utilize a suitable or common communication protocol.
Data synchronization: There can be timing disparities resulting in non-concurrent information between the systems. Solution: set data update rates and perform error checking for reliable communication (a minimal sketch of such a check follows after these points).
Security: A single point of failure could leave both systems exposed to cyber threats. Solution: to minimize security risks, install firewalls, access controls, and secure communication protocols.
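One simple way to act on the error-checking and update-rate points above is to validate every incoming frame before accepting it: verify a checksum, a sequence number, and the age of the timestamp. The sketch below is generic Python and is not tied to any particular SCADA product or protocol.

```python
import json
import time
import zlib

MAX_AGE_S = 5.0     # reject data older than the agreed update rate allows

def make_frame(seq: int, values: dict) -> bytes:
    """Build a frame: JSON body followed by a CRC32 checksum."""
    body = json.dumps({"seq": seq, "ts": time.time(), "values": values}).encode()
    return body + b"|" + str(zlib.crc32(body)).encode()

def check_frame(frame: bytes, expected_seq: int):
    """Return (values, status); values is None if the frame fails any check."""
    body, _, crc = frame.rpartition(b"|")
    if zlib.crc32(body) != int(crc):
        return None, "checksum mismatch"
    msg = json.loads(body)
    if msg["seq"] != expected_seq:
        return None, "sequence gap: request retransmission"
    if time.time() - msg["ts"] > MAX_AGE_S:
        return None, "stale data"
    return msg["values"], "ok"

frame = make_frame(1, {"Inlet_pH": 7.2})
print(check_frame(frame, expected_seq=1))
```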
The improvement of process automation compared to manual operation is evident in the following points regarding process efficiency: • Reduced cycle time: automated operations save between 20% and 90% of the cycle time compared to manual operations, thus increasing production within a given duration. • Improved material handling: automation handles materials better and faster, resulting in roughly a 10%-60% decrease in material wastage. • Reduced human error: automating repetitive tasks at workstations can eliminate human errors from the process, reducing workplace accidents by over fifty percent.
RTUs play an important role in achieving an intelligent approach to data collection; their use, and the way they are implemented and programmed according to the designed scenario, can be summarized in several points (a conceptual sketch follows after this list): • Decentralized decision-making: unlike traditional centralized systems, RTUs can execute simple control locally. They receive data from sensors, interpret it against pre-programmed rules, and react accordingly, including actions such as opening or closing valves without the need for frequent communication with the central command. This facilitates fast response times and efficient control, especially for remote sites. • Adaptability: RTUs can be programmed with different control algorithms to adapt to changing situations or process conditions; this flexibility allows the control strategies to be adjusted so that they execute optimally. • Bi-directional communication: RTUs allow bidirectional transmission between field sensors/operators and the central control room, which enables real-time monitoring of system feedback and parameters from the field.
• Data collection and wide-ranging sensor integration: different types of sensors can be connected to the RTU to measure variables such as temperature, pressure, and flow. • Scalability: RTUs can be deployed in expansive regional networks, where they are responsible for collecting data from a large number of sensors spread across a vast territory; this helps in overseeing large volumes of data and drawing better conclusions at a central point. • Standardized protocols: most modern RTUs support standard protocols such as Modbus or IEC 61850, so the program can be integrated easily with various kinds of controllers and SCADA software, and the data can be collected and managed without much difficulty. The GUI of the SCADA system, shown in Figure 9, provides further biochemical indicators in real time. The GUI has also simplified the calculation of the water flow rate, making it easier for operators to be immediately informed of important events such as overflow.
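To make the decentralized behaviour concrete, the sketch below shows an RTU applying one pre-programmed rule locally and packaging a status report for the central station. The site name, thresholds, and JSON report format are illustrative assumptions, not the project's actual configuration.

```python
import json
import time

RULES = {  # hypothetical pre-programmed rules for one remote site
    "open_inlet_valve_above_m": 2.5,
    "close_inlet_valve_below_m": 0.8,
}

class InletRTU:
    def __init__(self):
        self.valve_open = False

    def local_control(self, level_m: float) -> None:
        """Decide locally, without waiting for the central station."""
        if level_m >= RULES["open_inlet_valve_above_m"]:
            self.valve_open = True
        elif level_m <= RULES["close_inlet_valve_below_m"]:
            self.valve_open = False

    def report(self, level_m: float) -> bytes:
        """Bi-directional link: status goes up; set-point changes would come back the same way."""
        return json.dumps({
            "site": "RTU_07", "ts": time.time(),
            "level_m": level_m, "valve_open": self.valve_open,
        }).encode()

rtu = InletRTU()
for level in (0.5, 2.7, 1.5, 0.6):     # made-up level trajectory
    rtu.local_control(level)
    print(rtu.report(level))
```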
CONCLUSIONS
The use of a PLC with SCADA to automate inlet wastewater treatment has many advantages over traditional manual methods. It increases efficiency and accuracy, since the PLC ensures precise control of flow rates, chemical doses, and treatment times, resulting in improved treatment. It also improves data acquisition and monitoring, as real-time data are acquired from sensors through continuous monitoring of the inlet characteristics, allowing proactive adjustments and optimal operation, while safety and reliability are enhanced by automating tasks and reducing human intervention. Using the PLC S7-300 for control (monitoring and processing) achieves high performance and lower access time, while the RTUs form a larger smart control node that helps in collecting various data and sending them to the computer terminal over the communication networks; further processing is thus performed in the main data center, which provides high accuracy in real time and sends the data back to the subunits.
APPENDIX
The characteristics and details of the PLC data storage used in the inlet parameters system are shown in Figure A1. Figure A2 shows the educational symbols of the basic ladder algorithm, which allow researchers to learn it.
Figure 3. The proposed ladder program algorithm and the parameter system.
Figure 5. Database system (Address, Tag, Data type, and OPC/HMI).
(a) Online communication between the PLC and PC station process; (b) ladder program results for the proposed system.
Figure 7. Online results of the system.
Future suggestions for this study are to use other types of Schneider PLC and to use machine learning programming to improve predictive maintenance in SCADA networks.
Figure A1. The details of PLC data storage.
Figure A2. Ladder logic diagram symbols for learning PLC programming.
|
v3-fos-license
|
2017-08-02T19:36:01.576Z
|
2013-09-26T00:00:00.000
|
15100444
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00395-013-0387-4.pdf",
"pdf_hash": "3bd06a02ba090bd393037025d233c8496ab4a82d",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44404",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "3bd06a02ba090bd393037025d233c8496ab4a82d",
"year": 2013
}
|
pes2o/s2orc
|
Role of genetic polymorphisms of ion channels in the pathophysiology of coronary microvascular dysfunction and ischemic heart disease
Conventionally, ischemic heart disease (IHD) is equated with large vessel coronary disease. However, recent evidence has suggested a role of compromised microvascular regulation in the etiology of IHD. Because regulation of coronary blood flow likely involves activity of specific ion channels, and key factors involved in endothelium-dependent dilation, we proposed that genetic anomalies of ion channels or specific endothelial regulators may underlie coronary microvascular disease. We aimed to evaluate the clinical impact of single-nucleotide polymorphisms in genes encoding for ion channels expressed in the coronary vasculature and the possible correlation with IHD resulting from microvascular dysfunction. 242 consecutive patients who were candidates for coronary angiography were enrolled. A prospective, observational, single-center study was conducted, analyzing genetic polymorphisms relative to (1) NOS3 encoding for endothelial nitric oxide synthase (eNOS); (2) ATP2A2 encoding for the Ca2+/H+-ATPase pump (SERCA); (3) SCN5A encoding for the voltage-dependent Na+ channel (Nav1.5); (4) KCNJ8 and KCNJ11 encoding for the Kir6.1 and Kir6.2 subunits of K-ATP channels, respectively; and (5) KCN5A encoding for the voltage-gated K+ channel (Kv1.5). No significant associations between clinical IHD manifestations and polymorphisms for SERCA, Kir6.1, and Kv1.5 were observed (p > 0.05), whereas specific polymorphisms detected in eNOS, as well as in Kir6.2 and Nav1.5 were found to be correlated with IHD and microvascular dysfunction. Interestingly, genetic polymorphisms for ion channels seem to have an important clinical impact influencing the susceptibility for microvascular dysfunction and IHD, independent of the presence of classic cardiovascular risk factors.
Introduction
Historically, in the interrogation of altered vascular function in patients with ischemic heart disease (IHD), scientists have focused their attention on the correlation between endothelial dysfunction and atherosclerosis [11,53,65,67]. However, the endothelium-independent dysfunction in coronary microcirculation and its possible correlations with atherosclerotic disease and myocardial ischemia have not been extensively investigated. In normal conditions, coronary blood flow regulation (CBFR) is mediated by several different systems, including endothelial, nervous, neurohumoral, myogenic, and metabolic mechanisms [2,10,14,15,63,64,69]. Moreover, physiologic CBFR depends also on several ion channels, such as ATP-sensitive potassium (KATP) channels, voltage-gated potassium (Kv) channels, voltage-gated sodium (Nav) channels, and others. Ion channels regulate the concentration of calcium in both coronary smooth muscle and endothelial cells, which in turn modulates the degree of contractile tone in vascular muscle and the amount of nitric oxide that is produced by the endothelium, respectively. In this context, ion channels play a primary role in the rapid response of both the endothelium and vascular smooth muscle cells of coronary arterioles to the perpetually fluctuating demands of the myocardium for blood flow [5,6,13,18,33,45,46,51,52,61,73,75].
Despite this knowledge, there still exists an important gap about the clinical relevance and causes of microvascular dysfunction in IHD. By altering the overall regulation of blood flow in the coronary system, microvascular dysfunction could alter the normal distribution of shear forces in large coronary arteries, thus promoting atherosclerosis. On the other hand, proximal coronary artery stenosis could contribute to microvascular dysfunction [29,60]. Because ion channels play such a critical role in microvascular endothelial and smooth muscle function, we hypothesized that alterations of coronary ion channels could be the primum movens in a chain of events leading to microvascular dysfunction and myocardial ischemia, independent of the presence of atherosclerosis. Therefore, the objective of our study was to evaluate the possible correlation between IHD and single-nucleotide polymorphisms (SNPs) for genes encoding several regulators involved in CBFR, including ion channels acting in vascular smooth muscle and/or endothelial cells of coronary arteries.
Methods
In this prospective, observational, single-center study, 242 consecutive patients admitted to our department with the indication to undergo coronary angiography were enrolled. All patients matched inclusion (age >18; suspected or documented diagnosis of acute coronary syndrome or stable angina with indication(s) for coronary angiography, in accordance with current guidelines [36,68], and the same ethno-geographic Caucasian origin) and exclusion criteria (previous allergic reaction to iodine contrast, renal failure, simultaneous genetic disease, cardiogenic shock, nonischemic cardiomyopathy). All patients signed an informed consent document prior to participation in the study, which included acknowledgement of the testing procedures to be performed (i.e., coronary angiography; intracoronary tests; genetic analysis, and processing of personal data). The study was approved by the Institution's Ethics Committee. All clinical and instrumental characteristics were collected in a dedicated database. On the basis of the coronary angiography and the intracoronary functional tests, the 242 patients were divided into three groups (see also Fig. 1).
• Group 1: 155 patients with anatomic coronary alteration (comprising patients with acute coronary syndrome and chronic stable angina). • Group 2: 46 patients with angiographically normal coronary arteries but microvascular dysfunction on intracoronary functional testing. • Group 3: 41 patients with anatomically and functionally normal coronary arteries as assessed by angiography and with normal functional tests (CFR ≥2.5 after intracoronary infusion of acetylcholine and adenosine) (Fig. 1).
Genetic analysis
In conformity with the study protocol, ethylenediaminetetraacetic acid (EDTA) whole blood samples were collected according to the international guidelines reported in the literature [48]. Samples were transferred to the Interinstitutional Multidisciplinary BioBank (BioBIM) of IRCCS San Raffaele Pisana (Rome) and stored at -80°C until DNA extraction. Bibliographic research by PubMed and web tools OMIM (http://www.ncbi.nlm.nih.gov/omim), Entrez SNP (http://www.ncbi.nlm.nih.gov/snp), and Ensembl (http:// www.ensembl.org/index.html) were used to select variants of genes involved in signaling pathways related to ion channels and/or previously reported to be associated with microvascular dysfunction and/or myocardial ischemia and/or diseases correlated to IHD, such as diabetes mellitus.
DNA was isolated from EDTA anticoagulated whole blood using the MagNA Pure LC instrument and the MagNA Pure LC total DNA isolation kit I (Roche Diagnostics, Mannheim, Germany) according to the manufacturer's instructions. Standard PCR was performed in a GeneAmp PCR System 9700 (Applied Biosystems, CA) using HotStarTaq Master Mix (HotStarTaq Master Mix Kit, QIAGEN Inc, CA). PCR conditions and primer sequences are listed in Table 1. In order to exclude preanalytical and analytical errors, all direct sequencing analyses were carried out on both strands using Big Dye Terminator v3.1 Cycle Sequencing kit (Applied Biosystems), run on an ABI 3130 Genetic Analyzer (Applied Biosystems), and repeated on PCR products obtained from new nucleic acid extractions. All data analyses were performed in a blind fashion.
Statistical analysis
This report, intended as a pilot study, is the first to compare the prevalence of SNPs in genes encoding several effectors (including ion channels) involved in CBFR between these groups of patients. For this reason, no definite sample size could be formally calculated to establish a power analysis. However, assuming a 15% prevalence of normal macrovascular and microvascular coronary findings in unselected patients undergoing coronary angiography, we estimated that a sample size of at least 150 patients could enable the computation of two-sided 95% confidence intervals for such prevalence estimates ranging between -5.0 and +5.0%.
The significance of the differences of observed alleles and genotypes between groups, as well as analysis of multiple inheritance models (co-dominant, dominant, recessive, over-dominant and log-additive) for SNPs, were also tested using a free web-based application (http://213.151.99.166/index.php?module=Snpstats) designed from a genetic epidemiology point of view to analyze association studies. The Akaike Information Criterion (AIC) was used to determine the best-fitting inheritance model for analyzed SNPs, with the model with the lowest AIC reflecting the best balance of goodness-of-fit and parsimony. Moreover, the allelic frequencies were estimated by gene counting, and the genotypes were scored. For each gene, the observed numbers of each genotype were compared with those expected for a population in Hardy-Weinberg (HW) equilibrium using a free web-based application (http://213.151.99.166/index.php?module=Snpstats) [59]. The linkage disequilibrium coefficient (D′) and haplotype analyses were assessed using the Haploview 4.1 program. Statistical analysis was performed using the SPSS software package for Windows v. 16.0 (SPSS Inc., Chicago, IL).
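For readers unfamiliar with the web tool, the underlying Hardy-Weinberg check is a one-degree-of-freedom chi-square comparison of observed and expected genotype counts. A minimal stand-alone sketch with invented counts (not the study's data) is:

```python
import math

def hardy_weinberg_test(n_AA: int, n_Aa: int, n_aa: int):
    """Chi-square (1 df) goodness-of-fit test against Hardy-Weinberg expectations."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)              # frequency of allele A
    q = 1.0 - p
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_AA, n_Aa, n_aa)
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p_value = math.erfc(math.sqrt(stat / 2.0))   # survival function of chi-square, 1 df
    return stat, p_value

# Invented genotype counts for one SNP in one study group:
print(hardy_weinberg_test(60, 70, 25))           # roughly (0.36, 0.55): no deviation
```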
All categorical variables are expressed as percentages, and all continuous variables as mean ± standard deviation. Differences between categorical variables were analyzed by Pearson's χ2 test. Given the presence of three groups, differences between continuous variables, including the number of SNPs tested, were calculated using one-way ANOVA; a post-hoc analysis with Bonferroni correction was made for multiple comparisons.
Univariate and multivariate logistic regression analyses using the enter method were performed to assess the independent impact of genetic polymorphisms on coronary artery disease and microvascular dysfunction, while adjusting for other confounding variables. The following parameters were entered into the model: age, male gender, type 2 diabetes mellitus (T2DM), systemic arterial hypertension, dyslipidemia, smoking status, and family history of myocardial infarction (MI). Only variables with a p value <0.10 after univariate analysis were entered into the multivariable model as covariates. A two-tailed p < 0.05 was considered statistically significant.
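The adjusted odds ratios reported in the Results come from models of this general form. The sketch below uses statsmodels on a synthetic data frame with hypothetical column names, purely to illustrate how covariate-adjusted ORs and confidence intervals are obtained; it is not the study's actual analysis, which was performed in SPSS.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic example data; in the study the outcome would be CAD (group 1 vs group 3)
# and the genotype column the SNP of interest (e.g., rs5215_GG carrier yes/no).
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "cad":          rng.integers(0, 2, n),
    "genotype":     rng.integers(0, 2, n),
    "age":          rng.normal(62, 10, n),
    "male":         rng.integers(0, 2, n),
    "t2dm":         rng.integers(0, 2, n),
    "hypertension": rng.integers(0, 2, n),
})

model = smf.logit("cad ~ genotype + age + male + t2dm + hypertension", data=df).fit(disp=0)
odds_ratios = np.exp(model.params)       # adjusted odds ratios
conf_int = np.exp(model.conf_int())      # 95% CIs on the OR scale
print(pd.concat([odds_ratios, conf_int], axis=1))
```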
Definition of cardiovascular risk factors
Patients were classified as having T2DM if they had fasting levels of glucose of >126 mg/dL in two separate measurements or if they were taking hypoglycemic drugs. Systemic arterial hypertension was defined as systolic blood pressure >140 mmHg and diastolic blood pressure >90 mmHg in two separate measurements or if the patient was currently taking antihypertensive drugs. Dyslipidemia was considered to be present if serum cholesterol levels were >220 mg/dL or if the patient was being treated with cholesterol-lowering drugs. Family history of MI was defined as a first-degree relative with MI before the age of 60 years.
Results
Sixty-two polymorphisms distributed among six genes coding for nitric oxide synthase, the SERCA pump, and ion channels were screened for sequence variations using PCR amplification and direct DNA sequencing analysis in the population of 155 patients with CAD (group 1), 46 patients with microvascular dysfunction (group 2), and 41 patients with normal coronary arteries and normal endotheliumdependent and endothelium-independent vasodilation (group 3). In Group 3, the genotype distribution of SNP rs5215 (Kir6.2/KCNJ11) moderately deviates from the HW equilibrium (p = 0.05). In Group 1 (CAD), the polymorphism rs6599230 of Nav1.5/SCN5A showed deviation from HW equilibrium (p = 0.017). The genotypic distribution of rs1799983 polymorphism for eNOS/NOS3 is inconsistent with the HW equilibrium in groups 1, 2, and 3 (p = 0.0001, p = 0.0012 and p = 0.0001, respectively). Haplotype analyses revealed that there is no linkage disequilibrium between polymorphisms of the analyzed genes.
There was no significant difference in the prevalence of T2DM (p = 0.185) or dyslipidemia (p = 0.271) between groups, as shown in Table 2. In regard to genetic characteristics, no significant differences between the three groups were observed (Table 2). Table 3 displays significant differences between normal subjects (group 3) and patients with either CAD (group 1) or microvascular dysfunction (group 2). When correcting for other covariates as risk factors, the rs5215_GG genotype of Kir6.2/KCNJ11 was found to be significantly associated with CAD after multivariate analysis (OR = 0.319, p = 0.047, 95% CI = 0.100-0.991), evidencing a "protective" role of this genotype, as shown in Table 4a. Similarly, a trend that supports this role of Kir6.2/KCNJ11 was also observed in microvascular dysfunction for rs5219_AA. In contrast, rs1799983_GT for eNOS/NOS3 was identified as an independent risk factor following multivariate analysis (Table 4b), which agrees with literature findings as described below.
Implications of the present work
This study describes the possible correlation of polymorphisms in genes encoding for CBFR effectors (i.e., ion channels, nitric oxide synthase, and SERCA) with the susceptibility for microcirculation dysfunction and IHD. Our main findings are as follows: 1. A marked HW disequilibrium in the genotypic distribution of the rs1799983 polymorphism for eNOS/NOS3 was observed in all three populations. Moreover, this SNP seems to be an independent risk factor for microvascular dysfunction, as evidenced by multivariate analysis; 2. The SNPs rs5215_GG, rs5218_CT, and rs5219_AA for Kir6.2/KCNJ11 could reduce susceptibility to IHD, since they were present more frequently in patients with anatomically and functionally normal coronary arteries; 3. In particular, with regard to rs5215 for Kir6.2/KCNJ11, we observed a moderate deviation from the HW equilibrium in the genotypic distribution in the control group. In addition, this genotype appears to be an independent protective factor in the development of IHD, as evidenced by multivariate analysis; 4. Furthermore, the trend observed for the SNP rs5219_AA of Kir6.2/KCNJ11 may suggest a role for this genotype in protecting against coronary microvascular dysfunction; 5. The rs1805124_GG genotype of Nav1.5/SCN5A seems to play a role against CAD; 6. No association seems to exist between the polymorphisms of SERCA/ATP2A2, Kir6.1/KCNJ8, and Kv1.5/KCNA5 and the presence of IHD; 7. All groups are comparable regarding the cardiovascular risk factors of T2DM and dyslipidemia, illustrating a potentially important implication of genetic polymorphisms in the susceptibility to IHD. It is important to underline that the control group (group 3) is a high-risk population, because of their cardiovascular risk factors (hypertension = 17%, T2DM = 34.1%, dyslipidemia = 41.4%), with an appropriate indication for coronary angiography, in accordance with current guidelines. Nevertheless, these patients were demonstrated to have both anatomically and functionally normal coronary arteries. Moreover, as shown in Tables 2 and 3, we observed that rs5215_GG, rs5218_CT and rs5219_AA for Kir6.2/KCNJ11 had a higher prevalence in this group, compared to patients with CAD and patients with microvascular dysfunction. Moreover, as shown in Table 4, the presence of the rs5215_GG polymorphism for the Kir6.2 subunit was inversely correlated with the prevalence of cardiovascular risk factors and CAD, whereas rs5219_AA of the Kir6.2 subunit trended towards an inverse correlation with coronary microvascular dysfunction. On the other hand, the SNP rs1799983_GT of eNOS was confirmed to be an independent risk factor for microvascular dysfunction. Our data suggest that the presence of certain genetic polymorphisms may represent a non-modifiable protective factor that could be used to identify individuals at relatively low risk for cardiovascular disease, regardless of the presence of T2DM and dyslipidemia.
Current clinical and research context
In normal coronary arteries, particularly the coronary microcirculation, there are several different mechanisms of CBFR, including endothelial, neural, myogenic, and metabolic mediators [2,8,10,12,14,15,37,55,63,64,69]. In particular, endothelium-dependent vasodilation acts mainly via eNOS-derived nitric oxide (NO) in response to acetylcholine and shear stress. NO increases intracellular cyclic guanosine monophosphate. It also causes vasodilation via activation of both KCa channels and KATP channels. Recent data suggested a pathophysiologically relevant role for the polymorphisms of eNOS/NOS3 in human coronary vasomotion [40][41][42][43]. Our data suggest that rs1799983_GT at exon 7 (Glu298Asp, GAG-GAT) of eNOS/NOS3 represents an independent risk factor for coronary microvascular dysfunction, which agrees with a recent meta-analysis reporting an association of this SNP with CAD in Asian populations [74]. In addition, this SNP has been associated with endothelial dysfunction, although the mechanisms are not well defined [30]. Consistently, a recent study performed on 60 Indian patients with documented history of CAD reported a significantly higher frequency of rs1799983 (p \ 0.05) compared to control subjects, indicating that variations in NOS3 gene may be useful clinical markers of endothelial dysfunction in CAD [54]. Interestingly, another association between rs1799983_GT and impaired collateral development has been observed in patients with a high-grade coronary stenosis or occlusion [19].
As is well known, the significance of the mechanisms of CBFR is partly determined by the location within the coronary vasculature. For instance, for vessels with a diameter of <200 µm, which comprise the coronary microcirculation, metabolic regulation of coronary blood flow is considered the most important mechanism [24,63]. Importantly, many of these mediators of metabolic regulation act through specific ion channels. In particular, in both coronary artery smooth muscle cells and endothelial cells, potassium channels determine the resting membrane potential (Em) and serve as targets of endogenous and therapeutic vasodilators [9,27]. Several types of K+ channels are expressed in the coronary tree. The KATP channels couple cell metabolic demand to conductance, via pore-forming (Kir6.1 and/or Kir6.2) subunits and regulatory [sulphonylurea-binding (SUR 1, 2A, or 2B)] subunits.
Our data do not support any significant difference regarding the Kir6.1 subunit of the KATP channel. On the other hand, this study suggests an important role of specific SNPs for the Kir6.2 subunit (Tables 2, 3)-i.e., rs5215, rs5219, and rs5218-in the susceptibility to IHD and microvascular dysfunction. These SNPs are among the most studied KATP channel polymorphisms, especially in the context of diabetes mellitus. In fact, in both Caucasian and Asian populations, these three SNPs as well as other genetic polymorphisms for the KCNJ11 gene have been associated with diabetes mellitus [34,35,44,50,57,58,70]. Nevertheless, the precise structure-function impacts of the various amino acid substitutions remain unclear. The rs5215 and rs5219 polymorphisms, also known as I337V and E23K, respectively, are highly linked with reported concordance rates between 72 and 100 % [22,23,56]. The high concordance between rs5219 and rs5215 suggests that these polymorphisms may have originated in a common ancestor, further indicating a possible evolutionary advantage to their maintenance in the general population [49]. In our study, multivariate analysis suggests both an independent protective role of the rs5215_GG against developing CAD and a trend for rs5219_AA to be associated with protection against coronary microvascular dysfunction (Table 4a, b). The variant rs5215_GG is a missense SNP located in the gene KCNJ11 at exon 1009 (ATC-GTC) and results in the substitution of isoleucine (I) residue with valine (V) [23]. Future studies are necessary to better understand the influence of this single amino acid variant on the function of the channel.
In humans, vasodilation of the coronary microvasculature in response to hypoxia and KATP channel opening are both impaired in diabetes mellitus [39]. It is also described that gain-of-function mutations of the KCNJ11 gene cause neonatal diabetes mellitus, and loss-of-function mutations lead to congenital hyperinsulinism [43]. Our study is not discordant with previous studies about the correlation of SNPs of the Kir6.2 subunit and diabetes mellitus. Rather, our findings show that these SNPs are correlated with anatomically and functionally normal coronary arteries, independent of the presence of either diabetes mellitus or dyslipidemia.
These data suggest the possibility that these particular SNPs may identify individuals with decreased risk for coronary microcirculatory dysfunction and IHD, regardless of the presence of T2DM and/or dyslipidemia. However, further studies are necessary to confirm these findings. In this context, to better investigate the implications of genetic variation in the KATP channel, future studies should include ion channel's functional modification due to the SNPs and analysis of SUR subunits.
More than 40 Kv channel subunits have been identified in the heart, and sections of human coronary smooth muscle cells demonstrate Kv1.5 immunoreactivity [16,17,27,38]. Through constant regulation of smooth muscle tone, Kv channels contribute to the control of coronary microvascular resistance [4,7]. Pharmacologic molecules that inhibit Kv1.5 channels, such as pergolide [25], 4-aminopyridine [32], and correolide [17], lead to coronary smooth muscle cell contraction and block the coupling between cardiac metabolic demand and coronary blood flow. However, no significant differences were identified between the study groups in terms of the particular polymorphisms for Kv1.5 that were analyzed in this study.
Expression of the voltage-dependent Na+ channel (Nav) has been demonstrated in coronary microvascular endothelial cells [3,66]. Our analysis reveals a possible implication of the polymorphism rs1805124_GG for the Nav1.5 channel with the presence of anatomically and functionally normal coronary arteries. This SNP leads to a homozygous 1673A-G transition, resulting in a His558-to-Arg (H558R) substitution. It is important to underline that our data are the first to correlate the polymorphism rs1805124_GG with IHD. Further research is necessary to confirm the observed implication.
Finally, we have analyzed the sarco/endoplasmic reticulum calcium transporting Ca2+-ATPase (SERCA), which is fundamental in the regulation of intracellular Ca2+ concentration [6]. SERCA is an intracellular pump that catalyzes the hydrolysis of ATP coupled with the translocation of calcium from the cytosol into the lumen of the sarcoplasmic reticulum. Although this pump plays a critical role in regulation of the contraction/relaxation cycle, our analysis did not reveal any apparent association between genetic variants of SERCA and the prevalence of microvascular dysfunction or IHD.
Conclusions
This pilot study is the first to compare the prevalence of SNPs in genes encoding coronary ion channels between patients with CAD or microvascular dysfunction and those with both anatomically and functionally normal coronary arteries. Taken together, these results suggest the possibility of associations between SNPs and IHD and microvascular dysfunction, although the precise manners by which specific genetic polymorphisms affect ion channel function and expression have to be clarified by further research involving larger cohorts.
Limitations and future perspectives
Notable limitations of this pilot study are as follows: 1. Due to the lack of pre-existing data, the power calculation was performed in advance on the basis of assumptions of allele frequencies and the population at risk. 2. The sample size for each group is small, mainly due to both the difficulty in enrolling patients with normal coronary arteries and normal microvascular function (group 3) and the elevated costs of the supplies such as Doppler flow wires. 3. There is a lack of ethnic diversity of our cohort. 4. Currently, there is an absence of supportive findings in another independent cohort or population. However, our pilot study included patients within a well-defined, specific population and was aimed to identify the presence of statistical associations between selected genetic polymorphisms and the prevalence of a specific disease. 5. There is a lack of functional characterization of the described genetic polymorphisms. 6. We have not identified any correlation between novel SNPs and IHD. Nevertheless, we completely analyzed exon 3 of both KCNJ8 and KCNJ11 genes (Kir6.1 and Kir6.2 subunit, respectively) as well as the whole coding region of KCN5A gene (Kv1.5 channel). Moreover, we examined previously described SNPs since there are no data in the literature regarding the possible association of the prevalences of those polymorphisms in the examined population.
More extensive studies are necessary to confirm our findings, possibly with a larger number of patients. Future investigations are also required to confirm the roles of ion channels in the pathogenesis of coronary microvascular dysfunction and IHD. These studies should involve analysis of both other subunits of the KATP channels (i.e., sulfonylurea receptor, SURx) and further coronary ion channels (e.g., calcium-dependent K channels), as well as in vitro evaluation of ion channel activity by patch clamp and analysis of channel expression in the human cardiac tissue. Moreover, to better address the significance of microvascular dysfunction in IHD, it could be interesting to analyze typical atherosclerosis susceptibility genes (e.g., PPAP2B, ICAM1, et al.).
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
|
v3-fos-license
|
2020-12-03T14:08:25.599Z
|
2020-12-03T00:00:00.000
|
227249800
|
{
"extfieldsofstudy": [
"Medicine",
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fchem.2020.598562/pdf",
"pdf_hash": "b8c2a2ca47eae41e8ca42cfb4b445dfc0d61eff7",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44406",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"sha1": "b8c2a2ca47eae41e8ca42cfb4b445dfc0d61eff7",
"year": 2020
}
|
pes2o/s2orc
|
Alkyl Chain Grafted-Reduced Graphene Oxide Membrane for Effective Separation of Water/Alcohol Miscible Mixtures
Separation of water/alcohol miscible mixtures via direct filtration driven only by gravity is a great challenge. Here, reduced graphene oxides grafted with different alkyl chains (alkyl-RGO) are synthesized and characterized. The hydrophobic alkyl chains considerably increase the oil-wettability of the membranes and prevent water permeation. The alkyl-RGO membranes obtained by vacuum filtration can separate immiscible water/oil mixtures. Importantly, water/alcohol miscible mixtures can also be separated solely under gravity: alcohols efficiently permeate the alkyl-RGO membrane while water is blocked. For an n-propanol/water (90:10 v/v) mixture, the C12H-RGO membrane reduces the water content of the permeate to about 0.04 vol% while maintaining a high n-propanol permeability of approx. 685 mL m−2 h−1. Molecular simulations indicate that selective absorption and differences in diffusion rate also contribute to water/alcohol separation. Gravity-driven filtration through alkyl-RGO membranes can therefore extend the application range of water/alcohol separation.
INTRODUCTION
Separation and purification of alcohol from water/alcohol mixtures are very important due to their wide application in the chemical industry or as oil-based fuels (Nguyen et al., 2020;Sapegin et al., 2020). Generally, alcohols and water mixtures often form azeotropes or are similar in their physicochemical properties, making their separation particularly difficult. To solve this problem, several strategies including liquid-liquid extraction, pervaporation, and adsorption have been used as potential separation techniques (Zhao et al., 2020). Among them, membrane-based pervaporation technology has been recognized as a good substitute for conventional energy-intensive separation processes and has been studied intensively (Zuo and Chung, 2013;Lively and Sholl, 2017;Ying et al., 2017;Zhang et al., 2020). For example, Jiang et al. reported the enhanced separation selectivity by an efficient mussel-inspired approach during the pervaporation process (Zhao et al., 2015). However, complicated processes are required in the pervaporation, such as vacuum equipment and heating and cooling devices. Previously, membrane-based separation for oil/water mixtures via direct filtration has been extensively reported by Jiang (Qu et al., 2019), Seeger (Chu et al., 2015), Tuteja (Tuteja et al., 2007;Kota et al., 2012), and others (Cao et al., 2019;Wang et al., 2019;Yang et al., 2019). Our group has also demonstrated a pH responsive coating and self-healing electrospun membrane for the separation of oil and water mixtures Fang et al., 2016). All of these achievements are based on designing materials with special wettability that are superhydrophobic (Wang B. et al., 2015;Wang H. et al., 2015;Ge et al., 2019) or under-water superolephobic (Dudchenko et al., 2015;Gao et al., 2016). However, alcohol and water are miscible and all of these membranes are not suitable for water/alcohol mixture separation. Therefore, direct separation of water/alcohol mixtures by membrane-based filtration is a great challenge.
Due to their unique two-dimensional (2D) structures and adjustable nanopores, graphene and its derivatives, including graphene oxide (GO) and reduced graphene oxide nanosheets, have found broad application in the field of gas or liquid separation (Kim et al., 2013;Li et al., 2013;Niu et al., 2014;Liu H. et al., 2015;Sun et al., 2016;Kidambi et al., 2017;Wei et al., 2017;Xu et al., 2017;Ling et al., 2020). For example, Geim et al. (Nair et al., 2012) reported that micrometer-thick GO membranes allowed unimpeded percolation of water but impeded other liquids, vapors and gases. Li and co-workers (Qiu et al., 2011) constructed a type of wet graphene membrane for the separation of nanoparticles and dyes. Gao et al. (Han et al., 2013) reported that base-refluxing reduced GO membranes could separate organic dye/water mixtures. In addition, the diverse interplays between ions and GO or reduced GO gave rise to different interaction strengths, which resulted in excellent selectivity of the membranes toward various ion species in solution when permeating through the membranes (Cohen-Tanugi and Grossman, 2012). Besides water purification and ion selectivity, RGO-based membranes for organic solvent filtration have also been reported (Huang et al., 2015, 2016). In brief, all of these achievements illustrated that the 2D nanochannels within GO or reduced GO membranes can provide pathways for gas, ion and organic solvent separation as well as water desalination, based on size-dependent molecular sieving and diverse interactions. Separation of miscible water/organic solutions by graphene-related materials has also been reported, but it is still based on the pervaporation process. For example, a ceramic hollow fiber coated with a GO membrane exhibited excellent water permeation for dimethyl carbonate/water mixtures through a pervaporation process (Huang et al., 2014). Gorgojo et al. (Alberto et al., 2016) reported organophilic mixed matrix membranes containing graphene-like fillers for the separation of 1-butanol and ethanol from aqueous solutions. Recently, Pan et al. (Zhang et al., 2018) reported that polydopamine-grafted GO composite membranes could separate a 70 wt% ethanol/H2O mixture and a 70 wt% isopropyl alcohol/H2O mixture by pervaporation. More recently, Wang et al. (Chen et al., 2020) reported robust angstrom-channel graphene membranes which can concentrate ethanol to 99.9 wt% from dilute solution with one to two orders of magnitude higher flux than conventional pervaporation membranes. However, separation of miscible water/alcohol mixtures via graphene-based membrane filtration driven only by gravity has seldom been reported.
Herein, we provide an alternative strategy that uses a facile filtration process to separate water/alcohol mixtures via an RGO-based membrane. Considering that hydrophilic GO can be converted to hydrophobic RGO by reduction, which may hinder water permeation, RGOs grafted with different alkyl chains (n-propyl, n-octyl and n-dodecyl; here referred to as C 3 H-RGO, C 8 H-RGO and C 12 H-RGO, respectively) were designed and fabricated through simultaneous reduction of GO and grafting by the corresponding alkylaniline (Figure 1A). The alkyl chains may also enlarge the interplanar distance of the reduced GO, and the corresponding alkyl-RGO membranes fabricated by vacuum filtration permeate alcohol while blocking water solely under gravity (Figures 1B,C). Molecular simulations indicate that the absorption and diffusion of water and alcohol differ when passing through the membrane.
Preparation of Alkylaniline Functionalized RGO (alkyl-RGO)
Graphene oxide (GO) was obtained from natural graphite by the Hummers method (see Supplementary Material). GO (0.6 g) and p-propylaniline (0.9 g) were mixed in 90 mL of ethanol in a three-neck flask. The mixture was refluxed at 100 °C for 12 h with stirring. Then, the resulting suspension was filtered through a polypropylene membrane (∼0.22 µm). The collected powders were rinsed with ethanol and filtered to remove physically adsorbed p-propylaniline. Finally, the product (referred to as C 3 H-RGO) was dried in an oven at 80 °C for 24 h. p-n-Octylaniline- and p-n-dodecylaniline-functionalized RGO (referred to as C 8 H-RGO and C 12 H-RGO, respectively) were obtained with the same procedure.
Fabrication of Alkyl-RGO Membranes With Different Thicknesses
Alkyl-RGO membranes were fabricated via vacuum filtration of dispersions of C 3 H-RGO, C 8 H-RGO, and C 12 H-RGO in dimethylformamide (DMF) (0.1 mg/mL, 0.15 mg/mL, and 0.2 mg/mL, respectively). Each membrane was fabricated by filtering different volumes of dispersion onto a cellulose ester membrane (50 mm diameter, 0.22 µm pore size). During the filtration process, the vacuum was kept at approximately −0.1 MPa. Finally, the membranes were dried in an oven at 50 °C for more than 12 h until their mass no longer changed (TGA was used to confirm that no DMF remained; data not shown). Thus, alkyl-RGO membranes with different thicknesses were obtained.
Separation of Immiscible Mixtures
Several organic solvents, including diesel, petroleum ether, toluene, n-hexane, bromoethane, and dichloromethane, were mixed with water at a volume ratio of 1:1 for separation. When the mixed solutions were poured into the separation device, the oil passed through while water was retained on the membranes, thus achieving water and oil separation. The flux was measured by pouring 30 mL of the various oil/water mixtures into the separation device. The time for the permeate to pass through the device was recorded and the separation flux was calculated with the following equation: Flux = V/(S × t), where V is the volume of the permeate, S is the valid area of the membrane and t is the testing time. After each separation, the membrane was simply washed with ethanol and dried. The recovery was determined by calculating the ratio of the volume of collected organic liquid to the volume of the organic feed.
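To make the flux calculation concrete, the short Python sketch below evaluates Flux = V/(S × t) in the units used throughout this work (mL m−2 h−1). The permeation time in the example is our own back-calculated assumption, chosen only to roughly reproduce the methanol flux reported later in Table 2; it is not a measured value from this study, and the helper name is ours.

import math

def flux_mL_m2_h(volume_mL, area_m2, time_h):
    # Permeate flux = V / (S * t), reported here in mL m^-2 h^-1
    return volume_mL / (area_m2 * time_h)

# Effective membrane area for a 4-cm-diameter device: S = pi * (0.02 m)^2
area = math.pi * 0.02 ** 2
# Hypothetical example: ~27 mL of alcohol permeate collected over ~20.7 h
print(round(flux_mL_m2_h(27.0, area, 20.7)))  # ~1038 mL m^-2 h^-1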
Separation of Miscible Water/Alcohol Mixtures
The same separation device was used for the separation of water-miscible liquids including methanol, ethanol, n-propanol, isopropanol and DMF: the mixed solutions (30 mL) with a 10% volume fraction of water were poured into the separation device without applied pressure, only under gravity. The effective membrane area is π(0.02 m)². At different times, the composition of the permeate was determined by gas chromatography using a flame ionization detector. The separation factor α was determined by the following equation: α = (Y_alcohol/Y_water)/(X_alcohol/X_water), where X and Y denote the volume fractions of alcohol and water on the feed and permeate sides, respectively.
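Written out in code, the separation factor defined above can be checked against the values quoted later in the Results (α ≈ 278 for 0.04 vol% water in the permeate from a 90:10 feed, and α ≈ 16 for ~1.5 vol% water from an 80:20 feed). The function name below is ours; the numbers are taken from the text.

def separation_factor(feed_alcohol, feed_water, perm_alcohol, perm_water):
    # alpha = (Y_alcohol / Y_water) / (X_alcohol / X_water), X = feed, Y = permeate
    return (perm_alcohol / perm_water) / (feed_alcohol / feed_water)

# n-propanol/water 90:10 feed, 0.04 vol% water in the permeate (C12H-RGO, 0.49 um)
print(round(separation_factor(90, 10, 99.96, 0.04)))  # ~278
# 80:20 feed, ~1.5 vol% water in the permeate
print(round(separation_factor(80, 20, 98.5, 1.5)))    # ~16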
Gas Chromatography Measurement
The permeate composition after separation of the water-miscible solutions was determined on a GC-9860 equipped with an SE-30 column (30 m length). The temperatures of the injection port and column were 200 and 160 °C, respectively. Using cyclohexanone as the internal standard, the water content was obtained from the ratio of the integrated peak areas of the analyte (e.g., ethanol) and the internal standard. The detailed calculation method is described in the Supplementary Material.
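Since the calibration details are given only in the Supplementary Material, the sketch below merely illustrates the generic internal-standard relationship between peak-area ratio and concentration. The response factor and peak areas are invented placeholders, not values from this work.

def conc_from_internal_standard(area_analyte, area_istd, conc_istd, response_factor):
    # Internal-standard quantification: C_analyte = (A_analyte / A_istd) * C_istd / RF,
    # where RF is determined from a calibration mixture of known composition
    return (area_analyte / area_istd) * conc_istd / response_factor

# Placeholder numbers: cyclohexanone internal standard at 1.0 vol%, assumed RF = 1.2
water_vol_percent = conc_from_internal_standard(area_analyte=480, area_istd=12000,
                                                conc_istd=1.0, response_factor=1.2)
print(round(water_vol_percent, 3))  # ~0.033 vol% water in the permeate (illustrative)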
Molecular Dynamic Simulation
The molecular dynamic simulation of all systems can be performed after charges and potentials are assigned to each atom. The following assumptions were made during the simulation: (1) The simple point charge (SPC) model is adopted for water molecules of the solutions.
(2) The long-range electrostatic interactions are accounted for using the Ewald method.
(3) Periodic boundary conditions are adopted in three dimensional directions of the system. (4) The simulation process is isothermal. The total energy is written as a combination of valence terms including diagonal and off-diagonal cross-coupling terms and non-bond interaction terms, which include the Coulombic and Lennard-Jones functions for electrostatic and van der Waals interactions, where E VDW and E elec are given by Equation (4): The parameters for each like-site interaction are given by the COMPASS force field (Sun, 1998;Sun et al., 1998). All three alkyl-RGO structures are constrained during the simulation. The energies of the initial configurations are minimized with the Smart Minimizer method. After the minimization, all simulations are equilibrated at constant temperature (273 K) and volume (NVT) for 5 ns. Atomic coordinates are saved every 20 ps. The analysis is performed by averaging over the final 1 ns of each trajectory. The absorbed energy of ethanol is calculated as follows: where N alcohol is the number of alcohol molecules, E solution is the interaction energy of the solution, E total is the potential energy of the energy-minimized system in equilibrium, and E SAMs is the potential energy of the single alkyl-RGO monolayers. The mean square displacements (MSD) of water and alcohol molecules in three systems are calculated from Equation (6): where N is the number of target molecules and r i (t) is the position of molecule i at time t. This figure displays the MSD of the water and alcohol molecules in the final 1,000 ps of the final equilibrium trajectory. Diffusion coefficients (D) can then be obtained from the slope of the mean square displacement vs. time curve, using the well-known Einstein relation, where d is the dimensionality of the system, and r i (t) and r i (0) are the center-of-mass coordinates of the ith molecules at times t and t = 0, respectively.
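The MSD and Einstein-relation analysis above can be reproduced from the saved trajectory frames with a few lines of post-processing. The Python sketch below is our own illustration (it is not the Materials Studio/COMPASS workflow used here) and assumes the center-of-mass coordinates are available as a NumPy array; the toy trajectory is random-walk data used only to show the calculation. Note that 1 Å² ps−1 corresponds to 1 × 10−4 cm² s−1.

import numpy as np

def mean_square_displacement(positions):
    # positions: array of shape (n_frames, n_molecules, 3), center-of-mass coordinates.
    # Returns MSD(t) averaged over molecules, referenced to the first frame.
    disp = positions - positions[0]                 # displacement from t = 0
    return (disp ** 2).sum(axis=2).mean(axis=1)     # average over molecules

def diffusion_coefficient(msd, dt_ps, dim=3):
    # Einstein relation: D = slope(MSD vs t) / (2 * d); dt_ps is the frame spacing.
    t = np.arange(len(msd)) * dt_ps
    slope = np.polyfit(t, msd, 1)[0]                # linear fit over the analysed window
    return slope / (2 * dim)                        # in Angstrom^2 per ps

# Toy usage on random-walk data (units are illustrative, not this paper's trajectories)
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(scale=0.1, size=(51, 70, 3)), axis=0)
msd = mean_square_displacement(traj)
print(diffusion_coefficient(msd, dt_ps=20.0))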
Characterization
FTIR spectra were obtained on an IR Prestige-21 FTIR spectrometer (Shimadzu, Japan). X-ray diffraction (XRD) analyses were carried out on a D-8 ADVANCE X-ray diffractometer (Bruker AXS, Germany); alkyl-RGO and GO powders dried from aqueous solution were used for the XRD measurements. X-ray photoelectron spectroscopy (XPS) measurements were conducted on an ESCALAB 250 (Thermo Fisher Scientific, USA) using a monochromatic Al-Kα X-ray source at 100 W. Scanning electron microscopy (SEM) images were obtained with a QUANTA 200 (FEI, USA). The thickness of the alkyl-RGO membranes was obtained by scanning at least three different samples and averaging the values. Raman spectra were obtained using a LabRAM HR800 Raman spectrometer (HORIBA JY, France). Equilibrium contact angles (CA) were measured with a DSA 100 (KRÜSS, Germany) contact angle meter at ambient temperature. Atomic force microscopy (AFM) images were acquired on a Multimode 8 Nanoscope V system (Bruker, USA) in peak force tapping mode. The different alkyl-RGO membranes were measured by AFM and the surface roughness was obtained by scanning at least three different areas.
Characterization of Alkyl-RGOs
The different alkyl-RGOs were synthesized according to a previous report (Figure 1A). After filtration, the alkyl-RGO layer adheres tightly to the cellulose ester membrane; we tried to peel the layer off the cellulose ester membrane but failed. The composite membranes are flexible, as shown in Figure 1B. In the FTIR spectra, two new peaks centered at 2,921 cm −1 and 2,842 cm −1 appear for the alkyl chain grafted-RGO compared to GO, which are assigned to the stretching vibrations of CH 2 . In addition, the NH stretching peak at 1,564 cm −1 indicates the formation of the C-NH-C bond (Supplementary Figure 1). All of these changes indicate the successful grafting of alkyl chains on the GO sheets. The reduction of GO by alkylaniline is confirmed by XRD measurements (Figure 2a). The alkyl-RGOs exhibit a weak and broad diffraction peak centered at 23.32°, 21.32°, and 21.15°, corresponding to interplanar distances of 3.81, 4.16, and 4.20 Å for C 3 H-RGO, C 8 H-RGO, and C 12 H-RGO, respectively. These interplanar distances of alkyl-RGO are slightly higher than that of graphite (3.35 Å) and much lower than that of the GO precursor (7.78 Å), indicating the successful reduction of GO into RGO sheets. Considering the large number of oxygen-containing groups on GO, we also fabricated RGO by hydrazine reduction of GO. As expected, the interplanar distance of alkyl-RGO is slightly larger than that of RGO obtained by hydrazine reduction, indicating the successful reduction of GO and grafting of alkyl chains on the GO sheets. In addition, the interplanar distance of alkyl-RGO increases as the alkyl chain length increases, which may affect the alcohol permeation. The Raman spectra show two bands around 1,349 cm −1 and 1,583 cm −1 , assigned to the D-band and G-band of carbon, respectively (Figure 2b). The intensity ratios of the D-band and G-band (I D /I G ) are calculated to be 1.22, 1.09, and 1.07 for C 3 H-RGO, C 8 H-RGO, and C 12 H-RGO, respectively, which are higher than that of GO (0.81), confirming that the GO sheets are reduced and that some of the conjugated structure of GO is converted to single bonds (Liu J. et al., 2015). The surface chemistry of C 12 H-RGO was further characterized by X-ray photoelectron spectroscopy (Supplementary Figure 2). A new element (N) appears in the C 12 H-RGO spectra. The molar ratio of C/O is about 4.2 for C 12 H-RGO, which is much higher than that of GO (2.1) and slightly lower than that of RGO obtained by hydrazine reduction (5.2) (Supplementary Figure 3). The C 1s spectra of GO and C 12 H-RGO also reflect the reduction process. For GO, four different peaks centered at 284.6, 286.6, 287.2, and 288.5 eV correspond to C-C in the unoxidized graphite carbon skeleton, C-OH in the hydroxyl group, C-O-C in the epoxide group, and O-C=O in the carboxyl group, respectively (Yang et al., 2020) (Supplementary Figure 2b). After reduction by dodecylaniline, the peaks corresponding to the oxygen-containing groups of C 12 H-RGO are significantly weakened, indicating that a large number of oxygen-containing groups have been removed (Supplementary Figure 2c). The elemental mapping confirms that the alkyl chains are uniformly distributed on the RGO sheet (Figures 2c-e).
The alkyl-RGO membranes were first tested for water/oil separation. The key point of oil/water separation is the special surface wettability of the membranes. Therefore, the surface wettability and morphology of the membranes were first characterized. The alkyl-RGO membranes were obtained by facile vacuum filtration of alkyl-RGO dispersions in DMF. Three alkyl-RGO membranes (C 3 H-RGO ∼ 0.93 µm, C 8 H-RGO ∼ 0.58 µm, and C 12 H-RGO ∼ 0.49 µm) were examined by SEM and AFM. As characterized by SEM (Figures 3a-c), the surfaces of C 3 H-RGO and C 8 H-RGO are smoother and no obvious corrugations are observed, while the surface of C 12 H-RGO reveals significant folding of RGO sheets. The root mean square roughness and the average roughness obtained from AFM height images both increase markedly as the length of the alkyl chains increases from C3 to C8 and C12 (Figures 3d-g). This is consistent with the water contact angle measurements, which show that the contact angle increases from 100 ± 3° for the C 3 H-RGO membrane to 150 ± 3° for the C 12 H-RGO membrane. In addition, the water droplet on the alkyl-RGO membrane destabilizes only slightly over time: after 30 min, the water contact angle still remains in the hydrophobic range, at 137 ± 3° for the C 12 H-RGO membranes (Figure 4A). When oil (e.g., n-hexane) is dropped on the membrane surface, it immediately penetrates into the films, which indicates high oleophilicity (Figure 4B). It is noted that the longer alkyl chains on the RGO sheet endow the membranes with more roughness and a larger contact angle, which facilitate both water/immiscible-liquid separation and water/alcohol separation (see below).
Separation of Immiscible Oil/Water Mixtures
For some traditional membranes, the surface wettability for water and oil is not distinctly different. When absorbing oil, the membrane takes in water simultaneously and vice versa, which results in a decrease in the separation efficiency (Kota et al., 2012). In our case, the superhydrophobic surface is favorable for water/oil separation. The thickness of the membrane can be easily controlled by varying the volume and concentration of the dispersion. Thus, alkyl chain grafted-RGO membranes with different thicknesses have been fabricated (Supplementary Figure 4). We selected C 3 H-RGO, C 8 H-RGO, and C 12 H-RGO membrane with thicknesses of 0.93 µm, 0.58 µm, and 0.49 µm, respectively, to measure the oil/water separation. Several organic solvents including diesel, petroleum ether, toluene, n-hexane, bromoethane, and dichloromethane were used to mix with water at a volume ratio of 1:1 for separation. For all of the alkyl-RGO membranes, quite high fluxes are obtained as shown in Figure 5A. For example, the obtained fluxes of C 12 H-RGO membranes were 3,184 ± 36, 2,388 ± 30, 1,719 ± 26, 1,671 ± 25, 1,624 ± 20, and 1,643 ± 28 Lm −2 h −1 mbar −1 for dichloromethane, bromoethane, n-hexane, toluene, petroleum ether, and diesel, respectively. The recovery was determined by calculating the ratio of the volume of collected organic liquids to the volume of the organic feed. For all alkyl chain-grafted RGO membranes, the recovery is higher than 98%. After 5 recycles, the recovery is still higher than 97% (Figure 5B).
In our case, the recovery may be related to the density of the separated oil. The densities of dichloromethane and bromoethane are 1.33 and 1.47 g cm−3, respectively, while the densities of the other studied oils are lower than 1 g cm−3 (Supplementary Table 1). Thus, the noticeably higher recovery for dichloromethane and bromoethane should be due to their higher densities (Xi et al., 2019).
Separation of Water Miscible Mixtures
More importantly, the alkyl chain-grafted RGO membranes can also separate water/alcohol miscible solution. After miscible solutions containing 10 vol% water and 90 vol% different liquids such as methanol, ethanol, n-propanol, isopropanol, and DMF were filtrated through the membranes, the compositions of the permeate were determined by gas chromatography (see Experiment section and Supplementary Material). It is found that all alkyl-grafted RGO membranes can efficiently separate a 10 vol% water/alcohol mixture by direct filtration, allowing alcohol but inhibiting water to pass through, to produce a permeate of more than 97 vol% alcohol ( Table 1).
For all alkyl-RGO membranes with different thicknesses, after the separation of methanol/water, ethanol/water and n-propanol/water mixtures, the water contents in the permeate are in the order of methanol > ethanol > n-propanol. The increased separation efficiency is attributed to the alkyl chain interaction between the alcohols and the modified RGO membranes, where longer alkyl chains of the alcohol (e.g., n-propanol) lead to stronger interaction with the alkyl-RGO membranes. For example, by using the C 3 H-RGO membrane with a thickness of 0.93 µm, the water content in the permeate decreases from 1.42 ± 0.04 vol% for the methanol/water mixture to 1.20 ± 0.03 vol% for the n-propanol/water mixture. In addition, for all the water/alcohol mixtures, the separation efficiency also increases as the alkyl-RGO membrane changes from C 3 H-RGO to C 12 H-RGO, which is likewise attributed to increased alkyl chain interactions. For example, the water content can be decreased to 0.04 ± 0.01 vol% after separation via a C 12 H-RGO membrane with a thickness of 0.49 µm. The separation factor of the C 12 H-RGO membrane (0.49 µm) for separating n-propanol/water is about 278, which is higher than those of membranes reported for water/alcohol separation via the pervaporation process (Tang et al., 2014;Igi et al., 2015). However, for mixtures with higher water concentrations, the separation efficiency decreases. For example, after separation of the n-propanol/water mixture with a 20% volume fraction of water, the water content in the permeate is about 1.5 vol% and the separation factor is calculated to be about 16.
In addition, to confirm the effectiveness of water/alcohol separation, we measured the water CA and alcohol CA of a pure cellulose ester membrane and a GO-deposited cellulose ester membrane. Both kinds of membranes are hydrophilic and oleophilic. When a water droplet was placed on the GO membrane, it penetrated into the membrane within 40 s (Supplementary Figure 5). When water/alcohol mixtures were poured onto the GO membranes, the water and alcohol permeated the membrane together and could not be separated (Supplementary Figure 6). Geim et al. (Nair et al., 2012) reported that thick RGO membranes (≈0.5-1.0 µm) were impermeable to all molecules including water. Remarkably, in our case, modification and reduction of GO offer a much more straightforward approach to control the passage of alcohol and water. To further confirm the role of the alkyl chains on the RGO sheets, we prepared an RGO membrane by reduction with hydrazine hydrate. The water contact angle of this RGO membrane decreases with time, and an oil droplet (e.g., hexane) immediately penetrates into the membrane after touching the RGO surface, indicating the hydrophilic and oleophilic properties of the RGO membrane (Supplementary Figure 7). After filtration of the water/alcohol miscible solutions, the film showed no separation ability for any miscible solution, as confirmed by GC.
It should be mentioned that not only the alkyl chain length of the alcohol but also the structure of the alcohol play a role in affecting the separation efficiency. As shown in Table 1, the water content after separation of i-propanol/water for all of the alkyl-RGO membranes is slightly higher than that of the permeate after separation of the n-propanol/water mixture. In addition, the separation efficiency of DMF/water miscible solution was also checked ( Table 1). The water content in the permeate is decreased as the alkyl chain length increases and reaches 1.72 ± 0.05 vol% after separation by the C 12 H-RGO membrane.
It is noted that the whole separation process is only driven by gravity without any other external force. All miscible solutions could be successfully separated in one step. Although the diameter of alkyl-RGO membranes is 4 cm in this study due to the limitation of filtration setup, they could be larger according to the actual situation. The fluxes of the permeate through the RGO-based membrane were determined by measuring the time for almost completely permeating a certain volume of the solution. For C 8 H-RGO and C 12 H-RGO membranes, the fluxes decrease with the increase in the thickness of the membranes ( Table 2). Permeation theory predicts that the filtration rate is directly proportional to the square of the effective pore size of the membrane and inversely proportional to the thickness of the membrane (Peng et al., 2009). In our case, the observed data are consistent with the separation theory in that a thicker membrane will sacrifice its effective pore size, resulting in a slower filtration rate. However, the fluxes do not always decrease as the thickness of the alkyl-RGO membrane increases. For example, for the C 12 H-RGO membrane during the separation of all water miscible solutions, the fluxes of the permeates decrease first and then show almost no change as the thickness of the membrane increases (Supplementary Figure 8).
It should be noted that not only the thickness of the membrane but also the viscosity of the organic liquid affect the fluxes of the permeate. Usually, the flux is in inverse proportion to liquid viscosity (Han et al., 2013). As shown in Table 2, for C 12 H-RGO membrane with a fixed thickness of 0.49 µm, the fluxes of methanol and ethanol are 1,035 ± 38 and 1,138 ± 31 mL m −2 h −1 , which are much higher than that of propanol and isopropanol. This may be due to the higher viscosity of propanol (2.26 mPa s) and isopropanol (2.43 mPa s) than that of methanol (0.55 mPa s) and ethanol (1.07 mPa s). For DMF with a viscosity of 0.92 mPa s, a flux of 923 mL m −2 h −1 is obtained. After reduction, the interplanar distance of all alkyl-RGO is larger than the kinetic diameter of water (0.265 nm), namely, water can pass through the spacing (Huang et al., 2014). However, water is blocked for all alkyl-RGO membranes, which suggests that molecular sieving is not the major mechanism for water/alcohol separation. The successful separation of water miscible solution by the filtration process relies on the superhydrophobic interaction between the alkyl chain on the RGO sheet and alcohols and the selective absorption, as well as the diffusion rate difference of the alcohols through the membrane.
Molecular Dynamic Simulations
To further probe the difference in absorption and diffusion of alcohol and water through the membrane, a series of molecular dynamics simulations were performed. Here, we selected ethanol as a model. As shown in Figure 6A, three different single-layer systems (C 3 H-RGO, C 8 H-RGO, and C 12 H-RGO) were each placed in the middle of an ethanol aqueous solution of the same concentration. The ethanol aqueous solution for each system consists of 200 ethanol molecules and 70 water molecules, a concentration closest to that of our experiments. All of the systems were simulated in a periodic simulation box with dimensions of x = 9.8 Å, y = 12.3 Å, and z = 102 Å. Detailed simulation methods are described in the Experimental section. From an energetic perspective, the absorbed energy of ethanol on a single layer of alkyl-RGO is calculated to be −24.98 kJ/mol for C 3 H-RGO, −25.45 kJ/mol for C 8 H-RGO, and −25.89 kJ/mol for C 12 H-RGO (Figure 6B, Supplementary Table 2). This means that ethanol molecules can be easily absorbed on the alkyl-RGO surface in the order of C 12 H-RGO > C 8 H-RGO > C 3 H-RGO. Many effective analytical methods can estimate the diffusion behavior of small molecules, like alcohols (Yang and Lue, 2013), in membrane materials. Considering the characteristics of our simulation system, we calculated the diffusion coefficients to analyze the diffusion behavior of water and ethanol molecules. The diffusion coefficient of water on the surface of alkyl-RGO was also analyzed to quantify the affinity between alkyl-RGO and water molecules, and is calculated to be 0.86 × 10 −6 cm 2 s −1 for C 3 H-RGO, 0.36 × 10 −6 cm 2 s −1 for C 8 H-RGO, and 0.30 × 10 −6 cm 2 s −1 for C 12 H-RGO (Figure 6C, Supplementary Table 3). The much lower diffusion coefficient of water molecules indicates that alcohol passes through the C 12 H-RGO membranes more easily. In addition, to further support the successful separation of alcohol and water, a thermodynamic simulation was also performed, as shown in the Supplementary Material. After separation, the sum of the energies of the separated alcohol and water is clearly lower than that of the water/alcohol mixture.
CONCLUSION
Different alkyl chain-grafted RGOs were obtained by simultaneous grafting and reduction of GO. The alkyl-RGO membranes obtained by vacuum filtration can be used to separate water/oil immiscible mixtures. More importantly, the membranes can be used to separate water/alcohol miscible solutions. The alkyl chains on the RGO rendered the alkyl-RGO membrane more hydrophobic and facilitated the alcohol passing through the membrane while blocking water penetration. Molecular simulation indicated that the selective absorption ability and diffusion rate affected the water/alcohol separation. Although the mechanism of filtration needs to be deeply investigated, the separation of water/alcohol miscible mixtures driven solely by gravity is undoubtedly an alternative compared to the pervaporation technology for water/alcohol separation.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding authors.
AUTHOR CONTRIBUTIONS
LL and ZX designed the study and wrote the manuscript. HZ and JY performed most of the experimental work. XJ performed the characterization. TL and YZ analyzed the data. All authors named on the manuscript have made a significant contribution to the writing, concept, design, execution, or interpretation of the work. All authors agree with the author list as it appears on the manuscript.
|
v3-fos-license
|
2023-03-30T06:16:30.407Z
|
2023-03-28T00:00:00.000
|
257805463
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.1021/acs.inorgchem.3c00469",
"pdf_hash": "1820df0c669797ab8df0705836f7f086ce2c7bb8",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44407",
"s2fieldsofstudy": [
"Chemistry"
],
"sha1": "4cf0f7b441168eb11832bc68003f612dd934b91d",
"year": 2023
}
|
pes2o/s2orc
|
Role of Multiple Vanadium Centers on Redox Buffering and Rates of Polyvanadomolybdate-Cu(II)-Catalyzed Aerobic Oxidations
A recent report established that the tetrabutylammonium (TBA) salt of hexavanadopolymolybdate TBA4H5[PMo6V6O40] (PV6Mo6) serves as the redox buffer with Cu(II) as a co-catalyst for aerobic deodorization of thiols in acetonitrile. Here, we document the profound impact of vanadium atom number (x = 0–4 and 6) in TBA salts of PVxMo12–xO40(3+x)– (PVMo) on this multicomponent catalytic system. The PVMo cyclic voltammetric peaks from 0 to −2000 mV vs Fc/Fc+ under catalytic conditions (acetonitrile, ambient T) are assigned and clarify that the redox buffering capability of the PVMo/Cu catalytic system derives from the number of steps, the number of electrons transferred each step, and the potential ranges of each step. All PVMo are reduced by varying numbers of electrons, from 1 to 6, in different reaction conditions. Significantly, PVMo with x ≤ 3 not only has much lower activity than when x > 3 (for example, the turnover frequencies (TOF) of PV3Mo9 and PV4Mo8 are 8.9 and 48 s–1, respectively) but also, unlike the latter, cannot maintain steady reduction states when the Mo atoms in these polyoxometalate (POMs) are also reduced. Stopped-flow kinetics measurements reveal that Mo atoms in Keggin PVMo exhibit much slower electron transfer rates than V atoms. There are two kinetic arguments: (a) In acetonitrile, the first formal potential of PMo12 is more positive than that of PVMo11 (−236 and −405 mV vs Fc/Fc+); however, the initial reduction rates are 1.06 × 10−4 s−1 and 0.036 s–1 for PMo12 and PVMo11, respectively. (b) In aqueous sulfate buffer (pH = 2), a two-step kinetics is observed for PVMo11 and PV2Mo10, where the first and second steps are assigned to reduction of the V and Mo centers, respectively. Since fast and reversible electron transfers are key for the redox buffering behavior, the slower electron transfer kinetics of Mo preclude these centers functioning in redox buffering that maintains the solution potential. We conclude that PVMo with more vanadium atoms allows the POM to undergo more and faster redox changes, which enables the POM to function as a redox buffer dictating far higher catalytic activity.
Electrolysis was carried out until the current dropped to <10% of the initial value; aliquots were then withdrawn, and the UV-Vis spectra were recorded under Ar. The electrolysis was then resumed at the more negative potentials listed in Tables S1-S5. Rotating disk electrode (RDE) voltammetry and square wave voltammetry (SWV) were conducted on a Wavedriver 10 potentiostat/galvanostat (Pine Research Instrumentation). For both experiments, a standard three-electrode setup was used with a 3-mm diameter glassy carbon disk working electrode, a Ag/Ag+ (0.01 M AgNO3 in CH3CN) reference electrode, and a platinum wire counter electrode. The rotation speed (500-3000 RPM) was controlled by a Model AFMSRCE ring-disk electrode system (Pine Research Instrumentation).
In the 31 P NMR spectra, PVMo11 shows a single peak at -4.31 ppm, confirming its purity. PV2Mo10 has a peak at -4.31 ppm, indicating a PVMo11 component, and a broad peak split into -4.54 and -4.60 ppm, which is assigned to PV2Mo10 and PV3Mo9 components. For PV3Mo9, in addition to peaks with essentially the same chemical shifts as for PV2Mo10, multiple peaks more positive than -4 ppm may be assigned to PV4Mo8 components. PV4Mo8 and PV6Mo6 both show multiple peaks that cannot be clearly assigned, indicating the many components and positional isomers present. It is well-established that heteropolyacids, H3+xPVxMo12-xO40, when x > 1, are mixtures of positional isomers and components with different x.6 The 31 P NMR data in this work show that the TBA salts of PVMo in acetonitrile are isomeric mixtures.
RSH oxidation and measurement of the varying PVMo reduction states
2-Mercaptoethanol was used as an exemplary substrate for probing the aerobic thiol oxidation, eq 1 in the text, where RSH is 2-mercaptoethanol. The mechanism of the PV6Mo6/Cu system was thoroughly studied in previous work.5 This article focuses on the impact of the number of vanadium atoms (x = 0-4 and 6) in PVxMo12-xO40(3+x)- (PVMo). The RSH concentration was quantified using Ellman's reagent (5,5'-dithiobis(2-nitrobenzoic acid), DTNB).7 In a typical assay, 0.1 mL of DTNB solution (5 mg/mL in methanol) was added to 5 mL of pH = 7.4 phosphate buffer (50 mM). This solution was first used as the blank for UV-vis measurements. Then, a 10 µL aliquot of the reaction solution was added, the absorbance at 412 nm was followed, and the RSH concentration was calculated.
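Converting the 412 nm absorbance to an RSH concentration is a Beer-Lambert calculation on the released TNB chromophore. The Python sketch below assumes the commonly used molar absorptivity of ~14,150 M−1 cm−1 for TNB at 412 nm and a ~510-fold dilution (10 µL aliquot into ~5.1 mL of assay solution); neither number is stated explicitly above, so both should be treated as assumptions.

def rsh_concentration_mM(a412, dilution_factor, epsilon_M_cm=14150.0, path_cm=1.0):
    # Ellman's assay: each RSH releases one TNB anion, which absorbs at 412 nm.
    # [RSH] in the original sample = A / (epsilon * l) * dilution, converted to mM.
    tnb_M = a412 / (epsilon_M_cm * path_cm)      # TNB in the cuvette, mol/L
    return tnb_M * dilution_factor * 1000.0      # back-calculated RSH, mmol/L

# 10 uL reaction aliquot added to ~5.1 mL assay buffer -> ~510-fold dilution (assumed)
print(round(rsh_concentration_mM(a412=0.83, dilution_factor=510), 1))  # ~29.9 mM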
In a typical RSH oxidation reaction, POM (0.1 mM), Cu(ClO4)2 (0.8 mM) and 2-mercaptoethanol (30 mM) were stirred in acetonitrile in a heavy-wall glass pressure vessel in an air-conditioned room at 25 ± 2 °C. Aliquots of the solution were withdrawn every few minutes and monitored by UV-vis spectroscopy as described above (Figure S21).
Stopped-Flow Measurements
A stopped-flow UV-vis spectrometer was used to monitor the rates of PVnMo12-nO40(3+n)- reduction (see Tables S1-S3 below). [c] The ending current ratio is defined as the final current at the end of the bulk electrolysis divided by the initial current at the beginning of the bulk electrolysis at the specific potential.
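Initial reduction rates from stopped-flow traces are typically obtained by fitting the growth of the reduced-POM absorbance to a single exponential and taking the initial slope. The Python sketch below is a generic illustration of such a fit (it is not the instrument software used in this work); the synthetic trace uses k_obs = 0.036 s−1 simply to echo the PVMo11 value quoted in the abstract, and all other numbers are placeholders.

import numpy as np
from scipy.optimize import curve_fit

def single_exponential(t, a0, delta_a, k_obs):
    # A(t) = A0 + dA * (1 - exp(-k_obs * t)): growth of the reduced-POM absorbance
    return a0 + delta_a * (1.0 - np.exp(-k_obs * t))

# Illustrative trace (seconds vs. absorbance); replace with exported stopped-flow data
t = np.linspace(0, 50, 200)
a = single_exponential(t, 0.05, 0.40, 0.036) + np.random.default_rng(1).normal(0, 0.003, t.size)

popt, _ = curve_fit(single_exponential, t, a, p0=(0.0, 0.5, 0.01))
a0, delta_a, k_obs = popt
# Initial rate of the exponential at t = 0 is dA * k_obs
print(f"k_obs ~ {k_obs:.3f} s^-1; initial rate ~ {delta_a * k_obs:.3e} abs s^-1")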
Quantifying the speciation of reduced POMs
Here we use PVMo11 as an example to calculate the POM distribution in different reduction states.
However, the same procedure was used for the other PVMo. This distribution depends on the chemical solution potential, E, as described by eq S1, where Ei is the standard reduction potential of the (PVMo11)i/(PVMo11)i+1 couple measured electrochemically. The calculated apparent reduction state of the POM, napp (average), is given by eq S2, and the results are shown in Figure S15.
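Equations S1 and S2 are not reproduced in this excerpt, so the sketch below simply assumes a Nernstian (Boltzmann) population of successive one-electron reduction states with formal potentials Ei, and computes napp as the population-weighted average number of added electrons. Apart from the −405 mV value quoted in the abstract for the first PVMo11 reduction, the formal potentials in the example are invented placeholders.

import numpy as np

def speciation(e_solution_mV, e_formal_mV, temperature_K=298.15):
    # Nernst: for couple i -> i+1, ln(x_{i+1}/x_i) = F * (E_i - E) / (R * T).
    # Chaining the couples gives the relative population of every reduction state.
    f_over_rt = 96485.0 / (8.314 * temperature_K) / 1000.0   # per mV
    steps = f_over_rt * (np.asarray(e_formal_mV, dtype=float) - e_solution_mV)
    ln_x = np.concatenate(([0.0], np.cumsum(steps)))          # state 0 is the reference
    x = np.exp(ln_x - ln_x.max())                             # shift to avoid overflow
    return x / x.sum()

def apparent_reduction_state(e_solution_mV, e_formal_mV):
    # n_app = sum_i i * x_i, the average number of electrons added per POM
    fractions = speciation(e_solution_mV, e_formal_mV)
    return float(np.dot(np.arange(len(fractions)), fractions))

# First value is the PVMo11 formal potential quoted in the abstract (-405 mV vs Fc/Fc+);
# the remaining potentials are placeholders for the deeper reductions.
e_i = [-405.0, -700.0, -950.0, -1250.0]
print(round(apparent_reduction_state(-800.0, e_i), 2))  # ~1.98 electrons at E = -800 mV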
|
v3-fos-license
|
2020-03-07T16:03:10.593Z
|
2020-03-06T00:00:00.000
|
212581252
|
{
"extfieldsofstudy": [
"Medicine",
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcvetres.biomedcentral.com/track/pdf/10.1186/s12917-020-02299-2",
"pdf_hash": "1dd19c86629e145337db3f1c1a21d2b52a042ac5",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44408",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "1dd19c86629e145337db3f1c1a21d2b52a042ac5",
"year": 2020
}
|
pes2o/s2orc
|
Approaches to overcome flow cytometry limitations in the analysis of cells from veterinary relevant species
Background Flow cytometry is a powerful tool for the multiparameter analysis of leukocyte subsets on the single cell level. Recent advances have greatly increased the number of fluorochrome-labeled antibodies in flow cytometry. In particular, an increase in available fluorochromes with distinct excitation and emission spectra combined with novel multicolor flow cytometers with several lasers have enhanced the generation of multidimensional expression data for leukocytes and other cell types. However, these advances have mainly benefited the analysis of human or mouse cell samples given the lack of reagents for most animal species. The flow cytometric analysis of important veterinary, agricultural, wildlife, and other animal species is still hampered by several technical limitations, even though animal species other than the mouse can serve as more accurate models of specific human physiology and diseases. Results Here we present time-tested approaches that our laboratory regularly uses in the multiparameter flow cytometric analysis of ovine leukocytes. The discussed approaches will be applicable to the analysis of cells from most animal species and include direct modification of antibodies by covalent conjugation or Fc-directed labeling (Zenon™ technology), labeled secondary antibodies and other second step reagents, labeled receptor ligands, and antibodies with species cross-reactivity. Conclusions Using refined technical approaches, the number of parameters analyzed by flow cytometry per cell sample can be greatly increased, enabling multidimensional analysis of rare samples and giving critical insight into veterinary and other less commonly analyzed species. By maximizing information from each cell sample, multicolor flow cytometry can reduce the required number of animals used in a study.
Background
Fluorescence-activated cell sorting (FACS) and flow cytometry have been essential immunological tools since the invention of FACS in the late 1960s [1][2][3], as they enable identification, characterization, and isolation of defined leukocyte subsets [4,5]. Flow cytometry employs fluorochrome-labeled antibodies that detect cell surface or intracellular antigens [6,7], a method that was first developed for characterization of cells and tissues by microscopy [8]. Recent advances in the development of novel fluorochromes and instrumentation (i.e. flow cytometry analyzers and sorters) allow for the theoretical analysis of up to 50 parameters in a single staining panel [9], and a 28-color panel has recently been demonstrated [10]. Polychromatic experiments enable the simultaneous measurement of a larger number of cell surface and intracellular markers, thereby facilitating the analysis of infrequent cell subsets or limited cell samples [5,[11][12][13]. Therefore, many institutions have acquired high capacity flow cytometers, and the analysis of > 10 fluorochromes has become routine in the study of human and mouse cells.
The house mouse (Mus musculus) is the most frequently used species in biomedical research and, as a consequence, a large spectrum of reagents and genetic models are available [14,15]. However, animal species other than the house mouse may represent more suitable models of specific human physiology, disease or anatomy, and can also enable studies of comparative medicine and/or of zoonotic pathogens in their natural hosts [14,16,17]. An example is the guinea pig, which has been a model for human infectious diseases for 200 years, and enabled disease research and vaccine development in tuberculosis [18,19]. More recent examples of the use of non-mouse species in biomedical research include pigs and sheep in orthopedics and Alzheimer's disease [20][21][22] and dogs in oncology [23].
Flow cytometry is a key method in immunological studies [24] encompassing biomedical, veterinary, agricultural, and wildlife research, but the method is also routinely employed in veterinary clinical laboratories [25,26]. Unfortunately, we face many limitations in the analysis of non-mouse animal samples, including lower availability of commercially or otherwise available antibodies to cell antigens and reduced options for fluorochrome labels by commercial antibody suppliers. It is also not uncommon to receive limited amounts of hybridoma supernatant rather than purified antibody. In addition, antibodies for non-standard species tend to be more expensive. Due to this absolute and relative lack of reagents, the design of state-of-the-art multicolor flow cytometry staining panels is much more difficult than it is for mouse or human cell samples.
Our laboratory studies lymphocyte recirculation using lymph vessel cannulation in sheep, which was pioneered by Bede Morris [27,28]. Due to a number of limitations, afferent lymph vessels cannot be readily cannulated in mice or humans, and lymph vessel cannulation in sheep allows for the analysis of lymphocytes during their physiological recirculation through tissues [29][30][31]. Here we present technical approaches that are commonly employed by our laboratory to increase the number of parameters analyzed by flow cytometry per cell sample from sheep [32][33][34][35][36]. The discussed approaches are compatible with the analysis of cells from most other animal species and include direct modification of antibodies by covalent conjugation or Fcdirected labeling (Zenon™ labeling kits), labeled secondary antibodies and other second step reagents, labeled receptor ligands and species cross-reactivity. Detailed guidelines for the use of flow cytometry, including general protocols, are extensively discussed elsewhere [11,24].
Selection of fluorochromes
When designing multicolor staining panels for flow cytometry, one is limited to the use of fluorochromes compatible with available flow cytometers. Therefore, the technical specificities of the available instrumentation determine the fluorochromes in a staining panel. For the panels presented in this paper we used the BD LSR Fortessa™ cell analyzer. Our machine has 5 lasers, UV (355 nm), violet (405 nm), blue (488 nm), yellow/green (561 nm), and red (640 nm), and can simultaneously detect up to 18 colors plus forward-(FSC) and side scatter (SSC) properties (Fig. 1). Figure 1 depicts the specific laser and filter set-up of our flow cytometer, its theoretically available colors, as well as examples for fluorochromes commonly used in our laboratory. As advised [37], we aim to choose fluorochromes with minimal spectral overlap, and online resources such as the Spectrum Viewer from BD, Fluorescence Spectra Viewer from Thermo Fisher Scientific, or the BioLegend Spectra Analyzer help with assessing the degree of spectral overlap and potential spillover. The simultaneous use of fluorochromes with extensive spectral overlap is more feasible with appropriate compensation, carefully titrated antibodies, and when the antibodies recognize distinct cell populations, e.g. B cells vs. T cells; the approach is less suitable for co-expression studies [24,38]. However, each specific staining panel will need to be tested and optimized on available instrumentation. More details on optimal fluorochrome combinations and discussions on appropriate compensation techniques are described elsewhere [12,24,37].
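For readers less familiar with what compensation actually does numerically: it solves a small linear system in which each detector's raw signal is modeled as the true fluorochrome signal plus spillover from the others. The Python sketch below is purely conceptual; the spillover values are invented for illustration, and real cytometer software derives the matrix from single-stained controls rather than hand-entered numbers.

import numpy as np

# Spillover matrix S: S[i, j] = fraction of fluorochrome j's signal seen in detector i.
# Values are illustrative only (detectors/fluorochromes: FITC, PE, APC).
S = np.array([
    [1.00, 0.08, 0.00],   # FITC detector
    [0.15, 1.00, 0.01],   # PE detector
    [0.00, 0.02, 1.00],   # APC detector
])

detected = np.array([1200.0, 5400.0, 300.0])   # raw signals for one event

# Compensation solves S @ true = detected for the true per-fluorochrome signals
true_signal = np.linalg.solve(S, detected)
print(true_signal.round(1))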
When designing staining panels, we select brighter fluorochromes for antibodies that bind rare antigens as advised [37]. Companies such as BioLegend provide a relative brightness index for fluorochromes but also warn that the brightness can vary depending on the antibody, antigen, or cell type, and that it is also influenced by instrumentation. Consequently, titrating the antibody is always recommended. Examples of brighter fluorochromes that we used in our panels of this paper include phycoerythrin (PE), Alexa Fluor™ (AF™) 594, phycoerythrin-cyanine 7 (PE-Cy7), allophycocyanin (APC), and Brilliant Violet™ 421 (BV421) (Fig. 1). In an ideal scenario, monoclonal antibodies for each cell antigen are available in all possible fluorochromes. However, even for human antigens this is not the case and is further from reality in veterinary species. Thus, initial antibody staining panel design will depend on easily available and previously validated antibodies ("what is in the refrigerator") and approaches to expand the panel.
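One widely used way to decide quantitatively where a titration series stops improving - not spelled out in the text above, but consistent with the advice to titrate every antibody - is the stain index, which normalizes the separation between stained and unstained populations by the spread of the negative. The populations below are synthetic and purely for illustration.

import numpy as np

def stain_index(positive, negative):
    # Stain index = (median_pos - median_neg) / (2 * SD_neg); higher = better separation
    pos, neg = np.asarray(positive), np.asarray(negative)
    return (np.median(pos) - np.median(neg)) / (2 * neg.std())

# Synthetic fluorescence intensities for two antibody dilutions (arbitrary units)
rng = np.random.default_rng(2)
neg = rng.normal(100, 30, 5000)                    # unstained / negative population
for dilution, mfi in [("1:100", 5000), ("1:400", 3500)]:
    pos = rng.normal(mfi, 800, 5000)
    print(dilution, round(stain_index(pos, neg), 1))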
Covalent labeling of antibodies and species-cross-reactive antibodies
Antibody vendors have a variable supply, and antibodies for veterinary species are generally available in a limited number of fluorochrome labels and are often unconjugated. BioRad, for example, has a sound variety of anti-ovine and other veterinary antibodies, most of which are only available purified or conjugated to fluorescein isothiocyanate (FITC) or PE. A method to broaden the fluorochrome range is to label purified antibodies by using a reactive labeling kit that covalently binds fluorochromes to reactive protein groups, such as amines. In contrast to purified antibodies, non-purified antibodies (i.e. hybridoma supernatants, ascites fluid) cannot be labeled covalently without also labeling other protein components in the fluid, but they can be selectively labeled with the Zenon™ labeling method (discussed below). Covalent labeling kits for variable amounts of antibody are available for numerous fluorochromes from commercial vendors, such as Invitrogen™ by Thermo Fisher Scientific, Novus Biologicals, Abcam, or Biotium™. The labeling procedure follows simple protocols provided by the manufacturer and takes less than 3 hours. The newly labeled antibody is immediately ready for staining but should be titrated and validated prior to use in an experiment. The reactive label works for all animal species and IgG subclasses, and the covalently labeled antibody is stable and can be stored for usage over several months. For example, we labeled purified anti-ovine CD8 with a Pacific Blue™ antibody labeling kit and included it in a multicolor panel to detect CD8 + T cells among sheep blood lymphocytes (Fig. 2a). Within the same staining panel, lymphocytes were additionally gated for γδ T cells (Population A), CD4 T cells (C), B cells (D), and CD11c + antigen presenting cells (E) distinguished within the high side scatter granulocyte population (Fig. 2a).
To broaden the antibody repertoire and fluorochrome spectrum, many laboratories use antibodies that are raised against antigens in one species but exhibit documented cross-reactivity for a different species. When antibodies are produced for use in mice and humans, they are more likely to be commercially available in a larger variety of colors. In our example panel, we took advantage of documented cross-reactivity of anti-human α4and β1-integrin and anti-bovine CD21 antibodies with their corresponding sheep molecules [39,40] and analyzed all cell subsets (A-E) for these markers, as well as CD62L (Fig. 2b).
Staining controls are particularly important for multiparameter analyses [37]. In a fluorescence minus one (FMO) control all antibodies of the panel, except for one, are included in their respective fluorochromes, allowing assessment of spectral overlap into the "empty" channel ( Fig. 2c). Figure 2d depicts the individual fluorochromes and antibodies to antigens used in the staining panel of Fig. 2 and lists the method by which the individual staining was achieved.
Zenon™ labeling kits
An additional approach to overcome limited fluorochrome availability for domesticated and other animal species is the use of Zenon™ labeling kits (Invitrogen™, Thermo Fisher Scientific) [33,34,41,42]. The Zenon™ labeling technique uses fluorochrome-labeled Fab antibody fragments that recognize the IgG subclass (Fc portion) of its target antibody (Fig. 3). This noncovalent conjunction enables the labeling of human, mouse and rabbit antibodies, which can be purified antibodies, hybridoma culture supernatant, or ascites fluid [43]. Therefore, the Fab fragment of the Zenon™ kit has to recognize the species (human, mouse or rabbit) and the IgG subclass (IgG 1 , IgG 2a or IgG 2b ) of the specific target antibody. To label the target antibody, it is mixed with the fluorochrome-labeled Fab fragments of the Zenon™ kit, and the mixture is incubated for 5 min (Fig. 3a). Next, the Zenon™ blocking reagent is added to the mixture and incubated for another 5 min (Fig. 3b). The Zenon™ blocking reagent is a nonspecific immunoglobulin mix from the same species as the target antibody and will bind to excess fluorochrome-labeled Fab fragments (Fig. 3b). Finally, together with other antibodies of the staining panel, the Zenon™ mixture is added to the cell sample for staining (Fig. 3c). After washing to remove unbound nonspecific immunoglobulins, the cells are ready to be analyzed by flow cytometry (Fig. 3e).
For the labeling of purified monoclonal antibodies, our laboratory follows the Zenon™ labeling protocol provided with the kit, which recommends the use of 1 μg of target antibody, 5 μl of the Zenon™ labeling complex and 5 μl of the Zenon™ block. For some very bright fluorochromes and antibodies that recognize highly expressed cell surface antigens, such as anti-γδ TCR antibody labeled with Zenon™ AF™594 (Fig. 2), we only use 2.5 μl of the Zenon™ labeling Fab fragment and 2.5 μl of the Zenon™ blocking reagent to label 1 μg of antibody. However, for antibody hybridoma supernatants of unknown concentration we found the use of 10 μl supernatant and 5 μl of each the Zenon™ labeling complex and Zenon™ block works best in most cases and may be determined by titration for each batch of supernatant. Figure 2 shows an example of antibody supernatant labeling to stain CD11c + antigen presenting cells.
One option that the Zenon™ protocol provides is to stop the labeling after the first step (before adding the blocking reagent) for storage at 4°C. While our laboratory sometimes stores the Zenon labeled antibody for 1-2 h in the refrigerator, the manufacturer's protocol allows for storage of up to several weeks with the addition of 2 mM sodium azide (Invitrogen™). The use of the Zenon™ labeling kits is quick and allows flexibility in staining because the fluorochrome can easily be changed by using the isotype-specific Zenon™ kit in a different color. Multiple Zenon™ labeled antibodies can be used simultaneously in the same staining panel, and isotype controls can be labeled in the same manner as the staining antibodies. Each new antibody-Zenon combination should be titrated to determine the best dilution.
Labeled secondary antibodies and other second step reagents
For the flow cytometric analysis of cells from veterinary species fluorochrome-labeled secondary antibodies are widely used. Monoclonal and polyclonal secondary antibodies are produced in a diverse array of host species and are commercially available in a broad range of colors. The use of monoclonal secondary (and primary) antibodies is preferred as they usually achieve more consistent staining with less background. Secondary antibodies are employed to bind primary antibodies (which recognize antigens on target cells) without cross-recognition of target species antigens. Therefore, secondary antibodies recognize the species and isotype of the primary antibody and are adsorbed to or otherwise unreactive with antigens on cells of the target species. For example, a monoclonal mouse IgM antibody that recognizes sheep B cells (clone 2-104) [44] was visualized with a BV650-conjugated rat antimouse IgM monoclonal antibody ( Fig. 2a and d). In the same staining panel, mouse anti-ovine CD62L clone DU1-29 was detected with a PE-Cy7-conjugated rat antimouse IgG 1 monoclonal antibody ( Fig. 2b and d). Thus, use of secondary antibodies leads to signal amplification and is versatile, allowing for the selection of less commonly used fluorochromes and easy matching to a variety of panels.
As a general rule, it is possible to use multiple secondary antibodies in the same staining panel as long as they recognize different immunoglobulin classes or IgG subclasses (e.g. anti-mouse IgG and anti-mouse IgM as shown in Fig. 2, or anti-mouse IgG 1 , and anti-mouse IgG 2a ). Even when using a single secondary antibody, the isotypes of all antibodies in the staining panel must be considered and the primary antibody that is targeted should be the only one recognized by the secondary antibody. However, including two additional staining steps allows for detection of an unlabeled primary antibody in the presence of additional (labeled) antibodies of the same isotype without crossrecognition by the secondary antibody. The first step includes only the primary antibody, followed by only the isotype-specific secondary antibody. After a requisite blocking step with excess unlabeled antibody of the isotype of the target antibody, additional antibodies of the same isotype can be used in a third staining step. Using this approach our laboratory visualized CD62L expressing cells (Fig. 2) as well as natural killer (NK) cells ( Fig. 4a and b) with anti-IgG1 secondary antibodies in the presence of other IgG1 staining antibodies. In Fig. 4, unlabeled EC1.1, a monoclonal mouse IgG 1 that recognizes the ovine natural cytotoxicity receptor NKp46 [45] was visualized by rat anti-mouse IgG 1 , clone A85-1, conjugated to BUV395 (Fig. 4a and b). Following the secondary labeling, the cells were blocked with nonspecific polyclonal mouse IgG to saturate any free valences of the secondary antibody, rendering it unable to interfere with the other mouse IgG 1 antibodies in the staining panel. Finally, mouse IgG 1 antibodies against α4and β1-integrins were employed in the staining procedure ( Fig. 4a and b). Isotype control antibodies of the same species, antibody isotype and fluorescent label can be incorporated in the same manner as the staining antibodies (Fig. 4c).
Biotinylated antibodies also broaden the fluorochrome spectrum and increase signal in flow cytometry. Streptavidin is commercially available conjugated to many different fluorochromes and binds biotin on the primary antibody with high affinity. This leads to signal amplification, making it particularly useful for detection of antigens with low density per cell, and underscores the importance of titrating both the biotinylated primary antibody and the conjugated streptavidin. Many antibodies are commercially available in biotinylated format, and purified antibodies can be biotinylated using antibody/protein biotinylation kits or Zenon™ technology (see above).
Labeled receptor ligands
When antibodies for cell surface receptors are unavailable, or when ligand-binding ability rather than simple receptor expression is the aim of the study, labeled ligands can be used in flow cytometry. Employing this method, we have previously analyzed ovine lymphocytes for expression of the costimulatory molecules B7.1/B7.2 and the skin-homing marker E-selectin ligand (epitopes that include cutaneous lymphocyte antigen) by evaluating binding of CTLA4-human IgG and mouse E-selectin-human IgG1 chimeric proteins, respectively [32-34]. Here, we show an example in which E-selectin binding was visualized with an APC-conjugated mouse monoclonal antibody that recognizes the human IgG1 portion of the E-selectin chimeric protein (Fig. 4d). After gating lymph-borne CD4 T cells (as in Fig. 2a), we analyzed the percentage expressing E-selectin ligand (Fig. 4d). Because E-selectin binding requires calcium, a control staining was performed in EDTA-containing buffer (Fig. 4d). Alternative controls will depend on the specific ligand used in an experiment; examples include staining with irrelevant IgG fusion proteins or blockade of staining with excess unlabeled ligand.
Discussion
In this methods paper, we present several approaches to overcome flow cytometry limitations in the analysis of veterinary species. During our studies we faced several technical issues. For example, certain antibody clones were not sufficiently labeled by direct covalent labeling kits. This is an old problem, and known causes of labeling resistance include buffer components that react with the dye, suboptimal pH, or reactive amine groups that lie within the antigen-binding site of the antibody [46]. Zenon™ labeling can be a solution for monoclonal IgG antibodies that do not label well with covalent labeling kits. For example, our ovine CD4 monoclonal antibody (44.38) did not yield satisfactory staining quality when conjugated with the Invitrogen Pacific Blue antibody labeling kit. However, using the Zenon™ technology yielded superior results. Another common difficulty is the conjugation of IgM antibodies, because most labeling kits raise the pH and denature the pentameric structure of IgM [47]. Some manufacturers, such as Thermo Fisher Scientific, offer protocols specifically optimized for IgM labeling. In the case of the (mouse IgG1) anti-NKp46 clone, neither covalent conjugation nor Zenon™ technology was an effective labeling method, and we had to employ the staining approach outlined in Fig. 4a. Unfortunately, not all commercial antibody suppliers have consistent quality controls in place, and we have occasionally encountered commercially labeled antibodies that were unreliable. We also found that in one case the Zenon™ mix interfered with the staining of a different antibody in the same staining panel. Specifically, the anti-ovine B cell antibody (2-104) is a mouse IgM, and its detection by an anti-mouse IgM secondary antibody was blunted. An ELISA revealed that the Zenon™ blocking reagent included in the kits contained both mouse IgG and mouse IgM, and the latter competed with our mouse IgM primary antibody for binding by the secondary anti-mouse IgM antibody. We solved the issue by using purified IgG for blocking rather than the Zenon™ kit blocking component.

Fig. 4 Secondary antibody staining with multiple same-isotype primary antibodies and cell-surface molecule detection by ligand binding. (a) Summary of the steps to stain with an isotype-specific secondary antibody when its target isotype antibody is present multiple times in the same staining panel. (b) Staining of NKp46 with an isotype-specific secondary antibody (anti-mouse IgG1) in the presence of anti-α4- and β1-integrin antibodies, which are of the same isotype as the anti-NKp46 antibody. Peripheral blood mononuclear cells were pre-gated on single live lymphocytes as in Fig. 2a, and CD3+ T cells and CD3− NKp46+ NK cells were analyzed for expression of α4- and β1-integrins. (c) Corresponding isotype control staining for α4- and β1-integrins. (d) E-selectin ligand expression on CD4+ T cells from afferent lymph of adult sheep was determined by flow cytometry using an E-selectin-human IgG fusion protein. Cells were pre-gated as in Fig. 2a. As a negative control, staining was performed in buffer containing EDTA. (b-d) One representative of five individually analyzed sheep is shown. Abbreviations: αm, anti-mouse; APC, allophycocyanin; BUV, Brilliant Ultra Violet; FITC, fluorescein isothiocyanate; PE, R-phycoerythrin.
Certain tandem dyes are sensitive to degradation, leading to weaker signals and spurious detection in other fluorochrome channels. For example, PE-tandem dyes are susceptible to degradation by handling, storage, and light [48,49]. We also found that the tandem dye PE-Cy7 is sensitive to extended fixation with PFA, which can degrade the fluorophore and lead to artifactually strong signals in the PE channel. We found that the following precautions prevent tandem conjugate degradation: staining at 4°C in the dark, careful and extensive washing after fixation (i.e., twice with a sufficient buffer volume), and storage at 4°C in the dark for no longer than 24 h. Another potential issue, which we have not encountered in our studies, is the interference of Brilliant Violet and other ultra-bright antibodies with each other when used in the same panel. Such issues can be addressed by using specialized staining buffers; BD Biosciences, for example, offers a specific buffer for staining with Brilliant™ dyes [50].
While we present several simple approaches to broaden the number of flow cytometric parameters per cell, additional approaches exist that we have not utilized so far. For example, PrimeFlow™ (Invitrogen™), a branched DNA method, is a technique to detect cell-expressed RNA by flow cytometry [51], and custom antibody production or customized antibody labeling services are also available.
Conclusion
Here, we presented multiple relatively simple and time-tested approaches that broaden the fluorochrome spectrum for flow cytometric analysis of cells from veterinary-relevant species. These approaches can enhance the quality and quantity of information obtained from each cell sample. The use of multiparameter flow cytometry can therefore give critical insight into veterinary and other less commonly analyzed species, help obtain information from rare cell samples, better define subpopulations of cells, and potentially reduce the required number of animals used in a study.
Animals and lymphatic cannulation
Eight- to eighteen-month-old female or wether Dorset or Dorset-cross sheep with negative Q-fever serology were purchased from Archer Farms, Inc. and conventionally housed in standard pens under a 12-h light/dark cycle, in groups, and singly when entering experiments. Hay and water were provided ad libitum, and standard pellet feed for ruminants (Labiana) was fed twice per day. Sheep weighed 40-65 kg when entering experiments. Lymphadenectomy to remove the subiliac (prefemoral) lymph nodes was performed as previously described [52]. Six to eight weeks after lymphadenectomy, pseudoafferent lymph vessels were cannulated with heparin-primed 3 or 3.5 French polyurethane catheters (Access Technologies) in a surgical procedure as described [52]. Pre-procedural sedation was induced with Tilzolan (tiletamine and zolazepam; Dechra) at 4-6 mg/kg into muscles of the hind or front leg; anesthesia was induced with propofol i.v. at 2-8 mg/kg (PropoFlo 28, Zoetis) and/or sevoflurane (Patterson Veterinary) by inhalation at 2-3% in oxygen via mask, and anesthesia was maintained at a surgical plane with 2-3% isoflurane (Isothesia, Covetrus) in oxygen, administered via an endotracheal tube. All surgical procedures were performed under aseptic conditions in a dedicated surgical suite. Postoperative analgesia was provided using buprenorphine (Par Pharmaceuticals) at 0.01-0.05 mg/kg every 4-12 h s.c. in the neck, and/or flunixin meglumine (Flunixin Injection, Norbrook) at 1 mg/kg every 8-24 h i.m. in the leg. Additional doses of analgesics were given if animals showed signs of pain or distress, which were assessed at least three times per day for 3 days, and at least every 12-24 h thereafter for a week. Afferent lymph was collected into sterile bottles containing 100 μL of 10,000 U/mL heparin (Hospira, Inc.). Collection bottles were changed every 4-12 h. After conclusion of the experiments, the animals were euthanized while under anesthesia by i.v. injection of pentobarbital and phenytoin (SomnaSol, Covetrus) at 97.5-195 mg/kg and 12.5-25 mg/kg, respectively. Death was confirmed by auscultation for cardiac arrest. The method of euthanasia is consistent with the recommendations of the Panel on Euthanasia of the American Veterinary Medical Association.
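The sketch below merely restates the weight-based arithmetic behind the mg/kg ranges quoted above for an example 50 kg animal; it is illustrative code, not veterinary dosing guidance, and the example weight is an assumption within the stated 40-65 kg range.

def dose_range_mg(weight_kg, low_mg_per_kg, high_mg_per_kg):
    """Convert a mg/kg dose range into an absolute dose range for one animal."""
    return weight_kg * low_mg_per_kg, weight_kg * high_mg_per_kg

weight = 50.0  # example weight within the 40-65 kg range stated above
for drug, (low, high) in {
    "tiletamine/zolazepam": (4.0, 6.0),
    "propofol": (2.0, 8.0),
    "buprenorphine": (0.01, 0.05),
    "flunixin meglumine": (1.0, 1.0),
}.items():
    low_mg, high_mg = dose_range_mg(weight, low, high)
    print(f"{drug}: {low_mg:.1f}-{high_mg:.1f} mg total")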
Cell isolation and blood collection
Blood was collected from the jugular vein with a syringe containing heparin. Mononuclear cells were isolated using density gradient centrifugation with Histopaque®-1083 (Sigma-Aldrich). Blood was diluted at a 1:1 ratio with elution media (58.4 mM sucrose (Sigma-Aldrich), 10 mL 5 mM EDTA (Invitrogen), 100 mL 10x PBS (Gibco), 900 mL Milli-Q water) at room temperature and carefully layered on top of the Histopaque in a conical tube. The layered blood was centrifuged at 9000 RCF for 30 min, and lymphocytes were collected by harvesting the buffy coat. Blood and lymph-borne cells were washed with wash media (RPMI 1640 medium with GlutaMAX™ (Gibco®), 0.2% BSA (Sigma-Aldrich), and 25 mM HEPES (Gibco®)), and, when necessary, red blood cells were lysed using red blood cell lysing buffer (155 mM ammonium chloride (Sigma-Aldrich), 10 mM sodium bicarbonate (Sigma-Aldrich), and 0.1 mM EDTA (Gibco)). Isolated cells were resuspended in wash media, counted by hemocytometer, and kept on ice until staining.
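As an aside for readers preparing such buffers, the sketch below shows the standard molarity-to-mass arithmetic for the red blood cell lysing buffer components listed above; the molar masses are approximate values supplied only for illustration, and the recipe itself should be taken from the text, not from this example.

def grams_needed(concentration_mM, volume_L, molar_mass_g_per_mol):
    """Mass of solute required to reach a target molar concentration."""
    return (concentration_mM / 1000.0) * volume_L * molar_mass_g_per_mol

volume_L = 1.0  # litres of lysing buffer to prepare
components = {
    "ammonium chloride, 155 mM": (155.0, 53.49),   # approximate molar mass in g/mol
    "sodium bicarbonate, 10 mM": (10.0, 84.01),
}
for name, (mM, molar_mass) in components.items():
    print(f"{name}: {grams_needed(mM, volume_L, molar_mass):.2f} g per {volume_L} L")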
All staining steps were performed in a total volume of 100 μl on ice. Cells were washed with staining buffer (Dulbecco's phosphate-buffered saline (DPBS; Corning) with 0.2% tissue-culture grade bovine serum albumin (BSA; Sigma-Aldrich)) and spun down at 500 RCF. 2 × 10⁶ cells were stained per 1.2 mL microtiter tube (Fisher Scientific). To block nonspecific binding, each tube of cells was resuspended, and subsequently incubated for 10 min, in 10 μl of staining buffer containing 1 μg sheep IgG (Jackson ImmunoResearch) and/or 1 μg mouse IgG (Jackson ImmunoResearch), as well as 0.1 μl of the LIVE/DEAD™ Fixable Aqua Dead Cell Stain Kit (Invitrogen™). The blocking step was performed at the beginning of the staining process or, when a secondary antibody was used, after the secondary antibody and before the addition of antibodies of the same isotype as the primary antibody (Fig. 2 and Fig. 4a and b). After blocking, antibodies to cell surface antigens were added, and cells were incubated for 15 min and subsequently washed with staining buffer. Cells were then fixed by resuspending and incubating for 15 min in 2% paraformaldehyde (Sigma-Aldrich), followed by washing with staining buffer. After fixation, ovine CD3 staining was performed in staining buffer containing 0.5% saponin (from Quillaja bark, molecular biology grade; Sigma-Aldrich). Binding to a mouse E-selectin-human IgG1 chimeric protein (R&D Systems) was tested in DPBS containing calcium and magnesium (Corning) and visualized by an APC-conjugated mouse anti-human IgG1 antibody (clone 97924; R&D Systems). The control was stained in the same manner using DPBS without calcium and magnesium, with the addition of 30 mM ethylenediaminetetraacetic acid (EDTA; Invitrogen™). Data were acquired using the BD LSR Fortessa™ (BD Biosciences) and analyzed with FlowJo software (Tree Star). A single-cell gate was set for each cell sample using FSC-Area and FSC-Height, as depicted in Fig. 2a. Dead cells were excluded from analysis by gating on LIVE/DEAD™ Fixable Aqua-low events, and lymphocyte and/or granulocyte gates were drawn based on SSC-Area and FSC-Area (Fig. 2a). A minimum of 100,000 lymphocytes were recorded per tube.
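When many tubes are stained, the per-tube blocking amounts listed above are typically scaled into a master mix. The sketch below illustrates that arithmetic; the 10% pipetting overage is an assumption and not part of the protocol described here.

# Per-tube amounts taken from the protocol above; units noted in each key.
PER_TUBE = {
    "staining buffer (ul)": 10.0,
    "sheep IgG (ug)": 1.0,
    "mouse IgG (ug)": 1.0,
    "LIVE/DEAD dye (ul)": 0.1,
}

def master_mix(n_tubes, overage=0.10):
    """Scale per-tube amounts to n tubes, adding an overage for pipetting loss."""
    scale = n_tubes * (1.0 + overage)
    return {reagent: amount * scale for reagent, amount in PER_TUBE.items()}

for reagent, amount in master_mix(12).items():
    print(f"{reagent}: {amount:.1f}")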
The practices of a raqi (Islamic exorcist) in Stockholm
This article investigates in depth the practices of a Stockholm-based raqi. In the first section, the principles and methods of his version of ruqya (Islamic exorcism) are described: which Qur'anic passages he perceives as most suitable to read for different afflictions, how he complements his reading with the use of his right palm to detect the possession, and his use of the "satanic meridians," i.e., pressure points used to facilitate the eviction of stubborn djinns. The cases of five patients are then discussed in order to shed light upon those who seek out his services. One particularly interesting example concerns a patient who regularly receives ruqya against sorcery; despite the fact that she does not believe in sorcery herself, she considers ruqya more beneficial for her well-being than Western treatments. The raqi's perspectives on psychotherapy and on mental illness in general are then presented. Finally, the problems of non-contextualized interviews versus ethnographic observations carried out as part of fieldwork for the purposes of gathering information are illustrated.
Introduction
Despite the significant increase in the Muslim population of Europe throughout the last decades, only a few scholarly works have been published on ruqya (Islamic exorcism) and sihr (sorcery according to Islamic traditions) in Europe. 1 Even more surprising is the fact that, with the exceptions of Khedimellah (2007), Muslim Eneborg (2013), Oparin (2020), and Suhr (2019), there is a lack of detailed academic ethnographic descriptions of the actual performance of ruqya in Europe.
What is also striking is the fact that almost all academic technical descriptions of ruqya focus on its theological aspects, how it is performed by reciting extracts from the Qur'an, and what it does to the possessed clients and djinns. What seems to be left out are the corresponding physical and embodied aspects of the ritual of ruqya: how the raqi uses his hands and limbs, ritual paraphernalia (sticks to strike the djinns and blessed water to "burn" the satanic djinns), and recipes from the prophetic medical tradition used, for example, to assist in the extraction of sihr or to remedy the issues that led to the possession in the first place. Suhr (2019: 82, 188) reveals in two instances that the two raqi in his study place their palms on the forehead of their clients when reciting. Muslim Eneborg (2013: 1093) explains that the raqi in his article also touches the head of the clients but in this case does so to diagnose the client based on their heat, in a way that resembles pulse diagnosis in the Unani tradition. He also highlights the importance of herbal medicine in assisting his informant's version of ruqya. Unfortunately, Muslim Eneborg does not provide any actual examples of either the diagnosis or the use of herbal remedies. Oparin (2020: 734) mentions how one of his informants had learned in "Arabia" how to blow on his clients during ruqya, "under their shirt over their chest and into their eyes." Khedimellah (2007: 396-397) describes how his raqi mostly uses blessed water, fortified with basic prophetic medicine (olive oil, honey, etc.), to assist the healing process after the ruqya has been performed. In two earlier articles on the technical aspects of ruqya, I have described the use of sanctified water, the beating of clients with a stick to force out stubborn djinns (Marlow, 2023a), the use of purgative or laxative potions to remove digested sihr, and the eviction of possessing satanic djinns with injections of sanctified antidotes into the afflicted person's bloodstream (Marlow, 2023b).
From the perspective of ritual theory, I find this general lack of in-depth descriptions surprising, given the importance of evident and undeniable self-embodied experiences in countering doubt about the efficacy of a ritual (Thomas, 2008: 338). Another aspect of adding embodied experiences for the clients, alongside the recitations of the Qur'an during ruqya, is how it strengthens the ritual's performative vertical effects by adding several simultaneous horizontal components: monotonous and repetitive recitations, the volume of the raqi's voice during the ruqya, the sensations of the raqi's breath when reading or blowing, pressure and heat from the raqi's palm, moisture from the raqi's saliva, or his spraying of blessed water on the client (cf. Tambiah, 1979: 114, 140). I have found that, to convince an afflicted "lived" body that it is restored from an external assault by djinns, the experienced raqi utilizes several methods of "the somatic component of ways of knowing" (McGuire, 1990: 286) during his practice of ruqya.
This paper is based upon three days of observations of Raheem 2 conducting ruqya in a Stockholm mosque. The observations were followed by short interviews of the participants and Raheem after each session. The aim was to get both parties' perspectives on what had occurred during the sessions. 3 Later, two longer interviews were carried out with Raheem, during which we had more time to discuss my preset questions on his practices at length.
Dupret et al. have addressed the tendency within the anthropology of religion to search for "big explicative schemes" that attempt to formulate generalizations regarding the field that one studies. Such a theoretical approach might mean that "researchers often lose the actual object of interest and propose new narratives in its place that are devoid of the contextual and praxiological specificities of any actual situation" (Dupret et al., 2012: 1). Another tendency that one finds among scholars who search for grand schemes is an idealization of the field's adherence to the discursive traditions of Islam. This approach risks hiding the actual ambivalent deviations within lived religion. Moreover, it might result in "too much Islam in the anthropology of Islam" (Schielke, 2010: 1).
The primary purpose of this paper is to describe a specific informant's practice of ruqya in greater detail than has been the case in earlier studies, in order to fill the ethnographic gap outlined above. Special attention will be paid to the embodied techniques used by the raqi to affect the possessed clients during ruqya. Observations of the practices preceded the interviews; the purpose of this order is to reduce the risk of the social context being lost in the subsequent questions and descriptions.
In order to clearly separate the different perspectives presented in this paper, my informant Raheem's general techniques of ruqya will be presented in the first section. All explanations of the rationale behind the practices will be presented from Raheem's perspective. In the second section, five separate cases will be discussed. In passages printed in smaller fonts, my observations of the ruqya-session combined with Raheem's and the patients' perspectives will be shared for each case. Next, in passages printed in regular-size fonts, my theoretical reflections on the case will be discussed.
2 Several of the disclosed practices are or might be illegal in Sweden according to the Patient Safety Act (2010:659, chapter 5, https://lagen.nu/2010:659#K5). Therefore, to protect the informant, I asked him to choose a pseudonym. 3 Most of the patients reacted positively to the fact that a Western and non-Muslim researcher was interested in studying ruqya. Therefore, they allowed me to observe their session. However, a few of them spoke neither Swedish nor English. In those cases, I could only get Raheem's perspective of the sessions.
The fieldwork
Raheem was thirty years old in late 2013 and early 2014, when the fieldwork upon which this paper is based was conducted. A mutual friend suggested that I contact him. When I told him about my earlier research on ruqya, he agreed to let me observe and interview him. 4 Raheem works mainly from a mosque and, thanks to his Bachelor of Arts degree, also has an academic understanding of my fieldwork. Raheem's background is so unique in Sweden that anyone could find him on Google if I disclosed his background information (an active Muslim, his academic degree, his mixed ethnicity, and his teacher's origin).
At that time, he had been active as a raqi (a performer of ruqya) for four years. Ruqya 5 is the Islamic ritual practice of casting out djinns 6 and other negative metaphysical influences on humans. It resembles the Christian practice of exorcism. The methodology primarily consists of recitations from the Qur'an and secondarily of the use of prophetic medicine and various forms of paraphernalia.
Raheem was first exposed to ruqya when he joined his mother on a trip to her native country when he was sixteen years old. He told me that she had been tormented by djinns for as long as he can remember. Therefore, he decided to learn ruqya to help her. His primary teacher is a Central-Asian raqi. When Raheem returned to Sweden after his training, several people approached him and asked for his help with ruqya. He was not aware of any other raqi practicing in Stockholm at that time, so he decided to help people outside his family as well. He later found out that there were several other raqi practitioners. For this reason, he has chosen to limit his ruqya sessions to Fridays so that he can focus on completing his Western academic studies.
Which Qur'anic passages to read in ruqya
And when they had cast, Moses said: That which ye have brought is magic. Lo! Allah will make it vain. Lo! Allah upholdeth not the work of mischief-makers.
And Allah will vindicate the Truth by His words, however much the guilty be averse. (The Qur'an 2021, 10: 81-82)

Raheem uses ruqya to treat patients suffering from afflictions caused either directly by djinns or primarily by humans. Usually, djinns possess out of love. Afflictions caused by humans mainly constitute sihr (which in Raheem's terminology translates as sorcery 7) and ayn (the evil eye, i.e., negative effects caused by envy). 8 According to Raheem, the essential Qur'anic passages for all kinds of ruqya are sura al-Fatihah (1), Ayat al-kursi (2: 255, The Verse of the Throne), and the last three suras (112-114). Against sihr, he most frequently uses sura Yunus (10: 81-82); he repeated it seven or eleven times in a row during several of the ruqya sessions I observed. Against ayn, he prefers two verses from sura al-Qalam (68: 51-52). For patients who are disturbed by waswasa (satanic seductive whisperings), 9 Raheem recites suras Ya-Sin (36) or al-Saffat (37). He explains that there are different kinds of waswasa: one that causes OCD (obsessive-compulsive disorder), one that gives you negative visions, and another that will disturb your thoughts. However, Raheem tries to develop his ruqya skills continuously, based upon the effects he notices on the djinns when making use of different methods and Qur'anic passages.
How Raheem detects metaphysical influences with his palm
Allah! There is no God save Him, the Alive, the Eternal. Neither slumber nor sleep overtaketh Him. Unto Him belongeth whatsoever is in the heavens and whatsoever is in the earth. Who is he that intercedeth with Him save by His leave? He knoweth that which is in front of them and that which is behind them, while they encompass nothing of His knowledge save what He will. His throne [kursi] includeth the heavens and the earth, and He is never weary of preserving them. He is the Sublime, the Tremendous. (Ayat al-kursi, The Qur'an 2: 255)

Raheem has been taught to place his right palm on a patient's head when performing ruqya. Usually, he explained, he will sense that the head becomes cooler during the recitation. In the next stage, he can notice that some areas of the head get hotter. More serious afflictions generate higher temperatures, which makes it easier for him to detect them. The location of these areas reveals to Raheem what kind of affliction the patient is suffering from. According to the Sunna, he explained, the Prophet Muhammad blew on his hand and then placed it upon the head of the afflicted when performing ruqya. He stressed that, based on Islamic, psychological, and therapeutic perspectives in general, he believes physical contact is vital during healing.
Raheem stated that some patients experience a sensation rising through the body up to the head. Other patients feel nothing. Usually, Raheem explained, he can diagnose the affliction with his palm while reciting. For instance, if it is sihr intended to separate people, he feels it in the center of the patient's head, from the hairline towards the neck. He added that he can judge how old the sihr is with his fingers: if it reaches three fingers from the hairline, it is three years old. The more intense the sihr, the hotter the area and the easier it is to diagnose. According to Raheem, the heating sensation is not present from the beginning of the session but starts after he has begun reciting. Sometimes, Raheem does a fast "check-up" by only reciting a few powerful verses, e.g., either 2:32 or Ayat al-kursi. If he does so, he will feel the heat more quickly, but there will be no healing. For healing to take place, he must read much more than just a few verses.
Raheem subsequently informs the patients about the cause of the sihr that has afflicted them because that is the way his teacher taught him. The patients usually confirm and understand the origin of the sihr that has struck them when they are told when it began. However, he never tells them who caused it even if he sometimes can figure it out. He does not want the patients to focus on negative emotions towards other people, which he considers will harm the beneficial effects of the ruqya.
The satanic meridians: how to physically hurt the djinns
Say: I seek refuge in the Lord of mankind, The King of mankind, The God of mankind, From the evil of the sneaking whisperer, Who whispereth in the hearts of mankind, Of the jinn and of mankind. (The Qur'an 114)

I noticed that Raheem, after diagnosing the heated areas, used his fingertips on different pressure points. At first, he was reluctant to talk about it, but he later disclosed that the pressure points are called "satanic meridians." He was initially secretive about this practice because it can harm the patients if misapplied. He explained that if a djinn resides in the patient's body, for example, he can feel the heat from the temple and around the ear. If it is waswasa, he will often find it in the cavity under the ear. If he wants to confirm the presence of a djinn, he puts pressure under the larynx for half a minute and then observes the reaction.
If Raheem has encouraged the djinn several times to leave the body during the ruqya but it refuses, applying pressure on a certain point can, according to him, force the djinn to do so without hurting the patient. He reported having seen several raqis strike a patient's body with a stick in order to force the djinn to leave. Although he stated that this method will harm only the djinn, and not the possessed patient, he still does not use this method when in Sweden because it might be illegal. Therefore, he prefers the pressure points when force is needed to cast out a djinn. Raheem related that several raqi in Sweden apply pressure to the throat. 10 However, according to him, he is the only one who uses the other pressure points.
In an earlier paper on the practice of ruqya (Marlow, 2023a), I discussed how the lesser degree of violence used by raqis outside the Arab countries is probably an adaptation to local norms and customs. The raqi informant described in that paper uses a siwak (a small dental stick) instead of a big stick when applying the striking method of ruqya in Sweden (ibid.: 9). Further, the findings of a study based upon interviews with sixteen raqis (from Egypt, Saudi Arabia, Bahrain, Pakistan, India, and Trinidad) indicate that the practice of striking djinns with a stick is more common in Arab countries than in India or Pakistan (Philips 2007: 165-166, 199). Raheem's statement here strengthens my earlier assumption that the practices associated with ruqya are adjusted to European norms and laws.
How to cast out djinns
Two weeks ago, I [Raheem] made an emergency visit to a woman in the [Stockholm] suburbs. I think it [the djinn] was there because of love. It is usually much stronger when a djinn possesses because of love. Furthermore, sometimes they will reenter after they have been extracted. I always make sure that they [the afflicted] sit because if they lie on the floor, they have more freedom to react the way they want to [enact the possession physically]. Then I came to a section [Qur'anic verse] where the djinn reacted. I immediately pressed a point [of the satanic meridians] before I addressed it [the djinn] to demonstrate that I could hurt it. With great authority, I told it: "Listen to me"! I never gave it a chance to respond. "You have a choice to make. You can leave the easy way, I will tell you how, or you have to [...]". Sometimes I do the Shahada [the Islamic creed], too. "You can become Muslim, so it is easier for you [to leave]. If you want to leave the easy way, the righteous way, raise your right hand". I then told it: "First, we have to break your contract out of safety". I was not sure at that time whether or not there existed any [sihr-related] contract [with a sahir, a human sorcerer], or what the reason was for the djinn being there […] "I will then read something for you which will make it easier for you to leave". When it finally tried to leave, it was a very persistent djinn, and I had to leave the afflicted person alone for a while for it to leave through the legs. (Interview with Raheem, January 21, 2014)

My experience when interviewing both other Sunni raqi and West African marabouts is that they often instruct the djinns to exit through the patient's big toe or thumb. However, Raheem usually uses the left leg and seldom the hand or thumb. When I asked him why the mouth is not used, he told me that there is no manageable exit channel. According to Raheem, the patient will occasionally feel the djinn in his teeth but nowhere else in the mouth. However, Raheem explains, the djinns can exit if you make the patient vomit. He is, on the other hand, unsure if "the vomit is pushing them out" or "if they reside in it." He also told me that djinns can exit through the hair.
As described in other studies (cf. Philips, 2007; Maarouf, 2007; al-Subaie and Alhamad, 2000), the raqi traditionally interrogates the djinns for the purposes of arriving at a diagnosis. Raheem was initially taught this method, but he is very skeptical of it because he considers the djinns to be unreliable and, moreover, believes that it is immoral to cooperate with them instead of relying solely upon the Qur'an. His aim is to develop an enhanced method of ruqya in which there is no dialogue with the djinns. His diagnostic method using his palm should be so precise that there is no need for him to ask the djinn why it is there. The sensations felt by the palm when reciting, together with later interviews with the patients about their symptoms and dreams, should be enough, according to Raheem.
Sihr with and without djinns
Raheem explains that, often, when sihr is involved, the djinns reside outside the victim's body and try to find an opening through which to enter. For example, if the sahir binds the djinn to a physical substance, he makes the victim swallow it. According to Raheem, this phenomenon can even happen in a dream and will still attract djinns. He added that humankind has three emotional levels: the physical (everyday) level, the ecstatic or dreaming level, and the spiritual level. It is common for djinns to possess people at the second level, either in a dream, in an exalted state (caused by extreme joy or sorrow), or when they are intoxicated. In dreams, djinns often attack people in the shape of dogs or snakes. If one dreams of falling or drowning, it is the way one's subconscious reveals that one has been attacked by sihr, according to Raheem.
Raheem occasionally finds that no djinns are involved at all, only the negative energy of the sihr. He believes that those negative energies might create cancer or mental illnesses. Another example given by Raheem of sihr without any djinns involved is cursing. The sahir will curse an object that either has been in contact with the intended victim (like a shirt) or will come into contact with them (like a comb). Another method of sihr is cursing a picture of the victim. 11 I asked Raheem how he gained knowledge of the practices of sihr. He told me that a sahir will occasionally regret his wrongdoings and try to return to the umma (the Muslim community). In that case, he will speak about his experiences afterward. However, Raheem has gained most of his knowledge of the practices of a sahir from other raqi during his training.
Five cases of Raheem performing ruqya, field notes, and discussion
The long-term patient

The patient (P1) suffers from visions of birds flying around in his bedroom at night. P1 is convinced that the birds are in reality djinns. His troubles started about forty years ago in North West Africa when he was a young boy. He suspects that he was first struck by sihr at that time, but his difficulties increased when he moved to Sweden. Here, he had several relationships with Christian and Jewish girlfriends and drank alcohol, all of which he believes further exposed him to sihr. For the last fifteen years, he has followed an Islamic way of life, and the effects of the sihr have decreased. He explained that a physical sign of sihr was that his hands and feet became blackened by poison. This sign has now disappeared completely. However, his loss of hair, which he blames on sihr, has not reversed. Other signs of sihr, according to P1, were that he perceived himself as always being unfortunate, and he often quarreled with his friends and former girlfriends for no apparent reason. He also frequently used to roll his eyes with no discernible cause. The eye-rolling also disappeared after he changed his lifestyle and had several ruqya sessions. P1 speaks Swedish fluently and is very social, charming, and talkative. I have the impression that he liked the attention of me being present. 12 During the ruqya, P1 sits on the floor in the direction of the Kabah in Mecca. Raheem places his palm on the crown of P1's head and recites for half an hour. At the end of the session, he blows in his hands and on P1's head. Afterwards, Raheem concludes that no djinns are residing inside P1; they are only sometimes whispering to him (waswasa). Raheem tells P1 to continue his daily recitations of one thousand repetitions and adds two new suras to the ones given earlier (112, 113, 114). He should combine the reading with digesting honey mixed with vinegar each morning.

On a later occasion, I asked Raheem if P1 is a "ruqya junkie". 13 Raheem told me that P1 used to visit several local raqi before they met. He also spent much time reading the Qur'an. Raheem thought that this made him "go around in circles" instead of progressing. He initially instructed P1 to read Salawat (blessings of the Prophet) instead in order to counter his earlier behavior. That initiated a radical change for P1, according to Raheem; both his depression and his physical symptoms disappeared. He believes that P1 had previously read the Qur'an in an obsessive way, without any heart, and had done so with the sole purpose of solving his problems. Now, Raheem explained, P1 has started to read the Qur'an again in a more sensible and heartfelt way. Raheem does not believe that djinns caused the rolling of P1's eyes but that he instead unconsciously does it himself.

12 P1 had also met my other raqi informant, "Didan," and praised his work (Marlow, 2023a). However, Didan only wanted to treat him once, according to P1, for reasons having to do with envy, perhaps because they are from the same country and know each other's families. P1 confirmed some of the stories that Didan told me regarding his more successful cases. 13 Resembling someone who alternates between various Christian charismatic churches in order to get a "deliverance fix" from all the attention they receive and the katharsis they experience when they are exorcised (Hunt 1998: 221-228).
Raheem informed me that he has had several patients who felt that they needed additional sessions of ruqya after he considered the procedure completed and the symptoms gone. However, he explained that he does not think that they do so because they want to re-experience a feeling of katharsis. Instead, he believes that they crave a sense of safety and security.
The case of P1 also indicates (as was also found in my earlier studies of ruqya) that the structure of a European ruqya implies that a return to Islam (not assimilation in the diaspora) is the solution for issues regarding one's health and safety. Turning to Islam and the Qur'an is the cure, and adhering to Islamic practices in the diaspora is the vaccine needed to prevent future health problems, according to the two raqis I have observed and interviewed. From my outsider perspective, as a side effect, ruqya might function as a method of bringing secularized European Muslims back to the umma by increasing their awareness of reciting the Qur'an and conducting daily prayers for their overall well-being.
The patient whose ability to work was affected

P2 is in her late twenties and is of North African descent. Her problems started three years ago, and she was diagnosed with "rheumatism in the blood" by a physician. She claims that Western medicine had no effect on her. However, she believes that ruqya and prophetic medicine have provided some relief in regard to her rheumatic problems: "Before, I felt well 10% of the time, and now it is 90%." She also experiences "mental blocks" at work. Before the session begins, she asks Raheem for a new kind of prophetic medicine because the previously given one makes her vomit. P2 speaks fluent Swedish. She passionately insists that ruqya and prophetic medicine are far superior to Western treatments. She tells me that her mother is convinced that she [P2] has been affected by sihr. However, P2 does not believe in the existence of sihr herself. Before the session begins, Raheem opens the door towards the main prayer room in the mosque because P2 has no close relatives present at the ruqya. During the recitation, she experiences both the rheumatism and the mental blocks making her head feel heavy. Raheem therefore continues reciting until the heavy feeling starts to fade; however, it does not disappear completely, according to P2. After the session, her pulse is checked to diagnose whether any new sihr has affected her. She has been instructed by Raheem to ingest a mixture of honey, vinegar, and olive oil as a preparatory measure for expelling the sickness. She should also read Al-Fatiha and Ayat al-kursi combined with the last three suras once after each prayer session and before she goes to bed. She receives another sura recommendation from Raheem that she is supposed to read thirty-three times after the others. Finally, she should read a dua (prayer) asking God for forgiveness seventy times every night. Raheem then tells her to return in six weeks for a follow-up. 14 Raheem calls the prophetic medicine an oxymel. He explains that it is originally based on Greek medicine but that it is recommended in the Sunna. 15

From a Western academic perspective, one might be surprised that P2 told me that she does not believe in sihr. Despite her disbelief, she reported regularly coming to the mosque for Raheem to perform ruqya to heal her from sihr specifically. This is an example of the dichotomy of lived religion vs. idealized religion as discussed at the beginning of this paper.
Within the study of religions, an individual's faith (or religious beliefs) has most often been approached as a static attribute and from a mono-cultural context. My view is that this is erroneous and based upon a Christian concept of what constitutes "religion." Unlike the Roman, Greek, or Jewish "religions," which were primarily based upon ethnicity, Paul introduced Christianity as a shared community of faith (cf. Colossians 3 in the Christian Bible). Instead of constituting ethnic traditions and practices, "religion" primarily became a faith and belief system.
If faith is an essential factor for the success of a therapeutic ritual, do the patients need to adhere either to faith in their local cultures (local remedies) or to faith in one universal biomedical culture (the enlightenment version of modernity)? My earlier studies on exorcism in new religious movements in Stockholm (Marlow, 2011) have shown that a patient often switches between several alternative therapies in a multicultural setting instead of choosing only one. One day, they may visit a conventional health-care provider; the next day, they may visit a local shaman or New Age-inspired healer. They may also combine prescribed pills bought at the pharmacy with homeopathic medicines and individually created diet plans inspired by the Internet and other popular media. Drieskens (2008) describes a similar phenomenon among her Egyptian informants.
The logic behind these faiths, contradictory from my perspective, may instead be what Coleridge termed the "willing suspension of disbelief." For "modern" individuals (i.e., those educated according to the principles of Western enlightenment modernity) to collectively participate in and fully experience either a religious therapeutic ritual, an entertaining 3D movie, or a scary roller coaster ride, it is essential for them to be "united by a willingness to momentarily suspend certain critical observations in favour of something prevented by those observations" (Jackson, 2012: 299).
With this mindset of a willing suspension of disbelief, combined with sufficient time, it can probably be explained why the same person can have the necessary faith in two or more contradictory therapeutic systems. Alternatively, like P2, they can have faith in a form of therapy but not in the diagnosis given. This positive, result-oriented mindset of the ritual actors is coherent with the theoretical focus that the important issue when studying ritual is what it does, not what it means (Seligman et al., 2008: 15). 16

The dramatic patient

P3 is a tall young man of East African descent who is not very talkative when I meet him. He is an active basketball player who lives in northern Sweden. He has previously been treated for sihr in his knee. Since then, he has experienced new problems with sihr in one foot. During the interview, he admits that he stopped reading his prescribed prayers when the sihr in the knee disappeared. He also complains about recurring nightmares involving being bitten by snakes. During the ruqya, the affected foot trembles intensely, and Raheem bends P3 forward and taps him on the back; after the session, he explained to me that he did so in order to draw the possessing djinn out of the foot. With his palm, Raheem feels the strong presence of several djinns tormenting P3, one inside and the others around him. He draws the possessing djinn up to P3's head and then applies intense pressure with his fingers to the cavity under the ear and the bridge of the nose. While reciting, he holds a receptacle with water close to his mouth. After that, he sprays the blessed water on P3's affected body parts. Whenever P3's left leg starts to tremble intensely, Raheem calms him down. [During the subsequent interview, Raheem explained that he disapproves of too much "drama" taking place during the ruqya.] Finally, P3 is told to read Ayat al-kursi and the last three suras three times at each of the five prayer times and before going to sleep. He is also given tea bags, told to drink the tea, and instructed to bathe in the used tea leaves for protection.
When Raheem first met P3, he spent time converting and then interviewing P3's possessing djinn as he found the djinn very interesting. It told him how it interacted with Swedish [non-African] djinns, e.g., with dwarves [in contrast to P3, who is very tall; my reflection].
According to Raheem, the first djinn was a lion djinn. He explained to me that non-religious djinns from Africa usually identify themselves as lion or snake djinns. The people who had performed the sihr on P3 were also of East African descent and adhered to a local religion Raheem did not recognize. After the djinn was cast out, P3 did not return for a long time. Raheem was surprised when P3 came back because he had taught him how to protect himself against sihr.
After this experience, Raheem decided to minimize his attention to the djinns when performing ruqya in general. Instead, he would focus on the cause, e.g., sihr. He explains that sihr works "like a magnet" that will attract new djinns if not neutralized. One must therefore treat the reason that djinns are attracted to the person and not just evict them. 17 According to Raheem, one reason not to focus on the djinns when performing ruqya is that they usually lie and, if sihr is involved, they do not have as much power or knowledge as they claim to possess. Raheem will command the first djinn to bring the others if more than one is involved. He has specific Qur'anic verses to facilitate this. He told me that it generally is relatively easy to cast out the other djinns after the first djinn has left.
Raheem also suspects that it might be psychologically unhealthy for the patient if he pays too much attention to the djinns residing in the patient's body during the ruqya. Therefore, in contrast to when he started as a raqi, he is more direct nowadays and evicts them as quickly as he can.
The preschool patient
P4 is roughly three years old and is accompanied by her mother and grandmother. None of them speak Swedish or English, only the language of Raheem's mother's native country. P4 sits in her mother's lap during the session. Raheem gives her candy and makes sure she smiles before starting the ruqya. Raheem is gentler during the session than he is with adults. In the beginning, he only blows around her. He also abstains from spraying water. He only touches her head lightly at the end of the session. There is no veil separating her hair from Raheem's hand as is the case with the adults he treats. Finally, Raheem recites verses twice over the water and then P4 and her mother drink three sips each.
One might suspect that this gentler version of ruqya is an adaptation to Swedish laws because a child is involved. Moreover, it is impossible for me to know whether Raheem adapted any of his practices because I was observing him. However, based upon my fieldwork with Raheem, my subjective opinion is that this gentler version reflects Raheem's personality as a healer. Raheem claims that he has carried out ruqya on a three-year-old child, which is most probably illegal according to Swedish law. 18

The training of A, a female raqi apprentice

P5 is thirteen years old and is dressed in a pink hoodie, pink sweatpants, and a veil. She is accompanied by A, who is the aunt of P5. They both speak fluent Swedish and are Muslims of Roma descent. P5 is very shy but agrees to let me observe her session. She regularly comes to Raheem. All the females in the family seem to be afflicted with sihr-related problems. P5's mother and grandmother are also regular patients of Raheem. For this reason, he is providing A with training to allow her to perform ruqya on her family every day between the sessions with Raheem. P5 has been diagnosed as a victim of a case of evil eye that has been untreated for five years, as well as being affected by a weak curse connected with their apartment. A explains that P5's head shakes when she is performing ruqya on her at home. Raheem is very gentle with P5 and, before starting the ruqya, he explains that it can be hard in the middle of the treatment but that it will later become easier. He tells her to let him know if she wants him to stop at any time during the session. Raheem blows after each sura, first at her head and then over a bottle of water. She has her hood over her head when the session begins but removes it after five minutes to expose her veiled head. During the ruqya, Raheem asks A if she can feel the afflicted area on P5's head. A has her hand on P5's head and locates a spot she describes as being hot and having a hole in the center. It is four and a half finger-widths from the hairline on the left side. Raheem confirms that she has found the correct sihr spot. He shows her two other affected spots, one above the temple and the other at the center of the neck. Raheem explains that one of the last two spots indicates that the evil eye has evolved into sihr. He continues reciting and blowing on P5. At the end of the session, he asks permission to sprinkle water on her. After P5 agrees, he asks her to close her eyes. Then, he sprinkles water on her head, hood, and face three times and asks her to open her eyes. P5 giggles. She is then told to drink three sips of water. She is given the bottle with the rest of the blessed water and is instructed to drink the remainder over three days at home. Further, P5 should recite Al-Fatiha seven times each morning and evening and listen to suras 36 and 27 for forty days. After the mid-day prayer, Raheem performs ruqya on A. As in the sessions with his other patients, he sits on an office chair while they sit on a prayer rug on the floor. A complains that she sometimes feels a negative sensation in her right wrist. Raheem explains that it is caused by performing ruqya on her family members, during which she draws all negative influences from the rest of the body up to the afflicted head. Therefore, she is told that it is crucial to always blow on her hand afterwards. This is similar to the requirement to perform wudu (the ritual washing) but needs to be done only once after each session.
Because of her perceived personal djinn-related problems, she is advised not to perform ruqya on more than three people each day. She also tells Raheem that when she stops reading, she has nightmares. However, she has learned to perform ruqya on the djinns when sleeping. She often wakes up with her palm on her stomach after she has had this experience.
Raheem explained after the session that A has brought several family members to him for ruqya. She once phoned him in the middle of the night on behalf of one of her cousins. Raheem then told her what passages to read over water and to give her cousin the water to drink until Raheem could get to their place. A later brewed tea with some of the remaining water and, she claimed, "Allah [the word] appeared in the cup." When he arrived, her entire family was excited because of this wondrous event. Even before Raheem heard about the teacup incident, he had a feeling that A would be a good raqi. Raheem told her to pray to God seven nights in a row and then decide if she would like to become a raqi in order to help her kin.
It will take two months of training for her to get to a basic level of proficiency. After that, additional training will be needed to teach her how to evict more stubborn djinns. She has already participated in several ruqya-sessions with Raheem and has performed sessions for family members as well.
While in training, she will assist him one day each week, during which he will observe and instruct her. Raheem told me that he thinks all large families need someone who knows basic ruqya. According to Raheem, there is a great need for more female raqi in particular in Sweden. He shared with me that he had encountered several female raqi abroad while he was in training. However, he said that he does not know if they treat both male and female patients outside their family or only the latter.
A raqi's reflections on non-djinn-related psychological afflictions
Raheem expressed a great interest in psychotherapy and in how to integrate it with Islam and ruqya. He explained that several people have approached him for help without any sihr-or djinn-related problems, most commonly because of gambling problems or watching pornography excessively. Several of his patients suffer from depression but do not feel comfortable with Western psychiatry. He made a comparison between ruqya therapy and cognitive behavioral therapy and psychoanalysis. The common ground, according to him, is that the goal is to change negative behavior, remove bad influences, and make the patient healthy and socially functional.
Raheem told me that he is convinced that some of his patients who claim that they see djinns are in fact psychotic. He suggested that it might be sihr that has caused the psychosis. Another reason for their condition might be that they have taken drugs. Raheem described ruqya as being very beneficial for most people. However, according to Raheem, becoming obsessed with listening to the Qur'an without any need for ruqya is an indication of psychosis; such patients escape their underlying problems by continuously listening to the Qur'an. He explained that in such cases he tries to comfort them in ways other than carrying out ruqya. He compared the situation with how a Western physician might use their psychological experience to cure patients with problems that are not of a physical nature. He also stressed that listening to patients is helpful in itself, even when no other treatments are given.
Raheem insisted that one should be careful when offering religious therapy not to encourage what he termed "unbalanced behavior" since this, according to him, might cause conditions such as OCD. However, he clarified that he does not work with ritual symbols despite his interest in secular psychotherapy. He said that he sincerely believes in djinns, sihr, and other "hidden illnesses not yet recognized by Western medicine."
Concluding methodological remarks
Before I met Raheem, I had only interviewed one other raqi, Didan, to ascertain the details of his way of performing ruqya (Marlow, 2023a; Marlow, 2023b). After my first observation of Raheem's personal performance of ruqya, I returned to Didan and asked him if he also uses his hands during ruqya, which he immediately confirmed. For Didan, this practice was not in any way secret; he had simply either forgotten to tell me about it or considered the complementary embodied techniques less important to discuss than the theological and ritual theories behind his performance of ruqya. One could perhaps regard this omission as an example of too much focus being placed on the textual dimension of Islam when discussing the lived practices of Islam from a ritual specialist's insider perspective, and too little attention being given to the universal techniques of healing through bodily contact. This omission also demonstrates the limitations of interviews as an ethnographic tool to "collect ex post accounts of practices that were performed in another context" (Dupret et al., 2012: 2).
At first, Raheem was also very hesitant to explain why he squeezed and pressed on various points on the body during the ruqya. However, since he had promised to answer my questions, and I had observed these practices repeatedly, he finally disclosed his use of the "satanic meridians." The case of P2 is an example of the dangers of presuming that someone who regularly visits a raqi unreservedly adheres to the full theological worldview of the ritual of ruqya (cf. Schielke, 2010). Despite her claims of not believing in the Islamic diagnosis of sihr, she still, in her pragmatic identification as a Muslim, preferred ruqya against sihr as a more potent Islamic therapy for her illnesses than competing secular therapies.
Finally, I hope that this paper has exemplified that, although detailed descriptive work may be criticized for its limited capacity for providing a general explanation, in comparison with grand-scheme theories, it has a deeper explanatory value, capturing human ambiguity, as "a thorough analysis of a chunk of the world as it actually functions" (Dupret et al., 2012: 1).
Author's contributions Not applicable.
Funding Open access funding provided by Abo Akademi University (ABO).
Declarations
Ethical approval Not applicable.
Competing interests The author declares no competing interests.
|
v3-fos-license
|
2018-04-03T03:33:11.474Z
|
2018-01-08T00:00:00.000
|
6746684
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.scielo.br/pdf/rlae/v25/0104-1169-rlae-25-e2949.pdf",
"pdf_hash": "fda59cbcf3206e185c9bc2d7e91e11d08363c8a2",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44410",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "fda59cbcf3206e185c9bc2d7e91e11d08363c8a2",
"year": 2017
}
|
pes2o/s2orc
|
Pregnancy complications in Brazilian puerperal women treated in the public and private health systems
ABSTRACT Objective: to analyze the prevalence of pregnancy complications and the sociodemographic profile of puerperal patients with complications, according to the form of financing of the childbirth service. Method: cross-sectional study with interviews of 928 puerperal women whose childbirth was financed by the Unified Health System, health plans and private sources (other sources than the Unified Health System). The sample was calculated based on the births registered in the Information System on Live Births, stratified by hospital and form of financing of the childbirth service. Data were analyzed using the chi-square and Fisher's exact tests. Results: the prevalence was 87.8% for all puerperal women, with an average of 2.4 complications per woman. In the case of deliveries covered by the Unified Health System, urinary tract infection (38.2%), anemia (26.0%) and leucorrhea (23.5%) were more frequent. In turn, vaginal bleeding (26.4%), urinary tract infection (23.9%) and leucorrhea (23.7%) were prevalent in deliveries that were not covered by the Unified Health System. Puerperal women who had their delivery covered by the Unified Health System reported a greater number of intercurrences related to infectious diseases, while women who used health plans and private sources reported intercurrences related to chronic diseases. A higher frequency of puerperal adolescents, non-white women, and women without a partner was observed among those assisted in the Unified Health System (p < 0.001). Conclusion: the high prevalence of complications indicates the need for monitoring and preventing diseases during pregnancy, especially in the case of pregnant women with unfavorable sociodemographic characteristics.
Introduction
Gestation is a physiological event in the women's life usually free from complications. However, hundreds of thousands of women die every year due to pregnancy and childbirth complications (1) .
Health problems during pregnancy have increased worldwide, mainly due to complex interactions between demographic factors and lifestyle, as well as advances in modern medicine (2) , with new diagnostic and therapeutic practices.
Among the main clinical pregnancy complications reported in the literature are Urinary Tract Infections (UTIs) (3)(4) , pregnancy-induced hypertension (PIH), anemia and hyperemesis (5)(6) . In the United States, a multicenter study of hospitalizations during pregnancy showed a 71% increase in the occurrence of PIH between 1994 and 2011 (7) . Another study also carried out in the United States pointed out that the main intercurrences associated with maternal mortality were pre-eclampsia and obstetric hemorrhage (8) .
Another common complaint in pregnancy is urinary tract infection (UTI), whose severity and frequency are well known (9) . This complication represents one of the main risk factors for preterm birth, intrauterine growth restriction, low birth weight and eclampsia (10) .
Anemia can occur in up to 19% of pregnant women worldwide according to estimates (11) and it is associated with low sociodemographic profiles, being more common in developing countries (12) .
Women with unfavorable socioeconomic levels, preexisting conditions such as diabetes, hypertension, anemia and heart disease, and adolescents or women over 35 years of age may be more likely to experience undesirable outcomes, since complications during pregnancy are predictors of maternal and fetal morbidity and mortality (13)(14) . It is estimated that, for each woman who dies during gestation, another 20 to 30 experience acute or chronic complications, with permanent consequences that impair the body's functionality (15) . Intercurrences during pregnancy also affect the allocation of financial resources for maternal and child health. In the United States of America, a survey of 137,040 infants between 2007 and 2011 found that 75.4% of women had at least one complication during the gestation period, which increased the cost of newborn care from US$987 to US$10,287 (16) .
In Brazil, many programs have been implemented for assistance, prevention and control of morbidity and mortality of women during pregnancy, childbirth and puerperium, especially at the national level, such as the Stork Network*, and state level, such as the Paranaense Mother Network in the State of Paraná ** .
However, these efforts have not fully achieved the expected goals. The Fifth Millennium Development Goal was to achieve a reduction of 75% in maternal mortality rates by 2015, but this was only 45% (1) worldwide, including in Brazil.
Knowing the prevalence, the main types of diseases and disorders, and the sociodemographic characteristics of puerperal women with complications can therefore support the planning of care during pregnancy, which motivated the present study.

Only two puerperal women refused to participate in the study, owing to exhaustion from long hospitalization since gestation; these women were replaced. An online form was used in the Google Docs app, which allows agility in both collecting and storing data in spreadsheets. During fieldwork, the spreadsheets were checked daily to ensure the safety and quality of the data; when necessary, further consultation of the medical records and contact with the puerperal women were performed.
For information on the complications, the women were first asked whether they had experienced any of the conditions investigated.

As expected, the majority of women who had SUS births (91.6%) received prenatal care exclusively in the public network, whereas the majority of puerperal women with non-SUS births (99.5%) had prenatal care in the private or mixed network (Table 4).
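As a rough illustration of the comparison by form of financing described above, the sketch below runs a chi-square test (with Fisher's exact test as the small-sample fallback) on a 2x2 table. The counts are placeholders chosen only to approximate the reported UTI percentages (38.2% vs 23.9%); they are not the study's actual data.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table: form of financing (SUS vs non-SUS) by report of urinary tract infection.
# Placeholder counts chosen only to approximate the reported percentages; NOT the study's data.
table = np.array([
    [177, 286],   # SUS births:     UTI yes, UTI no  (177/463 ~ 38.2%)
    [111, 354],   # non-SUS births: UTI yes, UTI no  (111/465 ~ 23.9%)
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}, dof = {dof}")

# Fisher's exact test is the usual fallback when expected cell counts are small.
odds_ratio, p_exact = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, exact p = {p_exact:.4f}")
```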
Discussion
The high prevalence of pregnancy complications reported by the puerperae of Maringá is consistent with what has been reported elsewhere, even taking into account differences in the socioeconomic profile of the population and in the form of data collection (17) .
In addition to the high overall prevalence, the main complications reported, such as UTI and PIH, are in agreement with the literature (4,6) , although with different prevalence values. The prevalence of the most reported complication during pregnancy was higher than the expected mean of 20%*, similar to a study conducted in Nigeria in 2011, in which a prevalence of 21% was found (18) .
Responsible for approximately 10% of antepartum hospitalizations, UTIs, sometimes asymptomatic, may progress to pyelonephritis and cystitis and trigger complications for the fetus, such as premature birth and low birth weight (9,15) . These complications can be avoided by quality of care during pregnancy and by early diagnosis and treatment, as recommended by national, state and municipal protocols for prenatal care (19) . A study carried out in a hospital in Ethiopia in 2012 to analyze the prevalence and predictors of maternal anemia found a prevalence of 16.6% (21) , a lower value than that found in the present study. As a public health problem affecting low, middle and high income countries, the effects of anemia during pregnancy include low birth weight, some neurological diseases of the fetus and increased risk of maternal and perinatal mortality (20) . The women probably reported the other occurrences on the basis of diagnostic tests and explanations received during care. PMR occurs more frequently during the 24th to 26th week of pregnancy, and less frequently in the subsequent weeks (22) , and consists of the separation of the placenta implanted in the uterus (23) . In the present study, the highest frequency of PMR among women who had non-SUS births may be associated with the higher frequency of ages above 35 years, a major risk factor for PMR*, in this group.
PIH presented a higher incidence in this study than that found in a review of pregnant women from several countries of the world, which reported an incidence of 5.2 to 8.2% (24) . In this study, it was observed that the highest prevalence of PIH occurred among puerperal women who had non-SUS births, and the highest prevalence of gestational diabetes among those who had SUS births. PIH is one of the complications with an incidence of 5 to 10% in pregnant women*, and it is considered a major cause of maternal morbidity and mortality in developing countries, with high rates of severe maternal morbidity and maternal mortality in Brazil (25) .
Other diseases reported in this study had a higher percentage among women who had SUS births, with also higher proportion of some infectious diseases such
|
v3-fos-license
|
2019-05-19T13:03:57.596Z
|
2016-11-28T00:00:00.000
|
62838764
|
{
"extfieldsofstudy": [
"Economics"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.4172/2151-6219.1000269",
"pdf_hash": "8979b09bd69452f3cb5e0c0fb38432a4c0b16bb7",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44411",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"sha1": "e03cd0a3a11721f0e64178b7b360cd1b5e76824e",
"year": 2016
}
|
pes2o/s2orc
|
Financial Inclusion and Financial Performance of Microfinance Institutions in Rwanda: Analysis of the Clecam Ejoheza Kamonyi
The present research, named "Financial inclusion and financial performance of microfinance institutions in Rwanda: Case of CLECAM EJOHEZA KAMONYI (2011-2014)", had the main objective of evaluating the implementation of financial inclusion in microfinance institutions in Rwanda. The study used a questionnaire addressed to a sample of 162 members, calculated from a total population of 11,121 CLECAM EJOHEZA KAMONYI members, to collect primary data, and the technique of documentation for secondary data. For the treatment and analysis of data, frequency calculation and tabulation, the MFI Factsheet and multiple regressions were used. The major findings show that 70.4% of all respondents were pushed by quick service and appropriate products to join CLECAM EJOHEZA KAMONYI. 18.1% of all respondents chose this institution because it operates near them, while 11.5% of respondents said that they chose this institution for other reasons. 63.6% of all respondents said that the products and services respond to their needs at an excellent level, and 36.4% of all respondents are satisfied by the products and services at a very good level. CLECAM EJOHEZA KAMONYI performs well financially. In 2014, the portfolio at risk was 2.2%, the Operating Self-Sufficiency reached 143%, the portfolio yield was 25.6% and the operating expense ratio was 17%. The evolution of deposits depends on the number of members of CLECAM EJOHEZA KAMONYI, but the number of branches does not affect the deposits. The operating expense depends on the number of branches but not on the number of members. The evolution of members has an effect on the evolution of share capital, but the number of branches does not affect the evolution of paid-up share capital. The deposits are correlated positively to net income at 92.6%. Citation: Harelimana JB (2016) Financial Inclusion and Financial Performance of Microfinance Institutions in Rwanda: Analysis of the Clecam Ejoheza Kamonyi. Bus Eco J 7: 269. doi: 10.4172/2151-6219.1000269
Introduction
Microfinance, according to Otero [1], is "the provision of financial services to low income poor and very poor self-employed people". According to Ledgerwood [2], these financial services generally include savings and credit but can also include other financial services such as insurance and payment services.
Anne [3] states that the microfinance is the practice of providing a variety of financial services to the low-income and poor clients. The diversity of services offered reflects the fact that the financial needs of low-income individuals or households and small enterprises can change significantly over time. These services include loans, savings, insurance, and remittances. Because of these varied needs, and because of the industry's focus on the poor, microfinance institutions often use non-traditional methodologies that are not used by the formal financial sector [3].
Financial inclusion typically defined as the proportion of individuals and firms that use financial services has become a subject of considerable interest among policy makers, researchers, and other stakeholders [4]. In international forums, such as the Group of Twenty (G-20), financial inclusion has moved up the reform agenda. At the country level, about two-thirds of regulatory and supervisory agencies are now charged with enhancing financial inclusion. In recent years, some 50 countries have set formal targets and goals for financial inclusion [4].
After the 1994 Genocide in Rwanda, the microfinance sector made dramatic progress through the support of international and non-governmental organizations, especially humanitarian ones. These NGOs helped people with equipment and food for daily use, but also ran microcredit teaching programs [5]. During the emergency period, loans in some cases did not differ from grants or donations, which sowed confusion among the population. This fostered a non-repayment culture that resulted in non-performing loans and therefore had a negative impact on the results of microfinance institutions [5].
In Rwanda, the introduction of UMURENGE SACCOs, in conjunction with the expansion of bank and MFI branches, the introduction of agent banking and the modernization of financial services such as mobile banking, ATMs and mobile money, have all helped to drive financial inclusion in Rwanda [6]. According to the FINSCOP survey conduct in 2012, the percentage of Rwanda's population accessing formal financial services has doubled from 21% to 42% and those completely excluded from the formal financial system has dropped by almost half, from 52% to 28% between 2008 and 2012 [7].
Microfinance institutions that seek sustainability are reluctant to deal with the poor, because the income of the poor is not only low but also irregular, making them more vulnerable to external shocks and to uncertainties in their cash flows [8]. Thus, we want to know whether the introduction of financial inclusion has really had a negative impact on the financial performance of MFIs in Rwanda. In addition, as we have seen, the share of the Rwandan population with access to formal financial services has doubled. We therefore ask which factors accelerate financial inclusion on the side of Rwanda's microfinance institutions.
Research Objectives
The objective of the research is to analyse the implementation of financial inclusion for financial performance of microfinance institutions in Rwanda.
Specifically, this research has the following objective: • To analyse the implementation of financial inclusion in CLECAM EJOHEZA KAMONYI, • To evaluate the financial performance of CLECAM EJOHEZA KAMONYI, • To measure the correlation between financial inclusion and financial performance of CLECAM EJOHEZA KAMONYI.
Literature Review
According to Ledgerwood [2], microfinance has evolved as an economic development approach intended to benefit low-income women and men. The term refers to the provision of financial services to low-income clients, including the self-employed.
FATF [8] states that the term "financial inclusion" is about providing access to an adequate range of safe, convenient and affordable financial services to disadvantaged and other vulnerable groups, including low-income, rural and undocumented persons, who have been underserved or excluded from the formal financial sector. Financial inclusion dimensions are grouped into three: access, usage and quality [9].
Performance is the ultimate result of all the efforts of an organization. These efforts are to do the right things, in the right way, quickly, at the right time, at the lowest cost to produce the good results that meet the needs and expectations of customers, give them satisfaction and achieve the goals set by the organization [10].
A study published in the International Journal of Engineering Research and Development (2012) investigated why vulnerable people have limited or no access to financial products and services in Tamil Nadu (India). The results show that the first factor was "Access to Financial Services", the second factor was "Flexible Terms on Savings & Deposits", the third factor was "Flexible terms of borrowing", the fourth factor was "Access to information about various services" and the fifth factor was "Responsibility". In the book "Financial Literacy: A Step for Clients towards Financial Inclusion", Monique [11] states that financial inclusion is a multidimensional, pro-client concept encompassing better access, better products and services, and better use; without the third element, use, the first two are not worth much.
The issue regarding the best way to provide financial services to the poor has fuelled intensive debates between two different schools of thinking: institutionalism and welfarist. This opposition faces two requirements of microfinance: Targeting the poorest among the poor (social performance) and enhancing the profitability of the institution (financial performance). Is there a trade-off between these two performances or can they combine [12].
According to Herein et al. financial education is an important tool to address this imbalance and help consumers both accept and use the products to which they increasingly have access. Because it can facilitate effective product use, financial education is critical to financial inclusion. It helps clients to both to develop the skills to compare and select the best products for their needs and empower them to exercise their rights and responsibilities in the consumer protection equation
Methodology
This section summarized dimensions of the research, tools and techniques and methods used to achieve the research objectives.
Data collection
A questionnaire addressed to 162 members of CLECAM EJOHEZA KAMONYI (out of a total population of 11,121 members) was used to collect primary data, and the technique of documentation was used for secondary data. The sample size was calculated using the formula of Alain Bouchard, with a confidence level of 99% and a permissible error of 10%.
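As a hedged illustration of how a sample of this order can be obtained, the sketch below uses a Cochran-style calculation with a finite-population correction. This construction is an assumption, since Alain Bouchard's formula is cited but not reproduced in the paper, and the result (about 164) only approximates the reported 162.

```python
import math
from scipy.stats import norm

def sample_size(population: int, confidence: float = 0.99, margin: float = 0.10, p: float = 0.5) -> int:
    """Cochran-style sample size with a finite-population correction."""
    z = norm.ppf(1 - (1 - confidence) / 2)        # ~2.576 for a 99% confidence level
    n0 = (z ** 2) * p * (1 - p) / margin ** 2     # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population)          # finite-population correction
    return math.ceil(n)

print(sample_size(11_121))  # ~164, in the neighbourhood of the 162 reported in the study
```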
Data analysis
To assess the implementation of financial inclusion, the researcher used frequency calculation and tabulation by using SPSS 16.0 where the factors influencing the financial inclusion were identified.
For the evaluation of the financial performance, the researcher used MFI Factsheet 3.5. For the identification of the relationship between financial inclusion and microfinance financial performance, a correlation matrix extracted with SPSS 16.0 was used to show the correlation between net income and deposits, and multiple regression models were estimated by Ordinary Least Squares to determine the statistical inference between financial inclusion indicators (independent variables) and financial performance indicators (dependent variables). The following models were used, where M denotes the number of members, B the number of branches, D the number of active debtors, LD (or L) the loans disbursed, PR the portfolio rotation, ED the evolution of deposits, OE the operating expense, OSS the operational self-sufficiency, PY the portfolio yield and PaR the portfolio at risk:

Model 1: ED_t = β0 + β1·M_t + β2·B_t + ε_t
Model 2: OE_t = φ0 + φ1·M_t + φ2·B_t + ε_t
Model 3: OSS_t = γ0 + γ1·D_t + γ2·LD_t + γ3·PR_t + ε_t
Model 4: PY_t = α0 + α1·D_t + α2·L_t + α3·PR_t + ε_t
Model 5: PaR_t = δ0 + δ1·D_t + δ2·L_t + δ3·PR_t + ε_t

Besides these models, we computed the correlation between the net income and the deposits of CLECAM EJOHEZA KAMONYI. The correlation coefficient of two variables, sometimes simply called their correlation, is the covariance of the two variables divided by the product of their individual standard deviations. It is a normalized measure of how the two variables are linearly related and lies in the interval [-1, +1]. A correlation coefficient close to +1 indicates that the variables are positively linearly related, with the scatter plot falling almost along a straight line with positive slope; a value close to -1 indicates that they are negatively linearly related, with the scatter plot falling almost along a straight line with negative slope; and a value close to zero indicates a weak linear relationship between the variables.
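A minimal sketch of how Model 1 and the net income-deposits correlation could be estimated is given below. The yearly figures are illustrative placeholders, not the institution's actual data, and the column names are assumptions.

```python
import pandas as pd
import statsmodels.api as sm

# Illustrative yearly figures (NOT the institution's actual data); column names are assumptions.
df = pd.DataFrame({
    "year":       [2010, 2011, 2012, 2013, 2014],
    "members":    [6000, 7500, 9000, 10200, 11121],
    "branches":   [3, 3, 4, 4, 5],
    "deposits":   [120.0, 180.0, 250.0, 300.0, 360.0],   # e.g. millions of Frw
    "net_income": [10.0, 16.0, 22.0, 26.0, 30.0],
})

# Model 1: deposits explained by number of members and number of branches, estimated by OLS.
# (The elasticity reading in the paper, e.g. +1.267% deposits per +1% members, suggests a
# log-log specification; applying np.log to the variables would reproduce that interpretation.)
X = sm.add_constant(df[["members", "branches"]])
model1 = sm.OLS(df["deposits"], X).fit()
print(model1.params, model1.rsquared)

# Pearson correlation between deposits and net income (reported as 0.926 in the paper).
print(df["deposits"].corr(df["net_income"]))
```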
Analysis of implementation of financial inclusion in CLE-CAM EJOHEZA KAMONYI
The analysis of data, by using frequency calculation and tabulation, shows that many respondents, 51.2% of all respondents, have four years and above of experience with CLECAM EJOHEZA KAMONYI. It means that the answers given by the respondents are true because they know very well the institution.
The majority of the respondents were pushed to become members of CLECAM EJOHEZA KAMONYI by the quick service and the appropriate products (140 members of our sample). Other reasons such as methodology of solidarity group and proximity were responded by a small number of respondents (18.1% and 11.5% respectively).
Concerning the usage of services and products, 55.9% of all respondents use the savings product, 32.8% received the loan product and 11.4% of all respondents received the training on financial education. We assessed also the reasons that pushed some members to do not receive a given service or product. Some respondents didn't get loan because they did not want it or because they do not have information; 6% of the respondents did not get loan because they have loans in other financial institutions. The 78.4% of all respondents didn't receive training on financial education because they have no information about the training. Only two respondents said that they didn't receive the training because they have no time. According to the respondents who received the training (20.4%) said that the training was excellent (7.4%), very good (9.9%) and good (3.1%). This shows that the CLECAM EJOHEZA KAMONYI should increase training as long as this enhances the usage of its products and services.
The findings indicate that the product and services respond to customers' needs, this is confirmed by 63.7% of all respondents while 36.4% of all respondent said that they are very satisfied by the products and services. This shows that CLECAM EJOHEZA KAMONYI strives to adapt its product and services to its members' needs.
Generally, the respondents are satisfied with the distance between the institution and the members' residences: according to their perceptions, 32.1% of the respondents are very satisfied with the distance, 54.3% are satisfied and 13.6% are not satisfied. This indicates that the products and services of CLECAM EJOHEZA KAMONYI are close to its members. The microfinance institution should try to increase the number of branches, outlets and other service terminals so as to maximize the satisfaction of its members regarding distance. About the working hours, 74% are very satisfied, 25.3% are satisfied and 0.6% are not satisfied.
Suggestions about the loan were made by the respondents, the 23.7% of them suggested to reduce the security deposit, 20.7% suggested to increase the period of reimbursement, 17.2% suggested to reduce the interest rate, 12.6% suggested to reduce the period of loan analysis, 11.5% required to enhance lending methodology. Concerning the suggestions about the proximity and accessibility of CLECAMEJOHEZA KAMONYI, 46.1% suggested to increase the number of sub branches or outlet. 20.6% suggested the increase of working days and increase of working hours. The adoption of technology has been suggested by only 12.7%.
Reference made on the above main findings, the implementation of financial inclusion in CLECAM EJOHEZA KAMONYI is based on accessibility and adaptation of product and services to the member's needs. The financial education is not included in factors which accelerate financial inclusion in this institution.
Analysis of financial performance of CLECAM EJOHEZA KAMONYI
The analysis of financial statements (balance sheet, income statement and other financial information) indicated that CLECAM EJOHEZA KAMONYI is performing well year by year.
Evaluation of annually financial variation indicators:
The total assets grew by 121% in 2011, 51.8% in 2012, 5.3% in 2013 and 17.6% in 2014. Even though total assets kept growing, the decline in the growth rate from 2011 to 2014 shows that the trend is not good. This declining growth rate was driven by the composition of assets, since items such as the net portfolio, total deposits and total borrowed funds grew at decreasing percentages.
In the year 2012, the operating income grew by 91.8%, but this growth did not significantly increase the equity through a higher net profit, because the operating expenses also increased by up to 85%. A good situation would be when operating income increases and operating expenses decrease, or when the increase in operating income is greater than the increase in operating expenses. Regarding deposits and the number of members, the evolution of savings is presented in Figure 1. This graphic shows the evolution of total deposits and of the number of members. These two aspects grew in the same direction; their trend is promising for the future and suggests that the savings product responded to the needs of the members from 2010 to 2014. In 2012, when the PaR was very low, the portfolio at risk written off was 43.8%. This means that the decrease of the PaR to 0.5% was not caused by good loan management but by the write-off procedure of around 43%. The portfolio at risk of CLECAM EJOHEZA KAMONYI stood at 2.2% in 2014 (source: extracted from primary data using the MFI Factsheet, May 2015).

Sustainability: Return on Equity (ROE) is the most important profitability indicator; it measures an MFI's ability to reward its shareholders' investment, build its equity base through retained earnings, and raise additional equity investment [13]. There is no ROE for 2010 because the calculation of this ratio requires the data of 2009. The results mean that 100 Frw of equity in this institution generated 42.8, 32.9, 21.5 and 17.2 Frw of income in 2011, 2012, 2013 and 2014, respectively. Even if this ratio was relatively high in 2011, from 2012 it began to decrease; in other words, the trend is not satisfactory, with the ratio reaching 17.2% in 2014. CLECAM EJOHEZA KAMONYI has to put in place mechanisms to increase its net profit in order to improve this ratio. This situation indicates that, from 2012, equity and net profit did not increase at the same pace.
A Return on Assets is an indication of how well an MFI is managing its asset base to maximize its profits (Ruth, 2008). The ratio includes not only the return on the portfolio, but also all other revenue generated from investments and other operating activities [13].
Regarding the ROA, the assets of CLECAM EJOHEZA KAMONYI are performing well. The analysis shows that 100 Frw of assets generated an income of 9.3 in 2011, 6.9 in 2012, 5.0 in 2013 and 4.5 in 2014. The ROA decreased from 2012 to 2014, even though the results remained positive, because the profit is not sufficient in comparison with the total assets. The Operating Self-Sufficiency of CLECAM EJOHEZA KAMONYI shows that this institution covered all its expenses over the period of our research, as shown in Figure 3. In 2011, CLECAM EJOHEZA KAMONYI was able to cover all expenses at the level of 163.4%. In the other years, even if it was still able to cover all expenses, the trend was not very good because, for example, this ratio dropped to 141.5% and 143.7% in 2013 and 2014, respectively.
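The sustainability and efficiency indicators discussed here follow standard MFI ratio definitions; the sketch below shows one way to compute them, with the caveat that the MFI Factsheet tool may use slightly different averaging conventions for balances.

```python
# Standard MFI ratio definitions; the MFI Factsheet tool may average balances slightly differently.
def roe(net_income, avg_equity):
    return net_income / avg_equity                     # Return on Equity

def roa(net_income, avg_assets):
    return net_income / avg_assets                     # Return on Assets

def oss(operating_revenue, financial_exp, loan_loss_exp, operating_exp):
    return operating_revenue / (financial_exp + loan_loss_exp + operating_exp)  # Operating Self-Sufficiency

def portfolio_yield(interest_and_fee_income, avg_gross_portfolio):
    return interest_and_fee_income / avg_gross_portfolio

def operating_expense_ratio(operating_exp, avg_gross_portfolio):
    return operating_exp / avg_gross_portfolio

def par(overdue_loan_balance, gross_portfolio):
    return overdue_loan_balance / gross_portfolio      # Portfolio at Risk

# Illustrative figures only: an OSS of 1.43 corresponds to the 143% reported for 2014.
print(f"OSS = {oss(143.0, 20.0, 10.0, 70.0):.2f}")
```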
Efficiency and productivity:
The comparison between portfolio yield and Operating expense ratio is summarized in the graphic (Figure 4). This graphic shows that the income generated by the loan is greater than the cost engaged. For example in 2011, the cost of 100Frw given as loan was 17.4 while this amount generates 29.1 of income. In addition when comparing the operating ratio with portfolio yield ratio (29.14 in 2011, 30.76 in 2012, 24.04 in 2013 and 25.67 in 2014), there is a good margin between those two ratios [14].
Financial structure indicators:
A set of financial ratios was calculated to see the performance of CLECAM EJOHEZA KAMONYI in terms of meeting short and long term obligations.
The capital adequacy ratio showed that this institution had the capacity to cover its liabilities with available equity at the level of 29% in 2010, 27% in 2011, 26% in 2012, 34% in 2013 and 37% in 2014. CLECAM EJOHEZA KAMONYI has to continue in this direction and avoid a decrease in this ratio. The leverage ratio showed that liabilities were 3.4 times equity in 2010, 3.7 times in 2011, 3.8 times in 2012, 2.9 times in 2013 and 2.7 times in 2014. The more this ratio decreases, the better the institution is able to meet its short-term and long-term obligations (solvency). However, the ratio of short-term liabilities to total liabilities shows that CLECAM EJOHEZA KAMONYI had a problem mobilizing long-term external funds, because short-term liabilities were above 84% of all liabilities except in 2012, when this ratio was 61%.
In addition, the analysis shows that deposits represented about 89% of total liabilities (source: extracted from primary data using the MFI Factsheet, 2010-2014).
Relationship between financial inclusion and financial performance
The empirical analysis of the effect of financial inclusion on the financial performance of CLECAM EJOHEZA KAMONYI revealed that the deposits depends on the number of members of CLECAM EJOHEZA KAMONYI, but, according to the results; the number of branches does not affect the deposits of CLECAM EJOHEZA KAMONYI. When the number of members increases by 1%, the deposits grow by 1.267% ceteris paribus. This shows how much CLECAM EJOHEZA KAMONYI has to enhance its mobilization so as to increase its members which entails the performance of deposits.
The number of branches/outlets affects positively the operating expense (the probability of the estimated parameter is less than 5%) while the number of members does not! When the number of branches increases by 1%, the operating expense of CLECAM EJOHEZA KAMONYI grows by 0.112 ceteris paribus. Though the sign is positive, the variable of number of branches creates expenses to CLECAM EJOHEZA KAMONYI which affect negatively the net income of the microfinance.
The findings showed that the number of active debtors, loan disbursed and portfolio rotation have positive effects (positive sign) on the Operational self-sufficiency of CLECAM EJOHEZA KAMONYI at 5% level of significance. The probability of their estimated parameters is less than 5%. Therefore the following interpretation is made: -When number of active debtors increases by 1%, the Operational self-sufficiency increases by 0.7% ceteris paribus.
-When the loan disbursed increases by 1%, the Operational selfsufficiency increases by 0.41% ceteris paribus.
-When the portfolio rotation increases by 1%, the Operational self-sufficiency increases by 0.38% ceteris paribus -The expected sign matches with the estimated sign.
These variables should be controlled in CLECAM EJOHEZA KAMONYI and held at a constant threshold to allow a continued financial performance. Having a big number of active debtors, a rising loan disbursed and high portfolio rotation increases the Operational self sufficiency of CLECAM EJOHEZA KAMONYI [15].
According to the results the number of debtors, loan disbursed and portfolio rotation do not have effect on the portfolio yield of CLECAM EJOHEZA Kamonyi as long as the probabilities of their estimated parameters are greater than 5% level of significance. It means that portfolio yield was influenced by other variables which are not included in the regression model. The lack of statistical inference in this model may be resulted from the short period of study (5years), if the period of study (sample size above or equal to 15) is expanded, then an interpretation would be made without some reserves (Appendix 1).
The loan disbursed and portfolio rotation does not have statistical effect at 5% level of significance on the portfolio at risk of CLECAM EJOHEZA Kamonyi. Only the number of active debtors has effects on the portfolio at risk at 5% level of significance. Therefore when the number of active debtors increases by 1%, the portfolio at risk decreases by 1.25% ceteris paribus.
In this research, we computed the correlation between Net income and deposits of CLECAM EJOHEZAKAMONYI to see if the evolution of deposits in CLECAM EJOHEZAKAMONYI is related to the net income from 2010-2014. The deposits show the usage of products and services of CLECAM EJOHEZA. The correlation coefficient is close to +1 and the deposits are correlated to net income at 92.6%.
|
v3-fos-license
|
2018-12-07T12:05:19.854Z
|
2013-06-28T00:00:00.000
|
55724299
|
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.todayscience.org/AS/article/as.v1i2p32.pdf",
"pdf_hash": "3ad3e07306633a08c5a82b52b1f1276c3fd06edb",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44412",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Environmental Science"
],
"sha1": "3ad3e07306633a08c5a82b52b1f1276c3fd06edb",
"year": 2013
}
|
pes2o/s2orc
|
Impact of Previous Vegetation Cover on Mycorrhizal Colonization and Performance of Moringa oleifera in Rainforest Regions of Cameroon
Moringa oleifera is a nutritional and medicinal plant. Conditions required for its cultivation have not yet been fully determined. This study was carried out in two localities in Cameroon to assess the impact of the previous vegetation cover (forest, Chromolaena odorata fallows and crop field) on mycorrhization and plant growth of M. oleifera. M. oleifera seedlings were grown in a greenhouse for 3 months in soil samples from the top soil layer. Plant height was measured every 2 weeks after sowing. Plant mortality, mycorrhizal colonization rate, plant height and biomass production were recorded after three months at the end of the experiment. Statistical analyses showed an effect of the type of previous land use on mycorrhizal colonization and growth of M. oleifera. The soil that had been C. odorata fallows was found to be more suitable for M. oleifera cropping in rainforest areas. In the second phase, soil biological and physicochemical properties will be determined to understand the extent to which these factors exert an impact on the mycorrhizal colonization rate and on M. oleifera performance.
Introduction
Especially in underdeveloped countries, where a malnutrition rate of over 80% prevails, the food deficit could be reduced by increasing the consumption of potentially nutritious plant species (FAO, 2002). Moringa oleifera, commonly called Moringa or drumstick tree, is known worldwide for its food, medicinal, oil and water-purifying qualities (Audru, 1988). As the leaves, roots and seeds of Moringa are edible, it could be used to combat malnutrition and associated diseases (De Saint Sauveur & Broin, 2006).
M. oleifera is a perennial tree species that can grow to a height of 10 m.The length of its compound leaves ranges from 30 to 70 cm.Its fruit pods reach maturity 5-6 months after flowering.Mature pods release round black seeds with two cotyledons, each having three lateral wings (Figure 1).Analysis of the composition of M. oleifera leaves and fruits indicates that they contain essential amino acids, including arginine, histidine, lysine, phenylalanine, methionine, threonine, leucine, isoleucine and valine, as well as minerals and fibre (Foidl, et al., 2001).Its dried crushed roots are eaten as a condiment (Jumelle, 1930) whiles its leaves and young shoots are used to prepare soups (Busson, 1965).In several African and Asian countries, food for babies and pregnant women is often supplemented with Moringa leaf powder.Its fruits are consumed as a green vegetable and high quality cooking oil is extracted from the seeds.According to Dalziel (1955), M. oleifera roots and bark are used to remedy inflammation and joint pain.Its fruits are used to enhance sperm quality and quantity and also to relieve nervous weakness.Moringa seed powder can purify water via its flocculation properties, even neutralizing mobile germs (Audru, 1988), and it serves as a substitute for activated alum for water treatment.Moreover, the high quality oil obtained from M. oleifera seeds are used in making perfumes and even in luxury mechanical devices.Moringa leaf powder yields are 6 t/ha/year on average, sometimes reaching 15 t under the best conditions, thus generating an average annual income of nearly 750000 CFA francs (around 1500 USD)/ha for smallholders (De Saint Sauveur & Broin, 2006;Rajangam, Azahakia, et al., 2001).
Moringa is the only genus of the Moringacae family in the Brassicales order and includes around 12 known species.M. oleifera, which originated in the Agra and Oudh regions of northeastern India, is the most important and widespread of these species.This subtropical species is drought resistant and adapts to many different ecological and agricultural systems.It thrives at mean temperatures between 18.7°C and 28.5°C.The plants tolerate annual rainfall levels ranging from 480 mm to 4000 mm and they grow well in near alkaline soils, tolerating soils within a broad pH range (4.5-8) (James & Duke, 1983).Moringa plants respond well to chemical and organic fertilization and to colonization by arbuscular mycorrhizal fungi (Pamo, Boukila, et al., 2005).Mycorrhizal colonization enables host plants to react quickly to pest infestations (Singh, et al., 2000).However, the diversity and activity of these mycorrhizal fungi are known to be affected by pH and nutrient status of the soil (Huang, et al., 1983) as well as by organic matter content (Jurgensen, Harvey, et al., 1997).These soil features have been shown to be affected by soil disturbances due to tillage and consequently the previous cropping history and vegetation cover (Lawson, et al., 1990;Onguéné, 2000).
In Cameroon, Moringa plant regeneration is unfortunately hampered to various extents in rainforest soils under different crop management strategies.Hence, this first study of M. oleifera cropping assessed the impact of previous vegetation cover on mycorrhization and development of Moringa plants in rainforest soils.
Materials and Methods
This study was carried out in the vicinity of Awae and Minkoameyos, localities from the Mefou-Afamba and Mfoundi departments in the Central (rainforest) region of Cameroon. This Central region is situated between 3° 31' and 6° 54' latitude N and between 10° 46' and 12° 52' longitude E (Figure 2). The two localities were chosen as the debut of a planned study of the potential expansion area of the species' cultivation in the Central Region, with more localities to be taken into account in the subsequent phase of the project. The choice of these localities was based on the similarity of the cropping systems practiced there. Furthermore, they are close and among the main suppliers of food markets in the capital city Yaoundé. The Central region is located in southern Cameroon on a large plateau with a mean elevation of 650 m above sea level (Westphal et al., 1981). It is part of a closed forest agro-ecological area under a wet equatorial climate with four seasons: two rainy seasons from September to November and March to June, alternating with two dry seasons from December to February and June to August. The mean annual temperature ranges from 23°C to 25°C. The closed forest zone has 1800 to 2400 mm/year of rainfall, with 1776 h/year of sunshine on average (MINADER, 2000). Its soils are largely highly desaturated red ferralitic soils with low organic matter and exchangeable base contents. These soils have a low cation exchange capacity (CEC) and become depleted after 10-12 years of cultivation (Yerima & Van Ranst, 2005). At each study site, three previous vegetation covers were identified, i.e. forest, Chromolaena odorata fallow at least 3 years old, and crop field. Soil disturbance levels were considered to be (in increasing order): forest, fallow and crop field. Soil samples were collected from the top 20 cm soil layer. M. oleifera plants generated from germinated seeds were sown in 12 cm wide by 18 cm high polystyrene bags holding 2.5 kg of crude dry soil taken from the two sampling sites. Polybags filled with soil were maintained on an 80 cm high support for 3 months (November-February). In the two localities, the soils used are red ferralitic, characterized by a clayey sandy texture with a pH between 4.0 and 5.5. The organic matter content was 1.4-2.6% for Awae and 2.0-3.5% for Minkoameyos. All plants were grown in the greenhouse located in Nkolbisson, away from the sampling sites. The greenhouse was a hangar three meters high, 1/3 covered with transparent sheet metal and 2/3 with aluminium sheet metal, surrounded by a wall one meter high, filtering 50% of the direct solar radiation. No supplementary lighting was provided during the test. Daily watering was increased gradually to maintain sufficient soil moisture in the polybags. The experimental design was a randomized complete block design with three treatments, 4 plants per treatment and 4 replicates. The parameters monitored were the root mycorrhizal colonization rate, plant height and the above-ground biomass dry weight. After sowing, the height of each plant was measured every 2 weeks. The root mycorrhizal colonization rate and the above-ground biomass dry weight were assessed only after three months, i.e. at the end of the experiment, because these observations were destructive to the plants. The above-ground biomass dry weight and the mycorrhizal colonization rate (MCR) were determined, the biomass after 72 hr of oven drying at 70°C and the MCR according to the most probable number (MPN) method (Anderson & Ingram, 1993).
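One way such a randomized complete block layout could be generated is sketched below; the paper does not describe the bench randomization in this detail, so this is only an illustrative reconstruction.

```python
import random

# Illustrative reconstruction of the randomized complete block design (RCBD):
# 3 treatments (previous vegetation covers), 4 blocks (replicates), 4 plants per treatment.
treatments = ["forest", "fallow", "crop_field"]
blocks = 4
plants_per_treatment = 4

random.seed(42)  # fixed seed so the layout is reproducible
layout = []
for block in range(1, blocks + 1):
    order = random.sample(treatments, k=len(treatments))   # randomize treatment order within each block
    for treatment in order:
        for plant in range(1, plants_per_treatment + 1):
            layout.append((block, treatment, plant))

print(len(layout))   # 48 experimental units (3 x 4 x 4)
print(layout[:6])
```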
The SPSS 10.1 software package was used for Levene's test of the homogeneity of variance and analysis of variance (ANOVA) for MCR, plant height and biomass as a function of the previous vegetation cover. Means of the above parameters were separated using the Waller-Duncan multiple comparison procedure. Partial correlation coefficients between previous vegetation covers and the study location were calculated.
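A minimal sketch of the corresponding analysis outside SPSS is shown below, using placeholder MCR values rather than the study's measurements; note that the Waller-Duncan procedure is not available in scipy/statsmodels, so Tukey's HSD is used here purely as a stand-in for the mean-separation step.

```python
import numpy as np
from scipy.stats import levene, f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder MCR values (%) per previous vegetation cover -- NOT the study's measurements.
forest = np.array([19.0, 21.5, 18.7, 22.0])
fallow = np.array([33.0, 36.5, 31.2, 35.8])
crop   = np.array([39.5, 42.0, 38.8, 41.7])

print(levene(forest, fallow, crop))      # homogeneity of variances (Levene's test)
print(f_oneway(forest, fallow, crop))    # one-way ANOVA across previous vegetation covers

values = np.concatenate([forest, fallow, crop])
groups = ["forest"] * 4 + ["fallow"] * 4 + ["crop"] * 4
# Tukey's HSD as a stand-in for the Waller-Duncan mean-separation step.
print(pairwise_tukeyhsd(values, groups))
```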
Results
Very low early M. oleifera plant mortality was observed only in the forest soil from Awae.Around 12% mortality was recorded in all plants less than 6 weeks old planted in soil from this location, with no significant differences noted between the three types of previous vegetation cover.M. oleifera plant height, biomass and MCR were not correlated with the previous vegetation cover or with the number of living plants at either Awae or Minkoameyos.The recorded data were found to be homogeneous and thus did not require transformation for the remainder of the study (Table 1).
Mycorrhizal Colonization Rate
A microscopic analysis of roots revealed the same mycorrhizal structures (internal and external hyphae, vesicles and auxiliary cells) whatever the previous vegetation cover (forest, fallows and crops). The presence of sphere-shaped auxiliary cells with a flat surface amongst the observed mycorrhizal structures indicates that M. oleifera forms mycorrhizae with fungi of the Scutellospora genus, as these are the only fungi known to form this type of auxiliary cell. On average, 72% of the observed organs were external hyphae.
ANOVA for MCR revealed differences between the two study sites depending on the previous vegetation cover (Table 2).No differences in MCR were found when comparing the previous vegetation covers at Minkoameyos, unlike those in Awae where a significant difference was found (Table 1).The mean MCR was 34 % in the Awae samples.The highest MCR value in the Awae samples (40.5 %) was recorded in the cropped field soil, and the lowest (20.3 %) was obtained in the forest soil.The MCR values obtained at Minkoameyos were all higher than those obtained at Awae, regardless of the previous vegetation cover (Table 2).Note: Means with the same letter for the same site are not significantly different at 5 % level of significance.
Plant Height
Differences in plant heights were noted across the previous vegetation covers at Minkoameyos and Awae (Table 1).The highest plant heights (55.1 cm and 39.9 cm) were recorded in the fallowed soil at Minkoameyos and Awae, respectively.However, in absolute terms, the shortest plants (40.0 cm) were found in the cropped field soil at Minkoameyos but in the forest soil at Awae (30.8 cm) (Figure 3).
Figure 3. Plant height after 12 weeks at the two studied localities according to previous vegetation cover. Note: Means with the same letter for the same site are not significantly different at the 5% level of significance.

Plant growth was ascendant during the whole duration of the trial. The superiority of fallow soils over forest and crop field soils is most observable. However, a slight overlap can be noted between the forest and field soils at Awae until week 6, after which plants in forest soils showed better growth, in absolute values, than crop field soils (Figure 4).
Biomass
Differences were noted across previous vegetation covers at Awae and Minkoameyos in terms of biomass.Mean biomass production at Minkoameyos and Awae was 638.8 mg and 240.8 mg, respectively.The highest biomass production at the two study sites was recorded in plant grown in fallowed soil at both Minkoameyos (834.7 mg) and Awae (332.7 mg).The lowest biomass production for each site was recorded in the field soil at Minkoameyos (520.7 mg.) and in the forest soil at Awae (150.3 mg), in absolute terms.Biomass production was found to be better in soils at Minkoameyos, regardless of the previous vegetation cover.Based on the mean values, presented in Figure 5, biomass only from fallowed soils differed from that produced in soils with the other previous vegetation covers.
Note: Means with the same letter for the same site are not significantly different at 5 % level of significance.
Figure 5. Variations in above-ground biomass at the two studied localities according to previous vegetation cover
At Minkoameyos, there were no correlations between the MCR and biomass or between the MCR and plant height.However, positive correlations between the MCR and biomass (r = 0.653; p = 0.021*) and between the MCR and plant height (r=0.707;p = 0.011*) were obtained for the Awae samples.
Discussion
The early mortality observed only in the forest soil at Awae was probably due to the low suitability of its soils to M. oleifera growth or mycorrhizal development.Plants from Awae recorded low values for all the parameters measured.Furthermore, soils samples were not treated before planting and might have hosted pathogenic strains of fungus or bacteria that interfered with plant health and growth.It has been reported that mycorrhizal colonization enables host plants to respond quickly to pest infestations (Singh, Adholeya, et al., 2000), which is in line with the findings of Janos (Janos, 1980), who showed that the extent of available mycorrhizal fungi is a key factor in determining forest composition during the early forest growth phase.It is also known that M. oleifera is attacked by certain fungi such as Cercospora sp., Puccinia sp., Oidium sp. and Sphaceloma morinda whose interactions with mycorrhizal fungi have yet to be clearly determined.
The low MCR observed in the forest soil at Awae is additional evidence that a pathogenic fungus infestation may have been involved.Daniels & Menge (1980) have reported that the abundance of mycorrhizal fungi or their activity can be affected by other pathogenic fungi or bacteria.The low MCR generally observed in soils at Awae could equally have been due to the physicochemical properties of the soils, such as the pH and soil texture.The chemical properties of the soil have been noted to exert a much greater effect on the abundance and activity of mycorrhizal fungus than do soil disturbances due to tillage (Hayman, 1982;Mosse, 1972).The MCR recorded at Minkoameyos, but not that recorded at Awae, was similar to that obtained by Onguéné (2000), who found that mycorrhizal colonization rates in rainforests in southern Cameroon were much higher than in fallows or cropfields.A better mycorrhizal colonization rate was expected in the fallowed soil because of the suitable environmental conditions created by C. odorata, which generates abundant organic matter that is recycled back into the soil.This is in agreement with previous findings (Splittoesser, 1984;Keeton et al., 1993) that organic manure decomposition boosted the level of soil humus, consisting mainly of cellulose, hemicellulose and lignin, which represent an efficient source of energy for soil microorganisms.In the Congo, desaturated and highly acidic ferralitic soils under C. odorata fallow cover were found to have a higher pH, associated mainly with calcium enrichment, than that of primary or secondary forest soils (Forester & Schwartz, 1991), suggesting that soil health and chemical properties would have influenced mycorrhizal colonization at the two sites studied here.
The plant height growth curves showed that growth was rapid at the beginning of the study, with a slowdown near the end.This pattern of plant height growth may be explained first by the depletion of nutrient reserves of the seeds and the soil in the polybags depending on the previous vegetation cover, suggesting that the plants no longer had enough mineral elements for its normal growth.Crop field soils appeared to be more impoverished than forest and fallow soils.Furthermore, in soils from certain previous vegetation covers, plants may have had a low root density reducing the uptake of mineral elements.Therefore M. oleifera plant growth could be improved via fertilizer applications (Pamo et al, 2005) with organic fertilizers which increase the root density (Palm et al., 2001), leading to improved nutrient uptake.Fallowing contributes to restoring soil fertility.Better M. oleifera plant growth was observed in C. Odorota fallow soils, attesting to the relatively high fertility of fallow soils.Similarly, Palm et al. (2001) and Foidl, Makkar, et al., (2001) also observed that cow dung enhanced M. oleifera plant growth in Nicaragua.It has been demonstrated that C. odorata recycles a considerable quantity of organic matter back into the soil, thus improving the soil structure and hampering mineral leaching by limiting downward water movement in the soil (Chevalier, 1952;Van der Meulen, 1977).Organic matter, organic carbon, nitrogen and phosphorus are more available in soils fallowed under C. odorata.Fallows with this species thus enhance agricultural sustainability, with a reduction in fallowing time and in organic fertilization (Jibril & Yahaya, 2010).
At study sites, the plant height and biomass production findings could be explained by the previous vegetation cover and the soil physicochemical properties.What can be considered better growth and biomass production recorded in fallowed soils at both study sites would be the result of the soil fertilizing effects of C. odorata, which corresponds to the results of Kanmegne, Duguma, et al. (1999), who found that this species significantly increased the soil nutrient availability.These results are in line with the findings of Agoumé & Birang (2009) that the fertility of C. odorata fallow soils was high.Other authors (Agbim, 1987;Herren-Gemmill, 1991;Assa, 1987) also noted that C. odorata substantially enhanced the mineral and organic fertility of relatively infertile soils.However, mycorrhiza also markedly improves plant growth by promoting and increasing nutrient uptake via the roots.At equivalent fertility, soils that provide the best mycorrhization conditions have the best results for mycorrhizal plant crops (Habte & Manjunath, 1987).It can be assumed that soils at Minkoameyos provide better micorrhization conditions than those at Awae.However, having said this, M. oleifera introduction to Awae remains possible because soil chemical and heath conditions can be controlled using appropriate agricultural practices.
The lack of a correlation at Minkoameyos between the MCR and biomass or between the MCR and plant height could be due to factors other than mycorrhizal colonization having had a moderating effect on the plant response. For example, a higher availability of phosphorus, which contributes to the development of the root system on which plant growth depends, could have been found at Minkoameyos but have been relatively insufficient at Awae, where mycorrhizal colonization alone would have led to improved growth and biomass. Another contributing element could be the available water-holding capacity, which may have been relatively satisfactory in the soils at Minkoameyos and insufficient in the soils at Awae, where it would have been improved as a result of mycorrhization.
This study assessed the MCR in forest soils, C. odorata fallow soils and crop field soils, with the best growth in C. odorata fallow soils. C. odorata fallow soils appear to be the most suitable for high dry matter production and growth of M. oleifera, followed by forest soils. The chemical properties and microbiology of the soil had the stronger effect on the study parameters, particularly at Awae. Despite the relatively low MCR, soils fallowed under C. odorata could be recommended for growing M. oleifera because they provide the best conditions for plant development. Forest soils are better than crop field soils in absolute terms for MCR, dry matter production and plant growth. In practice, C. odorata fallow soils are allowed to rest for 3 to 5 years before a subsequent cropping cycle with the previous food crops, which do not include M. oleifera. Furthermore, in the study localities as well as in rainforest areas as a whole, C. odorata fallow soils occupy a relatively smaller area than the vast forest soils available for the establishment of M. oleifera plantations. Crop field soils are already dedicated to desired crops and could not easily be converted to M. oleifera plantations. Only intercropping of M. oleifera with other crops would be possible in fallows. Logically, the establishment of M. oleifera plantations, or of intercropping systems in which M. oleifera would be the dominant crop, will only be possible in the vast areas of forest soils of the Central region of Cameroon. In addition to a more specific characterisation of the soils in Awae and Minkoameyos, future research would seek to isolate and identify local strains of fungi and bacteria that may be pathogenic to tree crops and could interfere with the growth of M. oleifera and/or mycorrhizal development.
Figure 1 .
Figure 1.Moringa oleifera tree with flowers and immature pod
Figure 2 .
Figure 2. Localities studied in the Central Region of Cameroon
Figure 2 .
Figure 2. Localities studied in the Central Region of Cameroon
Figure 4. Plant growth in soils with different previous vegetation covers at the two studied localities according to plant age
Table 1. Analysis of variance for mycorrhizal colonization rate, plant height and biomass per locality
Table 2. Root mycorrhizal colonization rate values at the two study sites according to the previous vegetation covers
|
v3-fos-license
|
2016-05-12T22:15:10.714Z
|
2014-09-01T00:00:00.000
|
14397183
|
{
"extfieldsofstudy": [
"Mathematics",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://genomebiology.biomedcentral.com/track/pdf/10.1186/s13059-014-0447-6",
"pdf_hash": "92ab2bc5c460e9576661092328501d8e7618d763",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44414",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "52d7f67f1a489a9f7c1a72ebb13bffa8abca7d2d",
"year": 2014
}
|
pes2o/s2orc
|
The multitude of molecular analyses in cancer: the opening of Pandora’s box
The availability of large amounts of molecular data of unprecedented depth and width has instigated new paths of interdisciplinary activity in cancer research. Translation of such information to allow its optimal use in cancer therapy will require molecular biologists to embrace statistical and computational concepts and models. Progress in science has been and should be driven by our innate curiosity. This is the human quality that led Pandora to open the forbidden box, and like her, we do not know the nature or consequences of the output resulting from our actions. Throughout history, ground-breaking scientific achievements have been closely linked to advances in technology. The microscope and the telescope are examples of inventions that profoundly increased the amount of observable features that further led to paradigmatic shifts in our understanding of life and the Universe. In cell biology, the microscope revealed details of different types of tissue and their cellular composition; it revealed cells, their structures and their ability to divide, develop and die. Further, the molecular compositions of individual cell types were revealed gradually by generations of scientists. For each level of insight gained, new mathematical and statistical descriptive and analytical tools were needed (Figure 1a). The integration of knowledge of ever-increasing depth and width in order to develop useful therapies that can prevent and cure diseases such as cancer will continue to require the joint effort of scientists in biology, medicine, statistics, mathematics and computation. Here, we discuss some major challenges that lie ahead of us and why we believe that a deeper integration of biology and medicine with mathematics and statistics is required to gain the most from the diverse and extensive body of data now being generated. We also argue that to take full advantage of current technological opportunities, we must explore biomarkers using clinical studies that are optimally designed for this purpose. The need for a tight interdisciplinary collaboration has never been stronger.
Neoplastic transformation can start in nearly every cell type in the human body. It is recognizable as cells that have the ability to divide uncontrollably and to escape aging mechanisms and naturally occurring cell death, resulting in the growth of a tumor. Tumors have different features, depending on the organ of origin and the level of differentiation of the tumor cells. At certain points in development, a tumor will be influencing its microenvironment, ensuring, among other things, vascularization and cooperation with the immune system. A tumor can progress further, evolving into malignant disease, by invading the surrounding tissue, disseminating into the bloodstream or lymphatic channels, and establishing metastases in other parts of the body, often with fatal consequences for the affected individual. The facets of such transformations are linked to distinct biological processes, but these differ according to cell type (that is, the cell of origin), the local microenvironment, host factors such as an individual's genetic background and age, and exogenous and endogenous environmental influences [1].
The diversity of cancer, in both biological and clinical terms, is well acknowledged and has been extensively studied. Today, with increasingly sophisticated technologies at our disposal, highly detailed molecular features of individual tumors can be described. Such features are often referred to as being layered, occurring at a genomic (DNA) level, a transcriptomic (mRNA) level and a functional (protein) level. The proteins are the key functional elements of cells, resulting from transcription of a gene into mRNA, which is further translated into protein. This simplistic way of describing the relationship between the layers has gradually changed during the past decades of functional and molecular insight. Protein synthesis is no longer perceived as a linear process, but as an intricate network of a multitude of operational molecules. Astonishing progress has been made in the discovery of molecules that are able to influence transcription and translation, such as DNA-modifying enzymes and non-translated RNAs, and of mechanisms that are able to control the processing, localization and activation of proteins [2,3].
A picture is emerging of individual cells within a tumor that can differ at the genomic, epigenomic and transcriptional levels, as well as at the functional level [4,5]. Mutations and epigenetic alterations create the required phenotypic diversity that, under the influence of shifting selective pressures imposed by the environment, determines the subclonal expansion and selection of specific cells. The development of solid tumors thus follows the same basic principles as Darwinian evolution. Most single nucleotide polymorphism (SNP) variants that arise in human evolution are neutral in respect to survival advantage; over a period of time, these variants are typically fixed in or die out from the genome according to chance. Other variants provide a survival advantage [6] and will, over time, dominate the cell population, leading to distinct haploid signatures. Cancer may involve hundreds or thousands of mutations, with each mutation potentially contributing to tumor fitness. Most of these mutations are assumed to be passengers, but a limited number have driver capability, sometimes only in a subpopulation of cells [7][8][9].
There is an intricate interplay between subpopulations of tumor cells and among tumor and normal cells in the microenvironment, and tumor topology is likely to play a role in this context. Our knowledge of molecular mechanisms in cancer development and progression are mainly derived from model systems such as in vitro cell cultures and animal models, as well as from descriptive molecular analyses of tissue samples. Model systems have been crucial for understanding molecular interactions and their implications in cancer, but they cannot fully mimic tumor conditions in vivo. Tissue samples, on the other hand, contain both a microenvironment and subpopulations of cancer cells, but they represent only a snapshot in an individual tumor's life history.
Until recently, cancer studies mainly considered only one or a few molecular levels at a time. Altered protein expression can have several causes [10]; it can be due to copy-number gain, a translocation event that combines the gene with an active promoter, alteration of factors that modify DNA or influence the transcription machinery, or modifications of mRNA or the protein itself. Revealing the various downstream effects of such alterations is potentially useful for tumor classification and for prediction of treatment response and prognosis [11].
The pathways that affect or are affected by tumor development need to be identified, but there may be little hope of intervening unless we know which molecular factors control the pathways in each patient. In addition, a key to a more fundamental understanding of the biological dynamics will be to consider tumors from a systems biology perspective. Systems biology seeks to understand a tumor as an interplay between various processes and external stimuli, the ultimate goal being to predict the effect of a perturbation of any part of the system [12]. Detailed studies of the regulatory networks and molecular interactions that take place in different types of cells under various conditions will be crucial for understanding the biological and clinical behavior of normal and malignant cells. This will require analyses of both large-scale omics data and deeply characterized data sets derived from functional studies, such as those developed in the LINCS project [13]. The importance of functional studies as a foundation for molecular diagnostic tools has been illustrated by a recent work in which a histone demethylase, JARID1b, was found to have an oncogenic function in breast cell lines that undergo luminal differentiation [14]. The detailed multilevel alterations induced by JARID1b were analyzed in a pathway-specific manner to develop a diagnostic test. The index thus designed was applied to a breast cancer dataset that included both DNA copy number and mRNA expression data, showing that inferred JARID1b activity was prognostic for estrogen-receptor-positive disease. A cell-type-specific functional understanding of molecular alterations will be increasingly important to improve the success of molecular assays in clinical decision-making.
Bringing all the information together
Standard breast cancer care is primarily based on former research employing clinical features, histopathology measurements and analyses of a handful of molecular interactions. Current technology allows advanced molecular characterizations of tumor samples at multiple layers and down to single cell resolution, thus dramatically increasing the number of measurements that can be obtained from clinical tumor samples ( Figure 1b). The wealth of data from such massive parallel analyses represents a serious computational and interpretational challenge, but also new inferential opportunities.
In many ways, these developments in molecular biology reflect the progress in other natural sciences, including chemistry, physics and astronomy. For example, as distant astronomical objects were observed at an increasing number of wavelengths and at increasing resolution, more detailed models were gradually formulated of processes that had been visible even to Galileo, albeit at a much cruder scale. In much the same way, the various omics data and other molecular data now available will provide new perspectives on known processes in a tumor and its environment, allowing more detailed pictures to be drawn. The complexity of the biological system does not increase, only our ability to observe it and to build realistic models and make useful predictions.
Facing these challenges sometimes requires the rethinking or extension of well-established concepts and procedures. For example, when classical statistical hypothesis testing is applied to thousands of cases simultaneously, inaccurate specification of the null model can result in severe over- or under-reporting of significant cases. The very fact that many tests are performed in parallel, however, allows the null model to be empirically determined and thus corrected [15]. Another example concerns the determination of the significance threshold (the threshold used to decide whether a P-value is small enough to reject the null hypothesis) when many hypothesis tests are being performed and the benefit of detecting many real effects can justify a small proportion of false positives, a problem for which the false discovery rate (FDR) was invented [16]. Gaining the most from the rapidly growing flow of information is an intellectual enterprise as much as it is a technical one. Even the most skilled molecular biologist can no longer see the full consequences for existing hypotheses of a new piece of information, unless practical mechanisms are in place to assimilate and eventually integrate the new information with the wealth of existing knowledge. This will require extensive data sharing, more sophisticated statistical and bioinformatical tools for integrative analyses, mechanisms that promote shared analyses, and increasing computing power (see [17] and references therein). In addition, we need novel strategies for sharing hypotheses and models, an equivalent of the physicists' standard model, which represents a common reference that novel results can be confronted with and challenge.
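As a concrete illustration of the multiple-testing point, the following sketch (ours, not the authors') implements the Benjamini-Hochberg step-up procedure that controls the false discovery rate across many parallel tests; the p-values are simulated purely for illustration.

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean mask of hypotheses rejected at FDR level alpha."""
    p = np.asarray(p_values)
    m = p.size
    order = np.argsort(p)                      # sort p-values ascending
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds             # step-up comparison
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.where(below)[0])         # largest index passing the test
        rejected[order[: k + 1]] = True        # reject all smaller p-values
    return rejected

# Illustrative mixture: 9,500 null tests and 500 true effects.
rng = np.random.default_rng(0)
p_null = rng.uniform(size=9500)
p_signal = rng.beta(0.1, 10, size=500)         # skewed toward small p-values
mask = benjamini_hochberg(np.concatenate([p_null, p_signal]), alpha=0.05)
print(f"{mask.sum()} hypotheses rejected at FDR 0.05")
```

In contrast to a fixed per-test threshold, the cutoff here adapts to how many small p-values are actually observed, which is exactly the benefit of performing many tests in parallel.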
To achieve this goal, deeper integration between biology, mathematics and statistics should be sought by developing practical and sufficiently general frameworks for the formulation and testing of complex biological hypotheses. The benefits of having such frameworks are likely to grow rapidly as the complexity of observations and models increases. In addition, it will be important to be able to share analyses publicly in a standardized fashion; this is a serious challenge when the analysis is based on multiple tools running on different computing platforms, and both the tools and the platforms are subject to regular updates and dependencies on external data sources. Low-dimensional representations of biological systems, that is, representations that can be mathematically described using a small number of variables (5 to 10 or fewer), are also likely to play an important role in the future; if nothing else, they appeal to our intuition and ability to think conceptually. With more data and more molecular levels available, however, the low-dimensional representation (projection) can be judiciously selected to reflect the most relevant properties of the processes governing the behavior of the system under study. The rapid growth in computing power, novel statistical methodologies and computational tools that can handle datasets of increasing size and complexity gives further cause for optimism. As Galileo needed mathematics to describe and interpret his observations, today we need the theory and tools of mathematics and statistics to develop our understanding of life.
[The universe] cannot be read until we have learnt the language and become familiar with the characters in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word.
Galileo Galilei
The question remains as to how to find the best projection in an ocean of irrelevant features that do nothing but increase the dimensionality of the problem. While most of statistics concerns some form of separation of signal from noise, the problems encountered when thousands of variables are involved, commonly called the curse of dimensionality, make the analysis of such data intractable with classical statistical techniques. From a geometric point of view, the main problem is that high-dimensional spaces are very large and the concept of localness, which is fundamental to a wide range of statistical methods, breaks down when they are analyzed.
The key to escaping the curse of dimensionality is, first, the realization that the actual number of parameters in a model, or its degrees of freedom, can be controlled by imposing constraints on those parameters. Thus, we may include thousands of variables in a regression model and obtain a sensible estimate, as long as we impose appropriate restrictions on the coefficients to be estimated. Second, under suitable conditions, such constraints have been shown to improve the estimate in a precise statistical sense [18]. Third, we have the practical means today to impose such constraints in a biologically sensible way, by defining precise assumptions (priors) on the model's behavior that are based on available knowledge and observations. Such priors can, for example, be used to incorporate spatial information, such as where in the cell protein-protein interactions occur, or to build into a model knowledge of specific molecular interactions or functions. Knowledge that is used to define priors can come from other molecular levels, other patient materials, other cancer types, or normal specimens [19].
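As a minimal illustration of how constraints control the effective degrees of freedom, the sketch below (an invented example, not taken from the article) fits a ridge-penalized regression in closed form to data with far more variables than samples; the L2 penalty plays the role of a simple prior that shrinks coefficients toward zero.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge estimate: (X'X + lam*I)^-1 X'y."""
    n_features = X.shape[1]
    A = X.T @ X + lam * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

# 50 samples, 2,000 candidate variables, only 10 of which carry signal.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 2000))
beta_true = np.zeros(2000)
beta_true[:10] = 3.0
y = X @ beta_true + rng.standard_normal(50)

beta_hat = ridge_fit(X, y, lam=50.0)
# Without the penalty (lam -> 0) the problem is hopelessly under-determined;
# with it, the informative coefficients stand out from the null variables on average.
print("mean |coef| of true signals :", np.abs(beta_hat[:10]).mean())
print("mean |coef| of null variables:", np.abs(beta_hat[10:]).mean())
```

Replacing the quadratic penalty with biologically motivated priors (for example, grouping variables by pathway) follows the same principle of constraining the parameters rather than discarding them.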
From integrative models to cancer diagnostics and treatment
Yet to us all -scientists, physicians in charge of patient care and potential cancer patients -the ultimate question remains: how can the wealth of knowledge that is available be translated to improved patient treatment? For decades this question was considered to be an academic exercise of interest to scientists alone, but today, translation research is recognized as mandatory for the identification of mechanisms that are responsible for therapy resistance. The past decade of research has, for example, refined breast cancer classification to include data on gene expression and copy number alterations, and revealed the prognostic impact of molecular-based classification [20,21]. These examples, however, relate to prognostication: an overall outcome that is influenced by tumor biology and therapy effects in a way that generally does not allow dissection of the mechanism of sensitivity toward the therapeutic agents employed [22]. In breast cancer, emerging evidence suggests that tumors belonging to the 'basal-like' class have a particular sensitivity for platinum-containing compounds and in particular PARP inhibitors. This sensitivity relates to defects in a particular functional pathway (homologous repair) that characterizes these tumors in individuals who carry a BRCA1 germline mutation; these defects may also affect other tumors within the 'basal-like' category [23].
Experimental evidence should be interpreted with caution; for example, the tumor suppressor gene TP53 has been intensively studied for more than three decades, but new evidence relating to its role in cancer is continuously emerging. The importance of different mechanisms of drug action to the in vivo chemosensitivity of pathways that are affected by the TP53 mutation remains unclear (see references in [24,25]). This illustrates the need for translational studies properly designed to address these questions. Although the endpoint from a therapeutic perspective is overall survival, endpoints like relapse-free as well as overall survival need to be addressed fully in the context of large, randomized phase III trials.
Over the past decade, studies carried out in neoadjuvant or presurgical therapy settings have employed parameters such as a drop in the cancer antigen KI67 during the first weeks on endocrine therapy. While parameters like having a pathological complete response or primary progression can still be used as surrogate endpoints for long-term outcome (see discussion in [26]), recent results [27] indicate that correlations between tumor shrinkage and long-term outcome are not working well in all clinical settings as exemplified by tumors in patients carrying a germline BRCA1 mutation. There is a need for a systems biology approach to identify redundant pathways [28] and, in particular, to determine how such mechanisms may work differently in different tumor forms. These approaches must also consider the potential impact of the microenvironment on sensitivity to treatment. When administered to breast cancer patients on endocrine treatment or to patients harboring activating mutations in the phosphoinositide 3-kinase (PI3K) pathway, the mammalian target of rapamycin (mTOR) inhibitor everolimus improved outcome [29], but this drug was ineffective among patients whose tumors harbored mutations in redundant pathways [30]. Agents that target activating mutations, including the BRAF oncogene, have been shown to be highly effective in malignant melanomas; by contrast, these same agents work poorly in metastatic colorectal cancer because of a compensatory increase of epithelial growth factor receptor (EGF-R) activity [31]. Observations such as these should not provoke pessimism; they merely underline the need to implement the proper models and parameters in clinical studies.
As for improving outcome, studies in which tumor tissue specimens are collected before and during therapy should continue. Novel techniques, including massive parallel sequencing and different types of omics technologies, allow the study of tumor biomarkers in a way we could only dream of a decade ago, and make it possible to correlate these biomarkers to tumor regression in response to therapy. In parallel, samples must be collected during large phase III trials so that, in due time, we can develop well-annotated tumor banks that, when combined with clinical information, will be able to confirm the impact of molecular-based diagnostic tests on long-term outcome. Perhaps even more important is the collection of repeated samples from tumor tissue and circulating DNA during therapy to monitor clonal changes [32].
Studies on predictive biomarkers have traditionally measured alterations at initial diagnosis, prior to surgery and compared the presence of genetic mutations or other disturbances to clinical outcome defined by tumor shrinkage. Implementation of massive parallel sequencing, however, allows the estimation of biomarkers within a clonal setting, offering a unique possibility of evaluating changes in biomarkers during the course of therapy. For instance, if a certain gene mutation is detected among 80% of all cells at the initiation of therapy but disappears after three months of chemotherapy (independent of any tumor shrinkage), this marker is associated with cells that are therapy sensitive. Conversely, a biomarker identified in 10% of tumor cells prior to therapy but among 80% after therapy may be considered a marker of drug-resistant cells surviving therapy. While the issue of tumor heterogeneity should be taken into account, modern techniques of sampling allow several samples to be collected in parallel in a non-traumatic setting.
Finally, we should not forget the pathologists and the ability of 'the old dogs to perform their old tricks'. Repeated histologic examinations, using techniques such as fluorescence in-situ hybridization (FISH), are required not only in the interest of confirming gene amplifications and assessing intra-tumor heterogeneity; for example, the beta-galactosidase assay could be applied to clinical samples to assess the potential importance of senescence (as outlined in animal models) to chemotherapy efficacy in human tumors [33].
The spirit of hope
Understanding the biological mechanisms behind cancer requires the ability to identify biological processes in individual tumors and within the different cell types (Figure 1c,d), as well as to integrate a multitude of observations made at several molecular levels. It is an overwhelming task, but one that needs to be pursued along with the development of novel ways of combining and integrating scientific evidence across several molecular levels, study cohorts of various designs (adjuvant and neoadjuvant), many research groups and different diseases. Here, the novel developments in large-scale statistical inference and the empirical Bayes approach, which unifies aspects of the frequentist and Bayesian philosophies (Box 1), are likely to play a major role in the years to come.
Pandora opened her box impelled by her curiosity. What was left in Pandora's box was the Spirit of Hope. In our context, this is the development of novel computational approaches, statistics and models combined with the ongoing pursuit of characterizing cancers of all different types and stages. The field would be well served by a concerted world-wide effort to make molecular profiles, associated clinical or biological data and analytics publicly available in standardized fashion that facilitates the development of analytics. Closer ties should be forged between biology, mathematics and statistics, thus moving away from the concept of applying mathematical and statistical tools to solve specialized tasks and towards a common interdisciplinary framework for expressing and testing biological hypotheses and models.
Box 1: Frequentist versus Bayesian philosophies
The frequentist approach to statistics considers the probability of an event to be the relative frequency of that event in a large number of trials. According to this view, a statistical hypothesis is fixed and cannot be assigned a probability, while the data used to test it are considered to be random. The Bayesian approach to statistics views probabilities as quantities reflecting states of knowledge or belief, and probabilities can be assigned to statistical hypotheses. To determine the credibility of a null hypothesis, a frequentist would calculate the probability of the observed data given the hypothesis, whereas a Bayesian would calculate the probability of the hypothesis given the data.
Empirical Bayes combines elements of the Bayesian and frequentist points of view by allowing the priors used in Bayes analysis to be estimated from the data. Conceived of more than 60 years ago, it is only now, with the current generation of data sets involving a huge number of parallel experiments, that the full force of empirical Bayes is brought to bear. By offering the opportunity to build realistic, data-based biological assumptions into our statistical models, empirical Bayes will be a valuable tool for developing the next generation of integrative analysis methods.
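A small numerical sketch of the empirical Bayes idea described above, using invented numbers: the prior variance shared by many parallel measurements is estimated from the data themselves, and each observation is then shrunk toward the overall mean accordingly.

```python
import numpy as np

def empirical_bayes_shrink(obs, sigma2):
    """Shrink parallel noisy observations toward their grand mean.

    obs    : observed effect for each of many parallel experiments
    sigma2 : known (or well-estimated) measurement variance per experiment
    """
    grand_mean = obs.mean()
    # Estimate the prior variance tau^2 from the spread of the observations:
    # total variance = prior variance + measurement noise.
    tau2 = max(obs.var(ddof=1) - sigma2, 0.0)
    shrink = tau2 / (tau2 + sigma2)            # weight given to the data
    return grand_mean + shrink * (obs - grand_mean)

rng = np.random.default_rng(2)
true_effects = rng.normal(0.0, 1.0, size=5000)             # unknown "truth"
observed = true_effects + rng.normal(0.0, 2.0, size=5000)  # noisy assays
posterior = empirical_bayes_shrink(observed, sigma2=4.0)

print("raw mean squared error:", np.mean((observed - true_effects) ** 2))
print("EB  mean squared error:", np.mean((posterior - true_effects) ** 2))
```

The prior is never specified in advance; it is borrowed from the ensemble of experiments, which is why the approach comes into its own only when thousands of measurements are made in parallel.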
|
v3-fos-license
|
2023-10-10T12:01:57.273Z
|
2023-10-09T00:00:00.000
|
263776994
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://environmentalsystemsresearch.springeropen.com/counter/pdf/10.1186/s40068-023-00317-4",
"pdf_hash": "0ede8e254f646bd7638c3eb20bd6ff59c1b473f4",
"pdf_src": "Springer",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44416",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"sha1": "b6a74dd4a8b327b39cedced78c12a2877a30ef2c",
"year": 2023
}
|
pes2o/s2orc
|
Dam breach analysis of Kibimba Dam in Uganda using HEC-RAS and HEC-GeoRAS
Dam failures have severe consequences on human life and property. In the case of the earth-filled Kibimba Dam located in Eastern Uganda, the occurrence of a flood equal to or larger than the probable maximum flood (PMF) could result in catastrophic economic losses including loss of human life. This study utilized the USACE Hydrologic Engineering Center's River Analysis System (HEC-RAS) and Hydrologic Engineering Center's Geographic River Analysis System (HEC-GeoRAS) to analyze the potential dam break of Kibimba Dam, considering overtopping and piping failure scenarios. The results of the analysis revealed that the spillway of Kibimba Dam possesses sufficient capacity to safely discharge a flood resulting from a probable maximum flood peak of 400 m3/s. Therefore, the dam is not susceptible to breach under the overtopping failure mode. However, the dam failed under the piping failure mode. To assess the downstream impact of the dam break, the breach hydrographs resulting from piping failure were examined. Consequently, the study investigated the effects of flood propagation downstream of the dam. This resulted in varying inundation depths of up to 6 m and velocities ranging from 1.2 to 10 m/s. These findings highlight the devastating consequences of Kibimba Dam's failure, particularly affecting rice field plantations, infrastructure, and other economic activities in the downstream area. Therefore, the outcomes of this study are crucial for the development of Emergency Action Plans that incorporate dam breach and flood routing analyses specific to the affected downstream regions.
Introduction
Dams are hydraulic structures built across a watercourse, creating a reservoir in which water is stored to serve multiple purposes such as flood risk management, navigation, hydropower generation, municipal and industrial water supply, irrigation, fish and wildlife, low-flow augmentation, and recreation (USACE 2016). Despite their positive impacts, when dams fail they jeopardize the environment and public safety. Over the past years, numerous dam failures have caused property damage and loss of human lives. For example, the failure of the Banqiao and Shimantan Dams in 1975 claimed the lives of around 85,000 people in China (Sachin 2014), the Patel Dam burst claimed the lives of 41 people and left 2,000 others homeless in Kenya (Soy 2018), and the Merriespruit tailings dam failure in South Africa killed 17 people and demolished several houses (Tail Pro Consulting 2002). Other dams whose bursting disrupted the environment and public safety include the St. Francis Dam (Rogers 2006), Buffalo Creek Dam (Gee 1999), Canyon Lake Dam (National Weather 2015), Teton Dam (Graham 2008), Kelly Barnes Dam (Sowers 1978), and Lawn Lake Dam (Root 2018).
Failure of dams may be due to earthquakes, overtopping (spillway capacity insufficient during large inflows), extreme storms, foundation failures, or piping and seepage through the dam or along internal conduits. Of these, overtopping is the most common cause of earth-filled dam failure (Khosravi et al. 2019). According to Ackerman & Brunner (2005), knowledge of the attributes of the resulting flood wave and of the area inundated can help alleviate the probable loss of lives and property damage likely to occur during a disastrous dam failure, for example by using the flood results to develop contingency response plans and to guide future land use planning. Tail Pro Consulting (2002) suggested that the Merriespruit tailings dam failure could have been prevented if an appropriate operating manual and emergency plan for the dam had existed and been implemented effectively. Such floods often lead to loss of lives and destruction of property, including houses (Kiwanuka et al. 2021). Kumar et al. (2017) proposed analyzing flood behavior based on observed floods before suggesting possible flood management measures. All of these are premised on providing knowledge concerning flood-prone areas, which aids in the development of warning systems and evacuation plans (Ríha et al. 2020).
Several researchers and organizations have published findings on dam break analysis modelling. Several models have been developed, for example the BRDAM model (Brown & Rogers 1981), used to simulate erosion of an earth dam in the event of overtopping or internal erosion; the BREACH and Dam Breach Forecasting (DAMBRK) models; the National Weather Service dam-break flood forecasting model (NWS DAM-BRK) numerical model (Jonathan & Fread 1984); empirical formulae, analogy, and hydraulic modeling (Ríha et al. 2020); and the HEC-RAS model (Brunner 2014). Recently, HEC-GeoRAS and HEC-RAS have become prevalent for model building and analysis of the flooded area using geographic information systems (GIS), and for modeling dam failure scenarios, respectively. This can be attributed to the availability of terrain data and the ease of developing hydraulic models that can simulate a dam breach scenario and assess the consequential flood wave (Ackerman et al. 2005; W. S. Mohammed-Ali & Khairallah 2022). Leoul & Kassahun (2019) applied HEC-RAS and HEC-GeoRAS in analyzing the dam breach of Kesem Kebena Dam; the study found that the spillway had sufficient capacity for the flood resulting from the probable maximum flood (PMF). Using the HEC-RAS model, Mohammed-Ali et al. (2021) investigated riverbank uncertainty resulting from the discharge variation of hydropower plants, and HEC-RAS was similarly applied by Mohammed-Ali et al. (2021) to analyze the effects of outflow features affecting the stability of the lower Osage riverbank. Mehta & Yadav (2017) used the HEC-RAS model to evaluate the flood conveyance performance of the River Tapi. Within Uganda, Eyiiga (2019) applied HEC-RAS to model the dam breach and flood inundation mapping of Bujagali Dam. Other studies where HEC-RAS was applied in dam breach analysis include Belay (2017), Raman & Liu (2019), and Xiong (2011).
Kibimba Dam was constructed under a government initiative to increase food production and improve people's livelihoods within the country. The dam could break and cause huge economic and human life losses if a flood equal to or larger than the probable maximum flood occurred. Prior to this study, no dam breach analysis had been conducted for Kibimba Dam. To provide necessary information to dam operators and policy makers, this study aimed at estimating the dam breach outflow hydrograph, routing the dam break hydrograph through the downstream river reach and floodplain, and computing the inundation water depth and time for Kibimba Dam. The findings of a dam break analysis are vital in the preparation of inundation maps, which foster the planning and implementation of precautionary measures, monitoring systems and emergency action/evacuation plans during a flood crisis.
Study area
Kibimba Dam (Fig. 1) is an earth-filled dam located on the River Kibimba in the Victoria basin in Uganda. It has an open water surface area of 4.5 km2 and was constructed to provide irrigation water to 450.23 km2 of the Kibimba rice scheme. The area experiences small variation in temperature, humidity, and wind throughout the year. It lies at an average elevation of 1176 m above sea level and receives a moderately high annual rainfall of 900-1400 mm distributed between two rainy seasons, late February-June and August-November, with a peak in April (BirdLife 2021).
Data
The study is based on existing meteorological, hydrologic, and topographic data collected from different organizations. Hydrologic data for the Kibimba River, including the probable maximum flood (PMF), the inflow hydrograph, spillway outflows and base flow, together with physical dam data (Table 1 and Fig. 2), namely the reservoir capacity and the reservoir storage versus elevation curve, were obtained from Tilda Uganda Limited, a company operating the dam on behalf of the Ministry of Water and Environment.
Physically based numerical models, for example the BREACH program (Fread 1988), require more comprehensive information on the soil properties, which was scanty for the case of Kibimba Dam. Additionally, these models rely on bed-load-type erosion formulas, making them suitable only for some stages of the breach process (Wahl 1998). This study therefore employed the MacDonald & Langridge-Monopolis (1984) empirical formulas (Eqs. 3 and 4) and the Froehlich (1995, 2008) empirical formulas (Eqs. 1 and 2) to estimate the dam breach parameters of Kibimba Dam. These regression equations have performed well in several studies (Duressa & Jubir 2018; Leoul & Kassahun 2019; Mehta et al. 2021).
Froehlich (1995) regression equations for the average breach width and failure time of earth-fill dams (Eqs. 1 and 2; not reproduced here), where:
V_eroded = volume of material eroded from the dam embankment (m^3)
V_out = volume of water that passes through the breach (m^3)
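To make the regression approach concrete, the following Python sketch evaluates the Froehlich (1995) breach-width and failure-time relations in the SI form commonly reported in the dam-breach literature; the coefficients and the reservoir values used here are illustrative assumptions, not the figures adopted for Kibimba Dam, and should be checked against the original reference before any use.

```python
def froehlich_1995_breach(v_w, h_b, overtopping=True):
    """Froehlich (1995) regression estimates, SI form as commonly reported.

    v_w : reservoir volume above the breach bottom at failure (m^3)  -- assumed input
    h_b : height of the breach (m)
    Returns (average breach width in m, failure time in hours).
    NOTE: coefficients should be verified against the original reference.
    """
    k_o = 1.4 if overtopping else 1.0            # failure-mode factor
    b_avg = 0.1803 * k_o * v_w ** 0.32 * h_b ** 0.19
    t_f = 0.00254 * v_w ** 0.53 * h_b ** -0.90
    return b_avg, t_f

# Placeholder reservoir values, not the surveyed Kibimba figures.
b_avg, t_f = froehlich_1995_breach(v_w=2.0e6, h_b=8.0, overtopping=False)
print(f"average breach width ~ {b_avg:.1f} m, failure time ~ {t_f:.2f} h")
```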
River hydraulics model building and simulation
A detailed digital elevation model (DEM) with a resolution of 12.5 × 12.5 m, bearing the data for the main channel and overbank floodplain areas, was used to create a river hydraulics model in ArcGIS. A triangulated irregular network (TIN) in vector format was preferred, as it allows accurate description of the land surface with minimum data and easy addition of data for linear features that direct the water flow (such as roads, levees, or ridge lines) (Ackerman et al. 2005). HEC-GeoRAS was used in the creation of the datasets (collectively referred to as RAS Layers). HEC-GeoRAS processes geospatial data to support hydraulic model building and analysis of water surface profile results (HEC 2005). Land use data were used in estimating Manning's roughness coefficients. Due to the absence of observed water surface elevation information, such as gaged data and high-water marks, Manning's n values were not calibrated. A value of 0.03 was adopted based on the vegetation type of the area (USACE 2016). Manning's n values depend on surface roughness, vegetation, channel irregularities, scour and deposition, and suspended material (USACE 2016). The completed datasets were exported to HEC-RAS for hydraulic modelling.
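As an aside on why the roughness coefficient matters, the short sketch below applies the standard Manning equation to a hypothetical rectangular channel section; the geometry and bed slope are invented for illustration and are not taken from the Kibimba reach.

```python
def manning_discharge(n, width, depth, slope):
    """Discharge in a rectangular channel from Manning's equation (SI units)."""
    area = width * depth                         # flow area (m^2)
    wetted_perimeter = width + 2.0 * depth       # bed plus both banks (m)
    hydraulic_radius = area / wetted_perimeter
    velocity = (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5
    return velocity * area                       # m^3/s

# Hypothetical 20 m wide reach, 2 m deep, bed slope 0.001.
for n in (0.03, 0.05):                           # smoother vs rougher floodplain
    print(f"n = {n}: Q ~ {manning_discharge(n, 20.0, 2.0, 0.001):.1f} m^3/s")
```

Even this toy case shows that the conveyed discharge scales inversely with n, which is why the uncalibrated value of 0.03 is a notable modelling assumption.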
HEC-RAS, as a one-dimensional river hydraulics model, executes both steady-flow and unsteady-flow water surface profile calculations through a network of open channels (HEC 2002). It computes flood wave propagation following a dam failure scenario by solving the full Saint-Venant equations (Ackerman et al. 2005). It also computes water surface profiles for steady and unsteady flow and for different flow regimes (subcritical, supercritical, and mixed flow) (Mehta & Yadav 2017). The model uses the weir equation to calculate discharge for an overtopping breach and the orifice equation for a piping breach. The mean discharge is used to estimate the volume of water released, the equivalent pool elevation drop and the discharge for the successive time step in order to build the breach hydrograph (NRC 2012).
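To illustrate the weir-versus-orifice distinction, here is a minimal sketch of the two textbook discharge relations of the kind referred to above; the coefficients and heads are generic placeholder values and this is not necessarily the exact formulation HEC-RAS uses internally.

```python
import math

def weir_discharge(c_w, crest_length, head):
    """Broad-crested weir relation Q = C_w * L * H^1.5 (overtopping flow)."""
    return c_w * crest_length * head ** 1.5

def orifice_discharge(c_d, area, head, g=9.81):
    """Orifice relation Q = C_d * A * sqrt(2 g H) (flow through a piping hole)."""
    return c_d * area * math.sqrt(2.0 * g * head)

# Placeholder numbers for illustration only.
print("overtopping:", round(weir_discharge(1.7, 30.0, 0.5), 1), "m^3/s")
print("piping     :", round(orifice_discharge(0.6, 4.0, 6.0), 1), "m^3/s")
```

The key difference is that the overtopping discharge is driven by the head over the crest, while the piping discharge is driven by the head above the breach opening, which is why the two failure modes produce differently shaped hydrographs.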
The model requires geometric data (Fig. 3) and steady and unsteady flow data for its computations. The geometric data establish the connectivity of the river system, as cross sections placed at intervals along the streams characterize the conveyance of the stream and its adjacent floodplain (Duressa & Jubir 2018).
The HEC-RAS model was used to process the built datasets. Prior to simulation, additional information on cross sections, data for hydraulic structures, flow data, and boundary conditions were added to the river hydraulics model. The probable maximum flood hydrograph (Fig. 3) for the reservoir and the normal depth were used as the upstream and downstream boundary conditions, respectively. A dam in HEC-RAS is modelled as an inline structure (Fig. 4) characterized by a weir profile (including a spillway) and gates for normal low-flow operation.
In HEC-RAS, the water storage behind the dam can be modelled either from cross sections taken from bathymetric survey data of the reservoir or using a storage area with an elevation-volume relationship that represents the storage volume behind the dam (Ackerman et al. 2005). Due to the absence of bathymetric survey data, this study used the elevation-volume relationship to model the volume of water stored behind Kibimba Dam. The study considered overtopping and piping failure modes for Kibimba Dam, given that these failure modes are independent and occur at different parts of the dam (Ríha et al. 2020). The peak flows of the breach outflow hydrographs from the HEC-RAS model simulations were compared with those calculated from empirical Eqs. 5, 6 and 7.
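Since the reservoir is represented by an elevation-volume relationship rather than surveyed cross sections, the sketch below shows one simple way such a storage curve can be interpolated; the elevation-storage pairs are invented placeholders rather than the data supplied by Tilda Uganda Limited.

```python
import numpy as np

# Hypothetical elevation (m a.s.l.) vs. cumulative storage (million m^3) pairs.
elevations = np.array([1170.0, 1172.0, 1174.0, 1176.0, 1178.0])
storages = np.array([0.0, 1.5, 4.0, 8.0, 14.0])

def storage_at(elevation):
    """Linearly interpolate reservoir storage for a given pool elevation."""
    return np.interp(elevation, elevations, storages)

def elevation_at(storage):
    """Inverse lookup: pool elevation corresponding to a stored volume."""
    return np.interp(storage, storages, elevations)

print("storage at 1175.0 m  :", storage_at(1175.0), "million m^3")
print("elevation at 10 Mm^3 :", round(elevation_at(10.0), 2), "m")
```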
The outflow hydrographs were routed downstream to assess the maximum water surface (depth and velocity) and the inundation of the rice fields and other property that could be affected by the failure of the dam. These water surface profile maps give a preliminary assessment of the flood risk and prior knowledge for emergency preparedness (Ackerman et al. 2005).
Results and discussions
Regression equations were used to determine the breach parameters (Table 2). MacDonald & Langridge-Monopolis (1984) gives a shorter breach formation time than Froehlich, which implies that the breach outflow hydrograph will take a shorter time to peak. It is observed that under both overtopping and piping failure, a total collapse of Kibimba Dam is possible, as the computed breach width exceeds the dam's crest width of 5 m. Duressa & Jubir (2018) in their study reported a linear relationship between the breach width and the peak discharge. From the peak flow discharges (Table 2), it can be noted that the maximum breach flow obtained from the HEC-RAS model simulation (Figs. 5 and 6) is close to the peak discharges calculated with Froehlich (1995, 2008) and MacDonald & Langridge-Monopolis (1984). Leoul & Kassahun (2019) reported similar findings in a similar study.
The shape of the hydrographs (Figs. 5 and 6), with a short rising limb and a long falling limb, shows that the flood arrived rapidly and discharged progressively. Both failure modes had a similar time to peak. The arrival time of the flood wave, determined by velocity and water surface elevation in the HEC-RAS model, is vital for emergency action plans (Duressa & Jubir 2018). Comparing the modeled breach outflow hydrographs (Figs. 5 and 6) shows that, on average, the peak discharge resulting from overtopping failure is less than that triggered by piping. This suggests that the risks caused by piping failure could be greater than those caused by overtopping (Duressa & Jubir 2018).
For a dam to fail by overtopping, discharge has to flow over the dam crest (Amini et al. 2017). Since this scenario did not happen for Kibimba Dam, it implies that the spillway has enough capacity to safely discharge the flood resulting from the probable maximum flood of 400 m3/s. Similar results were reported by Leoul & Kassahun (2019) in their study.
A scenario was developed in which the inflow hydrograph was augmented to two times the design probable maximum flood. This was intended to ascertain the level of peak discharge at which the dam would fail by overtopping. It was found that the spillway could not safely pass this discharge, resulting in dam failure by overtopping.
Results from the piping failure analysis show that Kibimba Dam experienced piping failure. The reservoir water level was greater than the assumed center-line elevation for the breach. The resulting hydrographs from this failure were used to analyze the flood propagation downstream of the dam. Investigating the flow of water during a flood event gives useful insight into locations that are at high risk of experiencing the potential negative effects of flooding (Kumar et al. 2023). Mitigation strategies could be employed in these locations to reduce flood impacts (Mehta et al. 2022a, b).
The flood inundation maps (Fig. 7) were created directly in RAS Mapper within HEC-RAS, given that it has an integrated geospatial capability (D.J. Mehta et al. 2022a, b). Inundation depths varied up to a maximum of 6 m, with velocities ranging from 1.2 to 10 m/s. The variation in velocity is attributed to the change in the topography of the area that the flood traverses. With these high velocities and inundation depths, it is evident that the rice fields would be destroyed. The flooding could also affect the aquatic ecosystem in the river (Mehta & Kumar 2022). This calls for a detailed risk analysis to prepare land use plans to safeguard the community from human and property loss. Mehta et al. (2022a, b) recommended the use of hydraulic parameters from the model simulation to design flood protection measures.
Conclusion and recommendations
Failure of dams causes socio-economic and environmental catastrophes, which calls for risk management and the development of management plans. Simulation of a dam break informs decision-makers, managers, and authorities so that they can develop plans to manage a crisis and avoid the disastrous impacts of a dam failure. This study analyzed Kibimba Dam failure under both overtopping and piping failure modes with the probable maximum flood as an input to the reservoir. The breach parameters and the peak flow discharges were calculated using the Froehlich (1995) and Froehlich (2008) regression equations and the results compared to the model outputs. It was observed that the calculated peak flow discharges were close to the model output discharges.
Breaching of the embankment due to overtopping was not possible, as the spillway showed adequate capacity to safely discharge a flood equal to the probable maximum flood. The peak flow from piping failure was higher than that from overtopping. Therefore, a further analysis of inundation downstream of the dam was carried out to obtain the water surface profile. The inundation depths and velocities indicated that failure of Kibimba Dam would affect the rice fields, infrastructure, and other economic activities downstream of the dam. The results of the dam break analysis would enable the operators to understand the likely flooding impacts and design mitigation measures. This study's findings are vital for land use planning and for generating emergency response plans, encompassing dam break and flood routing analyses for the affected downstream areas, to help alleviate disastrous property and human life losses. Additionally, they will boost the community's resilience towards catastrophes, strengthening the ability both to lessen the probable impacts of a disaster and to recover effectively after a dam break.
The breach formation model does not provide a detailed description of the physical process of erosion that takes place when an embankment dam fails. However, future research could explore the use of more advanced models to obtain more accurate results. Open questions for future research include how to incorporate more detailed information about soil characteristics and composition, vegetation, channel morphology and sediment transport into dam breach models, and how to develop more sophisticated risk assessment frameworks that take into account the uncertainties and complexities of dam breach scenarios. The authors recommend the use of high-resolution topographic data in future studies to improve the accuracy of the model.
where:
h_w = depth of water above the breach (m)
W_b = bottom width of the breach (m)
h_b = height from the top of the dam to the bottom of the breach (m)
Z_1 = average slope of the upstream face of the dam
Z_2 = average slope of the downstream face of the dam
Table 1. Physical data of Kibimba Dam (Source: Tilda Uganda Limited)
|
v3-fos-license
|
2018-04-03T00:00:34.947Z
|
2013-11-21T00:00:00.000
|
9145903
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.spandidos-publications.com/etm/7/2/352/download",
"pdf_hash": "ba08bad81f5aab7eb850e8223516f9b29a867ff3",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44417",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "ba08bad81f5aab7eb850e8223516f9b29a867ff3",
"year": 2013
}
|
pes2o/s2orc
|
Candida glabrata infection following total hip arthroplasty: A case report
Candida glabrata infection following total hip arthroplasty is rare and, due to the insufficiency of standardized clinical and evidence-based guidelines, there is no established therapeutic schedule. The present study reports the case of a 44-year-old patient with Candida glabrata infection following a total hip arthroplasty. The patient was successfully treated by administration of intravenous and oral voriconazole without removal of the prosthesis. This case illustrates the significance of postoperative follow-up, clinician experience and the choice of the correct antifungal agent. In this case, we found that in the early stage of Candida glabrata infection, the infection could be controlled through thorough irrigation without removal of the prosthesis, which reduces patient suffering and the economic burden.
Introduction
Candida glabrata infection following total hip arthroplasty is a potentially devastating complication. If identified late, removal of the prosthesis and a course of appropriate systemic antifungal therapy are required. In knee or hip arthroplasty surgery, the risk of infection is comparatively higher than with smaller joints, due to the longer operation time and the low blood flow. Only when the patient's symptoms have been present for a short time is it reasonable to leave the prosthesis in place and use debridement to eliminate the infection (1). The present study reports a case of Candida glabrata infection that arose under the incision following a total hip arthroplasty, in which removal of the prosthesis was avoided. To the best of our knowledge, this is the first report of a case of Candida glabrata infection around an incision in which the prosthesis has been preserved.
Case report
A 44-year-old male patient underwent a bilateral total hip arthroplasty due to osteonecrosis of the femoral head in a two-stage surgery. During the first stage, the patient underwent the left total hip arthroplasty. Prior to receiving the right total hip arthroplasty, light subcutaneous swelling and redness were identified at the distal incision on the left side. The patient exhibited no symptoms of prosthetic loosening or infection, such as fever, chills, start-up pain or pain at rest. A probable infection around the incision was suspected. A one-month course of vancomycin was administered intravenously to the patient. Following this, the swelling had disappeared and the patient received the right total hip arthroplasty. However, 2 months later, the patient presented to the West China Hospital (Chengdu, China) with a 20-day history of subcutaneous swelling of the left side distal incision. The patient continued to exhibit no common symptoms of joint infection and the active range of motion was normal. Other clinical findings included elevated inflammation markers, including a C-reactive protein (CRP) level of 22 mg/l (normal value, <5 mg/l) and an erythrocyte sedimentation rate (ESR) of 44 mm/h (normal value, <21 mm/h). Ultrasound confirmed the presence of a cyst that was not connected with the articular cavity. The patient was diagnosed with a superficial infection and the debridement of soft tissues was performed. Intraoperatively, it was noted that the contents of the cyst resembled tuberculosis (Fig. 1). Postoperatively, the patient was administered isoniazid and rifapentine while awaiting a microbiology report. The results of the report showed that Candida glabrata was present. The pathological section of the specimen revealed fungal infection and chronic inflammation (Fig. 2). Methenamine silver staining showed black Candida, PAS staining showed purple Candida and H&E staining showed edematous tissue and inflammatory cell infiltration. The Candida glabrata isolate showed susceptibility to itraconazole [minimum inhibitory concentration (MIC), 2 µg/ml], amphotericin B (MIC, 0.5 µg/ml), 5-fluorocytosine (MIC, 4 µg/ml), fluconazole (MIC, 8 µg/ml) and voriconazole (MIC, 1 µg/ml). The patient was initially administered intravenous amphotericin B in escalating doses. When the dosage of amphotericin B was increased up to 1 mg/kg per day, the patient refused to continue receiving amphotericin B due to severe gastrointestinal reactions. Consequently, the patient was switched to voriconazole. The patient tolerated the 6-week course of antifungal treatment without any adverse events, and the CRP level and ESR returned to normal. The redness and swelling at the distal operative site disappeared. Aspiration of the hip was also negative. At the 3-month follow-up, the patient did not exhibit swelling and the range of motion of the left hip was normal. Imaging examination showed no signs of prosthetic loosening or infection (Fig. 3).
Discussion
Candida glabrata infection following total hip arthroplasty is a potentially devastating complication. Moreover, in the absence of standardized clinical and evidence-based guidelines, it is difficult to manage. Candida glabrata has been historically considered as a relatively nonpathogenic saprophyte and rarely causes serious infection in humans. However, with widespread use of immunosuppressive drugs, broad-spectrum antibiotics and azole antifungals, Candida glabrata is now more frequently isolated from clinical specimens (2). There are three possible etiologies of Candida glabrata infection. These include direct seeding via trauma, iatrogenic causes (surgery) and hematogenous spread (3). In the present case, the patient had a medical history of prolonged antibiotic treatment. However, another potential risk factor is that the cyst was located in the muscle layer. It is assumed there was a large amount of dead-space in the muscle layer during the surgery. Hematoma formation may occur within this dead-space and may disrupt blood supply to the surrounding tissue, thus preventing antibiotic entry (4).
Routine treatment usually includes the surgical removal of all bioprosthetic components. Early-onset infections may be eradicated by debridement and a long course of parenteral antibiotics. Antibiotic therapy is based on the definitive microbiological diagnosis and the sensitivity to the antibiotics. Generally, 6 weeks of parenteral antibiotics are recommended for prosthetic joint infections (5). Postoperatively, the patient did not exhibit symptoms of infection, including fever, vomiting and groin pain. The radiograph of the bilateral hips did not reveal the presence of prosthetic loosening or infection. Ultrasound confirmed that the cyst was not connected with the articular cavity. The active range of motion of the left hip was invariably normal postoperatively. Repeated aspiration of the left hip was negative. Consequently, the decision was made to retain the prosthesis.
The appropriate course of antibiotic treatment was selected, based on the sensitivity of the infection to specific antibiotic agents. The protocols for the treatment of infections associated with hip arthroplasty, which include 6 weeks of parenteral treatment, have been demonstrated previously (6)(7)(8). Due to the severe gastrointestinal reactions of the patient to amphotericin B, voriconazole was administered instead. This was the antifungal drug to which the infection had the second highest susceptibility in the present case. Following 6 weeks of voriconazole treatment, normalization of CRP and ESR was achieved.
Candida glabrata infection following total hip arthroplasty is extremely rare. This infection is generally asymptomatic or gives rise to mild signs of infection in the early stages. If identified late, diffusion of the infection may result in irreversible deformity and pain with severe osteoarticular destruction (9). Thus, early diagnosis and treatment are important in the management of Candida glabrata. If there are minimal signs of infection following arthroplasty, close co-operation between the clinician and laboratory are required in order to identify the infectious agent.
The present case illustrates the significance of postoperative follow-up and the experience of the clinician. If a patient presents abnormal symptoms without signs of common infection following hip arthroplasty, the possibility of a fungal infection should be considered.
|
v3-fos-license
|
2019-08-17T00:22:09.341Z
|
2017-01-01T00:00:00.000
|
212482400
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.sjph.20170505.14.pdf",
"pdf_hash": "014b7f02c846bef5000fb2b7a881760875651d40",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44419",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "3a91b482ef6f6981061ab09bddf9ee2c37b2dbfb",
"year": 2017
}
|
pes2o/s2orc
|
Telemedicine Diffusion in a Developing Country: A Case of Ghana
Telemedicine allows healthcare professionals to evaluate, diagnose, and treat patients remotely. International telemedicine is instrumental and necessary for exchanging information through electronic communications to improve and assist patient healthcare. In this study we examine and assess telemedicine practice in the developing country of Ghana. Healthcare coverage is an expensive worldwide challenge, and population growth in developing nations remains high. Addressing this requires support from national leaders and from the citizens who make up these societies. The slums of Accra, the capital of Ghana, are inhabited by low-income earners and migrants. Citizens are unlikely to take up insurance as they move closer to poverty, regardless of their aversion to the risk of illness. Limited benefits of being insured and failure to uphold promised benefits also discourage Ghanaian citizens from remaining insured as they adopt a survival mindset. With support from government leaders, telemedicine can influence Ghana in a very positive way. With financial as well as technological support, the health burden in Africa can be reduced and better managed.
Introduction
Ghana is a developing country in West Africa. In 1957 Ghana became the first African nation to achieve independence from its colonial ruler. Although Ghana has suffered mixed political and economic fortunes, some see Ghana as a model for North-South cooperation [1]. With an area of 238,500 square kilometers and an estimated population of 20 million people, the significance of healthcare is clear. The life expectancy in Ghana is 57 years [1].
Telemedicine is a developing approach that helps medical institutions make better use of information and technology to treat patients. It is the use of video and information systems to help doctors and other medical staff better diagnose and treat patients [2]. Through these information systems, doctors can locate articles and information that have previously been used to treat patients and apply them to treat future patients more quickly and efficiently.
Healthcare is a growing issue in Africa, and with the spread of HIV and AIDS on the rise, it is necessary to have the tools required to manage the risk of infecting others. Not only is telemedicine useful for treating patients from a distance, it can also help patients in rural areas receive the healthcare or information necessary to be treated appropriately [2]. By treating patients remotely, telemedicine can significantly reduce the demand for hospital space and beds.
Komfo Anokye Teaching Hospital is the second largest hospital in Ghana [1]. Built in 1954, it was known as the Kumasi Central Hospital and held only 500 beds. In 1975 it was converted to a teaching hospital and accredited for postgraduate training by the West African College of Surgeons, and it now holds 1000 beds. Telemedicine will be a huge step in the right direction for healthcare in developing countries, yet having proper medical supplies and medicines remains a major issue.
Need for Telemedicine
The need for telemedicine is as pressing as the need for healthcare itself. Telemedicine can revolutionize healthcare in Ghana by producing a system that drastically reduces the amount of time needed for patient care. With doctor-patient relationships improved through telemedicine, patients can receive the medical attention they need without having to travel to a major institution for care. Also, with information systems improved by databases and research, medical staff can more easily research and develop efficient treatments for the most critical patients.
The spread of HIV/AIDS can be better addressed through faster diagnosis of patients who contract the disease, which helps prevent further spread as patients become more aware of their status. With an estimated 250,000 people living with HIV/AIDS (CIA.gov), it is necessary to be able to access and implement telemedicine. The risk of infection from major infectious diseases such as malaria or yellow fever is also very high in Ghana.
Telemedicine can influence the way people use the internet as well. According to the CIA World Factbook, only 19.6% of the population in Ghana uses the internet. With resources so limited, the use of telemedicine can be a difficult goal to achieve without the financial assistance of the government and other third parties [3].
Ghana's healthcare system is limited and chronically underfunded, so more efficient delivery is needed [3]. Most of the healthcare in Ghana is provided by the government and administered by the Ministry of Health and Ghana Health Services [3]. Although there is a shortage of funds to operate these health facilities effectively, Ghana has about 200 hospitals. Some of these hospitals and clinics are for profit, but these account for less than 2% of the total [3]. Urban areas have the most health facilities, while rural areas are deprived of the healthcare that some individuals need to survive. Patients in rural areas have to travel long distances to get the care they need, and even when the journey is successful, most families and communities lack the funds to receive proper care. To remedy this, telemedicine services are being put in place to reduce the time and money needed to receive proper healthcare.
Overview of Telemedicine Infrastructure in Ghana
Ghana's telemedicine adoption rate depends highly on third-party funding and the economic balance of the country. There are only seven health centers in Ghana [4]. One of the main obstacles hindering the use of conventional healthcare is the distance between the health centers. The journey to these health centers often requires 4-wheel-drive vehicles in order to navigate safely to each center.
E-health, or electronic health, is one of the most rapidly growing areas of health technology today, especially in developing countries like Ghana [5]. The use of e-health, along with advances in telemedicine, will dramatically improve the way care reaches patients who may not have transportation to these health centers.
Ghana shows promise in its efforts to develop telemedicine and other e-health applications [5]. The Novartis Telemedicine Project is stationed in the Bonasso cluster; this cluster comprises six villages and has an estimated population of about 35,000 people. Communities in the cluster are very diverse and widely separated; most of the villages are separated by miles of unpaved road and challenging terrain, making it especially difficult to receive efficient healthcare without enduring a hazardous journey to one of the seven health centers. This difficulty is compounded by the limited number of health centers and by weather conditions that may make it impossible in some cases to receive healthcare at all. Several diseases prominent in Ghana, such as malaria, anemia, TB, and HIV/AIDS, affect the overall health of patients in these communities. To reduce the danger of traveling and minimize the need to travel, the health centers placed throughout the communities have taken steps to nearly eliminate these risks.
One of the many steps that have been implemented is the use of teleconsultation. The practice of teleconsultation dramatically reduces the time needed and the overall cost of receiving healthcare [5]. Healthcare can be addressed and delegated to those who need it through protocols developed by Med Gate in Switzerland; the teleconsultation program of the Novartis Telemedicine Project (NTP) is being pioneered to adapt and better utilize the techniques implemented by Med Gate.
The first step in maximizing efficiency in telemedicine was to evaluate and assess the current need for technology. The next step was to identify medical staff and key health personnel and enroll them in workshops to develop their skills in telemedicine applications; during this phase of the Novartis Telemedicine Project, healthcare personnel and even doctors attended extensive workshops and were presented with mobile technology and additional ways to interact with telecommunication applications. The benefit of this was to create a skilled group of doctors and medical staff able to practice telemedicine and teleconsultation services successfully and to reach patients who do not have transportation to health centers. With the help of the Ghana Ministry of Communications, several telecommunication stations and antennas were installed across the Bonasso cluster; this in turn extended the range of telecommunication systems and telemedicine applications. The extended range and signal provided increased network accessibility to over 21 communities and all seven health facilities, which is necessary for the successful implementation of telemedicine throughout Ghana.
Through the support of the Novartis Foundation, medical staff were able to procure mobile phones and other telecommunication devices that could be used in the advancement of telemedicine in Ghana. It is clear that telemedicine is proving to be a breakthrough in the advancement of medicine in developing countries. Future endeavors of the project include providing 24-hour support to teleconsultation centers; developments in logistics, human resources, and technology; and a greater number of workshops available to staff in order to ensure the proper use of telemedicine applications [6].
The growth of telemedicine has improved dramatically since 2011. According to an online article by Yomi Kazeem, Ghana could soon surpass South Africa, Ethiopia, and Mali in the development of telemedicine [7]. A telemedicine consultation center set up in Amansie West provides around-the-clock support with experienced medical staff who can provide extensive medical advice over mobile phones and networks. Ghana Health Services reports that 60% of calls were maternity-related and that 54% of calls in 2013 were resolved entirely by phone. During the 3-year phase starting in 2012, the telecommunication center in Amansie West served only 30 communities, but it currently serves the entire district.
Cases: Telemedicine and E-Health in Ghana
Case 1 -Telemedicine E-Health
Prof T C Ankra, a professor of medicine at Komfo Anokye Teaching Hospital in Kumasi, Ghana, leads his team to a 16-year-old male patient who could barely speak due to massive swelling on the left side of his face. The boy had been out of school for 4 months, and upon examination they found a mix of large and small lymphocytes. The treatment for his illness calls for cyclophosphamide; however, the hospital pharmacy where the boy lies has none. Through the use of electronic health, which is the use of information and communication technology, physicians were able to use satellite communication and computers to find this diagnosis and treatment. The doctor started a treatment of dexamethasone to help reduce some of the swelling.
The pharmacist sadly told the family that the cyclophosphamide would not be arriving due to its high cost and irregular demand. He further instructed them to seek private pharmacies in the city to find the treatment. Of the two private pharmacies nearby, one does not carry the drug; however, the other one does. The family had already spent 100,000 cedi on health care services and on securing a hospital bed at the Komfo Anokye Teaching Hospital. The drug costs 35,000 cedi; however, due to the multiple treatments the boy needs, the total cost will come to a minimum of 250,000 cedi.
Case 2 -Teleradiology
With an estimated 250,000 citizens who are human immunodeficiency virus (HIV) positive in Ghana, the need for healthcare and telemedicine is critical. Tuberculosis, a potentially serious infectious disease that affects the lungs, is very common among HIV/AIDS patients. To ensure proper diagnosis and treatment, radiologic evaluations must be performed. With few radiologists available, Ghana has collaborated with the UNAIDS Program Coordinating Board to improve this division of healthcare in the country. The World Health Organization (WHO) recommends a ratio of 228 health professionals per 100,000 population. The director of Health Services for the Greater Accra region stated that the doctor-patient ratio was approximately one doctor to 15,259 patients in a year [8]. Lack of radiologic interpretation results in higher patient morbidity and mortality.
Upon the implementation of teleradiology at the Komfo Anokye Teaching Hospital from 2012-2013, X-ray images from 158 patients were used. Eighty-six percent of the X-rays performed were chest radiographs, 7.8% were spine radiographs, and the remaining 5.8% were undocumented. The results of this implementation have changed patient management by reducing the time to diagnosis and have also helped prevent misdiagnosis. Teleradiology has enhanced patient care by enabling collaboration among radiologists. Ghana reduced new HIV infections by 53% from 2001-2014. Ghana and the West African region have addressed the need for better healthcare for particular populations at higher risk. These gains will help move toward an AIDS-free generation.
Case 3 -Teledermatology
With low doctor-to-patient ratios, dermatologists are few to none in Ghana's communities. As access to mobile communication increases, dermatologists are now able to use the mobile telecommunications infrastructure to provide "mobile teledermatology," which uses mobile devices to provide dermatologic services at a distance rather than through face-to-face consultations [9]. This study evaluated diagnoses made by three Ghanaian dermatologists examining patients face to face compared with a Ghanaian teledermatologist using a Samsung mobile platform and a U.S. teledermatologist using a computer. Thirty-four patients with skin symptoms were randomly selected from the cities of Accra and Kumasi in Ghana.
As the face-to-face visits were made, images and data were collected with a Samsung mobile telephone and sent to the U.S. and Ghanaian teleconsultants. Through on-phone access to a World Wide Web-based interface, the Ghanaian and U.S. teledermatologists' diagnoses were largely in accordance with those of the face-to-face Ghanaian dermatologists. The degree of accuracy comparing face-to-face visits with the Ghanaian and U.S. teleconsultants was 80%, with eczematous eruptions most common, followed by acne, drug rash, pigmentary alterations, tinea versicolor, and others [9]. Mobile teledermatology is a positive step for healthcare in Ghana and has helped eliminate the need for costly equipment, providing a cost-effective solution.
Case 4 -Teleconsultation
In Ghana it is extremely difficult to receive healthcare without traveling long distances. Many patients never make it to a healthcare facility due to the lack of transportation and of safe means of travel to each location. Because of the extreme road conditions and lengthy distances between health centers and communities, patients rarely, if ever, receive healthcare and may end up dying or suffering severe illness or disability. Although health centers are placed in highly populated areas, it is the rural areas and communities that need the most help. There are new applications and new means of getting healthcare to these patients as well. The method that is becoming a standard in rural healthcare is teleconsultation.
Teleconsultation is the consultation between doctors and other doctors or doctors and patients on a video link or channel.
With teleconsultation, the risk involved in receiving healthcare in rural areas can be reduced for the patient. A teleconsultation service was introduced in the Amansie-West district in 2010 [10]; it linked the district hospital and the local teleconsultation clinic. The service was placed there to assess healthcare professionals' perceptions of the benefits and challenges of serving this area and to identify possible areas of improvement [10]. The trial received positive feedback from medical staff and was described as a dramatic improvement in the quality of care, which in turn reduced the need to refer patients to the district hospital. Some problems occurred, such as phone service delays, stressful workloads on the telecommunication staff, and inadequate information received from phone calls, but steps have been taken to rectify the problems that arose. In conclusion, the teleconsultation service had the potential to greatly improve the quality of care for those who needed it the most. However, technical difficulties threaten the potential effectiveness of teleconsultation. Through proper training and maintenance, teleconsultation should be the future of medicine in developing countries.
Case 5 -Telecommunication
In Ahanta West, Ghana, a study was conducted using an SMS tool called Measure SMS. This tool was developed together with Tripod Software LTD to use SMS data transfer on basic mobile phones, as opposed to an application, due to its low cost [11]. Health workers were more likely to own a basic mobile phone than a smartphone. The SMS tool was used in 34 of the 114 communities in Ahanta West.
Dix Cove District Hospital, as well as seven clinics and ten community-based health services, are the community's healthcare providers. In May 2014 a study team visited the communities for three weeks, while community healthcare workers (CHWs) reported information on lymphedema and hydrocele cases from their communities. Researchers collaborating on the study came from the Kumasi Centre for Collaborative Research, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana, and the National LF Program. The CHWs in Ghana are volunteers with no education requirement, assigned one to each community. The health workers were trained in six training sessions, where they learned how to identify lymphedema and hydrocele, how to classify lymphedema severity into stages of mild, moderate, and severe, and basic lymphedema management. Each healthcare worker was required to record the details of each case and send them as an SMS to a device on which the Measure SMS app was installed. This method helped diagnose and treat patients. There are ongoing investments in mobile network coverage that increase the suitability of tools such as Measure SMS and help ensure they reach their full potential.
Case 6 -Teleradiology
At the Korle Bu Teaching Hospital in Accra, Ghana, a study was conducted to analyze nephrectomies performed in adults over a twelve-year span. Nephrectomy is the removal of a kidney and is used for malignant as well as benign lesions. In evaluating renal pathologies, the researchers used abdominal ultrasound, urography, abdominopelvic computerized tomography (CT) scans, and radioisotope renal scans. Over the twelve-year study, sixty-two nephrectomies were carried out. The average age of the patients was forty-nine plus or minus sixteen years, and the male-to-female ratio was 1:1. The data were analyzed using the Statistical Package for the Social Sciences (SPSS) for Windows, version 19. The study showed that 85% of the lesions were proven to be malignant, while 14.5% were found to be benign [12].
Case 7 -Telemedicine
Lassa fever is an acute viral hemorrhagic illness named after the town of Lassa in Nigeria, where the first case was discovered. Carried by rodents, the virus infects humans through aerosols, direct contact with the droppings or urine of infected rodents, or the blood or secretions of an infected person. In one particular case, a nineteen-year-old male farmer in the Ashanti Region had been overcome with chills and joint pain for three days. The initial laboratory results showed malaria parasites, a diagnosis of severe malaria was made, and he was admitted the same day to the general ward with other patients. Two hours after arrival, the patient developed muscle cramps, palpitations, delirium, and bleeding from the nose, ears, mouth, and anus [6]. A diagnosis of viral hemorrhagic fever was then made, and while referral to either Agroyesum Hospital or Komfo Anokye Teaching Hospital was being considered, the patient died that day. If access to doctors and medicine by way of telemedicine had been an option, a life could have been saved.
Dr. Einterz of the Kolofata District Hospital states that he is the only doctor serving the district's 75,000 mostly poor people. He stresses the importance of and need for telemedicine, but technology failures make it very difficult. There is ignorance and illiteracy, and sources of electricity are unreliable. Several larger villages have electricity; however, there are prolonged blackouts. When blackouts occur, the pump that brings water from the underground source fails, and the community must fill buckets from open wells or boreholes. Telemedicine needs to be implemented in these distressed areas to provide curative and preventive care. Telephone line connections are scarce, along with paved roads and post office access, but the implementation of telemedicine is necessary for the growth of this area.
Case 8 -Prenatal Care
The practice of genital mutilation continues to go unnoticed and unchallenged in several villages in West African countries. Young women are even forbidden access to prenatal and maternity care. Because of traditional customs, women are to remain in the home during the first twelve months of marriage. In this case, a young pregnant woman named Deborah Asamoah resided in the Ashanti Region of Ghana. Two months into her pregnancy she became ill and nauseous and started vomiting. She visited a clinic where she was given medicine and regained her appetite. Months later, upon going into labor, Deborah went to the clinic to find that the midwife was not there; only Louisa, the community health official, was present. Louisa told Deborah not to worry and said she would deliver the baby. The Novartis Foundation, which runs a telemedicine project in Ghana, was the provider of this healthcare service and had adopted and implemented telemedicine practice. Deborah was in labor for a long time, and after the delivery of the baby the bleeding did not stop. Before referring her to a hospital, Louisa was able to call a doctor, who advised which drugs needed to be given. The implementation of telemedicine through telecommunications was necessary, as there were no ambulances available and the chances of finding and affording a vehicle were slim to none.
Conclusion
Telemedicine is the future of healthcare in rural as well as urban areas of developing countries. With the support of government-based programs and third-party funding, telemedicine can successfully make its way to patients in secluded and diverse areas. The Novartis Telemedicine Project continues to provide telemedicine applications and workshops to educate and inform medical staff and medical institutions so that telemedicine can be effective and affordable. The extended range provided by satellites will serve rural areas thoroughly and dramatically reduce the need for, and cost of, transportation to medical institutions.
Through proper training and methods of distributing technology to doctors, telemedicine will lead the way in cost-efficient, safe, and reliable healthcare for those who cannot afford transportation or have none. Ghana is still developing; with a structured healthcare system being set in motion, telemedicine will influence the way medicine is delivered, especially in rural areas. Fewer patients will have to attend a district hospital or other major medical institution because of teleconsultation, and with teleradiology and other applications in place, patients may not even have to leave their homes to receive updates on their medical records and conditions.
|
v3-fos-license
|
2019-04-04T13:13:33.483Z
|
2013-07-17T00:00:00.000
|
94127615
|
{
"extfieldsofstudy": [
"Chemistry",
"Physics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://aip.scitation.org/doi/pdf/10.1063/1.4824622",
"pdf_hash": "d3029524401cdff8dbe389017d87a7c04874844c",
"pdf_src": "Arxiv",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44420",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"sha1": "d3029524401cdff8dbe389017d87a7c04874844c",
"year": 2013
}
|
pes2o/s2orc
|
Nanoscale capacitance: a classical charge-dipole approximation
Modeling nanoscale capacitance presents a particular challenge because of the dynamic contribution from the electrodes, which can usually be neglected in modeling macroscopic capacitance and nanoscale conductance. We present a model to calculate capacitances of nano-gap configurations and to define effective capacitances of nanoscale structures. The model is implemented using a classical atomic charge-dipole approximation and applied to calculate the capacitance of a carbon nanotube nano-gap and the effective capacitance of a buckyball inside the nano-gap. Our results show that the capacitance of the carbon nanotube nano-gap increases with the length of the electrodes, which demonstrates the important role played by the electrodes in the dynamic properties of nanoscale circuits.
I. Introduction
Measuring electronic properties of single molecules requires nanometer-spaced metallic electrodes, which are usually called a nano-gap. A nano-gap configuration, as shown in Fig. 1, is not only used widely in experimental measurements of electronic properties of single molecules [1], but is also used as the standard model in theoretical modeling of quantum electron transport [2]. Recently, the nano-gap configuration was proposed for use in DNA sequencing by measuring the difference in dc conductance of the nano-gap when different DNA nucleotides (adenine (A), cytosine (C), guanine (G), and thymine (T)) go through it [3]. However, since the electronic conductance in this configuration arises mainly from electron tunneling from one electrode to the nucleotide and then to the other electrode, it depends exponentially on the effective spaces between the nucleotide and the electrodes [4]. Thus a slight difference in the effective spaces will cause a huge difference in dc conductance, which makes the different nucleotides indistinguishable. It was reported that thousands of measurements and good statistics might be needed in order to wash out the effects of different spaces and distinguish different nucleotides [5]. In order to alleviate the difficulties of the dc conductance measurement technique, we proposed adding an ac capacitance measurement [6]. Unlike conductance, which arises from tunneling, capacitance arises mainly from the Coulomb interaction between the nucleotide and the electrodes and hence is not exponentially sensitive to the effective spaces between them. Averaging over repeated measurements of capacitance may provide a better signal/noise ratio than similar averages in conductance measurements. Thus adding a capacitance measurement may dramatically reduce the number of repeats.
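To make the sensitivity argument concrete, here is a rough numerical sketch in Python (the decay constant and gap values are illustrative assumptions, not parameters taken from Refs. [3-6]): a tunneling conductance scaling as exp(-2*kappa*d) drops by roughly an order of magnitude when the effective gap grows by one angstrom, whereas a Coulomb-dominated quantity scaling roughly as 1/d changes by only a few tens of percent.

import math

kappa = 1.1            # assumed tunneling decay constant, 1/angstrom (illustrative)
d0, delta = 3.0, 1.0   # assumed effective gap of 3 A, perturbed by 1 A

def tunneling_ratio(d, dd, k=kappa):
    """Relative change of G ~ exp(-2*k*d) when d -> d + dd."""
    return math.exp(-2 * k * (d + dd)) / math.exp(-2 * k * d)

def coulomb_ratio(d, dd):
    """Relative change of a quantity scaling like 1/d when d -> d + dd."""
    return (1.0 / (d + dd)) / (1.0 / d)

print(tunneling_ratio(d0, delta))   # ~0.11: about an order of magnitude
print(coulomb_ratio(d0, delta))     # ~0.75: about a 25% change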
Various theoretical models have been developed to calculate dynamic conductance [7] and capacitances [8] of nanoscale structures by extending algorithms for dc conductance; these focus on the middle device and make simplifying assumptions about the electrodes.
However, in this nano-gap configuration, ac transport is very different from dc transport, so a different model needs to be developed, as illustrated in Fig. 2. In dc transport, the current is steady not only in time but also in position along the transport direction, as shown in Fig. 2(a). Currents through any cross-sections are the same, whether in the infinite electrodes or in the middle device. The current measured in the electrodes has to go through the middle device. This is why we can use the I-V characteristics measured in the two electrodes to study the properties of the middle device. However, in ac transport, the current changes not only in time but also in position along the transport direction, as shown in Fig. 2(b): ac currents through different cross-sections in the electrodes are in general different because of charge accumulation. They will also differ from the current in the middle device. In the macroscopic case, the difference can usually be neglected and a unified current along the electrodes can be defined because of the very small surface/bulk ratio; the accumulated charges are negligible compared with the transported charges. The current in the device can also be defined the same as the unified one by introducing the concept of displacement current. At the nanoscale, with a large surface/bulk ratio, the difference cannot be neglected, and defining a unified current is almost impossible or otherwise questionable. Since an ac current can move back and forth, it can exist in the electrodes alone and does not have to go through the middle device. Thus the current in the electrodes can in general be much greater than that through the middle device.
Measuring dynamic I-V characteristics in the two electrodes of a nano-gap configuration cannot directly reveal the ac properties of the nanoscale device in between. Moreover, the contribution from the electrodes can be much greater than that from the middle device in modeling dynamic transport properties of nanoscale circuits; thus we need to keep in mind that the electrodes may be more important than the middle device and need to be considered carefully. In case one needs to find out the properties of the middle device only, the contribution from the electrodes should be properly removed. In this paper, we present a model to calculate capacitances of nano-gap configurations and define effective capacitances of nanoscale structures in the nano-gaps. We implement our model using a classical atomic charge-dipole approximation [9,10] and apply it to calculate the capacitance of a carbon nanotube (CNT) nano-gap and the effective capacitance of a buckyball (C60) inside the nano-gap. Our results show that the capacitance of the electrodes can be much larger than that of the middle device and thus will contribute more to dynamic currents when connected in a nanoscale circuit.
II. Capacitances of nano-gap configurations and effective capacitances of nanoscale structures
We use the schematic drawing in Fig. 3 to represent a nano-gap configuration. In Fig. 3(a), a device is inserted into the gap between the electrodes. In Fig. 3(b) there is no device.
Please note that each electrode in Fig. 3 has a finite length [11] and is just a small part of a semi-infinite electrode. In order to approach the semi-infinite limit, the electrode length will be gradually increased later in the calculations.
Since the two electrodes in Fig. 3(a) are finite in size, we can assume a finite positive charge +Q accumulated on the left electrode and a finite negative charge -Q accumulated on the right electrode. The potential difference ∆V between the two electrodes can then be calculated, and hence the capacitance of the nano-gap can be defined as C = Q/∆V.
In order to find out the contribution from the middle device only, we can similarly calculate the capacitance of the nano-gap in Fig. 3(b), C' = Q/∆V'. This is the mutual capacitance between the two electrodes only. We then define the difference between C and C' as the effective capacitance of the device, Cd = C - C'. We will show below that this definition leads to a convergent Cd as a function of increasing length of the electrodes.
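As a minimal illustration of these definitions (a toy sketch only, not the charge-dipole solver described in Section III), each finite electrode can be modeled as a chain of point charges carrying a total charge of +Q or -Q, the electrode potential taken as the average Coulomb potential over its atoms, and C evaluated as Q/∆V; the atom spacing, the one-dimensional geometry, and the unit convention (potential of a point charge taken as q/r) are assumptions made only for this example.

import numpy as np

def gap_capacitance(n_atoms, spacing=0.25, gap=1.23, charge=1.0):
    """Toy estimate of a nano-gap capacitance C = Q / dV.

    Two collinear chains of point charges stand in for the electrodes:
    the left chain carries +Q spread uniformly over its atoms, the right
    chain carries -Q.  The 'potential' of an electrode is taken as the
    average Coulomb potential over its atoms (units with phi = q / r)."""
    left = -(gap / 2 + spacing * np.arange(n_atoms))
    right = +(gap / 2 + spacing * np.arange(n_atoms))
    pos = np.concatenate([left, right])
    q = np.concatenate([np.full(n_atoms, charge / n_atoms),
                        np.full(n_atoms, -charge / n_atoms)])

    d = np.abs(pos[:, None] - pos[None, :])
    np.fill_diagonal(d, np.inf)              # drop self-interaction
    phi = (q[None, :] / d).sum(axis=1)       # potential at every atom

    dV = phi[:n_atoms].mean() - phi[n_atoms:].mean()
    return charge / dV                       # C = Q / dV

for n in (8, 16, 32, 64, 128):
    print(n, gap_capacitance(n))             # C grows with electrode length

In this toy geometry the computed capacitance keeps increasing as the chains are lengthened, mirroring the qualitative electrode-length dependence discussed for the CNT nano-gap below; the effective capacitance of an inserted device would then simply be Cd = C - C'.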
III. Classical Charge-Dipole approximation
The model in Section II requires calculation of the potential difference ∆V between the two electrodes when one carries +Q and the other carries -Q. To find the potential difference, we use an atomic charge-dipole approximation, which has been used successfully for carbon nanostructures [9,10]. The charge distribution of each atom is approximated by a charge q and a dipole p, both of which are assumed to be Gaussian distributions.

IV. Capacitance of a CNT nano-gap and effective capacitance of C60 in it

The above model and method are then applied to study the capacitance of a CNT nano-gap and the effective capacitance of a C60 in the nano-gap. As shown in Fig. 4(a), we use a nano-gap composed of two (5,5) CNTs. The gap between them is 1.23 nm. To apply the method, we use an electron affinity of the carbon atom of 1.26 eV and a width of the Gaussian distributions of R = 0.06862 nm [9]. We then assume that the charge on the left electrode is +e and that on the right electrode is -e, calculate the charge distributions, and determine the potential difference between the two electrodes. Fig. 5 shows the calculated charge distributions when each electrode has a length of 8 unit cells (NC = 8). Clearly, the C60 in the gap is positively charged on one side and negatively charged on the other side due to the charges in the electrodes. The charges on the C60 in turn change the charge distributions and potential profiles in the electrodes through the Coulomb interaction and hence contribute to the capacitance of the nano-gap. This contribution is the effective capacitance of the middle device defined earlier. After determining the potential difference between the two electrodes, the capacitance of the nano-gap is calculated as C = e/∆V. Fig. 6(a) presents the calculated capacitance as a function of the length of the electrodes NC. As the length of the electrodes increases, the capacitance keeps growing, which shows the importance of properly including the electrodes when studying dynamic transport properties. In order to isolate the contribution from the C60, we also calculate C', the capacitance of the CNT nano-gap without the C60 in between, as shown in Fig. 4(b). The effective capacitance Cd of the C60 is then calculated as the difference between C and C' and plotted in Fig. 6(b). The effective capacitance Cd also increases with the length of the electrodes at the beginning; however, it converges very quickly. Beyond 64 unit cells, the effective capacitance of the C60 converges to 0.03532 e/V, which is much smaller than the capacitance of the electrodes. Please note that the converged value for the C60 is still electrode-dependent: it will change for different electrodes. It is the dielectric effect of the C60 on the capacitance of the nano-gap. That is why we call it an effective capacitance of the device.
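The convergence check described above can be scripted in a few lines; the capacitance values below are hypothetical placeholders (only the converged Cd of about 0.035 e/V echoes the value quoted in the text) and are meant purely to show how Cd(NC) = C(NC) - C'(NC) is formed and tested for convergence.

# Hypothetical capacitances (in e/V) of the nano-gap with and without the
# C60, indexed by electrode length NC; the numbers are placeholders.
C_with    = {8: 0.090, 16: 0.110, 32: 0.1280, 64: 0.1445, 128: 0.1603}
C_without = {8: 0.062, 16: 0.078, 32: 0.0935, 64: 0.1092, 128: 0.1250}

def effective_capacitance(c_with, c_without):
    """Cd(NC) = C(NC) - C'(NC) for every electrode length NC."""
    return {nc: c_with[nc] - c_without[nc] for nc in sorted(c_with)}

def converged_length(cd, tol=1e-3):
    """First NC at which Cd changes by less than tol from the previous NC."""
    lengths = sorted(cd)
    for prev, cur in zip(lengths, lengths[1:]):
        if abs(cd[cur] - cd[prev]) < tol:
            return cur
    return None

cd = effective_capacitance(C_with, C_without)
print(cd)                                        # Cd levels off near 0.035 e/V
print("converged at NC =", converged_length(cd))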
V. Conclusions
We present a model to calculate capacitances of nano-gap configurations and define effective capacitances of nanoscale structures. By assuming charge accumulation on the two electrodes of a nano-gap, the capacitance of the nano-gap is calculated by determining the potential difference between the two electrodes. The effective capacitance of a nanoscale structure is then defined as the difference between the two capacitances of the nano-gap with and without the structure in between. We implement the model using a classical atomic charge-dipole approximation and apply it to calculate the capacitance of a CNT nano-gap and the effective capacitance of a C60 inside the nano-gap. Our results show that the capacitance of the CNT nano-gap increases with the length of the electrodes and that the effective capacitance of the C60 reaches a converged value at a certain electrode length. Moreover, the converged effective capacitance of the C60 is much smaller than the capacitance of the CNT nano-gap, which demonstrates the importance of considering the contribution of the electrodes in studying dynamic transport properties of nanoscale circuits.
This research is supported by an award from the Research Corporation for Science Advancement.
|
v3-fos-license
|
2018-04-03T01:40:35.560Z
|
2014-01-01T00:00:00.000
|
14601164
|
{
"extfieldsofstudy": [
"Psychology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1068/i0664",
"pdf_hash": "1a2d70bd6e08f2ba149d324fd1a78e2df26e1104",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44422",
"s2fieldsofstudy": [
"Art"
],
"sha1": "1a2d70bd6e08f2ba149d324fd1a78e2df26e1104",
"year": 2014
}
|
pes2o/s2orc
|
Is this a “Fettecke” or just a “greasy corner”? About the capability of laypersons to differentiate between art and non-art via object's originality
Which components are needed to identify an object as an artwork, particularly if it is contemporary art? A variety of factors determining aesthetic judgements have been identified, among them stimulus-related properties such as symmetry, complexity and style, but also person-centred as well as context-dependent variables. We were particularly interested in finding out whether laypersons are at all able to distinguish between pieces of fine art endorsed by museums and works not displayed by galleries and museums. We were also interested in analysing the variables responsible for distinguishing between different levels of artistic quality. We asked untrained (Exp. 1) as well as art-trained (Exp. 2) people to rate a pool of images comprising contemporary art plus unaccredited objects with regard to preference, originality, ambiguity, understanding and artistic quality. Originality and ambiguity proved to be the best predictors of artistic quality. As the concept of originality is tightly linked with innovativeness, a property known to be appreciated only by further, and deep, elaboration (Carbon, 2011, i-Perception, 2, 708–719), it makes sense that modern artworks might be cognitively qualified as being of high artistic quality but are meanwhile affectively devaluated or even rejected by typical laypersons, at least at first glance.
Introduction
When Joseph Beuys' famous artwork "Fettecke" 1 (1982; Engl. "greasy corner") was nearly destroyed by a diligent facility manager in 1986, a vivid societal debate emerged on what an artwork is about and how such a work is defined in modern times. "Fettecke" obviously polarised attending beholders as many people clearly identified it as a great work of contemporary art whereas others defamed it as just being an object made of greasy substance without any link to art at all. Whereas many laypersons seemingly reject some works of contemporary art, it has not been scientifically investigated whether laypersons are at all able to assess artistic quality. Although a series of studies exist that investigate the role of stimulus symmetry, complexity, familiarity, fluency, artistic style and so-called good Gestalts on aesthetic appreciation and evaluation (e.g. Augustin, Carbon, & Wagemans, 2012a), the categorisation of objects as artworks or not has not been addressed so far. We only know that context triggers and modifies aesthetic appreciation (Carbon & Jakesch, 2013;Leder Belke, Oeberst, & Augustin, 2004), for instance when inspecting an unfamiliar object in the context of a museum or art gallery, we assign more aesthetic quality (e.g. more pleasantness, more interestingness) to the targeted entity (Locher, Smith, & Smith, 2001). Additionally, context factors have been revealed by means of putative authenticity (Wolz & Carbon, 2014) or the specific entitling of artworks (Leder, Carbon, & Ripsas, 2006;Millis, 2001). The aforementioned shortlist of possible influences and Is this a "Fettecke" or just a "greasy corner"? About the capability of laypersons to differentiate between art and non-art via object's originality the complex interplay between such aspects and further factors such as personality traits, expertise or interest show the meaningfulness of this topic.
To form a judgement, we have to mentally represent and evaluate the assessed piece of art. Part of the perception of an object to be evaluated is the situational context within an episode (e.g. the encoding specificity, Tulving & Thomson, 1973). The episodic context can be regarded as a kind of scale, embedded into the object which is to be evaluated. As such, the aesthetic judgement process combines bottom-up (analysis of the object) and top-down sub-processes (e.g. by the situation, but also by the level of expertise) (see Carbon & Jakesch, 2013; Leder et al., 2004). A judgement highly depends on the category system of individuals. These determine which information is retrieved, processed and available in the mental representation, the mental model. This idea was propagated specifically for aesthetics by Martindale's (1984, 1988) cognitive theory based on the theory of semantic networks (Quillian, 1968). A semantic network represents semantic relations between concepts, as a form of knowledge representation. The retrieval of knowledge occurs through the activation of a node. This activation is called spreading activation (Collins & Loftus, 1975). Accordingly, an object is perceived as an aesthetic target object if a combination of specific nodes is activated in the network (cf. Faerber, Leder, Gerger, & Carbon, 2010).
In the present study, we used the same context for evaluating all stimuli. As this context did not contain any indication of a typical art environment such as a museum, participants were referred to object-related properties. One such object-related variable which was identified as being important, at least in contemporary art, is ambiguity-a quality that offers specific and distinct interpretations of the object. As the processing of aesthetic stimuli was described as a kind of problem solving process (Tyler, 1999), ambiguity and partly the resolving of such ambiguities with the by-product of understanding parts of the meaning of an artwork seems particularly important. The effort of resolving such problems with subsequent better understanding of an artwork might be a part of the pleasure that emerges from a deeper aesthetic experience (Russell, 2003). Biederman and Vessel (2006) explain this on a neural level: The greater the amount of interpretable information, the more activity is possible in the visual association areas and therefore the more perceptual pleasure is produced for the viewer. As pointed out only recently, ambiguity is indeed a characteristic of many artworks (Jakesch & Leder, 2009;Muth, Pepperell, & Carbon, 2013). Mamassian (2008) highlighted that ambiguity in the visual arts is special as the perceiver has no particular task when inspecting artworks. Therefore, the perceiver does not suffer negative consequences as a result of not being able to resolve the ambiguity in the artwork (but see Cupchik, 1992).
Besides ambiguity, laypersons can describe artworks on the basis of a series of other variables. Augustin and colleagues revealed that each different aesthetic domain such as visual art versus film and music has its own distinct pattern of relevant aesthetic concepts (Augustin et al., 2012a). They showed that when laypersons are asked to describe works of art they not only use relatively undifferentiated concepts like "beautiful" (cf. Jacobsen, Buchta, Köhler, & Schröger, 2004), but also employ terms with strong affective associations reflected by words like "wonderful" and clear cognitive associations such as "originality" (Augustin, Wagemans, & Carbon, 2012b). In the present paper, we aimed to find out which variables are mainly responsible for assessing the "artistic quality" of ambiguous objects in order to have an idea of what fuels the classification of any aesthetic object into art or non-art. Previous research demonstrated that contextual information has an important impact on aesthetic evaluation, but here we focused on the key variables on which the aesthetic evaluation of ambiguous objects is based when contextual information is missing. Furthermore, we investigated whether laypersons and art experts can see a difference between the objects and whether factors other than beauty may reflect artistic quality.
Participants
Seventeen persons (12 female, 5 male) aged between 19 and 33 years (M = 24.1 years, SD = 4.2) participated in the experiment, all of whom were students of the University of Bamberg. They had normal or corrected-to-normal vision confirmed by the Snellen Eye Chart test; furthermore, normal colour vision was assured by a short version of the Ishihara Colour Test. All participants were naïve about modern art as they all lacked special training in the arts. We verified the notion of being laypersons by asking them about the possession of art reference books and about their experience with art exhibitions as in Leder et al. (2006).
Stimuli
We used pictures of ambiguous objects as they are not clearly identifiable and solvable entities; we clearly need cognitive effort to infer their meaning. In this regard they are also not easily processed on a perceptual level due to basic mechanisms such as fluency (see Reber, Schwarz, & Winkielman, 2004; see also Albrecht & Carbon, 2014). We retrieved 213 colour photographs of various objects: 134 contemporary art objects (e.g., Salvador Dali's "Lobster-Telephone" or Tracy Emin's "My Tent" installation) and 79 unaccredited objects ("everyday objects"). Both art objects and everyday objects were selected during extensive Internet research. We took care to secure a comparable degree of ambiguity for exemplars of both object categories. The criterion of ambiguity is characterised by the chosen art and everyday objects having no clear meaning or function, so that they can be arbitrarily described as an "art object" or as an everyday object ("non-art object"). At first appearance, both object categories are very similar, so that the distinction between them is based on formal criteria. We pre-categorised the objects as art objects when they were exhibited in a prestigious museum or produced by a renowned artist. If an object did not satisfy the formal criteria, it was considered a non-art object.
Procedure
All images were shown repeatedly over five blocks; within each block, all images were fully randomised anew. Participants were asked to rate one image after another on 7-point Likert scales (1 = not at all to 7 = very strong) regarding one of the following five dimensions (one dimension consistently throughout each block): 1) preference, 2) originality, 3) ambiguity, 4) understanding and 5) artistic quality. The block order was kept constant across all participants. Participants were asked to respond as quickly and accurately as possible, thus following their first impression. After each block, participants took a short break; the whole study lasted approximately 90 min.
Results
As seen in Figure 1, all dimensions correlated significantly with artistic quality. In contrast to the empirical findings which we have mentioned above, originality (r = 0.87, p < .0001) and ambiguity (r = 0.87, p < .0001) were positively correlated with artistic quality, as was preference (r = 0.51, p < .0001), although to a lesser extent. Understanding, on the other hand, was negatively correlated with the dimension artistic quality (r = -0.48, p < .0001).
A compatible pattern of results emerged when conducting a multiple regression analysis with artistic quality as dependent variable (see Table 1). Once again, preference played only a minor part in explaining artistic quality (β = 0.13). Much more prominent as predictor was ambiguity (β = 0.29) and, even more, originality (β = 0.47). As in the single bivariate analyses, understanding was clearly negatively associated with artistic quality (β = -0.34). The overall explained variance of the whole linear regression model was 89% (R = .948, N = 213, p < .0001), so artistic quality could be substantially predicted by the targeted four variables.
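For readers who want to reproduce this kind of analysis, the following is a sketch of a standardized multiple regression in Python; the item-level ratings are synthetic placeholders, so the coefficients it prints are not the values reported in Table 1.

import numpy as np

rng = np.random.default_rng(0)
n_items = 213  # number of rated images, as in Experiment 1

# Placeholder item-level mean ratings on 7-point scales (synthetic data,
# used only to illustrate the analysis, not to reproduce the results).
preference    = rng.uniform(1, 7, n_items)
originality   = rng.uniform(1, 7, n_items)
ambiguity     = rng.uniform(1, 7, n_items)
understanding = rng.uniform(1, 7, n_items)
artistic_quality = (0.1 * preference + 0.5 * originality
                    + 0.3 * ambiguity - 0.3 * understanding
                    + rng.normal(0, 0.5, n_items))

def standardized_betas(y, predictors):
    """OLS on z-scored variables, so the fitted coefficients are directly
    comparable to the standardized betas reported in the paper."""
    z = lambda a: (a - a.mean()) / a.std()
    X = np.column_stack([np.ones(len(y))] + [z(p) for p in predictors])
    beta, *_ = np.linalg.lstsq(X, z(y), rcond=None)
    y_hat = X @ beta
    r2 = 1 - ((z(y) - y_hat) ** 2).sum() / (z(y) ** 2).sum()
    return beta[1:], r2

betas, r2 = standardized_betas(
    artistic_quality, [preference, originality, ambiguity, understanding])
print("standardized betas:", betas.round(2), "R^2:", round(r2, 2))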
We also found significant differences between all ratings of art and non-art objects (Figure 2), analysed by two-tailed t-tests: preference: t(211) = 3.00, p = .003, d = 0.43; originality: t(211) = … .

Experiment 1 provides insights into evaluating objects of artistic quality by using a broad variety of stimulus material, consisting of art and non-art, and thus unaccredited, objects. Of all analysed variables (preference, originality, ambiguity, understanding), originality was the best predictor of artistic quality, while preference showed only a relatively weak, but still significant, association. In addition, artistic quality was positively associated with ambiguity but negatively with understanding. Furthermore, we demonstrated that participants performed quite impressively in differentiating between art and non-art objects in an implicit way. This is quite astonishing, as all participants were naïve about art, particularly contemporary art.
Experiment 2
To check whether the correlations we found in Experiment 1 were valid even for art-trained people, we ran Experiment 2. The research about expertise has a long tradition in psychology and has been extensively studied in cognitive research fields such as chess playing (Simon & Chase, 1973) as well as in more perceptual research fields such as face processing (Schwaninger, Carbon, & Leder, 2003). Expertise is seen as a very high level of domain-specific knowledge with specific highly trained skills which are accumulated through experience and duration of deep elaboration (Van den Bos, 2007). The image perception of art-trained viewers and non-trained viewers was investigated in several studies, for instance, demonstrating that art-trained and non-trained viewers judge artworks in a qualitatively different way. It has been shown for example that for art-trained viewers, complexity (Silvia, 2006) and originality (Hekkert & Van Wieringen, 1996) are related to artistic quality. In comparison to art-trained viewers who process artworks more on a subordinate level (Belke, Leder, Harsanyi, & Carbon, 2010), non-trained viewers rather look at obvious details on the mere content of an artwork or at how the artwork was primarily made (Cupchik & Geboyts, 1988). Augustin and Leder (2006) dealt with art expertise relating to contemporary art. They found that experts process artworks more in relation to style, whereas non-experts do so using personal criteria such as feelings. Vogt and Magnussen (2007) provided further evidence for different viewing strategies of art-trained and non-trained beholders: Non-trained viewers spend more time on areas with recognisable objects and human features than art-trained viewers do. The aim of Experiment 2 was to analyse whether art-trained people differ in the variables that predict their assessment of artistic quality of our target objects compared with the laypersons in Experiment 1.
Participants
We recruited twenty participants (9 female, 11 male), all of whom were students at Cardiff School of Art and Design with intense general fine art training (aged from 19 to 38 years, M = 24.8 years; SD = 5.8). All had normal or corrected-to-normal vision, again assured by standard vision and colour vision tests as described in Experiment 1.
Stimuli
Based on the results of Experiment 1, 80 images (40 art objects/ 40 non-art objects) were selected from the pool of 213 images from Experiment 1 to reduce the duration time of the experiment. As known from Experiment 1, participants evaluated some images very similarly on the target dimensions, so we dropped such redundant items without losing much of the variety of the entire set for all the targeted dimensions of preference, originality, ambiguity, understanding and artistic quality.
Procedure
The procedure was the same as in Experiment 1.
Results
We found a highly compatible pattern of relationships between our variables as in Experiment 1. All used variables correlated significantly with artistic quality, as can be seen in Table 2. When conducting a multiple regression analysis with artistic quality as dependent variable (see Table 3), we observed an interesting specific predicting role of preference (β = 0.40) and ambiguity (β = 0.36): In contrast to laypersons, experts based their assessments of artistic quality more on their own personal preference instead of the probably ultimate criterion for an artwork of being "original" (Wolz & Carbon, 2014). This was quite surprising and could mean experts place more trust in their gut feelings (here: trusting in the affective value of their own liking) than in cognitively analysing the object by assessing originality. The overall explained variance of the whole model was again very high at 82% (R = .909, N = 80, p < .0001), so artistic quality again could be substantially predicted by three out of four variables; only understanding did not significantly contribute to the whole model. This might be interpreted as further evidence that art-trained people do not use typical cognitive processes such as analysis of their level of understanding, but base their evaluations rather more on gut feelings and intuition. Hence, the idea of explicitly trying to understand artworks by reading the "hidden message" might be a particular mode of cognitive processing prevalent in laypersons but not experts. By using two-tailed t-tests, we again found good performance in telling apart art and non-art objects across all dimensions: preference: t(78) = 1.88, p = .064, d = 0.44; originality: t(78) = 5.54, p < .0001, d = 1.26; ambiguity: t(78) = 6.43, p < .0001, d = 1.47; understanding: t(78) = -3.78, p < .0001, d = 0.87; and artistic quality: t(78) = 5.37, p < .0001, d = 1.22.
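A sketch of the corresponding group comparison (a two-tailed independent t-test plus Cohen's d with a pooled standard deviation) is given below; the two groups of ratings are synthetic placeholders standing in for the 40 art and 40 non-art objects, not the study's data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Placeholder mean originality ratings for 40 art and 40 non-art objects
# (synthetic values chosen only to illustrate the analysis).
art     = rng.normal(4.8, 1.0, 40)
non_art = rng.normal(3.6, 1.0, 40)

t, p = stats.ttest_ind(art, non_art)       # two-tailed independent t-test

# Cohen's d based on the pooled standard deviation
n1, n2 = len(art), len(non_art)
pooled_sd = np.sqrt(((n1 - 1) * art.var(ddof=1) +
                     (n2 - 1) * non_art.var(ddof=1)) / (n1 + n2 - 2))
d = (art.mean() - non_art.mean()) / pooled_sd

print(f"t({n1 + n2 - 2}) = {t:.2f}, p = {p:.4f}, d = {d:.2f}")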
General discussion
The present study provides further insights into the multidimensionality of variables underlying the assessment of artistic quality. In our experiment, we employed a broad variety of stimulus material, consisting of art and (unaccredited) non-art objects. We used two very different groups of participants: in Experiment 1 laypersons and in Experiment 2 art-trained persons assessing the material.
For the laypersons we found that of all analysed variables (preference, originality, ambiguity, understanding), originality was the best predictor for artistic quality, while preference only showed moderate associations. In addition, artistic quality was positively associated with ambiguity and negatively associated with understanding. Furthermore, we demonstrated that even laypersons could already easily differentiate between art and non-art objects in an implicit way; this is quite astonishing for a group of participants who were naïve to art, particularly naïve to contemporary art. Based on these data, it can be assumed that contemporary art objects do not need to be fully understood in order to be considered as "high art," but rather are judged by their originality and ambiguity; the data can probably even be interpreted in such a way that some items which cannot be understood very well are qualified as being art particularly because they are not easy on the mind. The results also show an additional aspect: Although we could show that art objects were liked more than non-art objects on average, we also revealed that other predictors were much more influential than the variable preference, which has been notoriously assumed to be the most relevant variable in aesthetics. This reflects findings by colleagues (e.g. Augustin et al., 2012a, 2012b) that showed that "beauty" or linked concepts such as attractiveness are far from being the most influential for appreciating and processing art. For the art-trained experts whom we tested in Experiment 2, we found a similar pattern among the key variables as we did for the laypersons. Again, preference was not the best predictor for artistic quality compared to originality and ambiguity. Understanding, interestingly, although again showing a negative correlation with artistic quality, did not reach significance as a predictor in the multiple regression model. Experts seem to assess artworks more through personal preference than through any other variable which has been addressed in our study. As already described by Leder et al. (2004) in the model of aesthetic experience, art-trained persons have more knowledge and experience about art, which has an impact on implicit processing as well. So, it seems on the one hand that art experts perceive art rather implicitly and automatically, relying more on their gut feeling to identify an art object (gut feelings, of course, in the sense of highly trained processes built up by processing a huge number of objects combined with extensive semantic knowledge). On the other hand, laypersons tend to use more explicit and analytical ways of evaluating ambiguous objects, a phenomenon well known in the context of art exhibitions, where laypersons continually ask for the meaning of the art and often long for the dissolution of ambiguities. Evidence for this can also be observed in the data pattern of the present experiments, particularly in the difference of the ratings of understanding across both studies: For the layperson model, understanding predicts artistic quality together with the other utilized dimensions, but these effects could not be shown for the art-trained experts. There might be several reasons for this dissociation. Artists might understand artworks much better and deeper, but in an implicit way; on an explicit level, understanding is less important for them.
Alternatively, art-trained persons might better cope with lack of understanding or not solving the ambiguity, because other aspects such as ambiguity or originality are more important in their evaluations. Leder, Gerger, Dressler and Schabmann (2012) have investigated the aesthetic appreciation of classic, abstract and modern artworks. They found also that understanding plays a greater role in the valuation of art by non-art trained persons, while understanding always depends on the style of the artwork. Additionally, Van den Cruys and Wagemans (2011) reported that less understanding (prediction error) of an artwork can even cause a deeper elaboration of art, because observers have to spend more time finding the meaning in art. Accordingly, Pepperell (2011) assumed that the mental effort a person has to invest in order to recognise the content of a piece of art has a positive influence on aesthetic appreciation.
To sum up, art experts seem to rely on their gut feelings or intuition when evaluating art while laypersons try to understand the hidden message of the art object. However such experts' gut feelings might actually be very reliable and valid heuristics.
Manuela Haertel holds a B.Sc. in Psychology and is an M.Sc. student of the University of Bamberg. In addition to studying psychology, she also aims to gain a degree in Art History. She works as a JuWi (junior researcher) at the Department of General Psychology and Methodology and is particularly interested in aesthetics research, focusing on contemporary art. Regarding her interests, she is completing an internship at the School of Art and Design and has worked together with Professor Robert Pepperell, who investigated the nature of perceptual consciousness. She is a member of the research-group "EPAEG" (www.epaeg.de).
Strength Analysis of Cement Mortar with Carbon Nanotube Dispersion Based on Fractal Dimension of Pore Structure
Carbon nanotubes (CNTs) are considered among the ideal modifiers for cement-based materials. This is because CNTs can be used as a microfiber to compensate for the insufficient toughness of the cement matrix. However, the full dispersion of CNTs in cement paste is difficult to achieve, and the strength of cement material can be severely degraded by the high air-entraining property of CNT dispersion. To analyze the relationship between the gas entrainment by CNT dispersion and mortar strength, this study employed data obtained from strength and micropore structure tests of CNT dispersion-modified mortar. The fractal dimensions of the pore volume and pore surface, as well as the box-counting dimension of the pore structure, were determined according to the box-counting dimension method and Menger sponge model. The relationship between the fractal dimensions of the pore structure and mortar strength was investigated by gray correlation. The results showed that the complexity of the pore structure could be accurately reflected by fractal dimensions. The porosity values of mortar with 0.05% and 0.5% CNT content were 15.5% and 43.26%, respectively. Moreover, the gray correlation between the fractal dimension of the pore structure and strength of the CNT dispersion-modified mortar exceeded 0.95. This indicated that the pore volume distribution, roughness, and irregularity of the pore inner surface were the primary factors influencing the strength of CNT dispersion-modified mortar.
Introduction
Cement-based materials are widely used in various engineering structures such as buildings, bridges, dams, and roads because of their excellent compressive properties and durability. However, cement-based materials have certain limitations, such as low tensile strength and insufficient toughness [1][2][3][4]. When engineering structures are exposed to the external environment and subjected to loading, pores and cracks are easily generated, severely degrading the normal service life of these structures. In view of these shortcomings, many researchers have studied methods of improving the strength and toughness of cement-based materials by changing the water-cement ratio, cementitious materials, fiber types and contents, admixtures, etc. Rao and Chen et al. studied the effect of the water-cement ratio on mortar and considered it an important parameter that affects mortar strength [5,6]. Pereira-De-Oliveira et al. studied the effects of different kinds of fibers, such as polypropylene fiber and glass fiber, on the strength of mortar and concluded that the effects of different fibers on the strength and durability of mortar were similar [7]. In engineering applications, fiber materials are often used to improve the properties of cement-based materials [8]. However, crack development can only be limited to a certain extent by steel fibers, acrylic textile fibers, polypropylene fibers, etc., and the generation of cracks cannot be fundamentally prevented. Therefore, some researchers have focused on microfiber materials, among which CNTs have received the most attention.
CNTs are nanofiber materials with exceptional tensile properties and toughness; their tensile strength is 100 times greater than that of steel. Moreover, as a microfiber, CNTs can not only inhibit the emergence and development of microcracks in cement-based materials but also improve the mechanical properties and durability of cement-based materials. Some studies have shown that the shrinkage of hardened cement mortar can be inhibited by CNTs and that crack resistance can be significantly improved. When 0.1% CNTs are added to hardened cement mortar, the self-shrinkage inhibition rate can reach 40%, and the porosity and microcracks of cement paste are significantly reduced [9]. Xu and Rocha reported that the strength of mortar increased with the addition of CNTs; 0.1% CNTs could increase the flexural strength by 46%. The bridging and filling effects of CNTs have been observed by scanning electron microscopy (SEM) [10,11]. However, Liu and Huang reported that the mortar strength first increased and then decreased with CNT addition; in the study by Huang, the strength increment reached 30% [12][13][14]. The conclusions of different researchers are therefore contradictory. To explore the reasons for this, many researchers have conducted in-depth studies on the microstructure of cement-based materials.
Some studies are exploring the reasons for the different CNT effects on mechanical properties and durability. Wang and Nochaiya analyzed the influence of CNTs on pore structure and found that CNTs could reduce mortar porosity [15,16]. Gdoutos reported that the macroscopic and nanoscale mechanical properties and nanostructure of mortar could be improved by the addition of CNTs; the mortar matrix was strengthened by CNTs on the nanometer scale by increasing the C-S-H quantity and decreasing the porosity [17]. CNTs have a high surface atomic ratio and surface energy; therefore, they agglomerate easily, and the effect of CNTs on mortar is limited by this agglomeration. Moreover, the surfactant used to disperse CNTs has a strong negative effect on cement-based materials. Therefore, some researchers consider that the dispersant used with CNTs increases the porosity. Reales et al. mixed a surfactant with CNT dispersion and found that the negative effect of the surfactant on the mortar matrix was greater than the positive effect of CNTs [18]. Hu et al. analyzed the effects of 0.05% and 0.5% CNT dispersions on the mechanical properties and microstructure of cement mortar. They found that the air introduced by the dispersant increased the porosity and affected the strength of the mortar [19]. Correlation analysis of the effect of CNT dispersion on the strength of cement-based materials is rarely performed. Therefore, such a study has significance for the utilization of CNT-reinforced cement-based materials.
The relationship between the macroscopic properties and microstructure of cement-based materials has been a key problem in research on cement materials. Most studies explored the microstructure characteristics of cement-based composites through SEM, X-ray diffraction, nanoindentation, and other microanalysis technologies. A theoretical model that considers the relationship between microstructure changes and macro-performance evolution was subsequently established. Li et al. established the flexural strength curve of cement-based composites with porosity and capillary content; CNTs could optimize the pore structure and improve compactness [20]. Gao et al. derived the curves of flexural and compressive strengths, including porosity. Then, they analyzed the relationship between the CNT diameter and porosity [21], considering that the pore structure of mortar has an important influence on strength and durability. However, the pore structure characteristics of cement-based materials are intricate. Conventional parameters, such as porosity and pore size distribution, which can characterize the pore structure, cannot quantitatively describe the pore shape, specific surface area, diameter, and spatial distribution [22,23].
Because the traditional parameters cannot meet the needs of the test, some researchers have combined cement-based materials with mathematics to seek new parameters that quantitatively evaluate the complex changes in pore structure. The fractal dimension has been a topic of intense interest in recent years. Fractal theory provides a scientific method for examining irregular and complex natural phenomena. As a basic mathematical concept and a measure of complex structures, the fractal dimension is mainly applied to quantitatively represent the intricacy of geometric forms and space-filling ability. The emergence of fractal theory has offered a new avenue for the exploration of complex and disordered phenomena in cement-based materials. Previous studies have shown that the fractal dimension can be effectively applied to study the pore structure (e.g., pore shape, specific surface area, diameter, and spatial distribution) of cement-based materials [24][25][26][27][28]. Several researchers have used different types of fractal dimensions to characterize the pore structure, such as the fractal dimension of pore volume, the fractal dimension of pore surface, and the fractal dimension of porosity. Wang et al. introduced the principles, testing technology, and fractal dimension models of seven kinds of fractal dimensions commonly used in cement-based materials [29]. Qing et al. used the box-counting method to calculate the fractal dimension from scanning electron microscope images [30]. Some scholars have established a correlation between the macro- and micro-properties of cement-based materials through the fractal dimension [31][32][33]. Jin studied the correlation between mortar strength and pore structure through experiments; the results indicated that fractal theory was more accurate than conventional parameters in characterizing pore size distribution [34]. Han investigated the correlation between fractal characteristics and concrete strength, and subsequently formulated a mathematical model that considered fractal dimension and compressive strength [35].
The influence of the pore structure change caused by CNT dispersion on the strength of cement-based materials is a key problem in the application of CNT-reinforced cement-based materials. To analyze the correlation between the pore characteristics and strength of CNT-modified cement-based materials, this study employed the box-counting dimension method and the Menger sponge model. The fractal dimensions of pore volume and pore surface, as well as the box-counting dimensions, of mortar modified by 0.05% and 0.5% CNT dispersions were determined. The correlation between the mechanical properties (including strength) and pore structure of CNT-modified cement-based materials was quantitatively analyzed.
Raw Materials
A dispersion of CNTs was prepared with a nonionic surfactant (named TNWDIS, a carbon nanotube water dispersant provided by Chengdu Organic Chemicals Co., Ltd. (Chengdu, China)) and multiwalled carbon nanotubes with diameters in the range of 30-80 nm; the dispersion medium was deionized water. As a dispersant, the content of TNWDIS was 0.25 times the mass of CNTs. The CNT image shown in Figure 1a, captured using a transmission electron microscope (TEM), was provided by Chengdu Organic Chemical Co., Ltd.; it can be seen that the CNTs were rarely intertwined. Furthermore, the CNT diameter shown in the image is about 50 nm, which agrees with the stated diameter range of 30-80 nm. The dispersion, with a CNT content of 10%, is the black liquid shown in Figure 1b. Cement P.I 42.5 (Chinese standard) was used; its fundamental properties are summarized in Table 1. Natural river sand with a fineness modulus of 2.94 was utilized. A superplasticizer was applied to modify the working properties of the fresh cementitious composites.
Preparation and Curing of Mortar
Cement mortar specimens containing CNTs at 0.05% and 0.5% of the cement mass were prepared. CNTs were mixed into mortar through a CNT dispersion, and the content of CNTs in the CNT dispersion was 10%. The dosage of the CNT dispersion in the two CNT mortar groups was 2.25 g and 22.5 g, respectively. Ordinary mortar specimens without CNTs were also prepared for comparison. The three groups of mortar specimens had the same water-cement ratio of 0.5, and the mass ratio of sand to cement was 3:1. The mortar specimens, 40 × 40 × 160 mm in size, were formed according to the preparation method specified in the Chinese national standard, GB/T 17671-2021. Mortar fluidity was tested before the specimens were formed. Each test group had four groups of mortar specimens, and each group was prepared using triple test molds. The test group number and the quantity of raw materials used in each group are summarized in Table 2. The poured specimens were maintained in a standard curing box for 48 h. After removing the molds, the specimens were immersed and maintained in water and then placed in the standard curing room at 25 °C until the test age was reached. The four groups of specimens were cured for 3, 7, 14, and 28 days; they were removed from water upon reaching the curing age. The compressive and flexural strengths of the mortar specimens were tested, and pore structure analysis and SEM observation were performed.
Mortar Strength Test
Mortar strength testing includes checking the compressive and flexural strengths. First, all the 40 × 40 × 160 mm³ specimens were placed on the flexural loading frame. Three-point flexural loading was implemented at a loading rate of 0.06 mm/min along a span of 120 mm. After the mortar specimens failed by flexural loading, the damaged specimens were placed on a compressive loading fixture. A 40 × 40 mm² area in the middle of the damaged specimen was obtained for a compressive loading test with a compression loading rate of 0.12 mm/min. Twelve strength tests were conducted for the four experimental groups: three for each age group.
Pore Structure Measurement
The pore structure of mortar cured for 28 days was measured using the linear traverse method (LTM). After completing the flexural strength test, a 20 × 30 × 30 mm 3 slice was selected. The LTM was implemented according to the standard "Test code for hydraulic concrete" (SL352-2006). First, the slices were ground and buffed using a burnisher with a rotary speed of 50 rpm for 30 min. Then, the ground and polished surfaces were dried at 50 °C for 3 h. White barium sulfate powder was appropriately dusted on the measured surface of the slice, and the excess powder was removed after pressing. Lastly, the processed slice was placed in an automatic pore structure analyzer to determine its pore distribution and porosity.
SEM Test
The microstructure of mortar samples with different CNT contents was observed by SEM. After completing the flexural strength test, cubic particles, 1 cm on each side, were obtained from damaged specimens and immersed in alcohol to prevent hydration. Then, the samples were removed from ethanol and vacuum-dried at 50 °C. The distribution of CNTs in the mortar and the effect of CNTs on mortar porosity and hydration products were observed by SEM.
The experimental program, test method, and parameters set in the test of CNT mortar specimens are shown in Figure 2. The results of mortar fluidity, specimen weight, strength, and porosity fraction are provided in Table 3.
Methodology
Fractal theory has been introduced to describe the pore structure of mortar. On the basis of the fractal dimension of the pore structure, the correlation between the mortar strength and pore structure can be established. The Menger sponge model and box-counting method are the main models used to calculate the fractal dimensions of the pore structure. In this study, using the slices as samples, the fractal dimensions were calculated by employing the Menger sponge model [35,36] and the box-counting method [37][38][39][40].
Menger Sponge Model
The fractal dimensions of pore volume, D_v, and pore surface, D_s, of the mortar are calculated using the Menger sponge model. The pore structure test classifies bubbles according to their chord lengths; hence, the number of bubbles and the porosity can be determined. The fractal dimension can be calculated by combining the experimental data with the Menger sponge model. The construction process is shown in Figure 3, and the specific construction method is described below.

A cube, with a size denoted as R, is defined as a primitive component. Then, it is equally divided into m^3 cubes; the size of each cube is R/m. Moreover, n cubes are deleted according to a selected rule, which is shown in Figures 3 and 4; the number of leftover cubes is then m^3 − n. The remaining small cubes are recurrently iterated in accordance with the above measure. The small cubes of various sizes removed with each iteration can be regarded as pores or microcracks with different sizes. After k iterations, the state of the cube is the same as the pore state of the mortar. The size r_k and number N_k of the remaining cubes are determined as follows:

r_k = R/m^k,  N_k = (m^3 − n)^k.

In the above equations, N_k and k can be expressed as

N_k = (R/r_k)^D,  k = lg(R/r_k)/lg m,

where D = lg(m^3 − n)/lg m is the fractal dimension. The volume, V_k, of the leftover cubes can be expressed as

V_k = N_k · r_k^3.

The pore volume can be expressed as

V_p = R^3 − V_k.

Combining the Menger sponge model with the pore structure test, the necessary parameters to calculate the fractal dimension are provided by the pore structure test data. The pore volume, V_ϕ, can be calculated on the basis of the volume and porosity of the sample. The pore diameter is r_k, and R is the maximum pore diameter measured in the sample. Various parameters and methods of derivation are selected depending on the different interpretations of the Menger sponge model. Accordingly, D_v, D_s, and D_p (the fractal dimension of porosity) are derived. In the process of calculating the fractal dimension, the multifractal phenomenon may occur because of the complex pore structure. Multifractals are defined as follows: let R^d be a d-dimensional space. F is a d-dimensional subset of R^d and a support of measure µ. If, under a certain partition, the fractal set produced by (F, µ) is the union of several fractal subsets, and each fractal subset has a different fractal dimension, then (F, µ) is called multifractal.
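To make the construction concrete, the short Python sketch below is an illustration added here (not code from the paper): it iterates the construction for a chosen m and n and recovers the dimension D = lg(m^3 − n)/lg m from the slope of lg N_k versus lg(R/r_k). The classical Menger sponge corresponds to m = 3 and n = 7.

```python
import numpy as np

def menger_dimension(m=3, n=7, R=1.0, iterations=6):
    """Estimate the fractal dimension of a Menger-sponge-type construction.

    At each step every remaining cube is split into m**3 sub-cubes and n of
    them are removed, so after k steps the remaining cubes have size
    r_k = R / m**k and number N_k = (m**3 - n)**k.  The dimension is the
    slope of lg(N_k) versus lg(R / r_k).
    """
    k = np.arange(1, iterations + 1)
    r_k = R / m**k                      # size of the remaining cubes
    N_k = float(m**3 - n) ** k          # number of remaining cubes
    slope, _ = np.polyfit(np.log10(R / r_k), np.log10(N_k), 1)
    return slope

# The classical Menger sponge (m = 3, n = 7) has D = lg(20)/lg(3) ~ 2.727.
print(menger_dimension(3, 7))           # fitted slope, ~2.7268
print(np.log10(20) / np.log10(3))       # analytical value for comparison
```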
Box-Counting Method
The basic mathematical expression of the box-counting method is as follows: let F be any nonempty bounded subset of R^n; δ is the size of the box, and N(δ) is the minimum number of boxes required to cover F. If the limit D exists when δ approaches 0, the following formula applies:

D = lim (δ→0) lg N(δ)/lg(1/δ),

where D is called the box-counting dimension of F. A positive number, K, is given by Equation (8). The box-counting dimension is expressed by Equation (9).
The pore structure parameters measured by the pore structure analyzer include the number of bubbles in a certain range of the chord length. The bubbles in the mortar are assumed to be regular spheres. Combined with the definition of the box-counting dimension, n round boxes are used to cover the bubbles in the mortar. The size, δ, of each box corresponds to the diameter, d_i (i = 1, 2, . . . , n), of the bubbles. These boxes are used to cover bubbles with diameters ≥ d_i. The bubble diameters ≥ d_i are converted into the diameter d_i using the principle of equal area. The number of converted bubbles with diameter d_i is obtained. The sum of the numbers of converted bubbles and bubbles whose original diameter is d_i is recorded as N_di. From the foregoing, a group of data ((d_1, N_d1), (d_2, N_d2), (d_3, N_d3), . . . , (d_n, N_dn)) composed of the diameters and numbers of bubbles can be obtained.
When the group of data are linearly regressed in double logarithmic coordinates, the slope of the regression line is the box-counting dimension (D d ). It can be simplified as shown in Equation (10).
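As a minimal illustration of this procedure (not the paper's code), the sketch below regresses lg N_d against lg(1/d) for a set of hypothetical (d_i, N_di) pairs, so that the reported slope is positive and plays the role of the box-counting dimension D_d; the data values are placeholders.

```python
import numpy as np

def box_counting_dimension(diameters_um, counts):
    """Box-counting dimension D_d from (d_i, N_di) pairs.

    counts[i] is the number of (area-equivalent) bubbles covered by round
    boxes of diameter diameters_um[i]; regressing lg N against lg(1/d) in
    double-logarithmic coordinates gives D_d as the slope of the fit.
    """
    x = np.log10(1.0 / np.asarray(diameters_um, dtype=float))
    y = np.log10(np.asarray(counts, dtype=float))
    slope, intercept = np.polyfit(x, y, 1)
    y_fit = slope * x + intercept
    r2 = 1.0 - np.sum((y - y_fit) ** 2) / np.sum((y - np.mean(y)) ** 2)
    return slope, r2

# Hypothetical chord-length classes (um) and converted bubble counts:
d = [10, 20, 50, 100, 200, 500, 1000]
N = [5200, 1100, 260, 48, 11, 3, 1]
D_d, r2 = box_counting_dimension(d, N)
print(f"D_d = {D_d:.3f}, R^2 = {r2:.3f}")
```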
Mortar Strength
The test results of the compressive and flexural strengths of three groups of mortar specimens, C0, C1, and C2, at four ages (3, 7, 14, and 28 days) are shown in Figure 5. Figure 5a shows the increase curves of the compressive strengths of C0, C1, and C2 obtained by linear regression analysis. The relationship between the compressive strengths of C0, C1, and C2 and age was logarithmic; the R² value of the linear regression exceeded 0.98. The slope can represent the increasing rate of compressive strength with age. The slopes of C0, C1, and C2 were 16.753, 13.822, and 14.288, respectively, indicating that the increasing rate of the compressive strength of the mortar without CNTs exceeded that of the mortars with 0.05% and 0.5% CNT dispersions. However, the test results indicated that the compressive strengths of the mortars with 0.05% and 0.5% CNT dispersions increased and decreased, respectively.

Figure 5b shows the increase curves of the flexural strengths of C0, C1, and C2 obtained by linear regression analysis. The relationship between the flexural strengths of C0, C1, and C2 and age was logarithmic. The R² value of the linear regression exceeded 0.97. The slopes of C0, C1, and C2 were 2.840, 2.241, and 3.842, respectively, indicating that the increase rate of the flexural strength of the mortar with 0.5% CNT dispersion was higher than those of the mortars with 0.05% CNT dispersion and without CNTs. However, the test results indicated that the flexural strengths of the mortars with 0.05% and 0.5% CNT dispersions increased and decreased, respectively.
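The logarithmic strength-age fits described above can be reproduced with a few lines of code. The sketch below assumes the form strength = a·ln(age) + b and uses invented strength values purely for illustration; the slope a and R² are reported in the same spirit as the regression lines in Figure 5.

```python
import numpy as np

def log_fit(ages_days, strengths_mpa):
    """Fit strength = a*ln(age) + b and return (a, b, R^2)."""
    x = np.log(np.asarray(ages_days, dtype=float))
    y = np.asarray(strengths_mpa, dtype=float)
    a, b = np.polyfit(x, y, 1)
    y_fit = a * x + b
    r2 = 1.0 - np.sum((y - y_fit) ** 2) / np.sum((y - np.mean(y)) ** 2)
    return a, b, r2

ages = [3, 7, 14, 28]                    # curing ages used in the tests
strength_demo = [28.0, 41.5, 52.0, 64.5] # hypothetical compressive strengths (MPa)
a, b, r2 = log_fit(ages, strength_demo)
print(f"slope a = {a:.2f}, intercept b = {b:.2f}, R^2 = {r2:.3f}")
```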
Pore Structure
Figure 6a shows binary images of the pore distributions of samples C0, C1, and C2 (black for the mortar matrix and white for the pores). It can be intuitively seen that the porosity of C1 and C2 was significantly higher than that of C0 and that the porosity increased as the amount of CNTs increased. The porosity of hardened mortar varied with the pore radius, as shown in Figure 6b. The results show that the change trends of the porosity of the three groups of mortar specimens were fundamentally the same. Overall, the order of the porosity of the mortar specimens was C2 > C1 > C0. In the pore radius range 1-30 µm, porosity increased with the pore size. The porosity fluctuated over a small range when the pore radius range was 30-350 µm. Lastly, the porosity fluctuated greatly and reached the peak when the pore radius range was 350-900 µm. The peak porosity values of the three groups of mortar specimens (C0, C1, and C2) were 2.1%, 4.4%, and 12.5%, respectively, and the overall porosity values were 6.22%, 15.50%, and 43.26%, respectively.

Apparently, CNT dispersion increased both the pore size and the porosity of the mortars to different degrees. This was the reason for the reduction in the strength of the mortar with 0.5% CNT dispersion. This observation is consistent with the results of other studies: surfactants and dispersants negatively affect the microstructure and mechanical properties of mortar [16,17]. In the experiment, the porosity of the CNT-modified mortar increased significantly; the measured porosity of the mortar was 43.8%, and the compressive strength test showed that the strength of the mortar was 23.2 MPa. A previous mortar model composed of pores and mortar matrix was used to quantitatively analyze the effect of CNT dispersion on mortar. The relationship between the increase in porosity and the enhancement effect of CNTs on the matrix was explored, and the enhancement range of the mortar matrix by CNTs was calculated. The results show that the matrix strength of the CNT-modified mortar was significantly improved; when the CNT content was 0.5%, the relative matrix strength increased by 71.18% [23].
SEM Results
Representative sample fragments for SEM are shown in Figure 7; it can be seen that the colors of C0, C1, and C2 deepened with the increase in CNT content. The microstructure of blank group C0 showed the presence of C-S-H in amorphous form, and tiny pores are distributed in the matrix of the mortar. In the microstructures of C1 with 0.05% CNT dispersion and C2 with 0.5% CNT dispersion, the CNTs in the cement hydration product were well dispersed. The CNT content of C2 was observed to significantly exceed that of C1. The structures of C1 and C2 were obviously looser than that of C0, with a large number of pores. The CNTs and the hydration products (C-S-H) of cement formed a meshwork microstructure. The meshwork microstructure composed of CNTs and hydration products of cement can be used as a strengthening structure to improve the mortar strength and toughness [19,41,42]. In this study, the compressive and flexural strengths of C1 were found to increase. However, the SEM images show that the pore size and number of C1 and C2 distinctly exceeded those of C0. The increase in porosity is disadvantageous to strength improvement; this is also the main reason for the reduction in the strength of C2. The main cause of this phenomenon is the strong air-entraining property of CNT dispersion. In this study, the influence of the change in pore structure on strength caused by CNT dispersion was analyzed according to fractal theory.
Fractal Dimension of Pore Structure
The pore structure test results indicate that the porosity values of slices C0, C1, and C2 were 6.69%, 15.5%, and 43.26%, respectively. The porosity of C1 was twice as high as that of ordinary mortar, and the porosity of the mortar with 0.5% CNT dispersion was six times greater than that of ordinary mortar. However, the change in the porosity value does not accurately and quantitatively represent the intricacy of the pore structure. To examine the change in the pore structure of mortar, the fractal dimension was applied to characterize the intricacy of the pore structure. The calculation results are presented below.
Fractal Dimension of Pore Volume
Equation (11) can be derived from Menger sponge model: The calculation formula for the fractal dimension of pore volume, D v , can be derived by taking the logarithm of Equation (11): where V k is the solid volume, and r k is the pore diameter. The solid volume and corresponding pore diameter are calculated according to the measurement data of the pore structure. Then, the logarithm is obtained to draw the curve; D v is given by the curve gradient. Figure 8 shows the logarithmic curves of the solid volume and pore diameters of C0, C1, and C2. The diagram shows that the solid volume increased with the pore size; however, the growth trends of the three curves differed. The increase in C0 was the smallest, and that in C2 was the largest, indicating that the CNT content affected the pore volume of mortar. As the CNT content increased, the pore volume also increased. Moreover, the change trend was more significant.
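The numbered expressions for D_v are not reproduced above; the following snippet records the pore-volume scaling commonly associated with the Menger sponge model, stated here as an assumption rather than as the paper's exact Equations (11) and (12).

```latex
% Commonly used Menger-sponge pore-volume scaling (assumed form):
\[
  V_k = N_k\, r_k^{3} = R^{D_v}\, r_k^{\,3 - D_v}
  \quad\Longrightarrow\quad
  \lg V_k = (3 - D_v)\,\lg r_k + D_v \lg R ,
\]
% so that D_v follows from the gradient of the fitted lg V_k -- lg r_k line.
```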
The overall changes in the three logarithmic curves were similar. Figure 8 indicates that the growth rate of the solid volume in the early stage of the logarithmic curve was relatively low. Then, it gradually increased with the pore diameter. Evidently, the growth rate of solid the volume varied. Hence, one linear regression fitting was insufficient to describe the overall trend in the change of the logarithmic curve and the complex change process of the pore structure; multistage linear fitting was required. The logarithmic curve can be divided into two parts according to the change in the slope of the curve. This reflects the multifractal characteristics of the mortar's pore structure that were mainly due to the irregularity of the pore distribution and variety of pore shapes.
To describe these characteristics clearly, the three logarithmic curves were divided into two parts: region I (bubble chord length < 100 µm) and region II (bubble chord length > 100 µm). Then, the fractal dimensions of these two regions were calculated. The calculation results show that the fractal dimensions of the two regions differed. Although the fractal dimension of region I was between 2 and 3, the R² value of the linear regression was less than 0.8. This indicates that, although the pore structure with a bubble chord length of <100 µm had fractal characteristics, the complexity of the structure could not be accurately reflected. The fractal dimension of region II was between 2 and 3, and R² was greater than 0.98. This indicates that the fractal characteristics of the mortar's pore structure with a bubble chord length of >100 µm were significant. The pore distribution and intricacy of the pore structure were quantitatively and accurately reflected by the fractal dimension.
Therefore, the fractal dimension of the pore volume of region II could be applied as a parameter to characterize the complex process of the variation in the mortar pore structure with the CNT content quantitatively.
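A region-wise fit of the kind used above can be scripted as follows. The breakpoint at a 100 µm chord length follows the region I/II split described in the text, while the (pore diameter, solid volume) pairs are placeholders; each region's gradient and R² are reported, mirroring the multistage linear fitting of the logarithmic curves in Figure 8.

```python
import numpy as np

def regionwise_gradients(r_um, v, breakpoint_um=100.0):
    """Fit lg(v) against lg(r) separately below/above a chord-length breakpoint.

    Returns a list of (gradient, R^2) tuples, one per region, in the spirit of
    the multistage linear fitting described for the multifractal pore volume.
    """
    r = np.asarray(r_um, dtype=float)
    v = np.asarray(v, dtype=float)
    results = []
    for mask in (r < breakpoint_um, r >= breakpoint_um):
        x, y = np.log10(r[mask]), np.log10(v[mask])
        slope, intercept = np.polyfit(x, y, 1)
        y_fit = slope * x + intercept
        r2 = 1.0 - np.sum((y - y_fit) ** 2) / np.sum((y - np.mean(y)) ** 2)
        results.append((slope, r2))
    return results

# Placeholder (pore diameter in um, solid volume) pairs, illustration only:
r = [10, 20, 50, 80, 150, 300, 500, 900]
V = [0.5, 2.8, 28.0, 90.0, 300.0, 1700.0, 6100.0, 26000.0]
for i, (grad, r2) in enumerate(regionwise_gradients(r, V), start=1):
    print(f"region {i}: gradient = {grad:.3f}, R^2 = {r2:.3f}")
```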
Fractal Dimension of Pore Surface
Equations (13) and (14) can be derived from Menger sponge model: The fractal dimension of the pore surface, D s , can be obtained by taking the logarithm of Equation (13): where r k is the pore diameter, and V ϕ is the accumulative pore volume (diameter ≥ r k ). The parameters required for the calculations can be obtained by testing the pore structure and then obtaining the logarithm to draw the curve. The gradient of the curve yields D s ; D s = 2 denotes that the pore structure has a completely smooth plane. When D s approaches 3, the pore structure becomes coarser and more intricate; hence, D s must satisfy 2 < D s < 3. Figure 9 shows the logarithmic curve of −dV ϕ /dr k and the pore diameters of C0, C1, and C2. Note that −dV ϕ /dr k decreased as the pore diameter increased. The decreasing trends of the three curves varied; this was opposite to the change process of the logarithmic curve in the volume fractal dimension. Similarly, the decrement in −dV ϕ /dr k of C0 was the smallest, and that in C2 was the largest. This indicates that the CNT content affected the mortar's pore surface. The range of change in the pore surface increased with the amount of CNTs; moreover, the change trend was more significant.
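The numbered expressions for D_s are likewise not reproduced above; as a hedged reference, the snippet below records a commonly used surface-fractal form (an assumption, not necessarily the paper's exact Equation (13)) that is consistent with plotting lg(−dV_ϕ/dr_k) against lg r_k and reading D_s off the fitted gradient.

```latex
% Commonly used surface-fractal scaling (assumed form):
\[
  -\frac{\mathrm{d}V_{\phi}}{\mathrm{d}r_k} \propto r_k^{\,2 - D_s}
  \quad\Longrightarrow\quad
  \lg\!\Bigl(-\frac{\mathrm{d}V_{\phi}}{\mathrm{d}r_k}\Bigr)
  = (2 - D_s)\,\lg r_k + \mathrm{const},
\]
% so that D_s follows from the gradient of the fitted line, with
% 2 < D_s < 3 for a rough, space-filling pore surface.
```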
The change processes of the logarithmic curves of C0, C1, and C2 were similar. Figure 9 indicates that the decrease rate of −dV ϕ /dr k in the early stage of the logarithmic curve was relatively low and gradually rose with the pore diameter. The slope of the curve evidently varied. Thus, one linear regression fitting was insufficient to describe the overall change trend of the logarithmic curve and the complex change process of the pore structure; multistage linear fitting was necessary. This also reflects the multifractal features of the mortar's pore structure.
The curve could be divided into three parts according to the change in the slope of the logarithmic curve: region I (bubble chord length: <100 µm), region II (100 µm < bubble chord length < 500 µm), and region III (bubble chord length: >500 µm). The pore surface fractal dimensions of regions I, II, and III were calculated as described above; the results are shown in Figure 9. The calculation results indicate that the fractal dimensions of the three regions considerably varied, indicating that the roughness and irregularity of the inner surface of pores with different chord lengths considerably differed. The fractal dimensions of region I (2.0196, 2.039, and 2.0168) were between 2 and 3, and the R 2 value of the linear regression was less than 0.9. This shows that, although the pore structure of the mortar with a chord length of <100 µm had fractal characteristics, these could not exactly represent the complexity of the structure. The fractal dimensions of region III (4.291, 3.3935, and 3.8424) exceeded 3, which is nonphysical from the point of view of surface geometry [35,43]. The fractal dimensions of region II were 2.1773, 2.3521, and 2.1412, and R 2 exceeded 0.94. This indicates that the fractal characteristics of the pore structure of the mortar with a bubble chord length between 100 and 500 µm were remarkable. The roughness and irregularity of the pore internal surface were quantitatively and accurately reflected by fractal dimensions.
Therefore, D s of region II could be applied as a parameter to characterize the complex process of the variation in the mortar's pore structure with the CNT content quantitatively.
Box-Counting Dimension
According to the box-counting method, the bubbles with a diameter > di were converted into those with a diameter of di using the principle of equal area. The distribution of the total number of bubbles is shown in Figure 9a. The figure shows that the difference in the number of bubbles of C0, C1, and C2 decreased with the increase in pore size. When the bubble chord length was 10-100 μm, the number of bubbles rapidly changed.
The logarithmic curves of the pore diameter and bubble number are shown in Figure 10b. Figure 10 indicates that the change processes of the logarithmic curves of C0, C1, and C2 were similar, and the total number of bubbles decreased with the increase in pore diameter. Moreover, the logarithmic curve of the bubbles was virtually linear. The pore structure had no multifractal characteristics; hence, it could be fitted by linear regression. The R 2 value of the linear regression exceeded 0.95, indicating that the bubble distribution of the mortar had significant fractal characteristics. The box-counting dimension, Dd, can be obtained using Equation (10). The fractal dimensions of C0, C1, and C2 were 2.2479, 2.3572, and 2.3237, respectively. Therefore, the complexity of the pore structure was quantitatively and accurately reflected by the box-counting dimension. The box-counting dimension could be applied as a parameter to characterize the complex process of the variation in the pore structure of the mortar with the CNT content quantitatively.
Gray Relational Analysis
The mechanical properties of mortar are determined according to its internal microstructure, and the pore structure is a significant part of the microstructure. Many parameters, which reflect the internal defects of mortar from different aspects and affect the mortar strength to a certain extent, can characterize the pore structure. However, each parameter can only reflect the change in the pore structure of mortar in a particular aspect; it is not the only factor influencing the mortar's mechanical properties. Accordingly, to analyze the influence of various parameters on the mechanical properties of mortar, the introduction of gray relational analysis (GRA) is necessary [31,[44][45][46].
To study the superficial and deep-seated relationship among the various factors in the system, the GRA uses the indeterminate system with small samples and inferior data as the study object. The main factors among the influencing factors are identified to comprehend the main characteristics of the system. In this study, the main parameters characterizing the pore structure of mortar include Dv, Ds, Dd, and porosity; however, the amount of test data for each parameter is low. Therefore, GRA was used to study the effect of various parameters on mortar strength. The specific calculation method of the GRA is described below.
The data series of Dv, Ds, Dd, and porosity of mortar are defined as a comparison series expressed in Xi(k). The data series of mortar strength with different CNT contents are defined as a reference series expressed in Y(k). Considering the variation in the size and dimension of each series, the test data were normalized using Equations (16) and (17).
The absolute value of the difference between the reference series and comparison series is calculated and expressed as ∆i(k) at point k. The gray coefficient is calculated using Equation (19).
where ∆min = min_i min_k ∆i(k), ∆max = max_i max_k ∆i(k), and ξ = 0.5. The gray grade is calculated using Equation (20).
where the gray grade is between 0 and 1, representing the numerical measure of the correlation between the reference series and comparison series. The gray grade approaches 1 if the degree of coincidence of the two sequences is high.
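The gray relational calculation outlined by Equations (16)-(20) can be sketched as follows. Min-max normalization is assumed for Equations (16) and (17), and the relational coefficient uses the standard Deng form built from ∆min, ∆max, and ξ = 0.5 as stated above; the numerical series below are placeholders rather than the measured data.

```python
import numpy as np

def gray_relational_grades(reference, comparisons, xi=0.5):
    """Gray relational grades between a reference series and comparison series.

    Min-max normalization is assumed for the normalization step; the relational
    coefficient is the standard Deng form built from Dmin, Dmax, and xi = 0.5.
    """
    def normalize(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min())

    y = normalize(reference)
    xs = np.array([normalize(x) for x in comparisons])
    delta = np.abs(xs - y)                      # Delta_i(k)
    d_min, d_max = delta.min(), delta.max()     # global min / max differences
    coeff = (d_min + xi * d_max) / (delta + xi * d_max)
    return coeff.mean(axis=1)                   # gray grade per comparison series

# Placeholder series: strength (reference) vs. D_v, D_s, D_d, porosity (illustration only).
strength = [48.0, 55.0, 36.0]
features = {
    "D_v": [2.41, 2.55, 2.62],
    "D_s": [2.18, 2.35, 2.14],
    "D_d": [2.25, 2.36, 2.32],
    "P":   [6.2, 15.5, 43.3],
}
grades = gray_relational_grades(strength, list(features.values()))
for name, g in zip(features, grades):
    print(f"gray grade (strength vs {name}) = {g:.3f}")
```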
Strength Correlation Analysis with Multifractal Dimensions
The resulting fractal dimensions show that the pore volume and pore surface had multifractal dimension characteristics, indicating that pore structures of different size grades had different self-similarity in volume and surface characteristics. With the increase in pore diameter, D v and D s increased, and the corresponding pore structure became more complex. In this study, the gray grades between the double fractal dimensions of pore volume and strength, and those between the triple fractal dimensions of pore surface and strength are calculated.
The correlation analysis results between the multifractal dimensions of pore volume and strength are shown in Figure 11. For the compressive strength, the gray grade of regions I and II was approximately 0.74. For the flexural strength, the gray grades of regions I and II were 0.64 and 0.63, respectively. The correlation results were fundamentally the same. The outcomes indicate that D v and mortar strength had a close correlation, and the correlation between D v and compressive strength was higher than that between D v and flexural strength.
The GRA result between the multifractal dimensions of pore surface and strength is shown in Figure 12. Distinct variations can be observed. For the compressive strength, the gray correlation coefficients of the three regions (0.75, 0.71, and 0.63) decreased with the increase in pore size. For the flexural strength, the gray correlation coefficients of the three regions were 0.64, 0.75, and 0.58, respectively. The range of the chord length in regions I and II was 1-500 µm, indicating that the correlation between D s and strength was high when the pore size range was small.
The change in pore size resulted in pore volume and pore surface with multifractal dimensions, showing that the pore structure had different complexities under different pore size grades. In this study, the fractal characteristics of harmful pores [47,48] (pore size: 1-1000 µm) were mainly characterized. For the CNT-modified mortar, the strong air-entraining property of the dispersant in the CNT dispersion was the main reason for the change in pore structure. The correlation analysis of the multifractal characteristics of the pore structure of mortar revealed the correlations between Dv and strength and between Ds and strength under different pore size grades. The connections between the Dv values of the two regions and strength were basically the same. The connections between Ds of region I and compressive strength and between Ds of region II and flexural strength were the largest. According to the analysis results of the multifractal dimensions and other characteristic parameters (such as porosity), the gray correlation between strength and multiple parameters was calculated.
GRA of Strength with Pore Structural Features
The pore structure complexity can be accurately reflected by Dv, Ds, and Dd. They can be applied as parameters to characterize the complex process of the variation in the pore structure with the CNT content quantitatively. However, some fractal dimensions only represent certain sides of the pore structure. Mortar has different pore structures under different mix ratios, curing conditions, and working environments; hence, selecting a reasonable fractal dimension is critical to show the change in pore structure. Many studies [49][50][51][52][53] have demonstrated that porosity, which is a traditional parameter, is also a critical element influencing the mechanical properties of mortar. Therefore, mortar porosity was also included in the analysis.
The correlations between mortar strength and Dv, between mortar strength and Ds, between mortar strength and Dd, and between mortar strength and porosity were calculated and evaluated using GRA. The GRA calculation results of the compressive strength and parameters indicate that the correlation between compressive strength and P (porosity) was 0.676, but the correlations between compressive strength and Dv, between The change in pore size resulted in pore volume and pore surface with multifractal dimensions, showing that the pore structure had different complexities under different pore size grades. In this study, the fractal characteristics of harmful pores [47,48] (pore size: 1-1000 µm) were mainly characterized. For the CNT-modified mortar, the strong air-entraining property of the dispersant in CNT dispersion was the main reason for the change in pore structure. The correlation analysis of the multifractal characteristics of the pore structure of mortar revealed the correlations between D v and strength and between D s and strength under different pore size grades. The connections between the D v values of the two regions and strength were basically the same. The connections between D s of region I and compressive strength and between D s of region II and flexural strength were the largest. According to the analysis results of the multifractal dimensions and other characteristic parameters (such as porosity), the gray correlation between strength and multiple parameters was calculated.
GRA of Strength with Pore Structural Features
The pore structure complexity can be accurately reflected by D v , D s , and D d . They can be applied as parameters to characterize the complex process of the variation in the pore structure with the CNT content quantitatively. However, some fractal dimensions only represent certain sides of the pore structure. Mortar has different pore structures under different mix ratios, curing conditions, and working environments; hence, selecting a reasonable fractal dimension is critical to show the change in pore structure. Many studies [49][50][51][52][53] have demonstrated that porosity, which is a traditional parameter, is also a critical element influencing the mechanical properties of mortar. Therefore, mortar porosity was also included in the analysis.
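As a rough, hypothetical illustration of how a box-counting dimension of the kind referred to here can be estimated from a binarized pore image, the following sketch may help; the synthetic image, threshold, and box sizes are assumptions for illustration only and are not the procedure or data of this study.

```python
import numpy as np

def box_counting_dimension(pore_mask: np.ndarray, box_sizes=(2, 4, 8, 16, 32, 64)) -> float:
    """Estimate a box-counting (fractal) dimension from a 2-D binary pore image.

    pore_mask: boolean array, True where pores are present (hypothetical input).
    Returns the slope of log(box count) versus log(1 / box size).
    """
    counts = []
    for s in box_sizes:
        # Trim the image so it tiles exactly into s x s boxes.
        h, w = (pore_mask.shape[0] // s) * s, (pore_mask.shape[1] // s) * s
        trimmed = pore_mask[:h, :w]
        # Count boxes that contain at least one pore pixel.
        boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    log_inv_size = np.log(1.0 / np.asarray(box_sizes, dtype=float))
    log_counts = np.log(np.asarray(counts, dtype=float))
    slope, _intercept = np.polyfit(log_inv_size, log_counts, 1)
    return slope

# Example with a synthetic random pore mask (illustration only).
rng = np.random.default_rng(0)
mask = rng.random((512, 512)) < 0.2
print(round(box_counting_dimension(mask), 3))
```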
The correlations between mortar strength and D v , between mortar strength and D s , between mortar strength and D d , and between mortar strength and porosity were calculated and evaluated using GRA. The GRA calculation results of the compressive strength and parameters indicate that the correlation between compressive strength and P (porosity) was 0.676, but the correlations between compressive strength and D v , between compressive strength and D s , and between compressive strength and D d were 0.955, 0.953, and 0.952, respectively, which are higher than that between compressive strength and P. The order was D v > D s > D d > P. The results indicate that porosity was not the central factor influencing the compressive strength of mortar. Moreover, the fractal dimension was the main parameter of the pore structure affecting the change in mortar strength, in which the relevance between D v and compressive strength was the strongest.
The GRA calculation results of flexural strength and parameters also indicate the high correlations between flexural strength and D v , between flexural strength and D s , and between flexural strength and D d ; the gray correlation degrees were 0.962, 0.973, and 0.964, respectively. The order was D s > D d > D v > P (0.678). The results indicate that porosity was not the main factor influencing the flexural strength of mortar. Furthermore, the fractal dimension was the main parameter of pore structure affecting the change in mortar strength, in which the relevance between D s and flexural strength was the strongest.
The above calculation and analysis show that the strongest correlations were between D v and compressive strength and between D s and flexural strength. Therefore, the pore volume distribution was the main factor influencing the compressive strength of mortar, and the roughness and irregularity of the pore internal surface were the main factors influencing the flexural strength of mortar.
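The gray relational grades quoted above follow the standard GRA procedure: normalize each series, compute point-wise gray relational coefficients with a distinguishing coefficient (commonly 0.5), and average them. The sketch below is an illustrative implementation under those assumptions; the strength and Dv values are invented and are not the study's data.

```python
import numpy as np

def gray_relational_grade(reference, comparison, rho=0.5):
    """Gray relational grade between a reference series and one comparison series.

    Both series are min-max normalized to [0, 1], then the gray relational
    coefficient is computed point-wise and averaged. rho is the distinguishing
    coefficient, commonly taken as 0.5.
    """
    ref = np.asarray(reference, dtype=float)
    cmp_ = np.asarray(comparison, dtype=float)

    def minmax(x):
        return (x - x.min()) / (x.max() - x.min())

    delta = np.abs(minmax(ref) - minmax(cmp_))
    d_min, d_max = delta.min(), delta.max()
    coeff = (d_min + rho * d_max) / (delta + rho * d_max)
    return coeff.mean()

# Hypothetical data: compressive strength and Dv for five mortar mixes.
strength = [48.2, 51.6, 50.1, 45.9, 44.3]
dv = [2.61, 2.68, 2.65, 2.55, 2.52]
print(round(gray_relational_grade(strength, dv), 3))
```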
Conclusions
The effect of CNT content on the macroscopic properties and microstructure of mortar was studied in terms of strength and pore structure using SEM. The compressive and flexural strength test results showed that a 0.05% CNT content could improve the mortar strength, whereas a 0.5% CNT content had an adverse effect. To explore the primary cause of the change in strength, a pore structure test was implemented on the mortar, and fractal theory was introduced to analyze the quantitative relationship between the pore structure and mortar strength. The conclusions of the study are as follows.
(1) The experimental results show that the strength of mortar was improved by adding 0.05% CNT, while a negative impact occurred when the CNT content was increased to 0.5%. The total porosity of mortar containing 0.05-0.5% CNTs was increased by 15-43% compared to that of the reference normal mortar.
(2) The fractal dimensions of pore volume and pore surface, as well as the box-counting dimensions of mortar, were calculated using fractal theory. The pore volume and pore surface were found to have multifractal dimensions. The addition of CNTs changed the pore morphology characteristics of mortar and increased the pore volume and pore surface. The complexity of the pore structure distribution varied according to the pore size. The fractal dimension could accurately reflect the complexity of the pore structure and be used as a parameter to characterize the complex process of the variation in the pore structure with mortar CNT content quantitatively.
(3) The gray correlation coefficient between the fractal dimensions of the pore structure and mortar strength exceeded 0.95. The strongest correlations were between the fractal dimensions of the pore volume and compressive strength and between the fractal dimensions of the pore surface and flexural strength. The fractal dimensions revealed the complexity, roughness, and irregularity of the pore structure. Compared with porosity, the fractal dimension was more suitable for establishing the relationship between mortar strength and pore structure.
(4) There is no doubt that both the strength of the cement matrix and the porosity of the mortar increase with the addition of CNTs. However, the mortar strength is irregular under the combined effect of the microfiber reinforcement and mortar compactness. The strength decrease of the mortar with 0.5% CNTs was mainly due to the sharp increase in porosity, which may have been caused by the use of dispersant. Therefore, it is necessary to study the application method of CNTs to take advantage of the excellent improvement capability of CNTs toward cement matrix strength.

Data Availability Statement: Not applicable.
Conflicts of Interest:
The authors declare no conflict of interest.
|
v3-fos-license
|
2017-04-02T00:36:00.906Z
|
2010-09-01T00:00:00.000
|
864378
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://www.genetics.org/content/186/1/59.full.pdf",
"pdf_hash": "17ff61cf56a813fa37f69506395ebabf3aeca7fb",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44424",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Biology",
"Environmental Science"
],
"sha1": "dbbecf7e17c1ada28c40f554165dfcdbace31e45",
"year": 2010
}
|
pes2o/s2orc
|
Miniature Inverted-Repeat Transposable Elements of Stowaway Are Active in Potato
Miniature inverted-repeat transposable elements (MITEs) are dispersed in large numbers within the genomes of eukaryotes although almost all are thought to be inactive. Plants have two major groups of such MITEs: Tourist and Stowaway. Mobile MITEs have been reported previously in rice but no active MITEs have been found in dicotyledons. Here, we provide evidence that Stowaway MITEs can be mobilized in the potato and that one of them causes a change of tuber skin color as an obvious phenotypic variation. In an original red-skinned potato clone, the gene encoding for a flavonoid 3′,5′-hydroxylase, which is involved in purple anthocyanin synthesis, has been inactivated by the insertion of a Stowaway MITE named dTstu1 within the first exon. However, dTstu1 is absent from this gene in a purple somaclonal variant that was obtained as a regenerated plant from a protoplast culture of the red-skinned potato. The color change was attributed to reversion of flavonoid 3′,5′-hydroxylase function by removal of dTstu1 from the gene. In this purple variant another specific transposition event has occurred involving a MITE closely related to dTstu1. Instead of being fossil elements, Stowaway MITEs, therefore, still have the ability to become active under particular conditions as represented by tissue culturing.
Color mutation or variegation of grain, flower petals, or fruit skin represents a suitable visual marker for the identification of genes for both pigment production and transposable elements (Clegg and Durbin 2000; Winkel-Shirley 2001; Kobayashi et al. 2004). Recent large-scale genome analyses have uncovered numerous transposable elements occupying large portions of eukaryotic genomes. Approximately 45% of the human genome is composed of sequences originating from more than 3 million copies of transposable elements (International Human Genome Sequencing Consortium 2001). Even in rice, a plant with a relatively small genome, 20% of the genomic sequence can be derived from transposable elements (Turcotte et al. 2001; Goff et al. 2002; Yu et al. 2002). Although almost all of these insertions are thought to be inactive, these elements are suggested to have influenced the evolution of genomes and individual genes. They can rearrange a genome through transposition, insertion, excision, chromosome breakage, or ectopic recombination (Bennetzen 2000). Moreover, some can contribute to the emergence of a novel gene by conveying a poly(A) signal, a transcription start site, a TATA box, a splicing site, or an intron (Oki et al. 2008).
Bioinformatic analyses using data of genome projects found a miniature inverted-repeat transposable element (MITE) (Bureau and Wessler 1992, 1994), the copy number of which reaches over thousands in a genome (Feschotte et al. 2002). Characteristically, a MITE is no more than 600 bp, does not contain any coding sequences, and has imperfect terminal inverted repeats (TIRs) at the ends of the element, and its target site is duplicated upon insertion. The majority of MITEs in plants are divided into two groups, Tourist and Stowaway, on the basis of the sequences of TIRs and their target sites, TAA and TA, respectively. Tourist MITEs are found in grasses while Stowaway is present not only in monocotyledonous but also in dicotyledonous plants (Bureau and Wessler 1992, 1994; Feschotte et al. 2002). Although huge numbers of MITEs of each family have been found since their discovery in silico, their dynamic features remain largely unknown. The first mobile MITE, mPing, was identified in rice and belongs to the Tourist family. Its movement was activated during long-term cell culture (Jiang et al. 2003) and by anther culture (Kikuchi et al. 2003). When mPing was inserted into the gene for rice ubiquitin-related modifier-1 (Rurm1), its excision resulted in reversion of the mutable slender glume phenotype to wild type (Nakazaki et al. 2003). The identification of an active element made it possible to discover that the transposable elements Ping and Pong supplied the transposase acting on mPing (Yang et al. 2007). Movement of Stowaway MITEs in rice was also reported recently. These were mobilized in yeast cells by transposases of Mariner-like elements (MLEs) (Yang et al. 2009). Active copies of MITEs have been found only in rice. In dicotyledons the only indication that they can be mobilized has come from insertional polymorphisms between accessions or cultivars (Macas et al. 2005; Menzel et al. 2006). Supporting information is available online at http://www.genetics.org/cgi/content/full/genetics.110.117606/DC1. Sequence data from this article have been deposited with the DDBJ Data Libraries under accession nos. AB496976, AB496977, AB496978, AB496979, and AB496980.
How a transposable element becomes active is an interesting question since it is potentially an endogenous mutagen and could represent a force for evolution through rearrangement of a genome or production of novel genes. Cell culture is known to activate transposable elements. For example, Ac and Spm/En of class II (DNA) elements were mobilized under such conditions (Peschke et al. 1987;Peschke and Phillips 1991) and tissue culturing resulted in a vast increase of copy number of retrotransposons belonging to class I (RNA) elements (Hirochika 1993). The activation of transposable elements by culture can cause genetic and phenotypic variation in clonal plants, which is one of the reasons for somaclonal variation (Lee and Phillips 1988;Kaeppler et al. 2000).
The active Stowaway MITEs reported here induced somaclonal variation and provide a tool to investigate how MITEs have propagated to become a major component of the plant genome and under which conditions they become active.
Pigment analysis: Pigment was extracted from tuber skin with 50 ml of 50% (v/v) acetic acid. After filtration, 200 ml of water was added to the extract and this solution was passed over an ODS resin column (Wakosil 25C18, i.d. 15 × 100 mm; Wako Pure Chemical Industries, Osaka, Japan) equilibrated with aqueous 10% (v/v) acetic acid. The column was washed with 10% acetic acid, and the fraction with anthocyanins was eluted by methanol containing 0.1% hydrochloric acid. The eluate was dried and the residue was separated by mass TLC [TLC Cellulose (10 × 10 cm); Merck KGaA, Darmstadt, Germany] using t-butanol:acetic acid:water (TBA) 3:1:1 as the solvent. The anthocyanins, migrating as a colored band, were cut out and extracted by methanol containing 0.1% hydrochloric acid. After evaporation of the solvents, the anthocyanin was dissolved in 1 ml of 1% hydrochloric acid. An equal volume of concentrated hydrochloric acid was added and the solution was heated at 100° for 20 min to release the anthocyanidins, which were extracted with isoamyl alcohol. Anthocyanidins in the resulting isoamyl alcohol layer were identified by HPLC/MS analysis; the HPLC/MS system (1525 Binary HPLC Pump, 996 Photodiode Array Detector, 2767 Sample Manager, Micromass ZQ; Waters, Milford, MA) was equipped with a Synergi 4-µm Fusion-RP 80-Å column (4.6 × 100 mm; Phenomenex, Torrance, CA) operated at 30°. The mobile phase consisted of 1% aqueous formic acid as solvent A and methanol as solvent B, and the gradient program was 20% B to 70% B (20 min) and 100% B isocratic (10 min) at a flow rate of 1 ml/min. Southern blot analysis: Genomic DNA was isolated from the leaves with a Nucleon Phytopure Genomic DNA extraction kit (GE Healthcare, Uppsala, Sweden). Approximately 10 µg of genomic DNA was digested with EcoRV and then separated by 1% agarose gel electrophoresis. The DNAs were transferred to Hybond-N+ (GE Healthcare) and then hybridized to PCR-amplified cDNA for F3959HTrev as a probe. Probe labeling and signal detection were carried out with AlkPhos Direct (GE Healthcare).
PCR primers and the reaction condition for cDNA and genomic DNA analyses: PCR primers used in this study are listed in supporting information, Table S1 with their approximate positions shown in Figure S1. Most PCR reactions were carried out nested, with two primer sets, to increase specificity and yield. Each PCR consisted of an initial denaturation step at 95°for 3 min, followed by 30 cycles at 95°for 30 sec, 56°for 30 sec, and extension at 72°for 2 or 5 min with a final 3 min extension at 72°. Gel-purified PCR products using MagExtractor (Toyobo, Shiga, Japan) were sequenced directly or after cloning into pCR 4-TOPO using the TOPO TA cloning kit (Invitrogen, Carlsbad, CA) on an ABI PRISM 310 genetic analyzer (Applied Biosystems, Foster City, CA).
Isolation and sequence determination of the cDNAs for the F3959H gene: Total RNA was isolated from approximately 100 mg of tuber skin by using an RNeasy Plant Mini Kit (QIAGEN, Hilden, Germany). To obtain the sequence of the cDNA for the flavonoid 3′,5′-hydroxylase (F3959H) gene of JKP, a 5′-RACE experiment was performed using a GeneRacer kit (Invitrogen) with supplied and gene-specific primers [no. 1 (5′-AACATTTTTGTCAATAAAKCATCAAA-3′) and no. 2 (5′-CCTTGTAAATCCATCCAAGCTA-3′) for the first and the second amplifications, respectively] that anneal to two highly conserved regions among P450 or F3959H genes of S. melongena (GenBank accession no. X70824) (Toguri et al. 1993b) and Petunia hybrida (GenBank accession nos. Z22544, Z22545, and X71130) (Holton et al. 1993; Toguri et al. 1993a). The gene-specific primers for 3′-RACE [no. 3 (5′-CCGAATTCAAGCTTTATATTATATCTTCGATTTT-3′) for the first and no. 4 (5′-GGCATTACGTATTAGTGAGTTG-3′) for the second amplification] were based on the sequence obtained by the 5′-RACE experiment. The outcome of both RACE experiments enabled the design of primers [no. 5 (5′-CCTTCTACTTCATTCTCACTCT-3′) and no. 6 (5′-AGCAAATATGTTGCACTATAAATG-3′) for the first and nos. 3 and 6 for the second amplification] to amplify the full-length cDNAs for the F3959H gene by RT-PCR using first-strand cDNAs prepared from 72218, JKR, and JKP as templates. The extension time for all PCRs was 2 min.
Isolation and sequence determination of the genomic DNA for F3959H genes: Genomic DNA was isolated from approximately 100 mg of leaves as described previously (Walbot and Warren 1988). Genomic DNA of the F3959H gene was amplified (using a 5-min extension time) with primer nos. 5 and 6. The methods for the isolation of the other F3959H pseudogenes, f3959h2 and f3959h3, are described in File S1.
MITE display: Transposon display was carried out using primers designed from the sequence of dTstu1 and dTstu1-2 according to the procedure of Casa et al. (2000). Approximately 250 ng of genomic DNA was digested with MseI and ligated to an adaptor. Aliquots of the reactions were diluted 4-fold with 0.1× TE. Preselective amplification was performed with a primer complementary to the adapter [Mse+0 (5′-GACGATGAGTCCTGAGTAA-3′)] and another primer complementary to an internal dTstu1 and dTstu1-2 sequence [no. 31 (5′-CATTCTTTTTGGGACTGACTA-3′)]. PCR consisted of 25 cycles at 94° for 30 sec, 56° for 30 sec, and extension at 72° for 1 min with a final 5-min extension at 72°. Aliquots of the reactions were diluted 20-fold with 0.1× TE. Selective amplification was carried out with a selective primer [Mse+N (5′-GACGATGAGTCCTGAGTAA+N-3′)] and another primer specific for the TIR and target site duplication (TSD) sequence of dTstu1 and dTstu1-2 [no. 32 (5′-ATAAAWTGGGACRGAGGGAGTA-3′)]. The latter primer was labeled at the 5′ end with 6-FAM. Temperature cycling conditions were 94° for 5 min; 10 touchdown cycles of 94° for 30 sec, 66° for 30 sec (−1° each cycle), and extension at 72° for 1 min; followed by 25 cycles of 94° for 30 sec, 56° for 30 sec, and extension at 72° for 1 min with a final 5-min extension at 72°. The products were analyzed on an ABI PRISM 310 genetic analyzer (Applied Biosystems, Foster City, CA).
RESULTS
Key enzyme of the color variation: JKP is a potato cultivar with purple tubers that was obtained as a somaclonal variant of skin color after selection from plants regenerated from leaf protoplasts of clone 72218 with red tubers (Figure 1A) (Okamura 1991, 1994). Analysis of the anthocyanin aglycones revealed that the crucial difference between these purple and red potatoes was the presence of petunidin in the tuber skin of JKP as one of the major anthocyanidins, whereas in 72218 this was pelargonidin. The difference between petunidin and pelargonidin is the number of hydroxyl and methoxyl groups at the B-ring of these molecules. Addition of two hydroxyl groups to dihydrokaempferol, which is the precursor of pelargonidin, produces dihydromyricetin, a precursor of petunidin. This reaction is catalyzed by flavonoid 3′,5′-hydroxylase (F3959H) (Figure 1B). Therefore, the cause of the color variation from red (72218) to purple (JKP) was attributed to gain of F3959H function in the tuber skin of JKP. Recovery of the F3959H gene itself would most likely explain the restoration of enzyme activity since genetic analysis had revealed that the dominant allele for F3959H in the P locus is solely responsible for determination of the purple color phenotype (Jung et al. 2005).
Analysis of F3959H genes: The possibility that disruption of the F3959H gene of 72218 was involved in the coloration of its tuber skin was assessed by RT-PCR analysis of the F3959H transcript. Sequencing of the obtained cDNA product revealed the presence of a MITE belonging to Stowaway, named dTstu1. This element was absent from the F3959H transcript in JKP, which was analyzed in parallel (Figure 2). In support of this, Southern blot analysis with F3959H cDNA from JKP as a probe demonstrated a reduction in size in JKP of a 5-kb EcoRV fragment present in 72218 and JKR, which is a somaclonal cultivar with red tubers simultaneously obtained from the leaf protoplast culture of 72218 that yielded JKP (Okamura 1991, 1994). Genomic sequence analysis of F3959H genes from 72218 and JKP revealed that the only difference between the full-length genes is the insertion of dTstu1 into the first exon of F3959H in 72218 (designated f3959hTdTstu1, DDBJ accession no. AB496977). This element was not present in F3959H of JKP (named F3959HTrev, DDBJ accession no. AB496976), which explained the size difference observed in Southern blot analysis (Figure 3, A and B). As the result of a stop codon within dTstu1, f3959hTdTstu1 should produce a truncated protein of only 24 amino acid residues in 72218, whereas F3959HTrev codes for a functional full-length protein of 510 amino acid residues, one residue longer than predicted for the wild type that was reported as a functional F3959H gene of diploid potato clone W5281.2 (GenBank accession no. AY675558) (Jung et al. 2005).
At most, three copies of F3959H were deduced to exist in 72218 and JKP on the basis of the results of Southern blot and genomic sequence analyses. Apart from the full-length F3959H, the triploid 72218 and JKP possess two truncated copies of this gene ( f3959h2 and f3959h3, DDBJ accession nos. AB496978 and AB496979) ( Figure 3B). The sequences of each pseudogene were completely identical between 72218 and JKP. Both f3959hTdTstu1 and F3959HTrev have an EcoRV recognition site at the middle of the gene, which is absent in 7.8 kb of determined f3959h2 sequence. Therefore, the largest band in Figure 3A represents f3959h2, while the 6.3-kb fragment is derived from the third allele, f3959h3, which contains only the latter half of the third exon, encoding the P450 signature motif conserved among all known plant F3959H genes. This motif is lacking in f3959h2, which strongly suggests that transcripts of this copy do not function properly. Triploid red 72218 has only pseudocopies of the gene, f3959hTdTstu1, f3959h2, and f3959h3. Its purple somaclonal variant, JKP, has three copies of the gene, F3959hTrev, f3959h2, and f3959h3.
As F3959HTrev is the only allele able to produce a fulllength, nondefective protein, we conclude that excision of dTstu1 from f3959hTdTstu1 during the establishment of JKP is the major reason for the color change from red to purple.
An active Stowaway MITE, dTstu1: The sequence of dTstu1 is short (239 bp), A/T rich (67%), and marked by TIRs corresponding to the consensus CTCCCTCYGTC and a duplication of the TA target sequence at the insertion site, all characteristics of Stowaway MITEs (Bureau and Wessler 1994). The formation of DNA secondary structure is predicted for this element as well ( Figure 3C). Database searches retrieved sequences similar to dTstu1 not only in genomes of Solanum but also in the other Solanaceae plants, for example, Capsicum, Petunia, or Nicotiana (GenBank accession nos. DQ309518, AY136628, and AF277455).
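The element characteristics listed here (short length, high A/T content, imperfect terminal inverted repeats, TA target-site duplication) can be checked programmatically; the toy sequence in the sketch below is invented and is not the actual dTstu1 sequence.

```python
def at_content(seq: str) -> float:
    """Fraction of A and T bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("A") + seq.count("T")) / len(seq)

def reverse_complement(seq: str) -> str:
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq.upper()))

def tir_mismatches(seq: str, tir_len: int = 11) -> int:
    """Count mismatches between the 5' terminus and the reverse complement of the 3' terminus."""
    left = seq[:tir_len].upper()
    right_rc = reverse_complement(seq[-tir_len:])
    return sum(a != b for a, b in zip(left, right_rc))

# Invented toy element with perfectly matching termini (not the real dTstu1).
element = "CTCCCTCCGTC" + "ATATTTAAATTTATAAATTAT" + "GACGGAGGGAG"
print(round(at_content(element), 2), tir_mismatches(element))
```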
Comparison of the wild-type F3959H gene with that of JKP confirmed the addition of one amino acid residue (valine) generated by a three-nucleotide insertion, GTA, in F3959HTrev ( Figure 3C). These nucleotides could be traced to consist of one base (G) derived from dTstu1 and two (TA) from the duplicated target site. This duplication was also present in the disrupted f3959hTdTstu1 of 72218 and leading to the observed size difference of 238 bp between the transcripts derived from these genes. Therefore, the presence of these three nucleotides in F3959HTrev of JKP strongly supports that the 239-bp dTstu1 was excised from f3959hTdTstu1 in 72218 as a transposable element leaving a footprint that is normally associated with transposase-mediated excision. We conclude that the F3959H gene in 72218 (red) had become functionless as a result of dTstu1 insertion and then reverted in JKP (purple), presumably by transposition of dTstu1 during culturing.
Another active dTstu1-like Stowaway MITE, dTstu1-2: Excision of dTstu1 from the F3959H gene during culturing of leaf protoplasts derived from 72218 raised the possibility that other dTstu1-like Stowaway MITEs had undergone transposition under these conditions. In support of this, we isolated an extra dTstu1-like element specific for JKP by use of a DNA-fingerprinting technique adapted from a method with which inter-MITE polymorphisms were detected. With this method, multiple regions between MITEs had been amplified by PCR using a primer annealing to TIRs in the outer direction (Chang et al. 2001). By using primers specific for the dTstu1 internal sequence (instead of the TIR sequences), we obtained a product for JKP not observed for 72218 that contained an element almost identical to dTstu1, named dTstu1-2 (DDBJ accession no. AB496980). After identification of the flanking regions, PCR amplification of the region containing the site of integration of dTstu1-2 in JKP produced in 72218 a fragment of one size, not containing the transposable element. In JKP, however, two fragments, one with and the other without dTstu1-2, were detected ( Figure 4A), suggesting that no alleles of the locus carried the transposable element in 72218 and that dTstu1-2 had been newly inserted in an allele. Comparison of the sequence surrounding the insertion site confirmed the presence of a duplicated TA dinucleotide, which is the target sequence of Stowaway MITEs ( Figure 4B). Compared to dTstu1, dTstu1-2 had a similar length, 239 bp, but contained four base changes, two of which were in the TIRs ( Figure 5). These changes made the TIRs of dTstu1-2 more complementary to each other than in the case of dTstu1. Therefore, in view of a comparable propensity for transposition, this Stowaway MITE conceivably was mobilized under the same conditions that caused dTstu1 to be excised from the F3959H gene. If this is the case, activation of transposition of these MITEs was induced by culturing.
To survey the active MITE copies related to dTstu1, we carried out MITE display using primers designed from the sequences of dTstu1 and dTstu1-2. More than 50 peaks were detected but slight differences existed among 72218, JKR, and JKP. JKR revealed 3 new peaks and JKP exhibited 3 new peaks and a missing peak as compared with 72218 when using a primer with selective nucleotide T ( Figure S2). The insertion of dTstu1-2 in JKP was visualized as a new peak at the expected position of 315 bases in size but the excision of dTstu1 in JKP was not detected at the expected position of 50 bases due to the signal of the other putative insertion at the same position. Although most of the peaks were identical, a few polymorphisms were detected among the three clones.
DISCUSSION
In this study we found the first active Stowaway MITEs in dicotyledons and presented the evidence of their movement. Excision of dTstu1 caused a somaclonal variation of skin color in potato tubers. Insertion of dTstu1-2 was observed at another locus in the genome of the same somaclonal variant, JKP. It became obvious that two major groups of MITEs, Stowaway and Tourist, have the potential to transpose in plants. Movement of MITEs was not proved for a long time because most of them are not inserted into genes (Oki et al. 2008) with the possibility to cause an altered phenotype and because the high copy number of MITEs in the genome precludes analysis of their individual movements. "Fingerprints" of MITE abundance, obtained by Southern hybridization with MITE DNA probes (Naito et al. 2006), showed differences among strains, which suggested movement of MITEs but did not provide direct evidence for their transposition. Previously, a case in which MITE transposition resulted in a phenotypic change was reported. A MITE named mPing, belonging to Tourist, was found to be inserted in the rice Rurm1 gene causing the slender glume phenotype that reverted to wild type by excision of the mobile element (Nakazaki et al. 2003). We present in this report another rare case of a MITE giving rise to an altered phenotype, namely that of dTstu1 belonging to Stowaway. We found this MITE to disrupt the F3959H gene of a potato clone (72218), resulting in a red tuber color. Due to the excision of dTstu1, tuber color changed to purple in the somaclonal variant. Thus, in two cases, visible phenotypes, the grain shape for mPing and the tuber color for dTstu1, provided strong evidence for the movement of MITEs belonging to Tourist and Stowaway, respectively.

Figure 3. (A) Southern blot analysis of genomic DNA digested with EcoRV and probed with a labeled RT-PCR product of F3959HTrev. Approximate sizes are given on the left. The largest band represents f3959h2 since the EcoRV recognition site is absent in 7.8 kb of determined sequence. The 6.3-kb fragment is derived from f3959h3. The rest of the bands represent f3959hTdTstu1 or F3959HTrev since both f3959hTdTstu1 and F3959HTrev have an EcoRV recognition site at the middle of the gene. (B) Structure comparison of F3959H genes. Both f3959h2 and f3959h3 are incomplete genes; f3959h2 lacks the latter half of the third exon, and f3959h3 contains only the latter half of the third exon. Triploid red 72218 has only pseudogenes, f3959hTdTstu1, f3959h2, and f3959h3. Triploid purple JKP, a somaclonal variant of 72218, has F3959HTrev, f3959h2, and f3959h3. Coding regions (shaded boxes) are separated by introns (lines) with the dTstu1 insertion depicted by a solid bar. Arrows indicate the EcoRV recognition site in f3959hTdTstu1 and F3959HTrev. (C) Structure of dTstu1 and the nucleotide and amino acid sequences of F3959H genes proximal to the dTstu1 insertion site. Wild type is the previously reported functional F3959H gene (Jung et al. 2005). A pair of vertical sequences shows the TIRs where complementary sequences are hyphened. An asterisk indicates a stop codon present in f3959hTdTstu1. The footprint remaining after dTstu1 excision (including the duplicated TA target site) is underlined.
As described in this report, the duplication of the target sequence TA at the insertion site of dTstu1 was observed for the F3959H gene of 72218. The footprint left behind in F3959HTrev in JKP suggests that the excision is catalyzed by a transposase. By lack of any open reading frame, the short Stowaway MITEs of both dTstu1 and dTstu1-2 are not able to code for such a transposase, which has to originate from other, unrelated transposable elements as found in the case of mPing. This Tourist MITE was mobilized by transposases derived from the Ping and Pong transposable elements (Yang et al. 2007). Mobile dTstu1 and dTstu1-2 enable us to search for transposases that control Stowaway MITEs. The Mariner-like element (MLE) is one of the most widely distributed transposable elements in eukaryotes and its transposase can interact in vitro with TIRs of a Stowaway MITE (Feschotte et al. 2005). Using yeast cells, MLE transposases of rice were proved to actually activate transposition of Stowaway MITEs of rice (Yang et al. 2009). MLE is a good candidate for a source of transposase for dTstu1 movement.
Our results show that the activation of Stowaway MITEs not only involves a transposase but also appears to occur under particular conditions. MITE displays of regenerated plants from protoplasts indicated that most of the MITE insertion sites were maintained, although a few differences emerged during tissue culture. The observed differences in sequences and in the insertion sites between the silent copies and the active ones should be investigated further as these may reveal the factors for transposition. Tissue culturing causes the activation of various transposable elements (Peschke et al. 1987; Grandbastien et al. 1989; Peschke and Phillips 1991; Hirochika 1993; Jiang et al. 2003; Kikuchi et al. 2003). It was observed that the conditions under which dTstu1 (and possibly dTstu1-2) was excised, i.e., at some time during the culturing of leaf protoplasts isolated from 72218, caused 7% of the regenerated plants to bear purple tubers instead of the parental red potatoes (Okamura 1991). Furthermore, red tubers with small purple sectors were found in some regenerated plants that originated from cultured leaf protoplasts of 72218 (Figure S3). Such chimeric tubers or purple tubers, however, have not been found in tuber-propagated 72218 plants, which are clonally reproduced as seed potatoes in the field. These facts also support the importance of cell culture conditions for the activation of dTstu1. It remains to be seen how tissue culturing confers the activation. Alteration of the epigenetic status by DNA demethylation of the element itself or of the genes encoding its transposase has been reported to activate a transposable element during tissue culture (Kaeppler et al. 2000; Cheng et al. 2006; Lisch 2009) and could therefore be part of the reason.
How MITEs have spread over various genomes and in such high numbers is still obscure but poses one of the important questions to be tackled to comprehend the evolution of the eukaryotic genome. Active MITEs, like dTstu1, can provide a tool for this investigation.
We thank Kazuyoshi Hosaka for 72218 tubers; Yoshio Itoh, Takayasu Hirosawa, Toshihiro Toguri, Noboru Onishi, Naoyuki Umemoto, and Masachika Okamura for discussions; and Chika Aoyama for assistance with experiments. We are grateful to Atsuko Momose for critical reading of the manuscript. This work was partly supported by a grant from the ''Technical Development Program for Making Agribusiness in the Form of Utilizing the Concentrated Know-how from the Private Sector'' of the Ministry of Agriculture, Forestry and Fisheries, Japan.
|
v3-fos-license
|
2023-03-25T05:06:24.796Z
|
2023-03-23T00:00:00.000
|
257715160
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11240-023-02495-6.pdf",
"pdf_hash": "605d5cf9c0f6bc509b531d949859d3d0f545c63a",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44425",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "605d5cf9c0f6bc509b531d949859d3d0f545c63a",
"year": 2023
}
|
pes2o/s2orc
|
A simplified protocol for Agrobacterium-mediated transformation of cell suspension cultures of the model species Medicago truncatula A17
This manuscript describes a unique protocol for the rapid transformation of Medicago truncatula A17 cell suspension cultures mediated by Agrobacterium tumefaciens. Medicago cells were collected on day 7 of the growth curve, which corresponded to the beginning of the exponential phase. They were then co-cultured with Agrobacterium for 3 days before being spread onto a petri dish with appropriate antibiotic selection. The Receptor Binding Domain of the Spike protein of SARS-CoV-2 was used as a model to develop this protocol. The presence of the transgene was assessed using PCR, and the integrity of the product was evaluated by SDS-PAGE and Western-blotting.
Introduction
Plant cell suspension cultures are a valuable tool in numerous areas of plant research, serving as models for both fundamental cell biology studies and more applied biotechnology, such as recombinant protein production. Effective transformation protocols for cell suspension cultures are essential for various areas, namely the availability of rapid processes that allow a swift answer to the problem under study. Each plant species presents its own specificities, particularly the higher or lower tendency to form aggregates in liquid culture, making it challenging to implement standardized transformation protocols.
The widely used tobacco BY-2 cell culture line was the first to be transformed by Agrobacterium, during the 1980's (An 1985). Since then, many other commonly used species have been transformed by similar methods, such as suspension cultures of tomato (McCormick et al. 1986), rice (Baba et al. 1986), soybean (Baldes et al. 1987), carrot (Wurtele and Bulka 1989) and grapevine (Martínez et al. 2015). However, these procedures often involved a first step of protoplast preparation, which is both laborious and time consuming. There are not many reports of straightforward transformation protocols and the ones available often require many steps, including multiple washing procedures that increase the chance for unwanted contamination. Medicago truncatula cell cultures are highly versatile and utilized for a variety of purposes. The standard genotype for this species is A17 which has been fully sequenced (Young et al. 2011) and displays a wide array of available genetic tools. Compared with the widely used tobacco BY-2 cell line, Medicago truncatula cell cultures have significantly lower proteolytic content (Santos et al. 2018), making them an attractive option for the production of recombinant proteins and other applications.
In this report, we introduce a simple and rapid method, named suspension culture-based (SB) protocol, which allows the generation of transformed liquid cultures in about 12 weeks. Importantly, this method is suitable for less experienced researchers and does not require specialized equipment, such as a desiccator or vacuum pump. Furthermore, it is more economical and involves fewer steps than Santos et al. (2019), hereinafter referred to as calli-based (CB) protocol, making it easier to maintain sterility throughout the process and avoid possible contamination. Finally, it can be easily adapted to other plant species.
Biological material
Cell suspension cultures of M. truncatula cv. Jemalong line A17 were generated from seeds following the procedure by Sello et al. (2017) and maintained as outlined below: 1. Medicago seeds were scarified with concentrated sulfuric acid (10') and washed 5 times with sterile distilled water. The seeds were sterilized by immersion in a solution of 5% (V/V) commercial bleach with 0.012% (V/V) Tween 20 for 5 min. Subsequently, they were washed with sterile distilled water, immersed in 70% (V/V) ethanol (2'), washed again with sterile distilled water, and let dry on a sterile Whatman filter paper (Kondorosi and Denarié 2001).
Agrobacterium tumefaciens growth (3-4 days)
In this study we used the Receptor Binding Domain (RBD) of the SARS-CoV-2 coronavirus sequence codon optimized for Nicotiana tabacum and cloned into pTRA vector (pTRA_RBD) as described in Rebelo et al. (2022).
Transformation of the Medicago liquid culture (about 12 weeks until obtaining stable positive liquid cultures)
Step-by-step illustrated guide of Medicago transformation with SB protocol is depicted in Fig. 1.
1. At day 7 of growth, 4 mL of Medicago liquid culture were transferred to three petri dishes (55 mm Ø). Then, 100 µL of Agrobacterium suspension with a final OD 600nm of 0.5, 0.6 or 1 were added to each petri dish and gently mixed. CS: use wide bore tips or cut at least 1 cm off normal tips for transferring Medicago cells. 2. The petri dish was sealed with cling film and incubated at 24 °C in the dark for 60-72 h. 3. 4 mL of CIM medium were added to the co-cultures and spread onto petri dishes (92 mm Ø) prepared with CIM medium, 0.4% (w/V) Gelrite [Duchefa], 500 mg/L ticarcillin disodium/clavulanate potassium [Timentin, Duchefa] for Agrobacterium elimination and 100 mg/L Kanamycin for transformant selection. 4. The petri dishes were placed in the dark at RT until the first calli appeared. CS: Growing micro-calli were isolated and subcultured every 2 weeks, to fresh plates containing the selection antibiotic and a 50% stepwise decrease of Timentin concentration. 5. After three rounds of selection they were moved to liquid culture by placing a fragment of the calli in liquid CIM medium supplemented with Kanamycin and dispersed with the help of a sterile blade or disposable loop. Medicago suspension cultures were subcultured every 10-15 days to fresh medium with 20% inoculum. 6. In parallel, the transformation of Medicago calli was carried out following the CB protocol (Santos et al. 2019).
Assessment of the RBD gene presence in transgenic Medicago lines
1. At day 10 of growth, Medicago transgenic or wild-type cultures were paper-filtered, cells were collected and macerated in liquid nitrogen, using a mortar and pestle. 2. DNA was extracted using the NZY Plant/Fungi gDNA Isolation Kit [NZYTech], following the manufacturer's instructions.
3. Quality and integrity of the extracted DNA was assessed by agarose gel electrophoresis and spectrophotometric analysis. 4. PCR amplification was performed to the extracted DNA samples, following the GoTaq® DNA Polymerase [Promega] supplier instructions and with the following primers 5'-ATC CTT CGC AAG ACC CTT CCTCT-3' and 5'-AGA GAG AGA TAG ATT TGT AGAGA-3'. 5. PCR products were separated by agarose gel electrophoresis to evaluate gene amplification.
Quantification of RBD protein in Medicago culture medium
1. Spent medium samples (A17 WT, CB1, CB2, CB3, SB6, SB7 and SB9) were concentrated fivefold, as described in the previous sub-section. 2. The samples were resolved in 12.5% SDS-PAGE polyacrylamide gels, and the proteins were stained using BlueSafe Reagent [NZYTech]. 3. A standard curve with BSA at concentrations of 2, 3, 5, 10 and 20 mg/L was built and used to determine the relative amount of secreted RBD. 4. Recombinant RBD protein bands were quantified using Image Lab Software v6.1 [Bio-Rad].
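The densitometric quantification against the BSA standard curve amounts to a linear fit of band intensity versus loaded amount, followed by back-calculation to the original medium. The sketch below illustrates that arithmetic; the intensity values, the 8 µL loading volume, and the sample band are assumptions, with only the fivefold concentration factor taken from step 1 above.

```python
import numpy as np

# Hypothetical densitometry values exported from the gel-analysis software.
# BSA standards: loaded amounts (ng) and their measured band intensities.
bsa_ng = np.array([80, 120, 200, 400, 800], dtype=float)
bsa_intensity = np.array([1.1e5, 1.6e5, 2.7e5, 5.2e5, 1.05e6])

# Fit intensity = slope * amount + intercept.
slope, intercept = np.polyfit(bsa_ng, bsa_intensity, 1)

# Convert a sample band intensity back to ng, then to mg/L of the original spent medium.
sample_intensity = 3.4e5            # hypothetical RBD band intensity
loaded_ng = (sample_intensity - intercept) / slope
loaded_volume_ul = 8.0              # assumed volume loaded per lane
concentration_factor = 5.0          # medium was concentrated fivefold before loading
mg_per_l = loaded_ng / loaded_volume_ul / concentration_factor  # ng/µL == mg/L
print(f"Estimated secreted RBD: {mg_per_l:.2f} mg/L")
```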
Results
Medicago truncatula A17 transgenic lines were generated using two different methodologies. The first one is a calli-based protocol previously developed in our laboratory (Santos et al. 2019). The second method is a shorter, simpler Agrobacterium-mediated transformation protocol resulting from the combination of previously described procedures (An 1985; Rademacher et al. 2019). Medicago liquid cultures were used instead of calli, which reduced the time needed to obtain a sufficient number of cells for transformation. Agrobacterium containing the pTRA_RBD vector, confirmed by colony PCR (Fig. 2a), was incubated with acetosyringone to induce virulence genes and co-cultivated with Medicago cells for 3 days. After co-cultivation, micro-calli were grown on solid CIM media, with proper antibiotic selection, and subcultured every 2-4 weeks onto the same medium to maintain continuous production. Our goal was to establish a fast and straightforward transformation protocol. To assess the success of the new protocol, we used the RBD protein as a model. First, we evaluated the RBD production in calli obtained with the SB protocol, where bacterial suspensions with varying OD 600nm were tested (Fig. 2b, c). All three ODs tested were found to be very efficacious, yielding numerous micro-calli that grew under antibiotic selection. Western blot analysis with anti-RBD antibody detected several bands, corresponding to the non-glycosylated RBD (25 kDa) and putative glycosylated forms (around 35 kDa). This pattern showed some variation among lines; however, this analysis was performed on calli, which are clusters of cells that may be heterogeneous in terms of total soluble protein (TSP). After establishing liquid cultures using both protocols, we extracted genomic DNA from one line of each protocol (CB1 and SB9) to confirm the presence and integrity of the transgene, as shown in Fig. 2d, using specific primers for the 35S promoter and terminator, respectively. We recommend that this assay is carried out prior to the selection of Medicago transgenic lines when there is no specific antibody available to detect the recombinant product, as this will save time in the screening of the putative transformed lines. Western blot analysis was performed on total protein extract (Fig. 2e, f) and spent medium (Fig. 2g, h). All RBD isoforms were detected, as previously shown in Fig. 2c, except for the higher molecular weight glycoform of RBD, which was not present in the spent medium. Quantification of the secreted RBD was performed relative to a BSA standard curve, using samples normalized for TSP or volume. We recommend that quantifications are performed in view of the specific objective of the work; for a biotechnological application, it is more important to determine the amount of recombinant product that is obtained per volume of culture, regardless of the number of cells or the total protein found per liter of culture. In this study, we did not evaluate a sufficient number of cell lines generated by each protocol to perform statistical analysis, but we were able to determine that the amount of secreted RBD in lines originating with the calli-based method was more heterogeneous, ranging from 0.5 to 2.8 mg/L, while the suspension culture-based protocol yielded around 1.6 mg/L, as depicted in Fig. 2i, j. Recombinant RBD protein is secreted to the culture medium of Medicago, from which it can be purified, reinforcing the positive outcome of applying this optimized transformation protocol.
Our results demonstrated that both protocols are suitable to generate Medicago A17 cell lines producing and secreting the model protein.
Discussion
In this work, we report an optimized transformation protocol for Medicago cell suspension cultures that requires only a flow chamber to maintain sterility, with no other specialized equipment needed. Our previous transformation protocol for M. truncatula cells (described in Santos et al. 2019) required the plant material to be under vacuum to promote the entry of Agrobacterium in Medicago cells.
If not properly carried out, the transformation would not be effective. With this new protocol, we have mitigated this critical step since the co-culture of plant cells with the bacterial suspension is sufficient to carry out Agrobacterium gene transfer. Although the previous transformation protocol (Santos et al. 2019) is efficient, it involves several steps that carry a higher risk of contamination. The calli must be filtered and two additional steps using filter paper are necessary before transferring them to solid CIM medium. Importantly, this protocol requires using calli as starting material, which can take up to four weeks to reach the proper size, thereby extending the overall procedure time. There is a protocol for the generation of transgenic plants of M. truncatula cv. Jemalong 2HA which employs liquid cultures, and the procedure is similar to our protocol (Iantcheva et al. 2014; Iantcheva and Revalska 2019). In this work, Medicago cells were co-cultured with Agrobacterium and positive transformants were screened based on GFP or GUS activity. However, their goal was the regeneration of fertile plants and not the establishment of liquid cultures. Other reported protocols for the transformation of liquid cultures of other species include additional steps such as protoplast preparation, removal of bacterial cells by washing steps, or placing the cells onto a sterile paper prior to the subculture in the solid medium. These procedures make it more difficult to transfer the cells in an individualized way (e.g. Wu et al. 1998; Moniruzzaman et al. 2021; Badim et al. 2022).

Fig. 2 legend (in part): ..., d), NZYColour Protein Marker II [NZYTech] (b, c, e-j); C+: pTRA_RBD vector; C−: PCR negative control (water); WT: wild-type; Medicago transgenic lines transformed with Agrobacterium OD600 0.5 (SB1-SB3), OD600 0.6 (SB4-SB5) and OD600 1 (SB6-SB9); A1-A5: loading of 80, 120, 200, 400 and 800 ng of BSA, corresponding to 2, 3, 5, 10 and 20 mg/L, respectively.
To validate the effectiveness of the new transformation protocol, we selected a recombinant protein with a high impact in current worldwide research, the Receptor Binding Domain of the SARS-CoV-2 virus (Rebelo et al. 2022). Other ongoing projects in the laboratory, in which we have used this new protocol, indicate that it is faster and simpler than previous methods (Rebelo et al. in preparation; Vieira et al. in preparation). The vector containing the RBD gene was used in parallel for transformation of Medicago cell cultures using the two methods developed in our lab. The quantification of the secreted recombinant protein showed no relevant differences between the transgenic lines obtained by the two protocols, but we did not evaluate a sufficient number of lines to apply statistical analysis. Furthermore, we did not assess the number of copies inserted in each transgenic line or the site of transgene integration in the host genome. These features would be interesting to investigate in a broader population of transformed cell lines, in particular, it would be important to assess if the use of suspension cells vs calli as starting material impacts on the number of transformation events. Within the small sample that we evaluated, we detected a higher heterogeneity among lines derived from the calli method, with respect to the amount of recombinant product found in the spent culture. This could mean that more independent transformation events took place, which can ultimately result in less predictability of the cells´ behavior in terms of production. Further studies are necessary to assess this possibility, and this will be useful to further improve transformation protocols for plant cells, independently of the species under study.
The methodology presented in this report is a straightforward Agrobacterium-mediated transformation process that may be implemented for other plant cell suspension cultures of different species to rapidly obtain transgenic cultures.
|
v3-fos-license
|
2017-10-25T07:07:41.039Z
|
2017-10-10T00:00:00.000
|
9079261
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcpregnancychildbirth.biomedcentral.com/track/pdf/10.1186/s12884-017-1540-0",
"pdf_hash": "34f42285ba716dbc7f79723f070ba4d903a52fb6",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44427",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "34f42285ba716dbc7f79723f070ba4d903a52fb6",
"year": 2017
}
|
pes2o/s2orc
|
Association between very advanced maternal age and adverse pregnancy outcomes: a cross sectional Japanese study
Background While several studies have demonstrated the increased risk of pregnancy complications for women of advanced age, few studies have focused on women with very advanced age (≥ 45), despite the increasing rate of pregnancy among such women. Furthermore, how such risks of increase in age differ by maternal characteristics are also poorly understood. Thus, we aimed to clarify pregnant outcomes among women with very advanced age and how the effect of age differs by method of conception and parity. Methods We used the national multicenter Japan Society of Obstetrics and Gynecology perinatal database, including 365,417 women aged 30 years or older who delivered a singleton between 2005 and 2011. We divided women into four groups based on age (years): 30–34, 35–39, 40–44, and ≥45, and compared risk of adverse birth outcomes between the groups using Poisson regression. Effect modification by parity and use of assisted reproductive technology (ART) was also evaluated. Results: Compared with women aged 30–34 years, women aged 45 or older had higher risk of emergency cesarean delivery [adjusted risk ratio (aRR): 1.77, 95% confidence interval (95% CI): 1.58–1.99], preeclampsia (aRR: 1.86, 95% CI: 1.43–2.42), severe preeclampsia (aRR: 2.03, 95% CI: 1.31–3.13), placenta previa (aRR: 2.17, 95% CI: 1.60–2.95), and preterm birth (aRR: 1.20, 95% CI: 1.04–1.39). The effect of older age on risk of emergency cesarean section, preeclampsia, and preterm birth were significantly greater among those who conceived naturally compared to those who conceived by ART. The effect on emergency cesarean section was stronger among primiparous women, whereas the risk of preeclampsia associated with older age was significantly greater among multiparous women. Conclusions Very advanced maternal age (≥ 45) was related to greater risk for adverse birth outcomes compared to younger women, especially for maternal complications including cesarean section, preeclampsia, severe preeclampsia, and placenta previa. The magnitude of the influence of age also differed by conception method and by parity.
Background
Pregnancy at advanced maternal age (over 35 years) has increased in many high income countries over the past several decades, [1][2][3] with recent rates reported to be as high as 9.1% in the US [4] and 28.1% in Japan [5]. In Japan, one of the Asian countries which has experienced a considerable increase in average age at pregnancy, the number of births to women of very advanced age, such as 40-44 and ≥45 years of age, has also surged, with the numbers in 2015 being 52,557 (5.2%) and 1038 (0.1%), respectively [5]. Such an increase in average maternal age has also been observed in many parts of Asia, such as Korea, [6] China, [7] and Taiwan, [8] which may be attributable to the increase in women's participation in society in these countries.
A number of studies have demonstrated that pregnancy among women of advanced age is associated with increased risk of pregnancy complications and adverse perinatal outcomes, such as gestational diabetes mellitus, preeclampsia, placenta previa, cesarean section, preterm birth, low birthweight, maternal mortality, and perinatal mortality [9][10][11][12][13]. However, most studies have focused on adverse outcomes among women aged ≥35, or ≥40, [14] and the few which have studied birth outcomes of pregnancies of older women (i.e., over 45 years of age) suffer from limitations. For example, most studies were conducted nearly 20 years ago, [15][16][17] when such women were likely to be multiparous and to have conceived naturally without assisted reproductive technology (ART), unlike the women who conceive at similar ages today [18][19][20]. Furthermore, it is conceivable that the effect of older age on the risk of adverse birth outcomes, such as cesarean delivery and preeclampsia, may significantly differ by method of conception and parity because women who conceive by ART have a higher risk for a number of adverse perinatal outcomes, and parity also has a significant effect on the risk of cesarean section and preeclampsia [21][22][23]. However, only one recent study, conducted in 217 women in Australia, [19] considered this potential effect modification, and only for a limited number of birth outcomes.
In order to address the yet unanswered questions related to this association, we used the Japanese national multicenter-based delivery registry, which includes a relatively large group of women of advanced age, and evaluated the association between adverse birth outcomes and very advanced maternal age, and whether this association differed by maternal characteristics, namely parity and method of conception. Such information would be useful to clinicians when providing antenatal counseling to women of very advanced age.
Study population
This cross sectional study was conducted based on the Japan Society of Obstetrics and Gynecology Perinatal Database (JSOG-DB), an ongoing registry currently based on 149 Japanese tertiary hospitals that covers nearly a tenth of all births in Japan, with over a hundred thousand births registered each year [24]. For this database, maternal demographics, pregnancy complications and birth outcomes were transcribed from medical charts in each hospital using a standardized format.
Multiple pregnancies are at higher risk of adverse outcomes compared to singleton pregnancy, and conception with ART is associated with both older age and multiple pregnancies. [25] Therefore, to differentiate the direct effect of maternal age on birth outcome from any indirect effect mediated by multiple pregnancies, [26] we included only women with singleton pregnancies. Similarly, we excluded women carrying a fetus with congenital abnormalities, as these women have a higher risk of adverse outcomes. Also, as the risk of adverse pregnancy outcomes in women of younger age is strongly related to social risk factors, [27] we restricted our sample to 370,964 women aged 30 years or older who gave birth to singletons with no congenital anomaly between April 2005 and December 2011. From this population, we excluded 5547 women with missing data on either gestational age (n = 207), birthweight (n = 2023), mode of delivery (n = 2210), and those with unreliable combination of birthweight and gestational age using the criteria proposed by Alexander et al. [28] (n = 1107). Among the other variables, smoking status, maternal height, pre-pregnancy body mass index (BMI) and gestational weight gain were missing in a large number of women. An additional 4393 had extreme values (> + 4SD or <−4SD) of height, BMI or gestational weight gain, thus we considered these data to be unreliable. To address these issues while maximizing our sample size to maintain the potential for a generalizable and robust analysis, we performed multiple imputation on the missing and unreliable data and pursued the main analysis on 365,417 women. These results were subsequently confirmed in a sensitivity analysis on the subset 183,084 women after excluding those with missing or unreliable data on height, BMI or gestational weight gain, and including "missing" as a smoking status (yes, no, missing).
For multiple imputation, we generated 30 imputed datasets for the following variables with missing or unreliable values: maternal height (n = 157,767), maternal BMI (n = 120,257), gestational weight gain during pregnancy (n = 134,122), and smoking (n = 153,652). We used multivariate imputation by chained equations, which does not require the assumption of a multivariate normal distribution and uses a series of regression models in which each variable with missing data is modeled conditional on the other variables in the data.
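As a rough illustration of this procedure, the sketch below performs chained-equation imputation in Python with statsmodels. The study itself used Stata, so this is only an analogous sketch; the variable names and simulated data are hypothetical stand-ins for the JSOG-DB fields.

```python
# Minimal sketch of multiple imputation by chained equations (MICE).
# Column names and the simulated data are hypothetical placeholders;
# the study itself performed imputation in Stata, not with this code.
import numpy as np
import pandas as pd
from statsmodels.imputation import mice

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "height": rng.normal(158, 5, n),
    "bmi": rng.normal(21, 3, n),
    "weight_gain": rng.normal(10, 4, n),
    "smoking": rng.integers(0, 2, n).astype(float),
})
# Introduce some missingness to imitate the incomplete registry fields.
for col in df.columns:
    df.loc[rng.random(n) < 0.3, col] = np.nan

imp = mice.MICEData(df)          # each variable is modeled conditional on the others
completed_sets = []
for _ in range(30):              # 30 completed datasets, as in the study
    imp.update_all()             # one full cycle of chained-equation updates
    completed_sets.append(imp.data.copy())

print(completed_sets[0].isna().sum())  # no missing values remain in a completed set
```

In a full analysis, each completed dataset would be analyzed separately and the estimates combined with Rubin's rules; the sketch above only shows the imputation step.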
Variables of interest
The primary exposure of interest was maternal age. Pregnant women were classified into four age categories: 30-34 years, 35-39 years, 40-44 years, and 45 years and older. Women aged 30-34 years were considered the reference group [19].
We considered a variety of adverse birth outcomes captured in our database: preterm birth, very preterm birth, extremely preterm birth, small for gestational age (SGA), perinatal death, cesarean section, emergency cesarean section, preeclampsia, severe preeclampsia, placenta previa, placental abruption, low Apgar score at 5 min, and low pH of the umbilical cord artery. We defined SGA as a birthweight below the 10th percentile for gestational age on the birthweight reference, [29] preterm birth as less than 37 completed weeks of gestation, very preterm birth as less than 32 completed weeks of gestation, and extremely preterm birth as less than 28 completed weeks of gestation [30]. Preeclampsia and severe preeclampsia were diagnosed clinically by obstetricians at each hospital according to the national guideline, as systolic/diastolic blood pressure over 140/90 mmHg and over 160/110 mmHg, respectively, emerging after 20 weeks' gestation with significant proteinuria (≥300 mg/day) [31]. We defined perinatal death as stillbirth or early neonatal death before day 7 or discharge, whichever came first; a low Apgar score at 5 min as below 7; and a low pH of the umbilical cord artery as below 7.1. As a previous study suggested that the association between age and birth outcomes may differ by conception method and parity, [19] we considered these as effect modifiers. We categorized conception method as natural or any ART, and parity as primiparous or multiparous.
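For concreteness, the snippet below sketches how the age categories and the threshold-based outcomes defined above could be coded. The column names (age, gest_weeks, bw_percentile, apgar5, cord_ph) are hypothetical placeholders rather than actual JSOG-DB field names; the cutoffs follow the definitions in the text.

```python
# Sketch of deriving the exposure categories and threshold-based outcome indicators.
# All column names are hypothetical stand-ins for the registry fields.
import pandas as pd


def derive_variables(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Exposure: four maternal age categories, 30-34 years as the reference group.
    out["age_cat"] = pd.cut(out["age"], bins=[30, 35, 40, 45, 200],
                            right=False, labels=["30-34", "35-39", "40-44", "45+"])
    # Outcomes: preterm (<37 wk), very preterm (<32 wk), extremely preterm (<28 wk).
    out["preterm"] = (out["gest_weeks"] < 37).astype(int)
    out["very_preterm"] = (out["gest_weeks"] < 32).astype(int)
    out["extremely_preterm"] = (out["gest_weeks"] < 28).astype(int)
    # SGA: birthweight below the 10th percentile for gestational age
    # (percentile assumed precomputed from the reference curves).
    out["sga"] = (out["bw_percentile"] < 10).astype(int)
    # Low Apgar score at 5 min (<7) and low umbilical cord artery pH (<7.1).
    out["low_apgar5"] = (out["apgar5"] < 7).astype(int)
    out["low_cord_ph"] = (out["cord_ph"] < 7.1).astype(int)
    return out
```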
Statistical analysis
First, we compared baseline demographics across the four maternal age categories using tests for trend. Next, we used Poisson regression to estimate the effect of maternal age on the risk of adverse birth outcomes, with women aged 30-34 as the reference group, and tested for trend in the association. Each result is presented as a risk ratio with its 95% confidence interval (CI). To confirm our results, which were based on partially imputed data, we conducted sensitivity analyses restricted to women with complete data (n = 183,084).
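A minimal sketch of such a model in Python/statsmodels follows. The analyses were actually run in Stata, so this is only an analogue: the covariate names are hypothetical placeholders for the adjustment set described below, the data are simulated, and the robust (sandwich) variance is our assumption, being a common choice when Poisson regression is applied to a binary outcome to obtain risk ratios.

```python
# Sketch of an adjusted Poisson regression yielding risk ratios (RR) with
# women aged 30-34 as the reference. Variable names and data are hypothetical;
# the robust HC0 variance is an assumption, not taken from the paper.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "age_cat": rng.choice(["30-34", "35-39", "40-44", "45+"], n, p=[0.5, 0.3, 0.15, 0.05]),
    "bmi": rng.normal(21, 3, n),
    "height": rng.normal(158, 5, n),
    "weight_gain": rng.normal(10, 4, n),
    "art": rng.integers(0, 2, n),
    "smoking": rng.integers(0, 2, n),
    "parity": rng.integers(0, 2, n),
    "chronic_htn": rng.integers(0, 2, n),
    "abnormal_glucose": rng.integers(0, 2, n),
})
df["preterm"] = rng.binomial(1, 0.06, n)

formula = ("preterm ~ C(age_cat, Treatment(reference='30-34')) "
           "+ bmi + height + weight_gain + art + smoking + parity "
           "+ chronic_htn + abnormal_glucose")
res = smf.glm(formula, data=df, family=sm.families.Poisson()).fit(cov_type="HC0")

rr = np.exp(res.params)           # exponentiated coefficients = risk ratios
ci = np.exp(res.conf_int())       # 95% confidence intervals on the RR scale
print(pd.concat([rr.rename("RR"), ci], axis=1))
```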
Next, using this subset of women with complete data, we examined potential effect modification by parity (primiparous or multiparous) and conception method (with or without ART) on the association between maternal age and the risk of adverse birth outcomes. We tested for interaction by including two-way multiplicative interaction terms in the Poisson regression model. Subsequently, Poisson regression analyses were performed stratified by parity and by conception method.
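The sketch below illustrates one way to implement these checks, again with hypothetical variable names, simulated data, a reduced adjustment set for brevity, and the same assumed robust variance: an adjusted model with an age-by-parity interaction term, followed by models refit within each parity stratum.

```python
# Sketch of the effect-modification checks: an age-by-parity interaction term,
# then Poisson models refit within each parity stratum. All names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 5000
df = pd.DataFrame({
    "age_cat": rng.choice(["30-34", "35-39", "40-44", "45+"], n, p=[0.5, 0.3, 0.15, 0.05]),
    "primiparous": rng.integers(0, 2, n),
    "bmi": rng.normal(21, 3, n),
    "art": rng.integers(0, 2, n),
})
df["cesarean"] = rng.binomial(1, 0.25, n)

age = "C(age_cat, Treatment(reference='30-34'))"
base = f"cesarean ~ {age} + bmi + art"

# 1) Interaction test: parity main effect plus age-by-parity product terms.
inter = smf.glm(f"{base} + primiparous + {age}:primiparous",
                data=df, family=sm.families.Poisson()).fit(cov_type="HC0")
print(inter.wald_test_terms())        # joint Wald tests, including the interaction terms

# 2) Stratified estimates: refit the adjusted model within each parity stratum,
#    omitting parity itself from the adjustment set (as in the paper).
for label, stratum in df.groupby("primiparous"):
    fit = smf.glm(base, data=stratum, family=sm.families.Poisson()).fit(cov_type="HC0")
    print(label, np.exp(fit.params.filter(like="age_cat")))  # age-category risk ratios
```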
Analyses were adjusted for pre-pregnancy BMI, maternal height, gestational weight gain, conception method, maternal smoking during pregnancy, parity, preexisting hypertension, and abnormal glucose tolerance, which are known risk factors for various adverse outcomes in previous studies [19,32,33]. The stratified analyses did not adjust for conception method when stratifying by conception method, or for parity when stratifying by parity.
All descriptive and statistical analyses were performed using STATA version 13 (StataCorp, College Station, TX). Statistical significance was set at p < 0.05 (including tests for interaction), and all statistical tests were two-tailed. The protocol for this study was approved by the Institutional Review Board of the National Center for Child Health and Development on April 18, 2017 (No. 1448).
The associations between maternal age and birth outcomes, stratified dichotomously by maternal parity and by conception method, are shown in Table 3. The effects of advanced age on the increased risk of cesarean section and emergency cesarean section were significantly greater among primiparous women than among multiparous women. Conversely, the effects of age on the increased risk of preeclampsia and severe preeclampsia were significantly greater among multiparous women. We observed no evidence of a heterogeneous effect of advanced age between primiparous and multiparous women on the risk of preterm birth, very preterm birth, extremely preterm birth, low birthweight, placental abruption, placenta previa, low Apgar score, low pH of the umbilical cord artery, or perinatal death.
The effects of advanced age on the increased risk of emergency cesarean section, preeclampsia, placenta previa, preterm birth, very preterm birth, extremely preterm birth, low birthweight, and low Apgar score were significantly greater among women who conceived without ART than among those who conceived with ART. For no outcome was the effect of advanced age stronger among women who conceived by ART than among those who conceived without ART.
Discussion
Using a large nationwide obstetrics database, we showed that pregnant women aged 45 years and older had a 1.5- to 2-fold greater risk of maternal morbidities, including cesarean section, preeclampsia, severe preeclampsia, and placenta previa, compared with younger women (aged 30-34). The increases in risk of neonatal outcomes such as preterm birth, low birthweight, SGA, and low pH of the umbilical cord artery were relatively small (3-20%) or even null. Furthermore, we found that the effect of advanced age differed by conception method and parity. The effect of age on the risk of adverse pregnancy and birth outcomes was generally smaller among women who conceived with ART than among those who conceived without ART. Regarding parity, the effect of age on the risk of cesarean section and emergency cesarean section was significantly greater among primiparous women, while its effect on the risk of preeclampsia was significantly greater among multiparous women.

Consistent with several previous studies, [8,9,12,13,19] our study showed a positive association between maternal age and the risk of preterm delivery. Although the estimated risk was largest for women aged 45 and older, the difference in risk compared with women aged 30-34 was still relatively small. Interestingly, we found that this effect of age differed by conception method: while the risk increased with age in women who conceived without ART, it appeared to decrease in women who conceived with ART. These findings are similar to those of an Australian study, which showed that increased maternal age was associated with an increased risk of preterm birth only in women who conceived without ART [19]. For very preterm and extremely preterm births specifically, similar effect modification by ART was observed, but with less precision owing to the smaller numbers of these outcomes. These findings may suggest that, while there is a positive association between maternal age and the risk of preterm birth, younger women who conceive through ART may have a higher risk of preterm birth than those who conceive through ART at older ages.
A similar pattern of effect modification by conception method was observed for risk of placenta previa. The effect of maternal age was stronger among those who conceived without ART. As the proportion of women conceiving with ART also increases with age, these results may also reflect the increased clinical risk of adverse birth outcomes among young women who needed ART to conceive.
In our study, older maternal age was significantly associated with the risk of cesarean section, including emergency cesarean section, with women aged 45 and older at the highest risk, consistent with findings from previous studies [9,13,19,32,33]. We found that the effects of increased age were significantly greater among primiparous women than among multiparous women, consistent with two previous studies [19,33]. This effect modification may be due to a higher prevalence of elective cesarean section by maternal request among primiparous women of advanced age [32,34]. It could also be due to primiparous women having a greater age-related increase in the risk of prolonged labor or non-reassuring fetal status requiring emergency cesarean section. However, as the JSOG-DB lacks detailed information on the indication for cesarean section, we could not verify these hypotheses in our study.
The risk of preeclampsia, including severe preeclampsia, was also increased among women of advanced age, especially among those 45 years and over in our study. The effect of maternal age on preeclampsia and severe preeclampsia was greater among multiparous women than among primiparous women, in contrast to a previous study of 1404 US women, which reported a similar effect of age on the risk of preeclampsia in both primiparous and multiparous women [35]. One possible explanation for the smaller effect of age observed in primiparous women in our study is the recent use of low-dose aspirin in women at high risk of preeclampsia, [36] which was likely more common in the current study than in the US study conducted 20 years ago. As both primiparity and advanced age are considered strong risk factors for preeclampsia, [37] primiparous women of advanced age may be more likely than multiparous women to receive such medication. If this is the case, our study suggests that such practice should be considered not only for primiparous women but also for multiparous women of advanced age.

Our success in assembling high-quality data on a large cohort of pregnant women and births conferred many strengths, including the ability to assess rare outcomes such as severe preeclampsia or placental abruption, as well as to conduct detailed analyses stratified by method of conception and parity.
Nonetheless, we acknowledge several limitations of our study. First, data on conception method were taken from records at the delivery hospitals, which in some cases may have been based on maternal self-report, leading to underreporting of ART use. Although previous studies have demonstrated a high positive predictive value of self-reported conception method relative to the actual method, we cannot exclude the possibility of misclassification bias [38,39]. Second, because our database was based on tertiary hospitals, our study population likely comprised a higher proportion of high-risk pregnancies, leading to potential underestimation of the effect of advanced age on adverse outcomes compared with the general population. To reduce this bias, we excluded women at higher risk, such as those with multiple pregnancies or fetal anomalies, and adjusted the multivariate analyses for maternal characteristics associated with both advanced-age pregnancy and the risk of adverse pregnancy outcomes, such as preexisting hypertension and abnormal glucose tolerance. Furthermore, we confirmed that our results did not change after adjusting for institution (data not shown). However, further population-based studies should be performed to replicate our findings and clarify their generalizability. Third, although oocyte donation is a method of conception that is more popular among women of very advanced age, [40] and women who conceive by oocyte donation are reported to have a higher risk of adverse birth outcomes, [41] our database did not include information on the type of ART. As ART becomes more popular and the choice of ART method becomes more complex, future studies using more detailed information on oocyte donation and ART are needed. Finally, our analyses were unable to take into account socioeconomic status (SES), as our database did not collect the relevant information. As pregnant women of advanced age with lower SES would be more likely to be multiparous and to conceive without ART, it is possible that SES biased our findings. Future studies with adequate measurement of SES should be conducted to check whether our findings can be replicated.
Conclusions
In conclusion, women of advanced age, especially those aged 45 years and older, have an elevated risk of adverse outcomes such as cesarean section, preeclampsia, placenta previa, preterm birth, and low birthweight. However, the magnitude of the association between age and adverse outcomes differed by parity and conception method. Such findings should be taken into account when providing antenatal counseling in clinical settings to women of very advanced maternal age.