Pit and fissure sealants in dental public health – application criteria and general policy in Finland

Background
Pit and fissure sealants (sealants) are widely used as a non-operative preventive method in public dental health in Finland. Most children under 19 years of age attend the community-organized dental health services free of charge. The aims of this study were to find out to what extent sealants were applied, what the attitudes of dental professionals towards sealant application were, and whether any existing sealant policies could be detected among the health centres or among the respondents in general. The study evaluated changes that had taken place in the policies used during a ten-year period (1991–2001).

Methods
A questionnaire was mailed to each chief dental officer (CDO) of the 265 public dental health centres in Finland, and to a group of general dentists (GDP) applying sealants in these health centres, giving a total of 434 questionnaires with 22 questions. The response rate was 79% (N = 342).

Results
A majority of the respondents reported applying sealants on a systematic basis for children with increased caries risk. The criteria for applying sealants and the actual strategies seemed to vary locally between the dentists within the health centres and between the health centres nationwide. The majority of respondents believed sealants had short- and long-term effects. The overall use of sealants decreased towards the end of the ten-year period. The health centres (N = 28) whose criteria included sealing over detected or suspected enamel caries lesions had a DMFT value of 1.0 (SD ± 0.49) at age 12 (year 2000) compared to a value of 1.2 (SD ± 0.47) for those health centres (N = 177) applying sealants by alternative criteria (t-test, p < 0.05).

Conclusion
There seems to be a need for defined guidelines for sealant application criteria and policy both locally and nationwide.
Occlusal caries management may be improved by shifting the sealant policy from the traditional approach of prevention to interception, i.e. applying the sealants over detected or suspected enamel caries lesions instead of sealing sound teeth.

Background
Dental care in Finland is provided both by private dentists and by community-organized public health centres. The public dental health centres are publicly funded and community-based, providing dental services to all age groups. About half of Finland's 5.2 million inhabitants use private dental services, while most children under 19 years of age use the public service. All age groups up to age 19 receive dental services free of charge, while other groups pay subsidised fees for treatment. The public health services in Finland are regionally distributed according to the population density of each area. All public dental health centres set their own health care criteria and strategies locally; however, the focus has been strongly on non-operative preventive care for children. Pit and fissure sealants (sealants) are used by the public health service, but neither national guidelines nor general sealant protocols have been published. Pits and fissures of permanent molars are vulnerable sites for caries lesions due to morphology and plaque accumulation [1,2]. Sealants applied to pits and fissures act as mechanical barriers between the enamel surface and the biofilm and, if retained completely, have been shown to be very effective in restricting the growth of bacteria. The studies of Handelman [3,4] from over 30 years ago and some later studies by Mertz-Fairhurst et al. [5,6] have shown that when caries lesions are sealed, the lesion does not progress. Until the mid-1980s, sealants were generally applied in a preventive manner solely to intact, unstained fissures with no suspected enamel caries lesions [7,8].
The present recommendations for sealant application [9][10][11][12] relate back to several international consensus reports from the 1980s and 1990s where sealing over enamel lesions and questionable fissures was suggested [13][14][15][16][17][18]. The selected study period was particularly interesting since, after 2001, new national legislation in Finland changed the focus of public dental health care, extending the system to cover the whole population and thus limiting the resources available for younger age groups. The changed legislation implied a considerable increase in costs for publicly funded oral health care. The aim of this study was to find out to what extent the earlier guidelines and recommendations (published up to 1995) [13][14][15][16][17][18][19][20][21] were adopted by the dental professionals in the Finnish dental public health system. Attitudes towards sealants, as well as changes in these attitudes, were recorded during the studied period from 1991 to 2001. Moreover, we wanted to determine the frequency of sealant use among other preventive or interceptive procedures in dental public health. Furthermore, our purpose was to determine whether uniform criteria and policies for sealing, or locally agreed sealant strategies, could be found. The specific aim was to find out whether a relationship between past caries experience and the sealant application protocol used could be found within the health centres.

Methods
A structured questionnaire was mailed to each of the 265 public dental health centres in Finland during the year 2001. The questionnaire covered demographic items, examination policies, sealant application protocol, changes in oral health practice over the studied period, attitudes towards sealant application and sealant efficacy, and the local DMFT index values of each health centre.
For the present study, the data were categorized as follows: … The questionnaire was initially piloted by three CDOs and was amended according to their suggestions before the study began. Public health centres collect data on the patients examined and treated: in Finland the DMFT index value is recorded for all patients at every examination, and the DMFT of all patients is monitored by age group. As the population density varies greatly regionally, we categorized the health centres into subgroups of 'large' and 'small' in order to get a representative cluster sample among the dentists applying sealants at the health centres. Consequently, public dental health centres with fewer than 7 dentists were classified as 'small' while all other health centres were classified as 'large'. The number of dental hygienists or dental nurses did not affect this classification. In all cases the questionnaires were mailed to the chief dental officer (CDO). An additional questionnaire was mailed to every 7th general practitioner (GDP) in the 'large' health centre group. These additional questionnaires were addressed especially to dentists applying sealants; these dentists were identified locally by each CDO. Thus dental surgeons and orthodontists, for example, who do not apply sealants, were excluded from this sample. The questionnaire was simultaneously sent by e-mail so the respondent could choose the most convenient way to reply. The 'large' dental health centres (N = 77) received a total of 254 questionnaires (each health centre receiving 2-29 questionnaires) while the 180 health centres that fell into the 'small' category received only one questionnaire each. A sample of 434 questionnaires was issued to the CDOs: 267 (62%) to be answered by the CDOs themselves, while 167 (38%) were to be forwarded to a GDP in the health centre.
Sealant application protocol: criteria and policies
In this study the systematic use of sealants was defined as follows: "Sealants are taken into consideration as a possible treatment mode and are usually applied to teeth according to particular criteria." Even though the final decision regarding sealant application was always made by the operator himself, information regarding any existing local agreement on the general guidelines for the criteria was requested from each respondent. Sealant application criteria and the policies used were recorded. Information on the treatment of choice was also requested in some specified situations, for example in the case of partially erupted molars at risk for dentin caries. Factors indicating low caries risk were scored in a question further evaluating the risk of dentin caries development in permanent molars. The type of sealant material used was recorded, as well as changes in the material choices between 1991 and 2001. Reasons for totally abstaining from sealant application, as well as the use of other preferred non-operative procedures in dentin caries prevention and management, were recorded.

Dental examination periods and the DMFT values
The check-up intervals, as well as the criteria for choosing either a fixed (annual) or an individual examination interval, were recorded (for 1991 and 2001, respectively). Based on the examinations of the 12-year-old children in 1991 and 2000, the DMFT index values were collected from each participating health centre. The change (decrease or increase) in the DMFT index values of each dental health centre between 1991 and 2000 was evaluated, as well as the relationship between systematic sealing and the DMFT value in 2000.
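For readers unfamiliar with the index used throughout this study, a child's DMFT score is simply the count of Decayed, Missing (due to caries), and Filled permanent Teeth, and a health centre's figure is the mean over the examined age group. The sketch below is a minimal illustration in Python; the patient records are invented for the example, not taken from the study data.

```python
def dmft(decayed, missing, filled):
    """DMFT index for one patient: decayed + missing (due to caries) + filled teeth."""
    return decayed + missing + filled

# Hypothetical records for four examined 12-year-olds: (decayed, missing, filled)
records = [(1, 0, 0), (0, 0, 2), (0, 0, 0), (1, 1, 0)]

# Health-centre level figure: mean DMFT of the examined children
mean_dmft = sum(dmft(*r) for r in records) / len(records)
print(mean_dmft)  # 1.25
```

A DMFT of 0 corresponds to a fully intact permanent dentition, which is why population means such as the 1.0-1.5 values reported here indicate low overall caries experience.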
To find out the impact of the interceptive policy (sealing over enamel caries) on the prevalence of dentinal caries, the year 2000 DMFT rates of those health centres that reported having applied this policy in 1991 were compared to the year 2000 DMFT rates of the rest of the health centres, which applied an alternative sealant policy. A t-test was used for the statistical analysis.

Profession of the operator
The respondents were asked to indicate whether the sealants in the health centre were applied by dentists or by dental auxiliaries, both in 1991 and 2001. In cases where dental hygienists or dental assistants applied sealants, the respondents were asked who was responsible for the final treatment decision, the dentist or the auxiliary.

Efficacy of sealants in relation to the costs
The respondents were asked to evaluate the outcome after sealant application (short- or long-term efficacy of sealants) as well as the costs implied by the sealant approach. The value of an intact tooth achieved by an efficient sealant program was estimated by a hypothetical question where the intact tooth was compared to an adequately restored one; the respondent was asked about his willingness to pay for the costs of sealant procedures.

Results
Of the 434 issued questionnaires, a total of 342 were returned after one re-issue to the non-responders, giving a response rate of 79%. The small health centres returned 85% (N = 153), CDOs of the large health centres 70% (N = 60) and the GDPs of the large health centres 77% (N = 129) of the questionnaires, respectively. For a cluster sample, where each health centre makes up a cluster, the response rate was 85% (N = 219). Of all the responses given, CDOs' replies comprised 58% (N = 199) and GDPs' 42% (N = 143), respectively. Four replies given by dental hygienists were included in the GDP group.

Systematic use of sealants
A majority of the CDOs (57%) reported systematic sealant application; among the GDPs this was the case in 48%.
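The group comparison described above can be reproduced from the reported summary statistics alone (interceptive-policy centres: mean DMFT 1.0, SD 0.49, N = 28; other centres: mean 1.2, SD 0.47, N = 177). The sketch below, in plain Python, computes the equal-variance two-sample t statistic from those figures; the helper name is ours, introduced for illustration only.

```python
import math

def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample t statistic using the pooled-variance (equal-variance) formula."""
    sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)  # pooled variance
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))                         # standard error of the difference
    return (mean1 - mean2) / se

# Reported DMFT summary statistics for 12-year-olds in year 2000
t = pooled_t(1.0, 0.49, 28, 1.2, 0.47, 177)
print(round(t, 2))  # -2.08
```

With 203 degrees of freedom the two-sided 5% critical value is about 1.97, so |t| ≈ 2.08 is consistent with the reported p < 0.05.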
A total of 44% of dentists working in small health centres, and 64% working in large health centres, reported systematic sealant application. On the issue of systematic sealing there was inconsistency in the respondents' opinions within particular health centres. In the large health centres the opinions varied among the CDOs, among the GDPs and between these two groups. Only in five out of the 14 largest health centres did all respondents give a consistent answer to the question of whether sealants were used systematically (Fig. 1: Systematic sealant use and the distribution rate of opinions within health centres). Among the respondents reporting systematic sealant use, 49% had agreed on a local sealant policy including criteria for when to apply sealants. In most cases this agreement was verbal; a written document on the intended sealant policy was found in only 15% of the small health centres and in 5% of the large ones. In the majority of health centres the agreement had been put into practice in 1990 or earlier; 32% of the respondents reported that the criteria had been amended afterwards.

Sealant application criteria and protocol
The respondents from small health centres applied sealants more extensively on suspected or detected enamel caries in 1991 than did those from large health centres. During the ten-year period, a distinct shift towards sealing over enamel caries lesions had taken place: the proportion of respondents using this criterion increased from 30% in 1991 to 37% in 2001, yet 44% preferred to seal only sound fissures in 2001 (Table 1). In permanent molars with suspected or detected enamel caries lesions on the occlusal surface, the most common choice of treatment in 2001 was to open the fissure up at the enamel level and to apply a sealant. The preceding eradication of enamel caries before applying a sealant was done almost as often as the application of topical fluoride to suspected occlusal surfaces.
The simple sealant application procedure had further lost its popularity in 2001 (Table 2). In those 28 health centres reporting application of sealants on suspected or detected enamel caries lesions in 1991, the DMFT value in 2000 was 1.0 (SD ± 0.49) at age 12, compared to a value of 1.2 (SD ± 0.47) for those health centres (N = 177) applying sealants by alternative criteria (t-test, p < 0.05). Most of the respondents applied sealants to both the first and second permanent molars in 1991 and 2001. The tendency not to choose selected target teeth for sealant application increased towards the end of the study period (Table 3).

Erupting molars at caries risk
A total of 41% of the respondents reported not having used any specific treatment policy for erupting molars at risk for caries in 1991. By 2001 the proportion of respondents lacking any special policy in such cases had increased to 52%. Of the respondents, 29% in 1991 and 25% in 2001, respectively, applied topical fluoride once to the fissure as a treatment. About one-fifth of the respondents sealed the visible part of the fissure in both 1991 (22%) and 2001 (19%). In such cases the preferred maintenance period was not changed by the majority of respondents. Re-scheduling the following examination to an earlier appointment was chosen by 42% of the respondents in both 1991 and 2001.

Examination policies
Most of the respondents (91%) reported examining children annually in 1991, irrespective of their dental status. In 2001 this was the case 17% of the time, while 78% reported individually determined intervals for dental examinations. In some health centres a dental hygienist first examined the patient but had the opportunity to consult the dentist before making the decision on whether or not to seal. If a previously applied sealant was found defective at examination, this did not usually lead to further maintenance.
Re-evaluation of sealants and necessary resealing was reported by 26% (1991) and 8% (2001) of the respondents; this was the second choice of treatment in both 1991 and 2001. Maintenance and re-maintenance of sealants and the sealed teeth was considered unnecessary by 29% of the respondents in 1991 and 37% in 2001, respectively.

(Table 2. The intended treatment of choice for suspected or detected enamel caries lesions on molar occlusal surfaces.)

Evaluation of dentin caries risk
The most obvious predictor reported to indicate low risk for dentin caries at fissures was the intact dentition of the child (52%). The second predictor of choice was a good observed level of dental hygiene (44%) and the third factor reported (41%) was the observation of gently sloping cusps of molar teeth (shallow fissures). Over one-third of the respondents thought that the absence of initial (enamel) caries was a good indicator, and over one-fifth that the absence of visible plaque or gingival bleeding would indicate low dentin caries risk. The reported indicators of least importance were gender, lack of visible calculus, lack of use of dental floss and the overall caries decline among children.

Sealant material
…

Reasons for refraining from the use of sealants
Ten percent of the respondents refrained totally from applying sealants. Those dentists or health centres that did not apply sealants at all gave several reasons for this. The main reason was that sealants were thought to have low cost-effectiveness (30%). One-fourth of the respondents (26%) thought that there was no further need to seal fissures since the local DMFT values had decreased to the low levels they were then. Nearly one-fifth (17%) shared the opinion that sealants were ineffective or that other methods were more effective than sealants in arresting enamel caries lesions at occlusal surfaces.
Application of sealants by dental auxiliaries
The estimated number of appointments with dental hygienists increased both with respect to independent decision-making and to the actual procedure of sealant application. In 1991 the majority of sealants were applied by dentists, but by 2001 the dentists were outnumbered by dental hygienists (Fig. 2). In health centres where sealants were applied by dental auxiliaries, the dentist examined the patient and set the initial diagnosis in 69% of the cases in 1991 and in 47% in 2001, respectively. A small minority of health centres reported dental assistants as the main group applying sealants.

Role of sealants in caries prevention and management
Most of the respondents estimated that sealants had both long- and short-term effects on dentin caries development (Table 4). When asked the hypothetical question of what should be done if the treatment concerned their own child, one-third of the respondents (N = 98) were willing to pay whatever was needed to cover the costs to ensure intact teeth, rather than receiving a filling free of charge. Of all the appointments for children up to age 19, the estimated proportion of appointments where sealants were applied decreased from 16% in 1991 to 10% in 2001. A vast majority (76%) of the respondents estimated that …

Discussion
Even though sealants have been widely used in Finland since 1970, neither uniform criteria for sealant application nor a trend regarding sealant policy could be found among the responses in the present study. Only a few health centres had defined the criteria and a policy for sealant application by local agreement.
Table 4. Definitions best describing the short- and long-term effectiveness of sealants.

Definition | % | N | Score
In some cases sealants can prevent the development of dentin caries for a lifetime | 70 | 209 | 1
In most cases sealants can prevent the development of dentin caries for some years | 42 | 145 | 2
In most cases sealants can prevent the development of dentin caries for a lifetime | 21 | 71 | 3
In some cases sealants can prevent the development of dentin caries for some years | 18 | 63 | 4
Sealants have more short- than long-term effects on caries development on the occlusal surfaces | 16 | 56 | 5
Sealants have more long- than short-term effects on caries development on the occlusal surfaces | 12 | 40 | 6
Other opinion | 3 | 10 | 7
Sealants do not have long-term effects on caries development on the occlusal surfaces | | |

This study gives information about the gradual changes that have taken place in dental public health in Finland during the 10-year period. A questionnaire study has its limits, since most of the responses are self-reported and do not give exact information. As some of the questions go back to a situation over ten years earlier, the information should be interpreted with caution. However, it can be assumed that the general trends of the attempted sealant approach stay in mind even if the details are forgotten; thus this study demonstrates attitudes towards sealant application and describes the sealant policies in practice. With susceptible fissures, sealing on enamel caries was the expected treatment of choice, since it has been shown to efficiently restrict the growth of bacteria in the occlusal lesion [1,3-6,22]. Although one-third of the respondents included suspected or detected enamel caries in the application criteria for sealants (Table 1), only a minority of them intended to place the sealant in an interceptive manner.
On the contrary, a vast majority of those respondents reporting sealing as an option for management of suspected or detected enamel caries (Table 2) would have applied a sealant only after first cleaning and widening the fissure, thus eliminating the initial lesion with a rotary instrument. The procedure resembles the application of a preventive resin restoration (PRR) as first suggested by Simonsen [23]: the susceptible fissures at occlusal surfaces are opened up with a small tapered fissure bur prior to restoring the cavity with diluted composite. A resin-based (RB) sealant is then applied over the edges of the filled cavity, also covering the other remaining pits and fissures on the occlusal surface. In terms of resource requirements, this treatment modality (PRR) is almost as time-consuming and personnel-demanding as a sealant restoration extending to the dentin [24], and is thus not comparable with interceptive sealant application. Moreover, opening the susceptible fissure is no longer considered necessary, since sealants have been shown to be effective when placed in a cariostatic manner, thus arresting the progression of an eventual enamel lesion [3-5,11,18]. Early caries management by sealant application is recommended in recent consensus statements [9,12]. Only in cases where the caries lesion has with certainty progressed to dentin is restorative treatment advocated, preferably in the form of a sealant restoration [16,18]. During the ten-year period a definite shift from annual examinations towards individually determined examination intervals was found. The number of dental auxiliaries, mainly dental hygienists, participating in both independent decision-making and the actual procedure of sealant application increased. We believe that this is a continuing trend, which allocates more of the preventive and interceptive procedures to dental hygienists. A majority of the respondents applied sealants to both the first and second permanent molars.
This is in line with the earlier studies of Bohannan et al. [1], who showed that the permanent first and second molars are several times more susceptible to decay than the premolars. Most respondents did not have any treatment policy for erupting molars at risk for caries, even though erupting molars are vulnerable to developing dentin caries due to plaque accumulation, as was reported by Carvalho et al. [2] for erupting first molars. Re-examination at six-month intervals was shown in the studies of Vehkalahti et al. [25] to be most beneficial for the erupting first permanent molars, since the fissures at risk could be sealed soon after eruption. Even a self-diagnosed high dentin caries risk of the erupting molars did not change the intended maintenance period for most of the respondents in this study. With erupting molars at risk for caries, the respondents frequently applied topical fluoride to the occlusal surfaces, even though fluoride varnish applied topically to the fissure did not markedly reduce the rate of caries in the studies of Holm et al. [26]. Bravo et al. [27] compared fissure sealing and fluoride varnish application on first permanent molars and found sealant application more effective in dentin caries prevention even though the fissures were sound prior to sealing. This is also in line with the meta-analysis of Hiiri et al. [28], who found pit and fissure sealants superior to topical fluoride. Increased caries risk led a majority of the respondents to consider sealant application to molar teeth, which is in line with the conclusions of Beauchamp et al. [12], who found that sealant application to high-risk individuals was effective. They also recommended periodic reconsideration of caries risk status. Kumar et al. [29] targeted sealant application to high-risk first molars in a school-based program and found this approach effective when compared to unsealed low-risk first molars.
Targeting preventive procedures according to individual risk assessment has been criticized as impractical and thus inefficient in dental public health by Burt [30]; he concluded that, as the risk assessment methods are imprecise, persons at risk cannot be adequately identified. Unacceptable precision in caries prediction in general was also found in the studies of Alanen et al., where dental clinicians tried to identify the high-risk subjects. They concluded, though, that some experienced clinicians were able to predict caries risk with high specificity and sensitivity levels [31]. Nevertheless, shifting the sealant policy towards a more interceptive procedure of arresting clearly observed changes (suspected or detected enamel caries lesions in the fissure) would at least partly overcome this dilemma. Glass ionomer cement (GIC) remained the material of choice for 14% of the respondents in 2001, even though several studies have found the retention rate of GIC sealants lower than that of RB sealants [22,32-34]. It is concluded that sealants are very effective in preventing dentin caries if completely retained on the tooth surface. To maintain sealant integrity, recall and maintenance of sealants and sealed teeth are necessary [9,10,12]. Re-examinations and resealing are also suggested in the studies of Whyte et al. [35], who had clinical success rates of 97-99% with a low resealing rate in their sealant study. They found that dentin caries formation occurred every year, and re-examinations and resealing were suggested for children at risk for dentin caries. There was an overall decrease in the DMFT values of the 12-year-old children from 1.5 to 1.2 during the period studied. Low DMFT index values were found more often in the health centres where sealants were applied over suspected or detected enamel caries. Therefore, caries management of incipient enamel lesions at the occlusal surfaces seems to be more effective than caries prevention at these sites.
One reason why dentists are wary of applying sealants is probably that they are unwilling to leave carious tissue under a sealant, or that they lack knowledge of arresting incipient caries lesions by simple sealant application. With defined criteria and a protocol for sealant application, the dentists in public health centres can probably be encouraged to use sealants more often and to concentrate on arresting lesions. The results of the present study may be affected by the fact that in the year 2000 the children were no longer examined at age 12 as a total age group. As caries shows a skewed distribution, the children considered to be in the high-risk group are probably examined and treated more often than others not at risk. Even though the DMFT values (in 1991 and 2000) are thus not directly comparable with each other, they do reflect the general trend and the changes found in each health centre. The present study showed vast variation in the adopted sealant policies; the DMFT data were in line with earlier recommendations favouring the interceptive approach. Rather than caries prevention of sound teeth, the use of sealants should be targeted at non-cavitated enamel lesions in order to arrest the growth of the bacteria and thus prevent the initial lesions from progressing into dentin caries. As a substantial amount of resources, time and effort is required for preventive/interceptive dental procedures, it is important to use those resources as effectively as possible.

Conclusion
The present study suggests that an appropriate sealant policy may have an impact on the decline of dentin caries. The use of sealants declined during the studied period; however, the majority of respondents still applied sealants in 2001, but not in an interceptive manner, even though this had been suggested by evidence-based recommendations since the 1980s.
Expressive intent, ambiguity, and aesthetic experiences of music and poetry

A growing number of studies are investigating the way that aesthetic experiences are generated across different media. Empathy with a perceived human artist has been suggested as a common mechanism [1]. In this study, people heard 30 s excerpts of ambiguous music and poetry preceded by neutral, positively valenced, or negatively valenced information about the composer's or author's intent. The information influenced their perception of the excerpts: excerpts paired with positive intent information were perceived as happier and excerpts paired with negative intent information were perceived as sadder (although across intent conditions, musical excerpts were perceived as happier than poetry excerpts). Moreover, the information modulated the aesthetic experience of the excerpts in different ways for the different excerpt types: positive intent information increased enjoyment and the degree to which people found the musical excerpts to be moving, but negative intent information increased these qualities for poetry. Additionally, positive intent information was judged to better match musical excerpts and negative intent information to better match poetic excerpts. These results suggest that empathy with a perceived human artist is indeed an important shared factor across experiences of music and poetry, but that other mechanisms distinguish the generation of aesthetic appreciation between these two media.

Introduction
Ambiguity, or the capacity to sustain multiple interpretations, has been identified as a central characteristic of art [2]-[5]. Studies in the visual domain have produced contradictory findings, some suggesting that ambiguity elevates artistic appreciation [6], others suggesting that a moderate level of ambiguity is preferred [7], and still others suggesting that artistic appreciation increases when ambiguity is reduced or eliminated [8], [9].
But these studies have used elements like referential titles and stylistic statements to disambiguate, targeting the cognitive underpinnings of aesthetic appreciation. Aesthetic appreciation also depends on expressive interpretation: suppositions about the artist's emotional and communicative intent [10], [11]. For many artistic domains, such as …

The psychology and philosophy of aesthetics have often distinguished between perceived and felt emotions, that is, the recognition that a particular work is (for example) expressive of sadness versus the actual induction of this emotion in the listener, viewer, or reader [39]. Drawing attention to the expressive intent of the person who created the artwork might make it likelier for a perceiver to adopt an empathetic stance, resulting in emotions that are felt in addition to merely perceived [40]. The degree to which this experience of felt rather than merely perceived emotions is pleasant might depend on whether the emotions are sad or happy. People have puzzled for centuries over the question of why people like to listen to sad music or read sad poetry [41]. Research on the enjoyment of sad music is summarized by Sachs, Damasio, and Habibi [42]. Building on data from Taruffi and Koelsch [43], Schubert [44] developed a theory of the enjoyment of sadness in aesthetic contexts. A recent paper by Brattico et al. [28] examined the neural underpinnings of this phenomenon, and Menninghaus et al. [45] investigated the enjoyment of sad literature. Aesthetic framing can affect the perceived valence of experiences of disgust [46] and anger [47], and the experience of being moved can affect the valence of experiences of sadness during film viewing [48], [49] and music listening [50].
Most laboratory studies show that people prefer happy to sad music [51], [52], although this preference disappears in certain circumstances, such as when the music is presented incidentally to a difficult task [53]. Given the potential overlap between affective mechanisms in music and literature, the question arises of whether happy artworks might also be preferred in literary domains such as poetry.

The research reported here tackles three areas of interest at once: comparative aesthetics; the relationship between extrinsic information and aesthetic experience; and responses to aesthetic ambiguity. Participants listened to 30 s excerpts of music or poetry previously categorized as expressively ambiguous; that is, excerpts that could be understood as positively or negatively valenced. They were told that information existed about the composer's or poet's intentions for each excerpt. This information was presented on screen before each excerpt. One-third of the excerpts were prefaced by intent information that was negatively valenced, one-third by intent information that was positively valenced, and one-third by information that was neutral in valence. Which participant heard which excerpt paired with which description was systematically varied using a Latin square design. The same descriptions used to preface the musical excerpts for one half of participants were used to preface the poetry excerpts for the other half, and vice versa. After each excerpt, participants reported how happy each excerpt was, how sad it was, how much they enjoyed it, how moving they found it, and how well the excerpt conveyed the composer's or author's intention. The construct of being moved has been well investigated by Kühnast et al. [54] and Menninghaus et al. [55].
From the perspective of comparative aesthetics, this study seeks to understand whether extrinsic information about the artist's intent affects aesthetic appreciation similarly for musical and poetic excerpts. Whereas poetry uses words with semantic meaning as material, music's semantic resonances are famously vague [3]. If expressive disambiguation affects aesthetic appreciation similarly for the two artistic media, it would suggest that the relationship between aesthetic appreciation and perceived expressive valence operates in a domain-general way, not dependent on the nature of the semantics employed by the medium. If it affects aesthetic appreciation differently in poetry and music, it would suggest that the material of the medium influences this relationship.

From the perspective of investigating the relationship between extrinsic information and aesthetic experiences, this study asks whether information about an artist's expressive intent can influence the way a piece of music or poetry is processed affectively. If positively valenced information leads participants to experience excerpts as happier, and negatively valenced information leads participants to experience excerpts as sadder, it would suggest that people can integrate verbal information provided before an aesthetic experience into their emotional processing of the art. If the presentation of positively or negatively valenced information impacts the evaluation of the excerpt's enjoyability or movingness, then it would suggest that aesthetic experiences can be at least in part a function of extrinsic information about an excerpt's emotional tenor, demonstrating a role for cultural messaging beyond the intrinsic content of a work of art.
From the perspective of the relationship between ambiguity and aesthetic appreciation, this study investigates whether people prefer the rich multiplicity of meanings in an expressively ambiguous excerpt, or the direct communication of an excerpt that is unambiguously positive or negative. By varying the description paired with individual excerpts, this study manipulates the ambiguity of music and poetry while controlling for the actual content of the excerpts. If people prefer and are more moved by excerpts when they are prefaced by neutral intent information, it would suggest that people value the presence of expressive ambiguity in aesthetic experiences. If people prefer and are more moved by the excerpts when they are prefaced by a positive or negative description, it would suggest that people value aesthetic experiences that arise out of excerpts with a single expressive cast.

Participants

The participants in this study were 118 students (37 male) recruited from general psychology classes at the University of Arkansas. Their mean age was 19.5 (SD = 3.1). Four were music majors, and one was an English major. They reported listening to an average of 16.4 hours of music each week (SD = 15.7), and reading poetry for an average of 2.1 hours each week (SD = 5.3). They volunteered to participate in exchange for partial fulfillment of a course research requirement.

Materials & apparatus

Expressively ambiguous excerpts of music and poetry were selected as the primary stimuli of interest for this study. A smaller number of expressively unambiguous excerpts (clearly positively or negatively valenced) were selected to enhance believability of the description-excerpt pairings. The music excerpts, listed in S1 Appendix, were drawn from the stimuli used by Hunter, Schellenberg, and Schimmack [15].
Their stimuli were excerpts of approximately 30 s that either straightforwardly conveyed positive or negative affect via consistent structural cues (major mode and fast tempo for positive; minor mode and slow tempo for negative) or conveyed ambiguous affect by mixing structural cues (major mode and slow tempo, or minor mode and fast tempo). These stimuli spanned a variety of musical styles, but all were instrumental excerpts featuring no lyrics or vocal part. Of Hunter et al.'s excerpts, we used four positive, four negative, and 18 ambiguous excerpts, selected as likely to be unfamiliar to a population of college students. Like the music excerpts, the poetry excerpts, listed in S2 Appendix, were selected from both classic (e.g., Walt Whitman) and contemporary sources (e.g., The New Yorker), and were edited to last approximately 30 s. These excerpts were read and recorded by a professional actor instructed to speak with a neutral, affectively uninflected tone. Forty candidate recorded poetry excerpts were presented to a group of 28 participants who did not take part in the main study. They were asked to rate positivity, negativity, ambiguity (between positivity and negativity), familiarity, and enjoyment for each; of the excerpts people reported to be most unfamiliar, we selected the four most positive, four most negative, and 18 most ambiguous to use in the main study. Descriptions, listed in S3 Appendix, were written for the music and poetry excerpts to convey positive intentions (e.g., "The author/composer wrote this poem to express his passion and devotion for his love"), negative intentions (e.g., "The author/composer wrote this piece to express mourning over the death of a family member"), and neutral intentions (e.g., "The author/composer wrote this poem to experiment with different writing techniques"). In the principal manipulation of interest, ambiguous excerpts were paired with positive, negative, or neutral intent descriptions.
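The pilot-based selection procedure described above (keep the least familiar candidates, then take the four most positive, four most negative, and 18 most ambiguous) can be sketched as follows. The rating field names and the familiarity cutoff are illustrative assumptions, not the authors' actual pipeline.

```python
# Hedged sketch of the pilot-based stimulus selection. Field names
# ("familiarity", "positivity", etc.) and the familiarity cutoff are
# illustrative assumptions, not taken from the paper.

def select_excerpts(candidates, familiarity_cutoff=3.0):
    # Keep only excerpts that pilot raters reported as unfamiliar.
    pool = [c for c in candidates if c["familiarity"] <= familiarity_cutoff]
    by_pos = sorted(pool, key=lambda c: c["positivity"], reverse=True)
    by_neg = sorted(pool, key=lambda c: c["negativity"], reverse=True)
    by_amb = sorted(pool, key=lambda c: c["ambiguity"], reverse=True)
    positive = by_pos[:4]
    negative = by_neg[:4]
    # Draw ambiguous excerpts from what remains after the unambiguous
    # picks, so no excerpt is used twice.
    used = {id(c) for c in positive + negative}
    ambiguous = [c for c in by_amb if id(c) not in used][:18]
    return positive, negative, ambiguous
```

The same sort-and-slice logic would apply whatever the actual rating scales were; only the counts (4 + 4 + 18) are taken from the paper.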
To preserve believability, positive excerpts were paired only with positive or neutral intent descriptions, and negative excerpts were paired only with negative or neutral descriptions. These excerpts were included to enhance believability of the description-excerpt pairings, not as comparisons of interest. Six different lists of description-excerpt pairings for the ambiguous excerpts were created using a Latin square design. For each group of participants, half the descriptions were paired with poetry excerpts and half with music excerpts. For the next group of participants, the pairings were reversed; that is, the descriptions that had been paired with poetry excerpts for the previous group were paired with music excerpts, and vice versa. Across the entire experiment, then, the same descriptions were used for both poetry and music excerpts, but for no single participant was the same description used twice.

Design

Each participant experienced positive, negative, and neutral music and poetry excerpts, along with positive, negative, and neutral intention descriptions. Each participant was randomly assigned to one of the six lists of stimulus pairings. For the critical ambiguous excerpts, the design was a 2 (excerpt type: music, poetry) × 3 (intention description: positive, negative, neutral) repeated-measures study.

Procedure

Participants were tested individually in a 4' × 4' booth (WhisperRoom Sound Isolation Enclosure; MDL 4848E/ENV). MediaLab software [56] was used to present instructions and intention descriptions visually on a 22" Dell P2212H monitor, to present excerpts auditorily over Sennheiser HD 600 open-air, around-ear headphones, and to collect responses via a computer keyboard. This study was approved by the University of Arkansas Institutional Review Board (protocol #15-04-664). Before each session, participants provided written informed consent by signing a form.
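The six counterbalancing lists described above can be sketched as a cyclic Latin-square assignment: three rotations of the intent conditions, crossed with the two ways of splitting descriptions between music and poetry. The rotation rule below is an assumed implementation; the paper states only that a Latin square design was used.

```python
# Hedged sketch of the six counterbalancing lists for the 18 ambiguous
# excerpts per medium. Each list maps an excerpt to (condition, swap),
# where "swap" stands in for which half of the descriptions goes to
# music vs poetry for that group. The cyclic rotation is an assumption.

CONDITIONS = ["positive", "negative", "neutral"]

def build_lists(excerpts):
    lists = []
    for swap in (False, True):  # description-to-medium assignment
        for shift in range(len(CONDITIONS)):  # condition rotation
            pairing = {
                ex: (CONDITIONS[(i + shift) % len(CONDITIONS)], swap)
                for i, ex in enumerate(excerpts)
            }
            lists.append(pairing)
    return lists
```

Under this scheme each list pairs one third of the excerpts with each condition, and across the six lists every excerpt appears under every condition, matching the counterbalancing goal described in the text.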
Once they proceeded to the experimental session, participants first answered a set of demographic questions. Then they were presented with a block of poetry excerpts or a block of music excerpts. The order of presentation of these two blocks was randomized for each participant. At the start of each block participants were told that they would hear a number of poetry (or music) excerpts. Then they were told that these poems (or pieces of music) "are special, because in each case we know a fact about the author's (or composer's) intent or circumstances while writing them. For each excerpt, we'll tell you this fact before presenting the poem (or piece)." They were informed that they should pay as close attention as possible, and that questions would follow each excerpt. Next, they performed a full practice trial. Each of the 52 experimental trials started with the onscreen presentation of the intent description. Next, the recording of the poetry or music excerpt was played while the description remained onscreen. Finally, five questions were presented in the same order, each requiring the participant to select a response along a 7-point scale (1 = not at all; 7 = maximally):

• How happy did this excerpt seem?
• How moving did this excerpt seem?
• How sad did this excerpt seem?
• Did the excerpt match the intent or circumstances of the composer?
• How much did you enjoy this excerpt?

Within each experimental block (poetry or music), the individual trials were presented in random order.

Data exclusion

Due to an error in preparing the experiment, data for one of the positive music stimulus excerpts were not recorded.

Modeling & analytic details

Linear mixed modeling of dependent measures was carried out with the R [57] package lme4 [58] using maximum likelihood estimation. Both participants and stimuli were treated as random-effects variables.
We first fit models with maximal random-effects structure that included random slopes for each of the fixed factors within each participant and stimulus [59]. If the maximal model failed to converge, the random-effects structure was simplified incrementally by removing one random slope at a time: the one that explained the least variance in the model that did not converge. Where p-values are reported, they are based on df estimated using Satterthwaite's approximation as implemented by the lmerTest package [60], and are reported rounded to the nearest tenth.

Intent description match for positive & negative excerpts. To verify that the positive and negative excerpts respectively matched positive and negative intention descriptions better than ambiguous excerpts matched any kind of intention description, we compared match ratings: ratings between positive excerpts and intentions (M = 5.21, SE = 0.14) and between negative excerpts and intentions (M = 5.46, SE = 0.15) were much higher than between ambiguous excerpts and any kind of intention description (M = 3.95, SE = 0.09); for positive vs ambiguous, t(93.6) = 8.67, p < .001, and for negative vs ambiguous, t(74.7) = 10.05, p < .001. These match effects did interact significantly with excerpt type (i.e., poetry vs music), F(2, 83.7) = 3.37, p = .04, reflecting that negative poetry excerpts paired with negative intentions were rated as especially well matched relative to ambiguous excerpts. There was no significant difference between music and poetry on match ratings overall, confirming that the intent descriptions were equally well suited to the music and poetry samples.

Intent description match for ambiguous excerpts. There was significant variability in how well the intention descriptions were perceived to match the critical ambiguous excerpts.
The difference in match across intention descriptions was significant, F(2, 176.5) = 3.40, p = .04, with slightly higher match ratings for positive intention descriptions (M = 4.05, SE = 0.10) than for negative (M = 3.94, SE = 0.10) and neutral intention descriptions (M = 3.89, SE = 0.10). Match scores for music (M = 4.10, SE = 0.12) were slightly higher than for poetry (M = 3.82, SE = 0.12), although this difference did not reach significance, F(1, 44.0) = 4.03, p = .051. The interaction between intention description type and excerpt type (i.e., poetry vs music) was very strong, F(2, 125.0) = 132.39, p < .001, reflecting that for music, positive and neutral intention descriptions were better matched than negative descriptions, whereas the opposite pattern emerged for poetry. Because of this interaction, after presenting the effects of the critical factors of intention description and excerpt type on the aesthetic outcome measures of enjoyment, happiness, sadness, and movingness, we consider the possibility that perceived match between intention description and stimulus mediates aesthetic experience.

Aesthetic experience

For the analyses presented in this section, only ratings provided for ambiguous excerpts are analyzed. They were examined as a function of intention description and excerpt type, as well as their interaction.

Enjoyment. Enjoyment ratings appear in Fig 1. There was no significant main effect of intention description type on enjoyment, F(2, 66.9) = 1.93, p = .15, but there was a clear interaction of intention and excerpt type, F(2, 54.9) = 15.51, p < .001; negative intentions increased enjoyment of poetry relative to the neutral descriptions, but decreased enjoyment of music relative to the neutral condition. There was also a large enjoyment advantage for music (M = 4.08, SE = 0.13) over poetry (M = 3.31, SE = 0.14), t(49.8) = 5.08, p < .001.

Happiness. Happiness ratings appear in Fig 2.
There was a clear, predictable effect of intentions, F(2, 154.0) = 199.46, p < .001, such that positive intentions led to an increase in happiness ratings relative to neutral descriptions, and negative intentions led to a decrease in happiness ratings relative to neutral descriptions. Music (M = 4.26, SE = 0.16) elicited higher happiness ratings than poetry overall (M = 2.75, SE = 0.16), t(43.2) = 6.47, p < .001. The two factors did not interact significantly (F < 1).

Sadness. Sadness ratings mirror those of happiness, as depicted in Fig 3. Again, there was a clear, predictable effect of intentions, F(2, 71.4) = 106.19, p < .001, such that negative intentions led to an increase in sadness ratings relative to neutral descriptions, and positive intentions led to a decrease in sadness ratings relative to neutral descriptions. Poetry (M = 3.72, SE = 0.15) elicited higher sadness ratings than music overall (M = 2.50, SE = 0.15), t(38.5) = 5.31, p < .001. The two factors did not interact significantly (F < 1.5).

Movingness. Movingness ratings showed a distinct difference between the way music and poetry were experienced depending on the intention description's valence (see Fig 4). There was a significant effect of intention description type on movingness, F(2, 57.5) = 16.82, p < .001; both negative (M = 3.73, SE = 0.11) and positive (M = 3.80, SE = 0.11) intentions led to higher movingness ratings than did neutral descriptions (M = 3.52, SE = 0.11). This pattern is qualified by an interaction of intention and excerpt type, F(2, 54.4) = 12.15, p < .001; the interaction reflects that positive intentions increased movingness for music but not for poetry, and negative intentions increased movingness for poetry but not for music. Music (M = 3.89, SE = 0.13) was rated as more moving than poetry overall (M = 3.48, SE = 0.13), t(50.2) = 2.48, p = .02.

Mediation of enjoyment by match.
Following Selig and Preacher's [61] Monte Carlo method for assessing mediation, we carried out analyses to test whether the interaction of intention description with excerpt type on enjoyment was mediated by the perceived match between ambiguous excerpts and the intention description they were paired with. To do this, the analyses reported above for enjoyment were repeated with match as an additional predictor (i.e., covariate) in the regression model. Recall that the interaction of intention description with excerpt type on perceived match in ambiguous excerpts (see "Excerpt selection & description checks") has already been established, a critical step in conventional mediation analyses [62]. For enjoyment, the interaction of intention description with excerpt type is decomposed in Table 1. This table displays the unmediated effects of positive and negative intention descriptions relative to neutral descriptions, separately for music and poetry. Relative to neutral intention descriptions, negative descriptions reduced enjoyment for music but increased enjoyment for poetry. Controlling for match, the intention description by excerpt type interaction for enjoyment was no longer significant, F(2, 183.5) = 1.82, p = .16.

Discussion

From these data, it is clear that the presentation of information about the artist's expressive intent influenced people's emotional experiences of both music and poetry. The presentation of positively valenced information caused people to experience excerpts as happier and less sad. The presentation of negatively valenced information, on the other hand, caused people to experience excerpts as sadder and less happy. These effects were robust and of a similar size for both music and poetry. Given that both domains seem susceptible to the impact of information about expressive intent, an interesting question arises about whether similarities exist between the mechanisms that give rise to aesthetic experiences in both domains. Menninghaus et al.
[45], for example, suggest that parallelistic structure, a feature long identified as important to music [63], also shapes aesthetic response for poetry. The ability of musical and poetic excerpts to so easily take on the emotional tenor suggested by brief statements of authorial or compositional intent confirms prior suggestions [36] that empathy with a perceived human producer is an important part of the emotional experience of art across various media. It also suggests that people are able to integrate verbal information into the aesthetic experience with comparable success regardless of whether the materials of the medium are themselves verbal; that is, people could integrate intent descriptions with experiences of a language-based art (poetry) as well as they could with experiences of a non-language-based art (music). The ability of information provided before an aesthetic experience to alter the way it is processed has been demonstrated by Kroger and Margulis [17] for information about quality, by Margulis [18] for information about content, and by Brattico et al. [64] for task: their ERP evidence demonstrates that people apprehend tonal structure differently depending on whether they are tasked with making a cognitive judgment (whether a particular chord is correct or incorrect) or an affective one (whether they like the chord or not). Future work that further traced the timeline and mechanisms by which top-down information of this sort impacts the perception and evaluation of aesthetic entities would be especially welcome. Although people assimilated expressive information similarly for both media (positive and negative descriptions had similar effects on happiness and sadness ratings for music and poetry), the baseline happiness and sadness ratings for the two media were clearly different. Regardless of intent information type, musical excerpts were perceived as happier and less sad, and poetry excerpts were perceived as sadder and less happy.
Fascinatingly, this increase in perceptions of sadness when words are involved might extend even to music with lyrics: Brattico et al. [65] showed that happy music without lyrics was perceived as more positive than happy music with lyrics. Although stimulus properties could be an explanatory factor for the results, one possible implication of this difference is that people read more happiness into ambiguous music and more sadness into ambiguous poetry; future work could investigate whether this effect holds for more stimuli and more populations. If so, a possible explanation, given past work showing a general preference for happy over sad music [51], [52], might be that people listen more frequently to happy music, leading them to use base-rate information to assimilate ambiguous excerpts into their ordinary experience by assuming they are happy. Given other findings from this experiment suggesting that people prefer poetry when it has been disambiguated as sad, it might be that people more frequently read sad poetry, leading them to assimilate ambiguous excerpts into their ordinary experience by assuming they are sad. Neuroimaging work shows selective activation in parts of the auditory cortex when listening to happy rather than sad music, suggesting that people may pay more attention to the sensory signal when listening to music they identify as happy [66]. Although positively and negatively valenced intent information influenced emotional experience (happiness and sadness ratings) similarly for music and poetry, it influenced the aesthetic dimensions of the experience (enjoyment and being moved) differently. Positive intent information elevated enjoyment and movingness ratings for music, but negative intent information elevated enjoyment and movingness ratings for poetry.
In other words, people's most powerful aesthetic experiences were reserved for music that had been expressively disambiguated as positive, but for poetry that had been expressively disambiguated as negative. In general, people reported enjoying the ambiguous musical excerpts more than the ambiguous poetry. They were also generally more moved by the music than by the poems, with the exception of excerpts prefaced by negative intent information; only in this case did people find the poems as moving as the music. Together, these findings suggest that people want their music happy but their poetry sad. This difference may reflect a distinction in the typical social function of these art forms. Music is often listened to in a group setting, and its capacity to facilitate social bonding [67] has been identified as a key characteristic. Music can elicit a sense that boundaries have been dissolved and that the listener is physically participating with the sound in some virtual, imagined way [68], [63]. Poetry, on the other hand, is often read in solitude, with the goal not of euphorically transcending boundaries and syncing with a group (as may be the case for some music listening), but rather of achieving insight into human experience [69]. (Note, however, that these are broad generalizations; there are no doubt listeners who approach music in the personal and intimate manner that is more typical of poetry, and vice versa.) Also broadly speaking, people tend to view poetry as challenging and edifying, and may have perceived the excerpts preceded by negative intent information as more serious, and more capable of fulfilling this role. The participants in this study were drawn from students in a general psychology class at an American state university, and were not selected for having any special interest or expertise in either music or poetry.
Given the evidence for how differently experts in these domains process information in their area of expertise [70], and the different goals and criteria such listeners would bring to the experience, it would be interesting to run the same study using expert poets and musicians as participants. The positive intent information may have mapped more readily onto music's most widely presumed function: to elevate mood [71]. By contrast, the negative intent information may have positioned the poetry to serve as the kind of deep or thought-provoking artwork people expect from this genre [72], [73]. This interpretation is bolstered by the fact that people thought the positive intent information matched the musical excerpts best, but that the negative intent information matched the poetry excerpts best. For neither category of artwork were the ambiguous excerpts preferred. On the contrary, disambiguation in the direction most associated with the genre (positive for music, negative for poetry) produced the strongest increases in enjoyment and movingness. Although theoretical approaches have often extolled the value of ambiguity in creating rich, relevant, and multiply interpretable works of art, this benefit does not seem to extend to expressive ambiguity. Instead, people seem to report more satisfying aesthetic experiences in response to works of art whose primary expressive cast is clear. Since valenced information may make it easier to empathize with the author or composer, this finding supports theories that attribute aesthetic power in part to empathy with a perceived human creator [37], [40], [1]. The opposing roles of positive and negative intent information in music and poetry, however, suggest that the way empathy with a perceived human artist feeds into aesthetic appreciation differs across media.
In the case of music, it may allow the listener to relax and experience a sense of shared subjectivity with an implied social group, a process that would likely be more difficult if the expressive tenor were negative, since negative emotions in group settings tend to raise anxiety that could interfere with perceptions of successful bonding. In the case of poetry, it may allow the reader to formulate a sense that intimate sensibilities have been conveyed directly from one person to another, from the poet to the reader, without invoking an imagined larger group. Yet, because multiple modes of aesthetic attending are possible, this potential explanation requires further exploration. Together with other recent work, including [24], this study argues for the importance of further work on domain generality and domain specificity in aesthetic attending.
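The Selig and Preacher Monte Carlo procedure used in the mediation analysis above can be sketched in a few lines: sample each path coefficient from a normal distribution defined by its point estimate and standard error, form the product on each draw, and read off a percentile confidence interval. The coefficients in the usage note are illustrative values, not the paper's estimates.

```python
import random

# Hedged sketch of Selig & Preacher's Monte Carlo test for an indirect
# (mediated) effect. If the resulting interval excludes zero, the
# indirect effect a*b is deemed significant. All numeric inputs are
# illustrative assumptions, not estimates from the paper.

def monte_carlo_indirect_ci(a, se_a, b, se_b,
                            n_draws=20000, alpha=0.05, seed=1):
    rng = random.Random(seed)
    # Distribution of the product of the two sampled path coefficients.
    products = sorted(rng.gauss(a, se_a) * rng.gauss(b, se_b)
                      for _ in range(n_draws))
    lo = products[int((alpha / 2) * n_draws)]
    hi = products[int((1 - alpha / 2) * n_draws) - 1]
    return lo, hi
```

For example, with hypothetical paths a = 0.5 (SE = 0.05) and b = 0.4 (SE = 0.05), the 95% interval lies well above zero; with both paths near zero and large standard errors, the interval straddles zero and mediation would not be claimed.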
Male mice retain a metabolic memory of improved glucose tolerance induced during adult onset, short-term dietary restriction

Abstract

Background: Chronic dietary restriction (DR) has been shown to have beneficial effects on glucose homeostasis and insulin sensitivity. These factors show rapid and robust improvements when rodents are crossed over from an ad libitum (AL) diet to DR in mid life. We aimed to determine whether the beneficial effects induced by short-term exposure to DR can be retained as a 'metabolic memory' when AL feeding is resumed (AL-DR-AL) and, vice versa, whether the effects of long-term DR can be reversed by a period of AL feeding (DR-AL-DR). C57BL/6 male and female mice were used to examine sex differences (N = 10/sex/group). Mice were fed AL or DR from 3 until 15 months (baseline), and each dietary crossover lasted approximately 5 months.

Results: In females, body and fat mass were proportional to the changes in feeding regime, and plasma insulin and glucose tolerance were unaffected by the crossovers. However, in male mice, glucose tolerance and plasma insulin levels were reversed within 6 to 12 weeks. When males returned to AL intake following 5 months of DR (AL-DR-AL), body mass was maintained below baseline, proportional to changes in fat mass. Glucose tolerance was also significantly better compared to baseline.

Conclusions: Male mice retained a metabolic memory of 5 months of DR feeding in terms of reduced body mass and improved glucose tolerance. This implies that some of the beneficial effects induced by a period of DR in adult life may persist even when free feeding is resumed, at least in males. However, under continuous DR, lifespan extension was more prominent in females than in males.

Background

In mammals, pancreatic β cells secrete insulin in proportion to the concentration of circulating glucose. Insulin then stimulates glucose uptake into skeletal muscle and adipose tissue and decreases hepatic glucose production.
Defects in insulin secretion by the β cells can lead to hyperglycemia and the onset of type 2 diabetes [1]. Chronic dietary restriction (DR) in C57BL/6 inbred mice has been shown to have beneficial effects on glucose tolerance [2,3]. Additionally, improved insulin sensitivity and reductions in plasma insulin during DR have been linked to the life-extending effects of DR in mice [4,5]. Although chronic DR is known to lead to improvements in glucose tolerance and insulin sensitivity, perhaps more relevant for humans is whether only a short period of DR has the same effects. Previous data show that this is indeed the case: a short period of DR (an ad libitum (AL)-to-DR crossover) improves these parameters in people with type 2 diabetes [6] and in rodents [7-9]. However, very little is known about whether the beneficial effects induced by short-term exposure to DR persist as a 'metabolic memory' when AL feeding is resumed (AL-DR-AL) and, vice versa, whether the effects of long-term DR can be reversed by a period of AL feeding (DR-AL-DR). We performed these crossovers in laboratory mice to determine the effects of such switches in feeding regime on body composition as well as glucose and insulin sensitivity. The majority of studies show that male rodents are more insulin resistant than females [10,11]. Therefore, we also aimed to determine whether there was sexual dimorphism in glucose tolerance and insulin sensitivity in response to the dietary crossovers. We show striking sexual dimorphism, whereby glucose tolerance and insulin sensitivity of female mice were relatively unperturbed by the crossovers. However, in males these parameters improved within 6 to 12 weeks of the crossover to DR, and vice versa. Improved glucose tolerance and reduced body mass were retained in males after returning to AL feeding following 5 months of DR, suggesting that several of the potentially beneficial effects of a short period of DR were retained.
Results

Body mass and body composition

C57BL/6 mice were randomly assigned to a DR or an AL group at 3 months of age (day 0 of the experiment). The majority of animals remained in their group until they were killed for experiments at predetermined time points or died naturally. In addition, ten mice per group were assigned to a double-crossover experiment, with the first crossover at day 365 (15 months of age) and the second (reverse) crossover at day 505 (about 20 months of age). These mice were then killed at 25 months of age. We first compared body mass trajectories in the crossover groups to large single-treatment control cohorts (AL-only or DR-only groups, Figure 1). Food intake in the crossover males under AL was higher than the AL-only group average, resulting in higher body mass after 1 year of the experiment (Figure 1B) and, on average, a higher degree of restriction (around 45%). However, rates of body mass change before the first crossover were not significantly different between the groups selected for crossover and the control cohorts (Figure 1C,D). The body weights of male AL control mice peaked slightly before those of female AL controls.

Figure 1 caption: Prior to the experiment starting (day −7), when mice were 3 months old, there was no difference in body mass or food intake between the groups (P > 0.05). A 40% food restriction was initiated in the dietary restricted group on day 0. Data represent means ± SEM from N = 6 to 10 mice/group in the crossover groups and from 280 (at start) to 48 (at the end) mice in the control groups. (C,D) Rates of body mass change with time under the indicated treatments in females (C) and males (D). Rates were calculated by linear regression for time points within an approximately linear range and are means ± SEM. Colors are as in (A) and (B). Asterisks denote significant differences between groups (*P < 0.05, **P < 0.001, assessed using one-way analysis of variance (ANOVA)/Holm-Sidak).
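The rate calculation described above (body mass change per unit time, fit by linear regression over an approximately linear window) can be sketched as an ordinary least-squares slope. The window bounds and data are illustrative; the paper does not state them.

```python
# Hedged sketch of the Figure 1C,D rate calculation: an ordinary
# least-squares slope of body mass against time, fit only over a
# chosen approximately-linear window. Window bounds are illustrative.

def ols_slope(days, masses):
    n = len(days)
    mx = sum(days) / n
    my = sum(masses) / n
    sxx = sum((x - mx) ** 2 for x in days)
    sxy = sum((x - mx) * (y - my) for x, y in zip(days, masses))
    return sxy / sxx  # grams per day

def rate_in_window(records, start, end):
    # records: iterable of (day, body_mass) pairs for one animal
    window = [(d, m) for d, m in records if start <= d <= end]
    days, masses = zip(*window)
    return ols_slope(days, masses)
```

Per-animal slopes computed this way would then be averaged within each treatment group to give the means ± SEM plotted in the figure.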
Weight loss at advanced age was also seen in DR control mice; however, it was of lower magnitude and its onset was delayed by about 150 to 200 days (Figure 1A,B). During the crossovers to AL, food intake initially increased (hyperphagia) in both sexes. However, it stabilized at baseline level in females, while it remained higher than baseline in males (P <0.001 [see Additional file 1: Figure S1]). Following the first crossover to AL, body mass increased over the 5 months (repeated measures: P <0.001) in both sexes. However, despite increased food intake in males after the DR-AL crossover, body mass increased more slowly after DR than in AL-only animals (Figure 1D). This was not the case for females, where body mass gain after the switch to AL occurred at the same rate as in AL-only controls (Figure 1C). This difference between the sexes was confirmed after the second crossover to AL feeding: following 5 months of adult-onset DR, body mass of females increased even faster than in AL-only controls, while rates of increase remained low for males (Figure 1C,D) despite increased food intake [see Additional file 1: Figure S1]. Accordingly, females returned to baseline body mass after 5 months, while in males body mass was maintained below baseline until the end of the experiment (P <0.001). Following the crossovers from AL to DR, males showed stronger responses in body weight than females, approaching the body weight of DR-only animals more closely (following the first crossover) or even losing weight below that level (after the second crossover). This might be due to the above-normal food intake of the male crossover mice during AL periods, resulting in more severe dietary restriction. There were no depot-specific changes in fat mass following the first or second crossover in either sex; all changes in fat masses at any point in the experiment were fully proportional to the respective body mass changes.
In long-term controls dissected at 12, 15 or 24 months of age, the decreased mass of all organs in DR mice was entirely attributable to the reduced body mass. In males, there were no significant differences in relative organ mass (per total body mass) induced by the crossovers. In females, the relative masses of the kidneys were significantly lower in DR mice after the first (P = 0.003) and second (P = 0.011) crossover, and that of the liver (P = 0.044) only after the second crossover. In summary, males, but not females, maintained low body mass with slow mass gains after return to AL feeding from either early-onset or late-onset DR, despite increased food intake.

Glucose tolerance

We next assessed glucose tolerance immediately before and at different time points after the first and second crossover. DR mice were more glucose tolerant than AL mice at baseline in both sexes (Figure 2). In female mice, glucose tolerance responded only minimally to changes in the feeding regimen. Females in the DR-AL-DR group maintained the same glucose tolerance levels throughout the experiment (P = 0.245). However, females that were crossed over to DR at 15 months of age improved their glucose tolerance, resulting in a significant difference (P = 0.022) by 12 weeks after the first crossover. No differences were detectable between the groups following the second crossover. Glucose tolerance in male mice was more responsive to changes in the feeding regime and showed highly significant changes over time within both groups (P <0.001). Following either the first or second crossover to DR, glucose tolerance improved fully within 6 weeks (P <0.001, Figure 2B). However, following the inverse crossover to AL, glucose tolerance in males reached AL baseline levels only at 12 weeks after the first crossover, and remained significantly improved over baseline up to the end of the experiment after the second crossover (P = 0.001, Figure 2B).
Together, these data show that adult-onset DR induced a lasting improvement in glucose tolerance in males, which paralleled the maintenance of low body mass. Conversely, a short-term reversion from DR did not result in longer-lasting impairment of glucose tolerance. Glucose tolerance in females was much less influenced by feeding regime.

Fasting glucose, insulin and insulin sensitivity

In agreement with the sex differences seen in body mass maintenance and glucose tolerance, males and females also showed different responses in fed and fasting glucose concentrations and in insulin levels to dietary change. Male DR mice had lower fed (Figure 3B) and fasting glucose (Figure 3D) and insulin (Figure 3F) levels at baseline (all P <0.001), and all parameters were highly affected by the crossovers (P <0.001). With the exception of fasting insulin after the first crossover, all parameters were completely reverted from baseline AL levels within 6 weeks after the crossover to DR. When male mice were crossed back to AL from either long-term or short-term DR, changes in glucose were also completed within 6 weeks after the crossover. However, fasting insulin levels in male mice crossed over to AL after a period of DR remained below the AL baseline for at least 12 weeks, both after long-term and short-term DR. Insulin sensitivity, as evaluated by the homeostasis model assessment 2 (HOMA2) protocol, was reverted by 12 weeks (P <0.001) after the first crossover and by 6 weeks (P = 0.008) after the second crossover in males. By the end of the experiment, males exposed to a DR-AL-DR regime had significant improvements in insulin sensitivity compared to baseline (P = 0.001) [see Additional file 2: Figure S2].
In males only, there was a significant positive correlation between fasting insulin and glucose (AL-DR-AL: R² = 0.075, P = 0.050; DR-AL-DR: R² = 0.177, P = 0.001) and between the AUC of glucose clearance and insulin concentrations (AL-DR-AL: R² = 0.174, P = 0.001; DR-AL-DR: R² = 0.179, P = 0.003). In female mice, fed glucose levels were reduced following the first crossover to a DR regime, and this was reverted by crossing back to AL (Figure 3A). Fasting glucose (Figure 3C) and insulin (Figure 3E, borderline significance) were lower at baseline, as expected. However, changes over time and differences between the groups in fasting glucose and insulin concentrations were generally too small in females to show consistent patterns with the available numbers of animals. In male mice, glucose tolerance and insulin sensitivity were strongly affected by the crossover regimes. To establish the impact of body mass on these effects, correlations between body mass and the measured parameters were calculated. Body mass was positively related to fasting glucose and insulin (glucose, AL-DR-AL: R² = 0.355, P <0.001; DR-AL-DR: R² = 0.380, P <0.001; insulin, AL-DR-AL: R² = 0.186, P = 0.002; DR-AL-DR: R² = 0.270, P <0.001). There was therefore a significant negative correlation between body mass and insulin sensitivity in males. This correlation was also significant in females, but only in the AL-DR-AL group (R² = 0.356, P <0.001) [see Additional file 3: Figure S3]. Together, these data show that even a short period of DR induces improvements in fasting insulin levels, glucose tolerance and body mass maintenance that can last a considerable time in males, while they are of smaller magnitude and more quickly reversed in females.

DR effects on longevity and tumor prevalence

Lowering of circulating insulin levels and improvement of glucose tolerance are seen as important mediators of the lifespan-improving and health-improving effects of DR [4,5].
Given the sexual dimorphism in the response of these parameters to DR shown above, different degrees of health-related and lifespan-related effects of DR between males and females might be expected. The present study was not designed to analyze long-term health and lifespan effects after short-term DR, and the frequencies of death occurring in the four crossover groups until the end of the experiment were not significantly different (data not shown). However, data on lifespan (Figure 4) and tumor prevalence at death (Table 1) are available from the large AL-only and DR-only control cohorts. Lifespans of male and female mice under AL feeding were not different from each other (P = 0.192). Median lifespans were 27 ± 0.61 months for AL males and 28 ± 0.41 months for AL females. DR improved survival in both sexes, but the extension was significantly greater in females (P = 0.0163). Median lifespan increased by about 26% to 34 ± 0.78 months in males, and by at least 32% to >37 months in females. Under AL feeding, tumor prevalence increased sharply in both sexes after 17 months of age, but the percentage of tumor-bearing mice remained lower in males than in females over their whole remaining lifespan (Table 1). DR strongly reduced tumor prevalence in females. In males, however, DR appeared to postpone tumor incidence but did not reduce the percentage of mice bearing neoplasms after 20 months of age (Table 1).

[Displaced figure caption] Data represent means ± SEM from N = 6 to 10 mice/group. Asterisks denote significant differences between groups (*P <0.05; **P <0.001), assessed using one-way analysis of variance (ANOVA).

Discussion and Conclusion

This study addresses two related questions. Firstly, is there a 'metabolic memory' if mice are switched between AL and DR feeding regimens?
Work in flies and rats has shown that the switch between DR and AL feeding regimes can be a very dynamic process, particularly in terms of survival and metabolic status, with reversal of feeding regimes resulting in a rapid change in ageing trajectory in Drosophila [12] and in rats [13]. Markers of oxidative damage were found to be reversed with the feeding regime in flies [14] and in the brains of mice [15]. In mice, gene expression profiles in liver, muscle and hypothalamus shifted quickly in correspondence to a new feeding regime [16], including genes involved in metabolism and growth control. In rats, the effects of short-term to medium-term early-onset DR were found to be obliterated by a later period of AL feeding [13]. We found that male mice retained a 'metabolic memory', that is, improved body mass maintenance, glucose tolerance and fasting insulin levels, for up to 5 months after a period of adult-onset DR. A similar experiment has been performed in the same strain of male mice, whereby AL mice were crossed to DR feeding at 11 months of age and vice versa [17]. No second crossover was performed in that study, but the follow-up time was 10 months after the crossover. That study also showed that in males crossed from DR to AL, body mass remained below long-term AL levels. Fat mass remained below control levels for at least 6 months after the crossover to AL. Importantly, glucose tolerance remained significantly improved compared to long-term AL controls for the whole observation period (10 months after the crossover). This reinforces the suggestion that a metabolic memory of DR is retained in male mice in terms of improved glucose tolerance. The second question was: are sex-specific responses in insulin, glucose tolerance and body weight maintenance associated with the longevity effect of DR?
In our study, AL females showed better glucose tolerance, fasting glucose and fasting insulin levels than AL males, and these were largely unaffected following short-term or long-term DR. In contrast, glucose and insulin levels and glucose tolerance were more responsive to periods of DR in males, confirming published data [9], and, together with body mass maintenance, showed lasting improvements following a period of DR. Fasting glucose and insulin concentrations positively correlated with body mass, as previously reported [18], with a corresponding negative correlation between body mass and insulin sensitivity [7]. This suggests that the capacity of pancreatic β cells to secrete insulin was not impaired in AL mice. There is extensive evidence showing general sexual dimorphism in insulin sensitivity, and several factors could be responsible for it. One is the influence of sex hormones; testosterone has a direct effect upon pancreatic islet function by favoring insulin gene expression and insulin release [19]. Estrogen has beneficial effects on glucose tolerance and insulin resistance: for example, women are more likely to develop diabetes after menopause, but hormone replacement therapy can ameliorate this tendency [20], and in ovariectomized mice, administration of estrogen protected against glucose intolerance induced by a high-fat diet in an estrogen receptor alpha-dependent manner [21]. Furthermore, sexual dimorphism has been shown in adipokines that regulate insulin sensitivity, such as resistin, leptin, adiponectin and retinol binding protein 4 (RBP4), and in glucocorticoids [22][23][24][25][26]. Additional factors, such as the capacity of peripheral organs, primarily skeletal muscle, to take up glucose, might also influence sex differences in insulin sensitivity and glucose tolerance. There is also ample evidence for sexual dimorphism in the response to lifespan-extending manipulations.
In mice, the lifespan-extending effects of deletion of insulin receptor substrate 1 [27], or of feeding with the mammalian target of rapamycin (mTOR) inhibitor rapamycin [28], were found to be more robust in females than in males. Similarly, reduction of the activity of the insulin-signaling pathway [29], or of the mTOR pathway by deletion of S6K [30], extended lifespan only in female but not in male mice. Also in Drosophila, females tend to show enhanced responses to various lifespan-extending manipulations compared to their male counterparts. For example, dFOXO overexpression in the fat body extended lifespan in female but not in male Drosophila [31]. Female flies show greater extension of lifespan by DR than males [32]; the reason is not completely clear, although the reduction in egg-laying activity in female DR flies has been postulated as one possible explanation. There is contradictory evidence regarding a sexual dimorphism in the lifespan response to DR in C57BL/6 mice. Blackwell [33] reported identical lifespans between the sexes in both AL control and DR mice. Using single-housed animals, Turturro et al. [34,35] showed a larger lifespan extension under DR in females; however, this was driven by a shorter lifespan in AL females as compared to AL males. Group-housed animals in our cohort already reached higher median ages under AL, which did not differ between the sexes; however, lifespan was more extended by DR in females than in males. It has been suggested that the effects of DR on lifespan are mediated through circulating insulin levels, which may reduce insulin signaling [4,5]. However, longer lifespan and better glucose tolerance are not always associated with each other. For example, insulin receptor substrate 1 null mice had extended lifespan together with lifelong mild glucose intolerance [27], and Harper et al. [36] reported that a long-lived mouse stock had impaired glucose tolerance compared to control mice.
According to our data on AL females and males, the lower fasting insulin levels in AL females are not associated with longer lifespans, and tumors are, if anything, more frequent in AL females than in AL males. In the case of DR mice, lifelong DR results in a greater extension of lifespan and more prominent tumor suppression in females than in males, despite both sexes displaying indistinguishable glucose tolerance and insulin sensitivity. The interconnections between sexual dimorphisms in metabolic and lifespan regulation appear to be more complex than originally thought. Whether the retention of a 'metabolic memory' in male mice after a brief period of DR in mid life improves their health/lifespan beyond the period in which they maintain better glucose tolerance and lower body mass remains to be elucidated.

Methods

Mice

All mice were inbred C57BL/6 (Harlan, Blackthorn, UK), and both males and females were used. Ethical approval was granted by the LERC, Newcastle University, UK. The work was licensed by the UK Home Office (PPL 60/3864) and complied with the guiding principles for the care and use of laboratory animals. Mice were housed in same-sex cages in groups of 4 to 6 (56 × 38 × 18 cm, North Kent Plastics, Kent, UK) and individually identified by an ear notch. They were provided with sawdust, paper bedding and environmental enrichment (a plastic house). Mice were housed at 20 ± 2°C under a 12 h light/12 h dark photoperiod with lights on at 7.00 am. The diet used was standard rodent pelleted chow (CRM (P); Special Diets Services, Witham, UK) for AL-fed mice; the same diet, but in smaller pellets, was offered to DR mice. The smaller pellet size reduced competition for food. DR mice were offered 60% of AL intake (calculated from the average food intake of 90 control AL mice between 5 and 12 months of age) as one ration at 9.30 am daily.
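The 40% restriction protocol above amounts to a simple calculation of the daily ration from the mean AL intake. A small illustrative sketch (the intake values are hypothetical, not the study's measurements):

```python
# Sketch (assumed values, not study data): deriving the daily DR
# ration as 60% of mean ad libitum intake, per the feeding protocol
# described above (a 40% restriction).

def dr_ration(al_daily_intakes, restriction=0.40):
    """Daily DR ration (g) = (1 - restriction) * mean AL intake (g)."""
    mean_al = sum(al_daily_intakes) / len(al_daily_intakes)
    return (1.0 - restriction) * mean_al

# Hypothetical mean daily intakes (g) from a few AL cages:
print(round(dr_ration([3.6, 3.4, 3.5, 3.5]), 2))  # 2.1 g/day
```

Because the crossover males ate more than average when fed AL, the same fixed ration represented a deeper effective restriction for them (around 45%, as noted in the Results).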
All mice were fed AL until 3 months of age and then split into AL and DR groups, matched for body mass and food intake (N = 10/sex/group for the crossover groups). At 15 months of age, mice were crossed over from DR to AL or from AL to DR. After a further 140 days (about 20 months of age), these mice were returned to their original feeding regime for a further 160 days, until they were killed at an age of 25 months, resulting in four experimental groups: male AL-DR-AL, male DR-AL-DR, female AL-DR-AL and female DR-AL-DR. During the experiment, three females and four males from the AL-DR-AL group, and one female and four males from the DR-AL-DR group, died or were killed. Effects of DR on body mass, survival and tumor prevalence were monitored in long-term controls, which were fed only DR or AL from 3 months of age, comprising 280 mice/sex/group in total. These were either killed at predetermined ages or left to die naturally. All mice were dissected and macroscopically examined for tumor prevalence at death.

Body mass, body composition and food intake

Body mass and food intake were measured at least once a month in AL mice and once a week in DR mice (± 0.01 g; Sartorius top-pan balance, Epsom, UK). Mean food intake of each AL cage was measured by weighing the contents of the food hopper on 2 consecutive days and dividing this amount by the number of mice in the cage. Food intake in the double-crossover mice over the course of the experiment is shown in Additional file 1. Average food intake in the crossover males under AL was higher than in the AL-only controls. However, during the last weeks before crossover, the degree of DR was not significantly different from 40% in either males or females. A full body dissection was performed at all endpoints and the organs weighed (Ohaus analytical balance, ± 0.0001 g; Ohaus Corp., NJ, USA): brain, heart, lungs, thymus, quadriceps, tail, caecum, liver, kidneys, spleen, gonads and pancreas.
Also, large intestine and small intestine mass (after flushing with saline) and length were determined. To assess fat deposition, six fat depots were also fully dissected and weighed: retroperitoneal, gonadal, mesenteric, subcutaneous, subscapular and brown adipose tissue (BAT).

Glucose tolerance test

A glucose tolerance test (GTT) was performed on each individual in the crossover experiment at 15 months of age (baseline) and then at 1, 3 and 12 weeks after the first crossover and at 1, 3 and 12 weeks after the second crossover. The GTT was performed on fasting mice by removing all food from AL mice at 6.00 pm the evening before (15.5 h fasting) and withholding the daily food ration from DR mice until after the test. Drinking water was available throughout. A 20% glucose solution was prepared fresh each morning using D-glucose (G-5767, Sigma-Aldrich, St Louis, MO, USA) and sterile-filtered water. A fasting blood sample was collected by placing each mouse in a restrainer and nicking the tail vein with a scalpel blade. A total of 200 μl of blood was collected from each animal in a microvette container lined with lithium-heparin (Microvette, Sarstedt AB, Landskrona, Sweden). Blood was centrifuged and the resultant plasma stored at −80°C. The fasting blood glucose level (mmol/l) (time point 0) was determined using a Glucometer (ACCU-CHEK Aviva Nano, Mannheim, Germany) from a further approximately 2 μl of blood. Then, mice were injected intraperitoneally with 2 g/kg body mass of the glucose solution. At 15, 30, 60 and 120 minutes post injection, the blood glucose level was measured on the Glucometer as above, using blood from the tail vein. At the end of the GTT, DR mice were fed their daily ration and food was replenished in the AL food hoppers. Glucose tolerance was expressed as the area under the curve over the 120-minute test duration.
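The area-under-the-curve (AUC) summary of the GTT can be computed with the trapezoidal rule at the sampling times used above. A brief sketch (ours; the glucose readings are hypothetical, not study data):

```python
# Sketch (not the authors' code): glucose tolerance expressed as area
# under the glucose curve over the 120-minute GTT, computed with the
# trapezoidal rule at the sampling times described in the Methods.

def gtt_auc(times_min, glucose_mmol):
    """Trapezoidal AUC in (mmol/l) x min over the sampled interval."""
    auc = 0.0
    for i in range(1, len(times_min)):
        dt = times_min[i] - times_min[i - 1]
        auc += 0.5 * (glucose_mmol[i] + glucose_mmol[i - 1]) * dt
    return auc

times = [0, 15, 30, 60, 120]           # minutes post injection
glucose = [5.0, 15.0, 12.0, 9.0, 6.0]  # hypothetical mmol/l readings
print(gtt_auc(times, glucose))  # 1117.5
```

A lower AUC corresponds to faster glucose clearance, i.e. better glucose tolerance, which is the direction of the group differences reported in the Results.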
On a separate occasion, at least 3 days before or after a GTT, fed blood glucose concentrations were measured at 11.30 am, to ensure mice were postprandial, using a drop of blood from the tail on the Glucometer as above.

Fasting plasma insulin levels and insulin sensitivity

Using the fasting plasma collected prior to the GTT, insulin concentrations were measured with an ultrasensitive mouse insulin ELISA kit (CrystalChem Inc., Downers Grove, IL, USA). All samples were run in duplicate, and a number of additional standards were included because the concentrations measured were close to the detection limit. Insulin sensitivity was estimated using the updated homeostatic model assessment (HOMA2) model, which gives an estimate of insulin sensitivity from fasting plasma insulin and glucose concentrations [37]. This model can be used for comparisons between experimental groups as a measure of insulin sensitivity in rodents [38].

Statistical analysis

All statistical analyses were performed using Minitab V. 16 (Minitab Inc., State College, PA, USA) and Sigmaplot V. 11.0 (SPSS, Chicago, IL, USA). Repeated-measures analysis of variance (ANOVA) was used when analyzing changes in body mass and food intake over time. Fat and organ masses co-vary with body mass; therefore, body mass was used as a covariate in a general linear model (GLM) to control for these effects. One-way ANOVA was used to find differences between groups, with a Tukey comparison included to determine differences between all the measured time points within the same group. Linear least-squares regression was used to find significant correlations between two continuous factors. Kaplan-Meier survival curves were compared by log-rank test. Differences were considered significant when P ≤0.05.
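The HOMA2 estimate used above is produced by a calculator program [37] rather than a closed-form equation. As a rough stand-in for illustration only, the original HOMA1 formulas capture the underlying idea: insulin resistance scales with the fasting glucose-insulin product, and sensitivity is its reciprocal. The input values below are hypothetical:

```python
# Sketch: the study used the HOMA2 calculator, which has no closed
# form. The original HOMA1 formulas below are shown purely as an
# illustrative approximation of the same idea; input values are
# hypothetical, not measurements from the study.

def homa1_ir(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
    """HOMA1-IR = (glucose [mmol/l] * insulin [uU/ml]) / 22.5."""
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

def homa1_sensitivity_percent(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
    """HOMA1 %S: the reciprocal of HOMA1-IR, expressed as a percentage."""
    return 100.0 / homa1_ir(fasting_glucose_mmol_l, fasting_insulin_uU_ml)

# Hypothetical fasted values: 5.0 mmol/l glucose, 9.0 uU/ml insulin
print(round(homa1_ir(5.0, 9.0), 2))                   # 2.0
print(round(homa1_sensitivity_percent(5.0, 9.0), 1))  # 50.0
```

This makes the direction of the reported effects easy to see: lower fasting glucose and insulin under DR jointly lower the IR index and raise estimated sensitivity.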
Rapid assay of stem cell functionality and potency using electric cell-substrate impedance sensing

Regenerative medicine studies using autologous bone marrow mononuclear cells (BM-MNCs) have shown improved clinical outcomes that correlate with in vitro BM-MNC invasive capacity. The current Boyden-chamber assay for testing invasive capacity is labor-intensive, provides only a single time point, and takes 36 hours to collect data and results, which is not practical from a clinical cell delivery perspective. To develop a rapid, sensitive and reproducible invasion assay, we employed Electric Cell-substrate Impedance Sensing (ECIS) technology. Chemokine-directed BM-MNC invasion across a Matrigel-coated Transwell filter was measurable within minutes using the ECIS system we developed. This ECIS-Transwell chamber system provides a rapid and sensitive test of stem and progenitor cell invasive capacity for evaluation of stem cell functionality, providing timely clinical data for the selection of patients likely to realize clinical benefit in regenerative medicine treatments. This device could also supply robust, unambiguous, reproducible and cost-effective data as a potency assay for cell product release and regulatory strategies.

Introduction

Measurement of a stem, progenitor, or stromal cell preparation's potency or functionality is important to the characterization of a potential cell therapy product [1]. Ideally, the assessment of a cell product's potency is based on a cell function relevant to the desired clinical outcome [2]. While valuable, assessments of cell phenotype (i.e., surface marker expression), viability, and colony growth are not considered adequate functionality tests for cells being studied in clinical applications because they do not reliably predict clinical responses to cell treatments [1][2][3][4].
For regenerative therapies, the therapeutic cell's ability to invade injured tissue in response to a chemotactic gradient is considered a critical cell function for the desired clinical outcome [5][6][7][8]. To assess the potential in vivo invasive capacity of a stem-cell preparation, an in vitro Transwell invasion assay is typically performed [9][10][11][12]. This assay is based upon the Boyden chamber, which is separated into upper and lower chambers by a Matrigel matrix-coated porous filter. The progenitor or stem cells are added to the top chamber, and a chemoattractant agent is added to the bottom chamber to induce the cells to invade the Matrigel matrix and migrate through the porous filter to the bottom chamber. Eighteen to 24 hours later, the number of cells that have migrated to the underside of the filter or to the floor of the bottom chamber is quantified by 4′,6-diamidino-2-phenylindole (DAPI) staining and counting of the migrated cells' nuclei [13]. Transwell assay measurement of bone marrow mononuclear cell (BM-MNC) invasion in response to stromal cell-derived factor-1 (SDF-1) was found to be the only in vitro assessment of BM-MNC preparations that demonstrated a positive correlation with the clinical outcome of patients treated with BM-MNCs for heart repair [14,15]. The SDF-1 Transwell invasion assay has also been used for testing the invasive function of other progenitor cell types, such as mesenchymal stromal cells (MSCs) [16][17][18], endothelial progenitor cells (EPCs) [19][20][21], and peripheral blood mononuclear cells (PB-MNCs) [22][23][24].
While the standard Transwell invasion assay has been found to provide clinically important data on the functional capacity of stem cell preparations, limitations of the assay include the time required for measurable migration of cells, the labor-intensive methods required for quantifying the invasive cells, investigator inter-assay variability, and measurement of migration (a dynamic process) at only a single (for example, 18-24 hour) time point [25,26]. For autologous bone marrow cell therapy, the largest limitation of present cell function assays is that the results are not available until about 36 hours after the bone marrow harvest. Since many clinical applications of autologous bone marrow stem and progenitor cells involve the cells being administered within a few hours of the bone marrow harvest, it is then not possible to prospectively identify stem cell preparations with poor functional capacity. For clinical trials designed to determine the therapeutic potential of a stem cell therapy, the inclusion of suboptimal cell preparations reduces the statistical power of the study, obscuring the potential benefit of the therapy under assessment. Importantly, whether as part of a clinical trial or an accepted treatment protocol, administration of suboptimal cell preparations can result in patients being treated without a high likelihood of clinical benefit. This assay also addresses the need of the Food and Drug Administration (FDA) and other regulatory organizations for a reliable, low-cost, rapid assay of cell functionality as a cell potency test. Many patients have preexisting clinical conditions that can impact the functionality of their stem cells.
For example, it is well documented that diabetes can impair BM-MNC functionality [27][28][29][30], but whether such an existing clinical condition has impacted a patient's stem cell functionality to a degree that the patient should not undergo cell administration is presently difficult to assess in the hours between autologous stem cell harvest and administration. Another circumstance where a quick and sensitive cell migration assay for measuring cell functionality would be helpful is in the testing of stem cells from patient blood or bone marrow before and after radiotherapy or chemotherapy [31][32][33]. Some of the undesired side effects of radiation therapy, chemotherapy, or treatment with bone marrow-suppressive drugs are reductions in peripheral blood stem cell viability and function [34]. In this regard, a cell potency invasion assay to measure the functionality of peripheral blood cells would be important in assessing the potential toxic effects of radiation therapy and chemotherapy. With the continued development of cell biosensor detection methods, traditional methods for studying cell invasion, such as the Boyden chamber, are being updated with newer analytical tools [35][36][37][38]. A cell invasion assay involves the cells first degrading an extracellular matrix barrier or cell monolayer, followed by the movement of the cells through the porous filter in response to the chemokine gradient [25,39,40]. In these studies, electric cell-substrate impedance sensing (ECIS), previously used to detect the invasion of cells through a cell monolayer grown directly on an electrode array [41] or on a porous filter [42], is used to detect the invasion of cells through an extracellular matrix barrier on a porous filter. The goal of this study was to adapt the standard Transwell assay to a stem cell invasion assay using ECIS technology for use as a rapid, reliable stem cell functionality or potency assay.
The objective of the study was to automate the measurement of cell invasion using resistance and impedance measurements that could detect SDF-1-directed cell invasion in minutes rather than 36 hours. We also sought to demonstrate early proof-of-principle results showing that sublethal, deleterious effects on cell functionality could be detected as a consequence of exposing stem cells to doses common in radiation cancer therapy regimens. Here, we show that by translating the standard Transwell assay to an assay using ECIS technology, one can measure BM-MNC invasion in response to SDF-1 within minutes. The BM-MNC invasion is dependent on specific signaling by SDF-1; BM-MNCs pretreated with SDF-1 or AMD3100 (an SDF-1 receptor blocker) do not invade the Matrigel matrix. We also demonstrate ECIS measurement of SDF-1-stimulated PB-MNC invasion and show that radiation damage to the PB-MNCs reduces their invasion. The results from our experiments demonstrate that the ECIS Transwell device and chamber provides a rapid, sensitive, and reproducible test of BM-MNC and PB-MNC invasive capacity, making it a potential diagnostic tool for testing stem cell functionality in regenerative medicine studies.

Methods

Bone-marrow harvest and purification

All bone marrow samples were collected from 12-month-old male domestic Sinclair miniswine (Sinclair Bio-resources, Columbia, MO, USA). All animal handling and care procedures were performed strictly in accordance with the 2004 National Research Council "Guide for the Care and Use of Laboratory Animals" following protocol approval by the Institutional Animal Care and Use Committee (IACUC) of the Legacy Clinical Research and Technology Center, Legacy Health System, Portland, OR, USA. Under local anesthesia, 40 ml of porcine bone marrow was aspirated from either the donor's tibia or sternum into a syringe containing 5 ml heparin (1000 USP units/ml).
The bone marrow was transferred into a 150 ml transfer bag (Baxter, Deerfield, IL, USA) containing 8 ml citrate-phosphate dextran (Sigma, St. Louis, MO, USA), and the bone marrow transfer bag was connected through a 40 μm Pall blood transfusion filter (Fisher Sci., 300 Industry Drive, Pittsburgh, PA, USA) to a SEPAX cartridge kit (#CS-900; Biosafe America, Houston, TX, USA). This kit contained a wash-buffer bag that was filled with Hanks' balanced salt solution containing cations (HBSS; Invitrogen, 3175 Staley Road, Grand Island, NY, USA) and a density gradient solution/waste bag that was filled with 100 ml Ficoll-Paque Premium-1077 (GE Healthcare, Pittsburgh, PA, USA). A 150 ml transfer bag (Baxter Health Care, One Baxter Parkway, Deerfield, IL, USA) was connected to receive the purified BM-MNCs. The completed kit was then placed into a SEPAX-2 (Biosafe America) automated cell-processing device to process the bone marrow [29]. The final purified BM-MNC product was collected in HBSS, and the BM-MNCs were counted with a Beckman Z2-Coulter Counter (Beckman Coulter, Brea, CA, USA).

Isolation of porcine PB-MNCs

Porcine peripheral blood was collected from the femoral artery of domestic Yorkshire swine and diluted 1:2 with HBSS (#14025-092; Gibco/Invitrogen, 3175 Staley Road, Grand Island, NY, USA). PB-MNCs were isolated by density gradient centrifugation. Specifically, a 4 ml aliquot of diluted blood was layered on top of 3 ml Ficoll-Paque Premium, density 1.077 (#17-5442-02; GE Healthcare, Pittsburgh, PA, USA) in a 15 ml centrifuge tube, and the tubes were centrifuged (400 × g, 40 minutes, room temperature, no brake). The recovered PB-MNCs were washed twice with HBSS (300 × g, 10 minutes, room temperature). After washing, the cells were resuspended in X-VIVO 15 media (#04-744Q; Lonza, Walkersville, MD, USA), and an aliquot was taken for cell counting and viability assessment.
The Jurkat cells were cultured in a 37°C carbon dioxide (CO2) incubator, and were split 1:5 when the cells reached a concentration of 1 × 10^6 cells/ml. The cells for this study were split no more than 25 times relative to the original stock of cells. Radiation of PB-MNCs To show that our stem cell functionality assay could rapidly detect nonlethal deleterious changes in cell function in previously functioning cells, we exposed the cells to a radiation dose comparable with commonly prescribed doses for cancer radiotherapy in humans. The PB-MNCs in X-VIVO 15 media received 0 Gy or 2.15 Gy of X-ray irradiation at room temperature (1.365 Gy/minute, RS2000 Biological Research Irradiator; Rad Source, Suwanee, GA, USA). The dose of 2.15 Gy was chosen for the swine cells since it is comparable with a moderate radiation exposure of human cells [43]. After irradiation of the PB-MNCs, the cells were not washed, because it has been shown previously with other cell types that there is no difference in the rate of apoptosis whether the cells are kept in the original irradiated medium or switched to fresh medium [44]. Also, an additional wash (centrifugation) step has the potential to produce additional damage to cells [45], which would complicate the interpretation of the effects of radiation alone on the invasion assay results. Both the irradiated cells and the nonirradiated cells were then cultured in hydrophobic dishes (Nunc Hydrocell, #174912; ThermoFisher Sci., Waltham, MA, USA) for 24 hours in X-VIVO 15 media (37°C, 5% CO2), after which they were centrifuged (300 × g, 1 minute, room temperature) and resuspended in fresh X-VIVO 15 media. The final cell concentration was adjusted to 1.2 × 10^6 cells/300 μl X-VIVO 15 media for the ECIS Transwell invasion assay.
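As a quick sanity check on the dosimetry described above, the beam-on time implied by delivering 2.15 Gy at 1.365 Gy/minute works out to roughly a minute and a half. A minimal sketch; the dose and dose rate are taken from the text, but the calculation itself is ours, not part of the published methods:

```python
# Back-of-the-envelope irradiation timing (illustrative, not from the paper's
# methods): exposure time = prescribed dose / irradiator dose rate.

DOSE_GY = 2.15            # prescribed dose (Gy), from the text
DOSE_RATE_GY_MIN = 1.365  # irradiator output (Gy/minute), from the text

exposure_min = DOSE_GY / DOSE_RATE_GY_MIN
exposure_sec = exposure_min * 60.0
# roughly 1.6 minutes (about 95 seconds) of beam-on time
```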
ECIS Transwell invasion assay The invasive function of the BM-MNCs, PB-MNCs, and Jurkat cells in response to a chemokine gradient was quantified using a commercial ECIS-Zθ system (Applied BioPhysics, Inc., Troy, NY, USA) connected to a Transwell array device (ECIS Trans-Filter Adapter; Applied BioPhysics, Inc.) (Fig. 1), employing a Matrigel-coated, 8 μm pore Transwell filter insert (#354480; BD Biocoat, Fisher, Pittsburgh, PA, USA). Fig. 1 Schematic of the ECIS Transwell array device developed and used for this study. Commercially available Matrigel-coated Transwell filters fit into the ECIS Transwell array device, which has individual reference electrodes embedded in the bottom chamber with independent sensing electrodes placed above the filters. To minimize any inter-assay variation, the Transwell array station and trans-filter adaptor were always prewarmed in a humidified 5% CO2 incubator for 2 hours, and warm (37°C) media were always used for resuspending the cells and chemokines. BM-MNCs, Jurkat cells, or PB-MNCs (1.2 × 10^6 cells) were brought to a final volume of 300 μl in X-VIVO 15 media and added to the chamber above the Matrigel-coated Transwell filter insert. For experiments measuring BM-MNC or Jurkat cell invasion, SDF-1 (#350-NS; R&D Systems, Minneapolis, MN, USA) in X-VIVO 15 media (100 ng/ml in 625 μl medium) was added to the chamber below the Matrigel-coated Transwell filter. For the PB-MNC ECIS Transwell studies, SDF-1 (100 ng/ml) and MIP-1 (100 ng/ml; PeproTech, Rocky Hill, NJ, USA) were used in combination because both chemoattractants have been shown to be involved in PB-MNC migration [46,47]. After the addition of the respective cells and chemokines, the ECIS Transwell device was placed in an incubator (37°C, 5% CO2) for 2 hours, during which time the impedance changes were recorded. The migratory action of SDF-1 is the result of SDF-1 binding to the receptor C-X-C chemokine receptor type 4 (CXCR4) [48,49].
For some control experiments, the BM-MNCs were pretreated for 30 minutes with 5 μg/ml AMD3100 (an inhibitor of the SDF-1 receptor, CXCR4 [50,51]), and then the cells were added to the top of the ECIS Transwell chamber and 100 ng/ml SDF-1 was added to the bottom chamber. In other control experiments, to distinguish between a directed chemotaxis versus a random chemokinesis response, 100 ng/ml SDF-1 was added with the cells to the top of the ECIS Transwell chamber and medium alone was added to the bottom chamber. ECIS data analysis and statistics Initial data analysis was performed using ECIS Software (version 1.2.123; Applied BioPhysics, Inc.). The actual filter resistance of each test or control well was calculated by subtracting the resistance of a blank Transwell filter (without cells) from the measured well resistance: actual filter resistance = measured well resistance − blank filter resistance. The actual filter resistance values for replicate wells were then averaged and plotted as the mean ± standard error of the mean (SEM) over the 2-hour time course. Additional analysis was done by transferring the data to an Excel 2011 spreadsheet (Microsoft, Redmond, WA, USA), where the absolute relative change in resistance was calculated at specific times by subtracting the initial baseline resistance at time zero from the respective actual filter resistance value: absolute relative resistance(t) = actual filter resistance(t) − actual filter resistance(t = 0). The absolute relative resistance was plotted as the mean ± SEM [52]. Significant differences between the ECIS resistance changes of control and chemokine-treated groups were calculated using a one-way analysis of variance test and a probability of p < 0.05. Graphs were plotted as the mean ± SEM using SigmaPlot-11 (Systat Software, Inc., Chicago, IL, USA). In our study, "n" was equal to the number of individual animals used for the isolation of the BM-MNCs and PB-MNCs, which was 4 and 2, respectively. For the Jurkat cells, "n" was equal to the number of separate individual cell platings.
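The data reduction just described (blank-filter subtraction, baseline subtraction at time zero, and replicate averaging with SEM) can be sketched in a few lines. This is an illustrative sketch with made-up resistance values, not the authors' ECIS Software or Excel analysis:

```python
import numpy as np

def actual_filter_resistance(raw_wells, blank):
    """Subtract the blank (cell-free) filter resistance from each well's trace."""
    return np.asarray(raw_wells, dtype=float) - np.asarray(blank, dtype=float)

def absolute_relative_resistance(actual):
    """Subtract each well's time-zero baseline from its trace."""
    return actual - actual[:, :1]

def mean_sem(traces):
    """Average replicate wells; SEM = sample SD / sqrt(n replicates)."""
    traces = np.asarray(traces, dtype=float)
    mean = traces.mean(axis=0)
    sem = traces.std(axis=0, ddof=1) / np.sqrt(traces.shape[0])
    return mean, sem

# Made-up example: two replicate wells, three time points (ohms).
raw = [[1000.0, 1040.0, 1100.0],
       [1010.0, 1050.0, 1090.0]]
blank = [900.0, 900.0, 900.0]  # resistance of a blank Transwell filter

actual = actual_filter_resistance(raw, blank)    # blank-subtracted traces
relative = absolute_relative_resistance(actual)  # baseline-subtracted traces
mean, sem = mean_sem(relative)                   # values to plot as mean ± SEM
```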
ECIS measurement of chemotactic cell invasion Since the invasion of human cells from the Jurkat T-cell line in response to SDF-1 has been well characterized in standard Boyden chamber migration assays [26,[53][54][55], we used Jurkat T cells to characterize our ECIS invasion assay. The assays employed an ECIS Transwell device developed for this study (Fig. 1). The ECIS Transwell device holds a standard Matrigel-coated Transwell filter, typically used for invasion assays. Jurkat T cells were placed in the Transwell top chambers and the chemokine SDF-1 was added to the bottom chambers. We found a significant increase in SDF-1-stimulated Jurkat cell invasion of the Matrigel matrix, measured by ECIS as increased resistance (Fig. 2). Jurkat cells placed in the top half of a chamber without SDF-1 in the bottom half of the chamber produced only a slight increase in filter resistance that stabilized within 45-60 minutes and did not increase further over the remaining 2-hour time course (Fig. 2a). The ECIS system continuously measures the resistance across the Transwell membrane over time. When the absolute relative changes in filter resistance from Jurkat invasion in chambers with and without SDF-1 in the bottom half of the chamber (test and control chambers, respectively) were plotted over time, we found that the change in filter resistance for chambers with SDF-1 versus without SDF-1 became significantly different (p < 0.05) within 10 minutes after starting the assay (i.e., after the addition of SDF-1), and that the difference increased over the 2-hour observation period (Fig. 2b). For traditional Boyden chamber assays, cell migration or invasion across porous filters can be classified as either a random motion event in the absence of a chemokine gradient (i.e., chemokinesis) or directed migration in response to a chemokine gradient (i.e., chemotaxis) [26,[56][57][58].
For our assay, we wanted to determine whether the SDF-1-induced change in filter resistance corresponded to a directed chemotactic response. SDF-1 was added along with the Jurkat T cells to the top chamber, and not to the bottom chamber, of Transwells. As would be anticipated for a chemotactic response, there was no measurable increase in Transwell filter resistance when SDF-1 was added to the same chamber as the Jurkat T cells (i.e., the upper chamber) (Fig. 3), in contrast to the increased resistance measured when Jurkat T cells are added to the top chamber and SDF-1 is added to the bottom chamber of the Transwell (Fig. 2). Fig. 3 Increased ECIS Transwell filter resistance is due to chemotaxis, not chemokinesis, and is specific to the chemokine (SDF-1). In these experiments, Jurkat T cells were used without pretreatment or were pretreated with either 100 ng/ml SDF-1 or AMD3100 (an inhibitor of the SDF-1 receptor). Resistance was measured in wells with Jurkat T cells (without or with pretreatment) added to top chambers with and without SDF-1 in the bottom chambers. Arrow: time point of SDF-1 addition to test lower chambers. Each tracing represents the mean ± SEM of two separate experiments (n = 2) performed in duplicate. SDF-1, stromal cell-derived factor-1. SDF-1 stimulates cell invasion and migration as a result of its binding to the receptor CXCR4 [48,49]. For some control experiments, the BM-MNCs were pretreated for 30 minutes with 5 μg/ml AMD3100 (an inhibitor of the SDF-1 receptor, CXCR4 [50,51]), and then the cells were added to the top of the ECIS Transwell chamber and 100 ng/ml SDF-1 was added to the bottom chamber. In a conventional Boyden chamber assay, AMD3100 treatment of the test cells inhibits their SDF-1-directed chemotaxis [54,56,[59][60][61]. As shown in Fig.
3, there was no increase in filter resistance when the AMD3100-pretreated cells were added to the top chamber of the Transwell with SDF-1 in the lower chamber, demonstrating the SDF-1 specificity of the ECIS-measured chemotactic invasive Jurkat T-cell function. These results demonstrate that the primary mechanism for the ECIS-measured change in the Transwell filter resistance is an SDF-1-specific chemotactic cell invasion response. ECIS measurement of mononuclear cell invasive function Using the ECIS Transwell invasion assay characterized above, we examined whether it would provide a rapid measurement of the invasive function of BM-MNCs in response to the chemokine SDF-1. We found that within 10 minutes there was a measurable, unambiguous increase in resistance across Matrigel-coated Transwell filters when SDF-1 was added to the lower Transwell chamber (Fig. 4a). The resistance continued to rise over the first 90 minutes and then plateaued for the remainder of the 2-hour assay (Fig. 4a). Without the chemotactic signal of SDF-1 added to the bottom chamber, BM-MNCs added to the upper chamber did not increase the resistance across the Matrigel-coated Transwell filter at any time point within the 2-hour assay (Fig. 4a). Increased resistance was confirmed to be associated with BM-MNC invasion of the Matrigel coating the filter by microscopic observation of BM-MNCs within the Matrigel and the presence of cells adherent to the underside of the filter and on the floor of the bottom chamber (results not shown). When the absolute relative changes in resistance were plotted over time, the change in filter resistance for Transwells with SDF-1 versus without SDF-1 in the lower chambers was significantly different (p < 0.05) as rapidly as 5 minutes after initiating the ECIS measurements, and the difference increased over the 2-hour study period (Fig. 4b). We also examined whether the ECIS Transwell invasion assay could measure the nonlethal toxic effects of radiation.
Porcine PB-MNCs were exposed to X-ray radiation at 0 Gy and 2.15 Gy. The dose of 2.15 Gy was chosen for the porcine PB-MNCs since it is comparable with that found as a moderate therapeutic radiation exposure for cancer therapies [43]. The control and irradiated cells were then cultured for 24 hours, the viable cells were recovered from culture, and their invasive capacity in response to combined SDF-1 and MIP-1 was measured by ECIS. Control (nonirradiated) PB-MNCs invaded the Matrigel in response to SDF-1 plus MIP-1, increasing the measured resistance across the Matrigel-coated Transwell membrane (Fig. 5a). In contrast, there was no significant increase in resistance when the PB-MNCs had been exposed to 2.15 Gy of X-rays (Fig. 5a). When the absolute relative changes in resistance were plotted over time, the change in filter resistance for Transwells with chemokines (SDF-1 plus MIP-1) versus without chemokines in the lower chambers was significantly different (p < 0.05) as rapidly as 30 minutes after initiating the ECIS measurements, and the difference increased over the 2-hour study period (Fig. 5b). Discussion It is now being recognized that for stem and progenitor cells to be an effective regenerative therapy, characterization of their cell-surface markers and colony-forming unit (CFU) capacity might not be sufficient release criteria for establishing their cellular biological activity. Recent retrospective studies of results from clinical trials in cardiac regenerative cell therapy have shown that a measurement of therapeutic cell functional capacity is essential [62]. One of the principal assays for assessing progenitor or stem cell functionality is the in vitro cell invasion assay [1,63,64]. The standard in vitro cell invasion assay typically has a single endpoint measurement, taken 18-24 hours after the initiation of the assay.
Since most autologous stem and progenitor cell therapy regimens administer cells within a few hours of their harvest, prospective evaluation of stem cell potency prior to cell treatments using this assay has not been possible. Retrospective evaluations using these assays have accurately identified patients who gained a favorable benefit from cell therapy as well as patients who did not realize a clinical benefit. Thus, many patients have been treated with cells that were unlikely to provide clinical benefit, exposing patients to the risks of the procedure with little benefit and reducing the statistical power of the study. To develop a rapid assessment of cell invasive capacity, this study investigated a new approach for the dynamic monitoring of cell invasion using ECIS technology and a modified Transwell chamber device. Using human Jurkat T cells, a cell line well characterized for its transmigration capacity in standard invasion assays, we found that there was a significant invasive response, as reflected by a change in ECIS Transwell resistance, that could easily be detected within 10 minutes after starting the assay (i.e., after the addition of SDF-1 to the lower Transwell chamber). This ECIS Transwell chemotactic response to SDF-1 was found to be specific to the chemokine in that the addition of AMD3100, a blocker of the SDF-1 receptor, abolished the SDF-1-induced change in Transwell resistance. We also demonstrated that both bone marrow and PB-MNC invasion could be measured using this ECIS Transwell assay. Importantly, measurement by ECIS allows cell invasion to be quantified within minutes, making it possible to include a cell preparation's invasion and migration functional capacity as part of the release criteria for cell administration. The ECIS Transwell assay can thus be used to rapidly quantify the invasive and migratory function of cells, facilitating timely acquisition of cell function data.
The ECIS Transwell invasion assay described provides quantitative results rapidly and continuously over time for the invasion and migration of cells through a Matrigel-coated Transwell filter. Table 1 compares the ECIS Transwell invasion assay with the traditional Boyden chamber Transwell assay. The traditional Boyden chamber Transwell assay has a single endpoint for each well, typically 18-24 hours following the addition of the cells to the well. For example, in a traditional Boyden chamber assay there was a significant delay, with no significant Jurkat transmigration detectable within the first 90 minutes after the addition of SDF-1 [26]. In contrast, with ECIS we measured significant BM-MNC and Jurkat invasion and transmigration by 5 and 10 minutes, respectively, after the addition of SDF-1. ECIS is a real-time, label-free, impedance-based method used to study the activities of cells in tissue culture [35]. The ECIS technique is highly sensitive to changes in the electrical resistance of the porous Transwell filter, making this method a valuable tool for quickly assessing changes in cell transmigration. The rapid changes observed in the ECIS signal are probably due to the chemokine-stimulated cells at the top of the filter moving to and reaching the 8 μm diameter pores and attempting to transmigrate through the pores. As more stimulated cells on the filter surface crawl to the pore sites and attempt to transmigrate, the effective pore diameters are reduced, which increases the resistance across the filter. This phenomenon is described by Coulter's resistive pulse measurement theory [65]. We also found for both BM-MNCs and Jurkat T cells that the SDF-1-induced changes in ECIS filter resistance began to plateau between 1 and 2 hours after the initiation of the experiment.
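The Coulter-style intuition above, that the filter's pores conduct in parallel and that occluding them raises the measured resistance, can be illustrated with a toy parallel-resistor model. The per-pore resistance and pore count below are hypothetical values chosen for illustration, not parameters from the study:

```python
# Toy parallel-pore model (an illustration of the intuition above, not a
# quantitative model from the paper). Each open pore is treated as an
# identical resistor; pores conduct in parallel, so the filter resistance
# is the per-pore resistance divided by the number of open pores. Cells
# blocking pores reduce the open count and raise the measured resistance.

def filter_resistance(r_pore_ohms, n_pores, fraction_occluded):
    """Resistance of n identical parallel pores when a fraction is blocked."""
    open_pores = n_pores * (1.0 - fraction_occluded)
    if open_pores <= 0:
        raise ValueError("all pores occluded; model resistance is unbounded")
    return r_pore_ohms / open_pores

R_PORE = 1.0e6   # hypothetical per-pore resistance, ohms
N_PORES = 10000  # hypothetical pore count for the filter area

baseline = filter_resistance(R_PORE, N_PORES, 0.0)  # no cells at the pores
invading = filter_resistance(R_PORE, N_PORES, 0.5)  # half the pores blocked

# Blocking half the pores doubles the filter resistance in this model.
```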
A partial explanation for this observation is that the chemotactic gradient begins to dissipate within a few hours, and the pores become occluded by slow or nontransmigrating cells. Additional experiments are now underway to further clarify the relationship between changes in the ECIS signal and the corresponding physical transmigration cell morphology (CR, personal communication). Cell migration can continue to occur in the absence of a concentration gradient due to the random walk of cells, or chemokinesis [26]. In fact, when the SDF-1 chemokine is placed with cells in the top portion of the Transwell filter, a random migratory walk can occur, with some of the cells going through the filter to the bottom chamber [26,57,66]. In our studies, however, no change in filter resistance was detected when SDF-1 was added along with the cells to the top chamber over the 2-hour period. This indicates that when SDF-1 is added to the bottom chamber, cell chemotaxis, and not chemokinesis, is responsible for the increased resistance measurement of the cell invasion response. This is similar to the data from previous studies using the traditional Transwell assay with Jurkat T cells, where only chemotaxis, and not chemokinesis, was found to be involved in SDF-1-stimulated Jurkat cell migration [56,67]. While the speed by which ECIS can be used to quantify cell invasion and migration makes it particularly desirable for assessing the functional capacity of therapeutic cell products, it may also be applicable for monitoring the function of a patient's cells in relation to a disease process or therapy. For example, radiotherapy is an important therapeutic treatment for a variety of cancers, but as the radiotherapy dosage increases it has major side effects, including decreased function of the patient's PB-MNCs [34,68,69].
This radiation effect on cells can be critical since many tissue-committed stem/progenitor cells circulate in the peripheral blood and migrate to tissue-specific niches for healing and repair [70]. The optimal maximal radiotherapy dose with minimal side effects on PB-MNCs may also vary between patients. Monitoring the invasive and migratory function of PB-MNCs may help determine the optimal maximum radiation dosage for any given patient. As an example of how ECIS may be a more sensitive measurement of cell health and potential individual susceptibility to radiation toxicity than a standard viability assay, we show here that 24 hours after exposure of PB-MNCs to either no radiation (control cells) or 2.15 Gy of radiation, the viability measurements for the two PB-MNC populations were similar, but the ECIS invasion measurements were significantly different for the nonirradiated PB-MNCs versus those irradiated with 2.15 Gy. Other studies have shown that radiation of mouse macrophages up to 2 Gy had no effect on cell viability using an Alamar Blue metabolic conversion assay [71]. However, both the Trypan blue and Alamar Blue dye assays have their limits regarding sensitivity and accuracy in detecting changes in cell viability, which could account for why we found no differences in viability between the untreated and irradiated PB-MNCs [72][73][74]. It is also possible that the low-dose X-ray radiation of the PB-MNCs could affect the cell signal transduction mechanisms [75], or the ability of the cells to properly adhere to the matrix coating on the filter [76], both of which could affect the cells' ability to transmigrate through the filter. It should be noted that the effects of radiation on rat PB-MNC migration have been reported using a conventional Transwell assay [77], but again this manual Transwell assay is laborious and time consuming, and can only measure one experimental time point at the end of several hours.
As discussed, studies have shown that a BM-MNC product's invasive functional capacity correlates with its myocardial regenerative capacity, although the cell product's migratory and invasive function was assessed by standard techniques, making the results available only after the cell product was delivered. ECIS measurement of BM-MNC migration and invasion could make that information available as part of the cell product's release criteria, potentially preventing the invasive delivery of cells in instances where they will not have regenerative efficacy. Future studies will confirm the utility of the method described here for ECIS Matrigel-coated Transwell invasion assays for assessing BM-MNC product functional capacity and PB-MNC function in association with disease and therapies. Conclusions The ECIS Transwell filter system developed and tested in this study was found to be rapid, accurate, and sensitive for measuring the functional invasion activity of BM-MNCs and PB-MNCs. Given the marked variability in stem and progenitor cell functionality, especially in older patients considered for autologous bone marrow stem and progenitor regenerative medicine treatments, this assay could be used to prospectively identify poorly functioning harvested and prepared stem cells whose use in patients is unlikely to result in significant benefit. Prospectively identifying functional stem cell preparations would reduce patient risks for procedures unlikely to provide clinical benefit as well as decrease the number of patients needed for clinical trials in regenerative medicine. This ECIS Transwell filter system has the potential to be used as an alternative diagnostic platform for any cell type where cell migration or invasion is being studied in conventional Boyden or Transwell filter assays using custom or commercially available filters.
Finally, this ECIS Transwell device could also supply robust, unambiguous, reproducible, and cost-effective data as a potency assay for cell product release and FDA regulatory strategies. Competing interests CK and CR are employees and owners of Applied BioPhysics Inc., the company that develops and commercializes the ECIS technology. As such, they stand to benefit financially from any successful applications of the technology. KWG, CRG, and MR declare a competing interest in that a patent application based upon the methodology of this study is being prepared for submission to the US Patent Office and will list Kenton Gregory and Michael Rutten as the inventors. The remaining authors declare that they have no competing interests. Authors' contributions MR designed and performed experiments and data acquisition, interpreted data, and prepared the manuscript. BL was involved in the preparation of the bone marrow mononuclear samples and discussion of experiments. CRG was involved in the preparation of the PB-MNC samples, discussion of experimental design, review of the data, and drafting and review of the manuscript. HX was involved in the preparation and analysis of the bone marrow mononuclear cell samples. CR was involved in the design of the study, data analysis, and drafting and review of the manuscript. CK was involved in the design of the experiments, contributed reagents and analytical tools, and participated in data analysis and drafting and review of the manuscript. KWG was involved in the overall conception and design of the study, review and interpretation of the data, and drafting and review of the manuscript. All authors read and approved the final manuscript.
The Competitive Orthopaedic Trauma Fellowship Applicant: A Program Director's Perspective Introduction: In 2018, orthopaedic trauma had the lowest match rate among orthopaedic subspecialties. The purpose of this study was to determine the importance of factors evaluated by orthopaedic trauma fellowship directors when ranking applicants after the interview. Methods: An electronic survey was submitted to fellowship directors and consisted of 16 factors included in a fellowship application. Respondents were asked to rate the importance of these factors for applicants they interviewed on a 1 to 5 Likert scale, with 1 being not at all important and 5 being critical. Results: Thirty-seven fellowship directors responded (63.8%). The highest-rated factor was the applicant interview (mean score 4.82), followed by the quality of letters of recommendation (4.69), personal connections made to the applicant (3.89), and potential to be a leader (3.86). Fellowship directors at academic programs rated interest in an academic career (P = 0.003), research experience (P = 0.023), and exposure to well-known orthopaedic traumatologists (P = 0.003) higher than their counterparts at private institutions. Programs with more than one fellow rated potential to be a leader higher than programs with one fellow (P = 0.02). Discussion: Trainees may use this study when compiling an application to optimize their chances of matching at the program of their choice. Orthopaedic residency programs are intended to provide residents a well-rounded exposure to all aspects of the field. However, increasing pressure for subspecialization and duty hour restrictions have fostered heightened interest among trainees in pursuing subspecialty training within orthopaedic surgery. A recent study has shown that up to 91% of orthopaedic residency graduates intend to complete additional subspecialty training, whereas a growing number of graduating residents are planning on completing multiple postgraduation fellowships.
[1][2][3][4] Orthopaedic trauma is a subspecialty demonstrating increased interest among orthopaedic residents, with nearly 20% of residents applying for a trauma fellowship position each year. 5 From 2015 to 2018, there was an increase from 71 applicants to 104 applicants, whereas the number of positions offered increased from 78 to 86. 5,6 Predictably, with the increase in applicants, the match rate decreased from 86% to 78% (Figure 1). 5,6 For comparison, the available 2017 match rates for other orthopaedic subspecialties are as follows: spine 83%, foot and ankle 87%, shoulder and elbow 88%, pediatrics 89%, hand 90%, and sports medicine 97%. 5,6 Because orthopaedic trauma has been the most competitive subspecialty for matching into a fellowship position over the past decade, the importance of a high-quality fellowship application is further emphasized. 5,6 The fellowship application for orthopaedic trauma has many components that may be used to evaluate applicants, such as the curriculum vitae, letters of recommendation, research experience, and the interview. The relative importance of these components in determining the selection of a fellow remains unclear because no study has evaluated the factors involved in the ranking of applicants. The purpose of this study was to determine the relative importance of these components to orthopaedic trauma fellowship directors when ranking applicants. We hypothesized that orthopaedic trauma fellowship directors would prioritize certain components of the application when ranking their applicants. We believe that this information will be valuable to trainees planning careers in orthopaedic trauma given the increasing competition for securing a fellowship position in this field. Methods A complete list of orthopaedic trauma fellowships and fellowship director e-mails was obtained from the website of the Orthopaedic Trauma Association (OTA).
7 Of the 59 programs listed, 9 are accredited by the Accreditation Council for Graduate Medical Education and the remaining 48 are accredited by the OTA. The senior author, TSA, is a fellowship director, and he was excluded from the study, leaving 58 fellowship directors eligible for participation. An electronic survey, based on a previously validated survey, was submitted to all 58 fellowship directors by e-mail using Google Forms (Mountain View, CA) 8 (Figure 2). The survey was modified and consisted of a list of 16 factors included in the process for applying to orthopaedic trauma fellowship. The respondents were asked to rate the importance of these factors for applicants they interviewed. The question order was randomized, and all items were ranked on a 1 to 5 Likert scale, with 1 being not at all important and 5 being critical. The senior author contacted nonresponders through e-mail to encourage their participation. The scores for each factor were analyzed by calculating the mean Likert score and SD for each item surveyed. A two-sample t-test was used to analyze the data between two groups, and a one-way analysis of variance was used when comparing more than two groups, with a P value less than 0.05 considered significant. Results Of the 58 fellowship directors eligible for participation, 37 responded, a response rate of 63.8%. There were 23 programs with one fellow and 14 programs with greater than one fellow. Thirty programs were affiliated with residency programs, whereas 7 were community-based. Three programs were affiliated with level-2 trauma centers, whereas the rest were level-1 trauma centers (Table 1). Figure 1 Graph demonstrating the number of applicants (blue line), the number of positions offered (orange line), and the match rate (red line) by year. Of the 16 factors listed on the survey, the most important factor was the applicant interview (mean 4.82, SD 0.38).
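The statistical approach described in the Methods (mean Likert score and SD per factor, a two-sample t-test for two program groups, and a one-way ANOVA when comparing more than two groups, with P < 0.05 as the cutoff) can be sketched as follows, assuming SciPy is available. The Likert ratings below are made up for illustration and are not the survey's data:

```python
import numpy as np
from scipy import stats

# Hypothetical "potential to be a leader" ratings for two program groups.
one_fellow = [5, 4, 3, 4, 3, 4]
multi_fellow = [5, 5, 4, 5, 4, 5]

# Descriptive statistics for one factor (sample SD, as in the paper).
mean_one = np.mean(one_fellow)
sd_one = np.std(one_fellow, ddof=1)

# Two groups -> two-sample t-test.
t_stat, p_two_group = stats.ttest_ind(one_fellow, multi_fellow)

# More than two groups (e.g. programs grouped by region) -> one-way ANOVA.
region_a = [4, 5, 4, 4]
region_b = [4, 4, 5, 4]
region_c = [5, 4, 4, 5]
f_stat, p_anova = stats.f_oneway(region_a, region_b, region_c)

significant = p_two_group < 0.05  # the study's significance threshold
```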
This was followed by the quality of letters of recommendation (4.69, 0.52), personal recommendations regarding the applicant (3.89, 0.89), potential to be a leader (3.86, 1.02), and the reputation of the applicant's residency program (3.79, 1.0). The three lowest-rated factors were extracurricular activities/hobbies (2.37, 1.10), United States Medical Licensing Examination (USMLE) scores (2.31, 1.00), and geographical ties to the city of the fellowship program (1.54, 0.87). The complete data set is illustrated in Figure 3. Figure 3 Graph demonstrating the mean (blue circles) and SD range (blue bars) of rated factors. When comparing the mean ratings of the interview and the quality of letters of recommendation, fellowship directors feel that they are more important than personal recommendations and all other factors surveyed. No significant differences were observed when analyzing programs by geographic location or Accreditation Council for Graduate Medical Education versus OTA accreditation. Discussion Our data demonstrate the differential relative importance of certain factors in applications for orthopaedic trauma fellowship, as determined by fellowship directors. Uniformly, they find the interview to be the most important factor, followed by the quality of the letters of recommendation and personal recommendations regarding the applicant. Geographical ties to the program, USMLE scores, volunteer experience, and Orthopaedic In-Training Examination scores were much less important, however. This information is critical for trainees when prioritizing their efforts and drafting their application for securing a competitive fellowship position in orthopaedic trauma. Multiple studies in other subspecialties similarly found that the interview is the most important factor in ranking an applicant.
[8][9][10] Grabowski and Walker, 11 in a survey of various orthopaedic fellowship directors across all subspecialties, also found that the interview was the most important factor in ranking an applicant. The interview was also found to be the most important factor when ranking medical students for orthopaedic surgery residency positions and was found to have the highest correlation with the final rank of the applicant. 12 Although the applicant may have the opportunity to interact with fellowship directors during informational sessions, courses, or site visits, the interview is the only formal time through the application process that the applicant has direct interaction with their potential fellowship program and can demonstrate their communication skills, maturity level, self-confidence, the ability to listen and articulate thoughts, and personality fit within the program. 10 Standardized tests have been shown to be very important in the selection of orthopaedic residents. 13 Given the low ratings given to Orthopaedic In-Training Examination and USMLE scores, it is our opinion that orthopaedic trauma fellowship directors do not feel that a standardized, multiple-choice examination is predictive of success within the field of orthopaedic trauma. This holds true among other orthopaedic subspecialties as well. 8,10,13 Orthopaedic trauma fellowship directors rated personal recommendations higher than other subspecialties. 8,10,13 In a similar survey, orthopaedic sports medicine fellowship directors rated personal recommendations as the fifth most important factor, whereas we found it to be the third most important factor surveyed. 8 Personal recommendations may hold greater importance in orthopaedic traumatology because it is a much smaller network than sports medicine. 
It has been suggested that when applicants have positive personal connections with a program, either directly or through their mentors, this increases their likelihood of matching at that particular fellowship program. 8 Another interesting finding was that publications and research experience were ranked only seventh and ninth in importance by fellowship directors, respectively. It should be noted, however, that academic programs did rank research experience markedly higher than their nonacademic counterparts. A previous study found that subspecialty research was one of the five most important factors in obtaining an interview at an orthopaedic fellowship program. 11 In that study, participation in research, irrespective of authorship, was found to be important by 90% of respondents, and nearly 5% of fellowship directors felt that the applicant needed to be first author. 11 Our study suggests that orthopaedic trauma fellowship directors feel that there are more important factors than research, although most fellowship programs have a research requirement of at least one paper per year. 14 We found that fellowship directors at programs affiliated with academic institutions rated certain factors higher than their counterparts at programs not affiliated with an academic institution, a finding that has not been demonstrated in previous studies. Fellowship directors at academic programs placed higher value on an interest in pursuing an academic career, research experience, and exposure to well-known traumatologists during residency. The major strength of our study is the response rate: 63.8% is considered excellent for an electronic survey. 15 The limitations of our study are those common to many survey studies. The surveys were not submitted anonymously, and this may have influenced participation in the study and the responses. In using the averages of Likert scores, we ordered the importance of items based on these averages.
It is possible that our results would have been different if the respondents had been asked to rank the topics directly. In addition, the number of items surveyed was based on a previously validated study but was not exhaustive. 8 However, we did allow fellowship directors to enter free-text responses for any other factors we did not include on the survey; no consistent themes emerged, and only one program director inserted a comment. This is the first study to evaluate the factors that orthopaedic trauma fellowship directors consider when ranking applicants. With the increasing number of applicants and competition for orthopaedic trauma fellowship positions, we believe that our study provides those interested in pursuing a career in orthopaedic trauma with useful information. Based on the results of this study, the ideal candidate for an academic trauma fellowship will have research experience, exposure to orthopaedic traumatologists during residency, and an interest in an academic career. They must also have excellent letters of recommendation and demonstrate the potential to be a leader during the interview. Candidates for nonacademic orthopaedic trauma fellowships do not need extensive research experience and may come from a program that does not have prominent faculty; however, they must also interview well and have excellent letters of recommendation. Based on our results, residents who wish to subspecialize in orthopaedic trauma should build relationships with orthopaedic trauma surgeons during residency, as these surgeons may make personal recommendations on their behalf when they apply for fellowship positions. Overall, we believe that trainees may use this study to assist with compiling a strong application and to optimize their chances of matching at the program of their choice in the increasingly competitive field of orthopaedic trauma.
Age-related differences in social influence on risk perception depend on the direction of influence

Adolescents are particularly susceptible to social influence. Here, we investigated the effect of social influence on risk perception in 590 participants aged eight to fifty-nine years tested in the United Kingdom. Participants rated the riskiness of everyday situations, were then informed about the rating of these situations from a (fictitious) social-influence group consisting of teenagers or adults, and then re-evaluated the situation. Our first aim was to attempt to replicate our previous finding that young adolescents are influenced more by teenagers than by adults. Second, we investigated the social-influence effect when the social-influence group's rating was more, or less, risky than the participants' own risk rating. Younger participants were more strongly influenced by teenagers than by adults, but only when teenagers rated a situation as more risky than did participants. This suggests that stereotypical characteristics of the social-influence group (risk-prone teenagers) interact with social influence on risk perception.

Introduction

Other people's beliefs and actions can have a significant impact on our own behaviour. A large body of work has shown that people change their behaviour in order to fit in with others (Berns, Capra, Moore, & Noussair, 2010; Zaki, Schirmer, & Mitchell, 2011). It has been suggested that people sometimes correct their behaviour and use others' actions as a guideline because they assume that other people's behaviour is more accurate or correct (informational conformity; Deutsch & Gerard, 1955). People also adjust their behaviour due to social norms or the pursuit of acceptance (normative conformity; Deutsch & Gerard, 1955).
The degree of conformity is age-dependent, with children and young adolescents showing a higher susceptibility to social influence than adults (Costanzo & Shaw, 1966; Hoving, Hamm, & Galvin, 1969; Knoll, Magis-Weinberg, Speekenbrink, & Blakemore, 2015). Adolescence is a period of life during which we become less family-centred and spend more time with friends (Brown, 1990). The amount of time spent with same-sex peers increases between childhood and adolescence, until mid-adolescence (around age 14), when it appears to peak (Lam, McHale, & Crouter, 2014). What their peers think about them starts to have more influence on adolescents' (13–17 years) evaluation of their social and personal worth as compared with children aged 10–12 years (O'Brien & Bierman, 1988). Peers influence decision-making in adolescence: for example, young and mid-adolescents are more likely to engage in risky behaviour when with their peers than when alone (Dishion & Tipsord, 2011; Gardner & Steinberg, 2005). It has been proposed that this heightened peer influence is partly due to adolescents being hypersensitive to peer rejection (Peake, Dishion, Stormshak, Moore, & Pfeifer, 2013; Sebastian, Viding, Williams, & Blakemore, 2010; Somerville, 2013) and possibly also to social approval (Foulkes & Blakemore, 2016). This is proposed to lead adolescents to make decisions in the pursuit of social acceptance and avoidance of social exclusion (Blakemore & Mills, 2013; Knoll et al., 2015). In a previous study, we investigated age-related changes in social influence on risk perception from late childhood through adulthood in a large group of participants aged between 8 and 59 years (Knoll et al., 2015). Participants were asked to rate the riskiness of everyday situations and were then presented with (fictitious) risk ratings of the same situations from other people, either teenagers or adults. Participants were then asked to rate the riskiness of the situations again.
The results showed that all age groups were influenced by other people's opinions: participants of all ages changed their initial risk ratings in the direction of other people's ratings, but this social-influence effect was highest in late childhood and decreased with age. The results also indicated that, while children and adults were more influenced by the opinions of adults, young adolescents (aged 12–14 years) changed their ratings more towards the ratings of teenagers than towards the ratings of adults. In the current study, we employed the paradigm used in Knoll et al. (2015) and asked a new cohort of 590 participants (aged 8–59 years and divided into five age groups, as in the previous study) to rate the riskiness of everyday situations before and after being shown risk ratings from adults or teenagers. Our first aim was to investigate whether our previous findings could be replicated in a new sample. Specifically, we predicted that the social-influence effect would decrease with age and that young adolescents would be more influenced by teenagers than by adults (peer influence hypothesis). In our previous study, we suggested that the social-influence effect was due to young adolescents wanting to be accepted by their peer group, rather than trusting the ratings of teenagers more than those of adults (Knoll et al., 2015). Other studies have indicated that adolescents' real risk-taking behaviour is affected by their perception of their peers' risk-taking behaviour. For example, a study by D'Amico and McCarthy (2006) demonstrated that perceived peer cannabis use predicted both onset and extent of cannabis use in young/mid-adolescents (aged 10–15 years). Furthermore, Helms et al. (2014) found that mid-adolescents (aged 16 years) often misperceive the degree of risk-taking behaviour of their peers, and this misperception is suggested to predict adolescents' own risk behaviour.
It could be that adolescents overestimate the risk-taking behaviour of their peers due to the idea that adolescents are generally more risk-taking than other age groups, which is a common stereotype of 'typical' adolescents. The second aim of the current study was to investigate whether the social-influence effect would depend on the direction of other people's risk ratings. Specifically, we analysed the extent to which participants' risk ratings are affected by whether other people (either teenagers or adults) rated the situations as less or more risky than did the participants. To this end, we separately analysed situations that were rated as less risky, or more risky, by the participants than by the social-influence groups. We were interested in three hypotheses: (i) the degree of social influence would decrease with age in both directions, i.e. when the provided rating of other people was higher than the participants' rating and when it was lower (directional social influence hypothesis); (ii) the directional social-influence effect would differ depending on the social-influence group (teenagers or adults; directional peer influence hypothesis); and (iii) this directional peer influence would differ between participant age groups (age-dependent peer influence hypothesis). Specifically, we predicted that, because of stereotypes about teenage risk-taking, participants would be more likely to increase their risk ratings of a situation if teenagers rated the situation as more risky than they did. This might particularly be the case for young adolescents, who were previously found to be more influenced by teenagers than by adults (Knoll et al., 2015).

Participants

Participants were visitors to the Science Museum in London, UK, on nine days in May and June 2016. Participants were recruited through information screens around the museum publicising the study, and by researchers inviting visitors to take part.
Data from 590 participants (mean age = 22.4 years, SD = 11.6, age range = 8–59 years; 316 females, 274 males) were included in the analyses. Data from 24 additional participants were excluded because their responses were incomplete, they were interrupted by other visitors during the task, or they volunteered information about being diagnosed with a developmental condition, such as autism or dyslexia. Data from 15 participants were excluded because their age was outside our chosen age range (the same age range as in the previous study by Knoll et al., 2015). Participants were divided into five age groups, as in the prior study by Knoll et al. (2015): 110 children (48 females, mean age: 9.6 years, age range: 8–11), 63 young adolescents (31 females, mean age: 12.8 years, age range: 12–14), 61 mid-adolescents (40 females, mean age: 16.8 years, age range: 15–18), 193 young adults (111 females, mean age: 21.6 years, age range: 19–25), and 163 adults (86 females, mean age: 37.9 years, age range: 26–59). Informed consent was obtained from participants aged 16 years and older and from parents of participants under 16 years old. Participants were not compensated for taking part in the study; the study was advertised as an opportunity to volunteer in a real science experiment. The study was carried out in accordance with UCL Research Ethics Guidelines and approved by the University College London ethics committee.

Risk perception task

We used the risk perception task (Knoll et al., 2015) in which participants are presented with 12 risky scenarios, such as 'Riding a bike without a helmet' (see Supplemental Material available online for the full list of scenarios). The scenarios were designed to be mildly to moderately risky and, critically, to plausibly result in wide variation in risk perception. Thus, each scenario was selected so that it would not be surprising to participants if it were rated as very risky by some people and not risky at all by others.
For each stimulus, participants simultaneously read the scenario on the screen and listened to it via a set of headphones. The auditory stimuli were spoken by an English female researcher and recorded in a soundproof chamber. After recording, stimuli were digitized (sampling rate = 44.1 kHz; bit depth = 16; monaural) and normalized. Statements were accompanied by an image depicting the situation without providing too much contextual information (e.g. a picture of a bicycle). Participants first read and listened to instructions about the risk perception task. On each subsequent trial, the participant was asked to imagine that someone was engaged in the activity presented. The participant was then asked to rate how risky they thought the scenario was, by using a computer mouse to move a slider to the left side (low risk) or to the right side (high risk) of a colourful visual analogue scale (see Fig. 1). The slider initially appeared at a random position on the scale on each trial to avoid any systematic anchoring bias. There was no time restriction for the first rating: participants had unlimited time to consider their answer and could freely move the slider while they decided, but once they had clicked the mouse the trial moved on. After making the first rating, the participant was shown a risk rating of the same situation by either adults or teenagers (the social-influence group: adult social-influence condition, teenager social-influence condition) for 2 s. Participants were told that these ratings were the average answers given by previous study participants. In fact, they were randomly generated and could be lower or higher than the initial rating. This minor deception was approved by the ethics committee. The factor direction was retrospectively determined. In 42% of all trials, the participant's first rating was lower than the provided rating of the social-influence group, and in 58% of all trials it was higher.
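The retrospective "direction" factor described above (whether the randomly generated group rating fell above or below the participant's own first rating) can be sketched as follows. The data here are simulated; the actual task was implemented in Cogent 2000/MATLAB, not Python.

```python
import random

random.seed(7)  # reproducible simulation

def classify_direction(first_rating, group_rating):
    """Label a trial by where the (randomly generated) group rating fell
    relative to the participant's first rating."""
    if group_rating > first_rating:
        return "higher"
    if group_rating < first_rating:
        return "lower"
    return "equal"  # trials with a zero difference were excluded from the analysis

# Simulate first ratings and fictitious group ratings on a 0-100 slider scale.
trials = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(2000)]
labels = [classify_direction(first, group) for first, group in trials]

share_higher = labels.count("higher") / len(labels)
print(f"share of trials where the group rating was higher: {share_higher:.2f}")
```

With uniformly random ratings on both sides, the two directions come out roughly balanced; the 42%/58% split reported above reflects participants' actual first ratings rather than a simulation like this one.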
After being presented with the provided rating from the social-influence group, the participant was asked to rate the same situation again (see Fig. 1). There was no time restriction for the second rating. The subsequent trial started one second after the second rating was provided. A total of 79 different situations were generated for the experiment; 12 (six per social-influence condition) were randomly selected for each participant. The paradigm employed in the current study was identical to that used in the previous study, with one difference. In the previous version of the task (Knoll et al., 2015), there was a third condition, in which participants saw their own first rating again (rather than a fictitious rating from teenagers or adults) and were asked to rate again. This condition was included previously to check that there was no systematic difference between the age groups in terms of remembering their first rating and to find out the degree to which the participants in different age groups shifted their answers under no social influence. As there were no significant differences between groups in this condition in the previous study, and in order to shorten the task duration, we did not include this control condition here. The task was programmed using Cogent 2000 (University College London Laboratory of Neurobiology; http://www.vislab.ucl.ac.uk/cogent_2000.php) and run in MATLAB (Version R2012b; MathWorks Inc., Natick MA). Participants were asked to perform the risk perception task after performing a task that assessed face processing, which is not included in this manuscript.

Fig. 1. Participants were asked to imagine that someone was engaged in an activity (in this example, crossing the street on a red light). They then rated the activity's risk by using a computer mouse to move a slider on a visual analogue scale. After making this rating, participants were shown a risk rating of the same situation that was ostensibly provided by a group of either adults or teenagers (the social-influence group). The ratings from the social-influence group were actually randomly generated. Finally, participants were asked to rate the same situation again. Adapted and reprinted from Knoll et al. (2015).

The entire set of tasks and instructions took around 12 min. The programme was designed so that all trials required a response, and therefore there were no missing data. The data were collected in the Live Science area at the back of one of the galleries in the Science Museum, London, UK. This is a secluded and quiet area in which experiments are routinely carried out by research scientists. Three or four researchers were present to oversee testing at all times and there were four laptops available for testing.

Statistical analysis

We employed a 2 × 2 × 5 factorial design with the within-subjects factors social-influence group (teenagers vs. adults) and direction of influence (lower vs. higher ratings) and the between-subjects factor age group (children, young adolescents, mid-adolescents, young adults, adults). All statistical models were estimated in R (R Core Team, 2004), using the lme4 (Bates, Maechler, & Bolker, 2013) and lmerTest (Kuznetsova, Brockhoff, & Christensen, 2013) packages. We ran two linear mixed-effects models to analyse our data, as explained below.
Analysis 1: original study replication

In order to replicate the results of the previous study (Knoll et al., 2015), we first used the same linear mixed-effects model to investigate how much participants changed their risk rating in the direction of others' ratings and whether this social-influence effect is dependent on the source of information (teenagers or adults) and the age of participants:

second rating = first rating + Δrating + Δrating × age group + Δrating × social-influence group + Δrating × social-influence group × age group.

As in the original analysis, the current model included the distance between the provided rating and the first rating (Δrating). Dummy coding was used for age groups. We ran the model twice, using the young adults as the baseline group (as in our original model), and with young adolescents as the baseline, to test our hypothesis that young adolescents are more influenced by the social-influence group teenagers than adults. As these two linear mixed-effects models are sufficient to investigate whether we could replicate the key result of our previous study, it was not necessary to run any additional comparisons. As we were only interested in replicating our original results, we ran the model with these two groups as the baseline groups, and not with any other group as baseline.

Analysis 2: direction factor

We applied a different linear mixed-effects model to investigate the degree to which participants changed their risk ratings in the direction of other people's ratings when the social-influence group rated situations as more or less risky, and the extent to which this change depended on whether the social-influence group consisted of adults or teenagers.
This model incorporated: (a) fixed effects that reflected average effects within and differences between the experimental conditions and (b) random effects that took into account individual variability in the effect of participants' first rating on their second rating and variability between scenarios. The linear mixed-effects model was used to assess the dependence of a participant's second rating on two main predictors: the first rating and the direction of the difference between the rating provided by the social-influence group and the participant's first rating. The difference could be positive (the social-influence group rated the situation as higher risk than the participant) or negative (the social-influence group rated the situation as lower risk than the participant). One trial from one participant was excluded from the analysis, as the difference was zero. The factor direction was retrospectively determined. Therefore, we compared the likelihood of the full model with the fixed effects social-influence group by direction by age group against reduced models with only the factor age group and a model with the factors age group and social-influence group. Model comparisons indicated that the factor social-influence group significantly improved the model (χ2(5) = 29.67, p < 0.001), but the best model was the full model with all three factors included (χ2(10) = 1032.2, p < 0.001). To investigate the effect of gender, we applied a model including gender (male, female) as an additional factor. Comparing the models, we found that adding this factor improved the model fit (χ2(20) = 35.578, p = 0.017). However, we decided against a model including gender as a factor because adding gender to the model would double the number of planned comparisons and we had not planned a priori, and did not have sufficient power, to investigate gender differences.
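The models in this section all predict the second rating from the first rating plus influence terms. As a toy illustration of the core quantity, the sketch below fits a fixed-effects-only simplification, second = b1·first + b2·Δrating, by ordinary least squares on fabricated data. Here b2 captures the social-influence effect (0 = no shift, 1 = full adoption of the group's rating). This is a simplification for intuition only, not a reimplementation of the paper's lme4 mixed-effects analysis.

```python
def fit_two_predictor_ols(first, delta, second):
    """Least-squares fit of: second = b1*first + b2*delta (no intercept),
    where delta = provided group rating - first rating. Solves the 2x2
    normal equations directly."""
    s11 = sum(f * f for f in first)
    s12 = sum(f * d for f, d in zip(first, delta))
    s22 = sum(d * d for d in delta)
    y1 = sum(f * s for f, s in zip(first, second))
    y2 = sum(d * s for d, s in zip(delta, second))
    det = s11 * s22 - s12 * s12
    b1 = (y1 * s22 - y2 * s12) / det
    b2 = (s11 * y2 - s12 * y1) / det
    return b1, b2

# Fabricated data: participants move 40% of the way toward the group rating.
first = [20.0, 50.0, 80.0, 35.0, 60.0]
provided = [60.0, 30.0, 90.0, 10.0, 95.0]
delta = [p - f for p, f in zip(provided, first)]
second = [f + 0.4 * d for f, d in zip(first, delta)]

b1, b2 = fit_two_predictor_ols(first, delta, second)
print(f"b1={b1:.2f}, b2={b2:.2f}")  # expect b1 = 1.0, b2 = 0.4
```

The mixed-effects version additionally lets b2 vary by age group and social-influence group (the interaction terms) and adds random effects for participants and scenarios.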
Using the full model with the three-way interaction age group by social-influence group by direction, we were particularly interested in: (i) whether the previously reported social-influence effect, which was found to decrease with age, would show the same decreasing pattern in trials in which the social-influence group's ratings were higher or lower than the participant's first rating (directional social influence hypothesis); (ii) whether the directional social-influence effect would differ depending on the social-influence group (directional peer influence hypothesis); and (iii) whether these differences would themselves differ between age groups (age-dependent peer influence hypothesis). To investigate these questions, the model included interactions between social-influence group, direction of provided rating and age group:

second rating = first rating + age group × social-influence group × direction.

Fixed effects were included for all the main effects and interaction factors in the model. The model did not include an intercept, because an intercept not identical to zero would mean that participants' second rating always increased (or decreased). The effects of the fixed effects on the dependent variable were investigated using an omnibus Type III Wald χ2 test. Planned comparisons were performed to inspect changes in social influence between age groups and social-influence groups using the lsmeans package (Lenth, 2016). In the first analysis, we investigated age-group differences in the degree of social influence irrespective of the factor social-influence group. We analysed situations that were rated as less or more risky by the participants compared to the social-influence groups separately. The directional social-influence hypothesis was tested by comparing the degree of influence of the social-influence groups between the five age groups. Each age group's difference was compared with every other age group's difference for both directions, resulting in 20 tests.
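Families of 10–20 planned comparisons like these call for multiple-comparison correction, and the paper reports Bonferroni-corrected results. The correction itself is a simple multiplication, sketched below with made-up p-values.

```python
def bonferroni(p_values):
    """Bonferroni-adjust a family of p-values: multiply each raw p by the
    number of comparisons in the family, capping the result at 1. A
    comparison 'survives' correction when its adjusted p stays below alpha."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Hypothetical raw p-values from a family of 20 planned comparisons.
raw = [0.0004, 0.002, 0.004, 0.03] + [0.2] * 16
adjusted = bonferroni(raw)
survivors = sum(p < 0.05 for p in adjusted)
print(f"{survivors} of {len(raw)} comparisons survive at alpha = 0.05")
```

Equivalently, one can compare raw p-values against alpha/m (here 0.05/20 = 0.0025); note how a raw p of 0.03, nominally significant, does not survive a family of 20 tests.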
To investigate our directional peer-influence hypothesis, we looked at differences in the degree of influence of the social-influence groups (adults vs teenagers) within each age group for both higher and lower ratings. The difference for all five age groups was compared for both directions, resulting in 10 tests. The age-dependent peer-influence hypothesis was tested by comparing these differences in the degree of influence of the social-influence groups (adults vs teenagers) between the five age groups. Each age group's difference was compared with every other age group's difference for both directions, resulting in 20 tests. All reported results were Bonferroni-corrected for the multiple planned comparisons within each hypothesis.

Analysis 1: original study replication

We found a significant interaction between age group and Δrating (the difference between the provided rating and the first rating), indicating that participants changed their risk ratings in the direction of the provided ratings and that this effect differed between age groups, replicating previous findings (Knoll et al., 2015). There were age differences between children and young adolescents (t(895) = 2.63, p = 0.009) and between young adolescents and mid-adolescents (t(785) = −3.70, p < 0.001), and no age differences between mid-adolescents and young adults (t(737) = 1.73, p = 0.08) or between young adults and adults (t(717) = −1.30, p = 0.2). This partially replicates the previous findings, which found significant decreases in social influence between each successive pair of age groups (for a table of model summaries see Supplementary Tables 1 and 2). Young adults were more influenced by the opinion of adults than teenagers (t(6494) = 3.59, p < 0.001), replicating the previous results. In the previous study, 12–14-year-olds were more influenced by teenagers than by adults, and this was not the case for any other age group.
In the current study, although the social-influence group effect in 12–14-year-olds appears to be in the same direction as the previous findings (Fig. 2 shows that, overall, young adolescents were more influenced by teenagers than by adults), the direct comparison between the social-influence groups teenagers and adults was not significant for this age group (t(6499) = −0.65, p = 0.5). However, the interaction between age group (two levels: young adults versus young adolescents) and social-influence group (two levels: teenagers vs adults) was significant (t(6461) = −2.32, p = 0.020), as was the interaction between age group (young adults vs children) and social-influence group (two levels: teenagers vs adults; t(6537) = −2.55, p = 0.011). This shows that, whereas adults were more influenced by adults than by teenagers, the effect was reversed in the young adolescent age group, replicating our previous findings.

Fig. 2. The graph presents the difference between the two slopes for the average change in risk rating after seeing the ratings of the social-influence groups, predicted by using the estimates of the linear mixed-effects model, for five age groups. A positive difference indicates a greater degree of influence by the social-influence group adults than by teenagers. A negative difference indicates a greater degree of influence by the social-influence group teenagers than by adults. Asterisks indicate a significant main effect of peer influence (adults vs teenagers) within age group and an interaction of peer-influence effects between age groups (* p < 0.05).

Directional social-influence hypothesis

We used a linear mixed-effects model that incorporated the direction of the social-influence group's ratings as a factor to investigate how much participants change their rating if the social-influence group rated a situation as lower risk or higher risk than did the participants.
We observed a significant main effect of age (χ2(5) = 52.89, p < 0.001) and planned comparisons revealed that social influence decreased from childhood to adulthood both when the provided rating was higher than the participant's rating and when it was lower (see Fig. 3). However, not all comparisons survived alpha correction for the 20 multiple comparisons conducted to test this hypothesis (see Table 1). First, when the social-influence group rated situations as less risky: children were more influenced than young adolescents (z = −3.52, p < 0.001), mid-adolescents (z = −5.83, p < 0.001), young adults (z = −10.38, p < 0.001) and adults (z = −10.80, p < 0.001); and young adolescents were more influenced than young adults (z = −4.83, p < 0.001) and adults (z = −5.30, p < 0.001). Second, when the social-influence group rated situations as more risky: children were more influenced than young adolescents (z = 4.60, p < 0.001), mid-adolescents (z = 7.07, p < 0.001), young adults (z = 10.86, p < 0.001) and adults (z = 11.30, p < 0.001); and young adolescents were more influenced than young adults (z = 3.87, p < 0.001) and adults (z = 4.52, p < 0.001). While we did not find a continuous decrease throughout all five age groups, we found that children were more influenced than young adolescents, and both age groups were significantly more influenced by other people's risk ratings compared to young adults and adults, in both directions.

Directional peer-influence hypothesis

The two-way interaction between age group and social-influence group was not significant (χ2(4) = 6.25, p = 0.18). However, planned comparisons revealed that young adults and adults were more influenced by the social-influence group adults compared to teenagers when the provided risk ratings were lower than their own ratings.
In contrast, children and young adolescents were more influenced by the social-influence group teenagers than adults when the provided risk ratings were higher than their own ratings. Only the effect in children (z = −3.79, p = 0.001), young adults (z = −4.01, p = 0.001) and adults (z = −2.87, p = 0.04) survived Bonferroni correction for the 10 comparisons conducted to test this hypothesis (for planned comparisons see Table 2).

Fig. 3. The graph shows the average differences in rating (rating 2 minus rating 1) with standard error bars. Results are shown separately for the adult social-influence condition and the teenager social-influence condition, for each age group. Bars on the left ('Lower group ratings') were from the conditions in which the social-influence group rated risk lower than participants' initial rating. Bars on the right ('Higher group ratings') were from the conditions in which the social-influence group rated risk higher than participants' initial rating. Asterisks indicate a significant main effect of peer influence (adults vs teenagers) within age group and interaction on peer-influence effects between age groups (** p < 0.01, * p < 0.05; black: Bonferroni-corrected result; grey: uncorrected result).

Age-dependent peer-influence hypothesis

The three-way interaction between age group, direction and social-influence group was significant (χ²(4) = 18.88, p < 0.001), indicating that the different social-influence groups, teenagers or adults, and the direction of the provided rating had a significant impact on the second rating, and that this effect differed between age groups. Planned comparisons investigating the interactions between social-influence groups and age groups revealed that younger age groups were significantly more influenced by teenagers than older age groups.
Significant interactions were found between children and young adults, between children and adults, between young adolescents and young adults, as well as between young adolescents and adults, when the social-influence group rated situations as more risky than did participants. That is, younger age groups were significantly more influenced by teenagers than by adults in more risky conditions. Only the planned comparisons for children vs young adults (z = 3.51, p = 0.008) and children vs adults (z = 3.48, p = 0.008) survived Bonferroni correction for the 20 comparisons conducted to test this hypothesis (for planned comparisons see Table 3).

Discussion

The current study investigated the social-influence effect on risk perception in adolescence. We were particularly interested in examining age-related differences in social-influence effects as a function of social-influence group (teenagers or adults) as well as the direction of influence (more or less risky). Our first aim was to replicate the results of our previous study (Knoll et al., 2015) in a new sample of 590 participants, with a similar age and gender distribution. In the previous study, 55.6% of participants were female and the mean age was 23.4 years, whereas in the current study, 53.6% of participants were female and the mean age was 22.4 years. The current study partially replicated the previous findings in three ways. First, as in the previous study, the current study found that risk perception was influenced by the risk ratings of other people. Second, the social-influence effect decreased from late childhood to late adolescence. It has been suggested that participants adjust behaviour due to informational conformity, that is, the pursuit of accuracy (Deutsch & Gerard, 1955). Participants might have taken into account the risk perception of other people and adapted their ratings accordingly.
As the degree of conformity has been found to depend on age (Engelmann, Moore, Monica Capra, & Berns, 2012; Hoving et al., 1969), it is possible that older participants are more confident in their risk ratings because they have experienced these kinds of situations more often than younger age groups. In contrast to the previous study, we did not observe a further significant decrease of social influence into adulthood. Thus, while our current results provide further evidence that social influence decreases from childhood to mid-adolescence, our previous results showing a decrease into adulthood should be interpreted with caution. Third, there was an interaction between age group and social-influence group such that young adults were more influenced by adults than by teenagers, whereas this peer effect was reversed in young adolescents. Note that, in contrast to the previous study, this reversal was also present in children. In the previous study, we suggested that children might be more influenced by the social-influence group adults than by teenagers because adults represent authority figures, who have experienced the situations more often than teenagers. It could be that the group of children tested in this second study happened to be more frequently surrounded by teenagers; for example, they may have had more older siblings than those in the first study. Of course, this is only speculation and further studies would need to be conducted to investigate this idea. Previous research has shown that, when adolescents are with peers, they are more likely to take risks, such as engaging in reckless behaviour and experimenting with drugs, alcohol and cigarettes, compared to when they are alone (Reniers et al., 2017). Lab experiments have shown that adolescents are more likely to take driving risks when with friends, compared with when alone; in contrast, adults' (24 years and over) driving risks are unaffected by peers (Gardner & Steinberg, 2005).
This is mirrored by data from car accidents, which indicate that the risk of accidents for young drivers is heightened when they have a passenger in the car (Chen, Baker, Braver, & Li, 2000). Studies in Hong Kong, for example, have shown that having friends who smoke or drink alcohol is the biggest predictor of adolescent smoking and drinking (Loke & Mak, 2013). A longitudinal study involving adolescents (aged 10–15 years at the start of the study) in California showed that perceived peer cannabis use predicts the onset and extent of an adolescent's own cannabis use over the next three years; a similar relationship was found for alcohol use (D'Amico & McCarthy, 2006). Many previous studies do not include a large age range and therefore cannot draw conclusions about age differences in social influence. Our findings extend this previous research by including a large age range and showing that young adolescents' risk perception is particularly influenced by the risk ratings of teenagers (Fig. 2). The second aim of the current study was to understand whether social influence is affected by the direction of other people's risk ratings and whether this differs as a function of the age of participants. This new analysis showed that risk perception was influenced by both lower and higher risk ratings of the social-influence group, and that this influence decreased from childhood to adulthood, in line with our directional social-influence hypothesis. In addition, we found that the directional peer-influence effect was dependent on whether the social-influence group consisted of teenagers or adults (directional peer-influence hypothesis). Specifically, children were more influenced by teenagers when teenagers rated the situation as more risky than the participants did.
The same trend was found in young adolescents, who were more influenced by teenagers when teenagers rated the situation as more risky than the participants' ratings (although note that the planned comparisons in young adolescents did not survive Bonferroni correction). In contrast, young adults and adults were more influenced by adults than by teenagers when the social-influence group adults rated the situation as less risky than the participants (age-dependent peer-influence hypothesis).

Table 3. Planned contrasts showing differences in the peer-influence effect between age groups for lower and higher ratings, respectively.

One explanation for this opposite pattern is that it is due to shared beliefs about members of a particular group, which in this study were the social-influence groups teenagers and adults. Shared beliefs can be stereotypes, which are socially shared characteristics and expectations used to predict the behaviour of members of a group (Stangor & Lange, 1994; Yzerbyt, 2016). This study was framed to participants as a study about risk-taking, and it is possible that stereotypes about teenage behaviour, such as the notion that teenagers are prone to risk-taking (Buchanan & Holmbeck, 1998), drove the directional peer-influence effect. In this case, children and young adolescents were more influenced by teenagers than by adults when stereotyped 'risk-prone teenagers' rated situations as more risky than the participants themselves rated the situation. In other words, we are suggesting that children and young adolescents pay more attention when teenagers consider a situation as high risk as, due to stereotypes about teenage risk-taking, it might be surprising to them that teenagers rate a situation as very risky.
Specifically, if an adolescent learns that his or her peers consider a situation very risky, he or she might pay heightened attention to that information because it might be especially surprising that other teenagers, a group they stereotypically expect to take risks, rate a situation as very risky. This surprising information might lead the adolescent to attend to it and to adjust their own ratings accordingly. This interpretation is supported by a behavioural study indicating that social influence can be driven by the perception of social norms (Helms et al., 2014). Helms and colleagues found that mid-adolescents (aged 16 years) tend to overestimate the risk-taking behaviour of their peers, especially high-status peers. As Helms and colleagues did not include young adolescents in their study, we do not know whether the same effect would be found in young adolescents. Similarly, older age groups in the current study were more likely to follow adult advice when the social-influence group adults rated a situation as less risky. This might be because the social-influence group adults were considered more experienced and trustworthy, or because older age groups expected teenagers to underestimate risk.

Limitations

There were a number of limitations to our study. First, due to the restrictions imposed by the Science Museum, where testing was carried out, we were unable to collect information about participant characteristics such as socio-economic status and ethnicity. Based on information provided by the Science Museum, 3.22 million people visited the Museum in 2016/2017 and, of these, 58% visited as part of a family and 42% as independent adults. Approximately 59% of independent adult visitors were tourists, so our sample is not likely to be representative of the British population in terms of culture or ethnicity. As cultural differences could have an impact on social influence, this should be explored in future studies.
Second, we do not have information about the visitors who volunteered for our study compared with those who did not volunteer. Visitors were informed about the study via information screens throughout the museum and there was no way of knowing how many visitors saw the information screens, or were invited to take part but declined. It is possible that visitors who volunteered were a self-selecting group, and might differ from the average Science Museum visitor, perhaps in terms of motivation and language. Third, due to time constraints, we did not specifically ask about developmental conditions, such as dyslexia and autism spectrum conditions; instead, we excluded data from participants who volunteered this information. It is possible that individuals with developmental conditions were included in our sample, and this might have affected the data. Fourth, although the testing area was relatively quiet, calm and secluded, other visitors occasionally passed by and it was possible to hear noises from the nearby exhibits. We asked parents and other visitors to keep their distance from participants while they were taking part, and participants were also asked to wear noise-cancelling headphones to avoid distractions.

Implications

Understanding the effect of peer influence on risk-taking might help to improve public health interventions and specifically target younger age groups. Public health advertising aimed at young people's predilection for risky behaviours tends to focus on the health risks of these behaviours, but focusing on social norms and peer expectations might have more impact on adolescent behaviour. This is supported by a recent public health study that looked at the influence of social norms on bullying behaviour and conflict in schools (Paluck, Shepherd, & Aronow, 2016). Fifty-six middle schools (with children aged 11–16 years) in the state of New Jersey, USA, were included, with half of the schools assigned at random to an anti-bullying programme.
In this programme, a number of students in each year participated in an anti-conflict programme, which involved a researcher working with the students to understand the negative effects of bullying. The students on the programme were encouraged to lead grassroots anti-bullying campaigns in their schools and become the public face of opposition to bullying. Compared with control schools, in which no special anti-bullying programmes had been introduced, reports of student conflict at the schools that had received the student-led anti-bullying programme were reduced by 30%. Furthermore, when the anti-bullying campaign was led by more popular students, it had a greater positive effect on behaviour. The study reveals the power of peer influence in changing social norms of acceptable behaviour.

Conclusion

The results of the current study suggest that stereotypical characteristics of the social-influence group interact with social influence on risk perception. The findings indicate that children and young adolescents place different values on the opinions of the social-influence groups than do older people, and attach more importance to heightened risk perception of teenagers. We propose that socially shared expectations of specific group members, in this study the social-influence group teenagers, affect the degree of social influence on risk perception. Future research should explore other key factors that drive and affect social influence and how these factors are valued across development.
Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders

While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at the word level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.

Introduction

The dialog manager is one of the key components of dialog systems, which is responsible for modeling the decision-making process. Specifically, it typically takes a new utterance and the dialog context as input, and generates discourse-level decisions (Bohus and Rudnicky, 2003; Williams and Young, 2007). Advanced dialog managers usually have a list of potential actions that enable them to have diverse behavior during a conversation, e.g. different strategies to recover from non-understanding. However, the conventional approach of designing a dialog manager (Williams and Young, 2007) does not scale well to open-domain conversation models because of the vast quantity of possible decisions. Thus, there has been a growing interest in applying encoder-decoder models (Sutskever et al., 2014) for modeling open-domain conversation (Vinyals and Le, 2015; Serban et al., 2016a).
The basic approach treats a conversation as a transduction task, in which the dialog history is the source sequence and the next response is the target sequence. The model is then trained end-to-end on large conversation corpora using the maximum-likelihood estimation (MLE) objective, without the need for manual crafting. However, recent research has found that encoder-decoder models tend to generate generic and dull responses (e.g., I don't know), rather than meaningful and specific answers (Li et al., 2015; Serban et al., 2016b). There have been many attempts to explain and solve this limitation, and they can be broadly divided into two categories (see Section 2 for details): (1) the first category argues that the dialog history is only one of the factors that decide the next response. Other features should be extracted and provided to the models as conditionals in order to generate more specific responses (Xing et al., 2016; Li et al., 2016a); (2) the second category aims to improve the encoder-decoder model itself, including decoding with beam search and its variations (Wiseman and Rush, 2016), encouraging responses that have long-term payoff (Li et al., 2016b), etc. Building upon the past work in dialog managers and encoder-decoder models, the key idea of this paper is to model dialogs as a one-to-many problem at the discourse level. Previous studies indicate that there are many factors in open-domain dialogs that decide the next response, and it is non-trivial to extract all of them. Intuitively, given a similar dialog history (and other observed inputs), there may exist many valid responses (at the discourse level), each corresponding to a certain configuration of the latent variables that are not presented in the input. To uncover the potential responses, we strive to model a probabilistic distribution over the distributed utterance embeddings of the potential responses using a latent variable (Figure 1).
This allows us to generate diverse responses by drawing samples from the learned distribution and reconstructing their words via a decoder neural network. Specifically, our contributions are three-fold:

1. We present a novel neural dialog model adapted from conditional variational autoencoders (CVAE), which introduces a latent variable that can capture discourse-level variations as described above.
2. We propose Knowledge-Guided CVAE (kgCVAE), which enables easy integration of expert knowledge and results in performance improvement and model interpretability.
3. We develop a training method that addresses the difficulty of optimizing CVAE for natural language generation (Bowman et al., 2015).

We evaluate our models on human-human conversation data and yield promising results in: (a) generating appropriate and discourse-level diverse responses, and (b) showing that the proposed training method is more effective than the previous techniques.

Related Work

Our work is related to both recent advancement in encoder-decoder dialog models and generative models based on CVAE.

Encoder-decoder Dialog Models

Since the emergence of the neural dialog model, the problem of output diversity has received much attention in the research community. Ideal output responses should be both coherent and diverse. However, most models end up with generic and dull responses. To tackle this problem, one line of research has focused on augmenting the input of encoder-decoder models with richer context information, in order to generate more specific responses. Li et al. (2016a) captured speakers' characteristics by encoding background information and speaking style into distributed embeddings, which are used to re-rank the generated responses from an encoder-decoder model. Xing et al. (2016) maintain a topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic-coherent responses.
On the other hand, many attempts have also been made to improve the architecture of encoder-decoder models. Li et al. (2015) proposed to optimize the standard encoder-decoder by maximizing the mutual information between input and output, which in turn reduces generic responses. This approach penalized unconditionally high-frequency responses, and favored responses that have high conditional probability given the input. Wiseman and Rush (2016) focused on improving the decoder network by alleviating the biases between training and testing. They introduced a search-based loss that directly optimizes the networks for beam search decoding. The resulting model achieves better performance on word ordering, parsing and machine translation. Besides improving beam search, Li et al. (2016b) pointed out that the MLE objective of an encoder-decoder model is unable to approximate the real-world goal of the conversation. Thus, they initialized an encoder-decoder model with the MLE objective and leveraged reinforcement learning to fine-tune the model by optimizing three heuristic reward functions: informativity, coherence, and ease of answering.

Conditional Variational Autoencoder

The variational autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014) is one of the most popular frameworks for image generation. The basic idea of VAE is to encode the input x into a probability distribution z instead of a point encoding as in the autoencoder. Then VAE applies a decoder network to reconstruct the original input using samples from z. To generate images, VAE first obtains a sample of z from the prior distribution, e.g. N(0, I), and then produces an image via the decoder network. A more advanced model, the conditional VAE (CVAE), is a recent modification of VAE to generate diverse images conditioned on certain attributes, e.g. generating different human faces given skin color.
Inspired by CVAE, we view the dialog contexts as the conditional attributes and adapt CVAE to generate diverse responses instead of images. Although VAE/CVAE has achieved impressive results in image generation, adapting it to natural language generators is non-trivial. Bowman et al. (2015) have used VAE with Long Short-Term Memory (LSTM)-based recognition and decoder networks to generate sentences from a latent Gaussian variable. They showed that their model is able to generate diverse sentences even with a greedy LSTM decoder. They also reported the difficulty of training because the LSTM decoder tends to ignore the latent variable. We refer to this issue as the vanishing latent variable problem. Serban et al. (2016b) have applied a latent variable hierarchical encoder-decoder dialog model to introduce utterance-level variations and facilitate longer responses. To improve upon the past models, we firstly introduce a novel mechanism to leverage linguistic knowledge in training end-to-end neural dialog models, and we also propose a novel training technique that mitigates the vanishing latent variable problem.

Conditional Variational Autoencoder (CVAE) for Dialog Generation

Each dyadic conversation can be represented via three random variables: the dialog context c (context window size k − 1), the response utterance x (the k-th utterance) and a latent variable z, which is used to capture the latent distribution over the valid responses. Further, c is composed of the dialog history (the preceding k − 1 utterances), the conversational floor (1 if the utterance is from the same speaker as x, otherwise 0) and meta features m (e.g. the topic). We then define the conditional distribution p(x, z|c) = p(x|z, c)p(z|c) and our goal is to use deep neural networks (parametrized by θ) to approximate p(z|c) and p(x|z, c). We refer to p_θ(z|c) as the prior network and p_θ(x|z, c) as the response decoder. Then the generative process of x is (Figure 2(a)): 1.
Sample a latent variable z from the prior network p_θ(z|c). 2. Generate x through the response decoder p_θ(x|z, c). CVAE is trained to maximize the conditional log likelihood of x given c, which involves an intractable marginalization over the latent variable z. As proposed in prior work, CVAE can be efficiently trained within the Stochastic Gradient Variational Bayes (SGVB) framework (Kingma and Welling, 2013) by maximizing the variational lower bound of the conditional log likelihood. We assume that z follows a multivariate Gaussian distribution with a diagonal covariance matrix and introduce a recognition network q_φ(z|x, c) to approximate the true posterior distribution p(z|x, c). The variational lower bound can be written as:

L(θ, φ; x, c) = −KL(q_φ(z|x, c) || p_θ(z|c)) + E_{q_φ(z|x,c)}[log p_θ(x|z, c)] ≤ log p(x|c)

Figure 3 demonstrates an overview of our model. The utterance encoder is a bidirectional recurrent neural network (BRNN) (Schuster and Paliwal, 1997) with a gated recurrent unit (GRU) (Chung et al., 2014) that encodes each utterance into a fixed-size vector by concatenating the last hidden states of the forward and backward RNN; u_x is simply u_k. The context encoder is a 1-layer GRU network that encodes the preceding k − 1 utterances by taking u_{1:k−1} and the corresponding conversation floor as inputs. The last hidden state h_c of the context encoder is concatenated with the meta features, giving c = [h_c, m]. Since we assume z follows an isotropic Gaussian distribution, the recognition network gives q_φ(z|x, c) ∼ N(µ, σ²I) and the prior network gives p_θ(z|c) ∼ N(µ′, σ′²I), with the Gaussian parameters predicted by feed-forward networks from their respective inputs. We then use the reparametrization trick (Kingma and Welling, 2013) to obtain samples of z either from N(z; µ, σ²I) predicted by the recognition network (training) or from N(z; µ′, σ′²I) predicted by the prior network (testing). Finally, the response decoder is a 1-layer GRU network whose initial state is computed from z and c; it then predicts the words in x sequentially.
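The two Gaussian ingredients above, the reparametrization trick and the KL term of the lower bound, can be sketched in a few lines of NumPy (a minimal illustration, not the paper's implementation; the log-variance parametrization is an assumption):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Reparametrization trick: z = mu + sigma * eps with eps ~ N(0, I),
    so a sample of z stays differentiable w.r.t. mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def gaussian_kl(mu_q, log_var_q, mu_p, log_var_p):
    """KL( N(mu_q, diag(var_q)) || N(mu_p, diag(var_p)) ), summed over
    dimensions -- the regularizer between the recognition network's
    posterior and the prior network's distribution in the lower bound."""
    var_q, var_p = np.exp(log_var_q), np.exp(log_var_p)
    return 0.5 * np.sum(
        log_var_p - log_var_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )
```

The KL term is zero when the recognition and prior distributions coincide and positive otherwise, which is what pulls the two networks together during training.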
Knowledge-Guided CVAE (kgCVAE)

In practice, training CVAE is a challenging optimization problem and often requires a large amount of data. On the other hand, past research in spoken dialog systems and discourse analysis has suggested that many linguistic cues capture crucial features in representing natural conversation. For example, dialog acts (Poesio and Traum, 1998) have been widely used in dialog managers (Litman and Allen, 1987; Raux et al., 2005; Zhao and Eskenazi, 2016) to represent the propositional function of the system. Therefore, we conjecture that it will be beneficial for the model to learn a meaningful latent z if it is provided with explicitly extracted discourse features during training. In order to incorporate the linguistic features into the basic CVAE model, we first denote the set of linguistic features as y. Then we assume that the generation of x depends on c, z and y, and that y relies on z and c, as shown in Figure 2. Specifically, during training the initial state of the response decoder is s_0 = W_i[z, c, y] + b_i and the input at every step is [e_t, y], where e_t is the word embedding of the t-th word in x. In addition, there is an MLP to predict y = MLP_y(z, c) based on z and c. In the testing stage, the predicted y is used by the response decoder instead of the oracle y. We denote the modified model as knowledge-guided CVAE (kgCVAE), and developers can add desired discourse features that they wish the latent variable z to capture. The kgCVAE model is trained by maximizing the variational lower bound augmented with the reconstruction of y. Since the reconstruction of y is now a part of the loss function, kgCVAE can more efficiently encode y-related information into z than by discovering it only from the surface-level x and c. Another advantage of kgCVAE is that it can output a high-level label (e.g. dialog act) along with the word-level responses, which allows easier interpretation of the model's outputs.
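The kgCVAE wiring described above, concatenating z, c and y to initialize the decoder, can be sketched as follows. The dimensions are toy placeholders chosen for illustration (ignoring the meta features for simplicity), and the weights are random, not trained:

```python
import numpy as np

rng = np.random.default_rng(42)

def kgcvae_decoder_init(z, c, y, W_i, b_i):
    """s_0 = W_i [z; c; y] + b_i: the latent sample, context encoding and
    linguistic-feature vector are concatenated and projected to the
    decoder's hidden size."""
    return W_i @ np.concatenate([z, c, y]) + b_i

# Toy dimensions: latent 200, context 600, 42 dialog-act features,
# decoder hidden size 400.
z = rng.standard_normal(200)
c = rng.standard_normal(600)
y = rng.standard_normal(42)
W_i = rng.standard_normal((400, 200 + 600 + 42))
b_i = rng.standard_normal(400)
s0 = kgcvae_decoder_init(z, c, y, W_i, b_i)
```

At test time the same projection is applied, but with y replaced by the MLP's prediction rather than the annotated dialog act.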
Optimization Challenges

A straightforward VAE with an RNN decoder fails to encode meaningful information in z due to the vanishing latent variable problem (Bowman et al., 2015). Bowman et al. (2015) proposed two solutions: (1) KL annealing: gradually increasing the weight of the KL term from 0 to 1 during training; (2) word drop decoding: setting a certain percentage of the target words to 0. We found that CVAE suffers from the same issue when the decoder is an RNN. We did not consider word drop decoding because Bowman et al. (2015) have shown that it may hurt performance when the drop rate is too high. As a result, we propose a simple yet novel technique to tackle the vanishing latent variable problem: the bag-of-word loss. The idea is to introduce an auxiliary loss that requires the decoder network to predict the bag-of-words in the response x, as shown in Figure 3(b). We decompose x into two variables: x_o with word order and x_bow without order, and assume that x_o and x_bow are conditionally independent given z and c: p(x, z|c) = p(x_o|z, c)p(x_bow|z, c)p(z|c). Due to the conditional independence assumption, the latent variable is forced to capture global information about the target response. Let f = MLP_b(z, c) ∈ R^V, where V is the vocabulary size; then

log p(x_bow|z, c) = Σ_{t=1}^{|x|} log ( e^{f_{x_t}} / Σ_{j=1}^{V} e^{f_j} )

where |x| is the length of x and x_t is the word index of the t-th word in x. The modified variational lower bound for CVAE with the bag-of-word loss adds this term to the reconstruction objective (see Appendix A for kgCVAE). We will show that the bag-of-word loss in Equation 6 is very effective against the vanishing latent variable problem and that it is also complementary to the KL annealing technique.

Dataset

We chose the Switchboard (SW) 1 Release 2 Corpus (Godfrey and Holliman, 1997) to evaluate the proposed models. SW has 2400 two-sided telephone conversations with manually transcribed speech and alignment. In the beginning of the call, a computer operator gave the callers recorded prompts that define the desired topic of discussion.
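The bag-of-word objective and the KL-annealing weight can be sketched as follows (illustrative NumPy, not the authors' code; the linear annealing schedule is an assumption based on the description above):

```python
import numpy as np

def bow_nll(f, word_ids):
    """Negative log p(x_bow|z, c): f is the V-dimensional score vector
    produced by MLP_b(z, c); the same log-softmax over the vocabulary is
    read out at every target word position, ignoring word order."""
    f = np.asarray(f, dtype=float)
    log_probs = f - (f.max() + np.log(np.exp(f - f.max()).sum()))  # stable log-softmax
    return -log_probs[np.asarray(word_ids)].sum()

def kl_weight(step, anneal_steps=10_000):
    """KL annealing: ramp the KL term's weight linearly from 0 to 1 over
    the first `anneal_steps` training batches."""
    return min(1.0, step / anneal_steps)
```

With uniform scores over a vocabulary of size V, each target word contributes log V to the loss, so the loss only shrinks if z (via f) concentrates mass on the response's actual words.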
There are 70 available topics. We randomly split the data into 2316/60/62 dialogs for train/validation/test. The pre-processing includes: (1) tokenizing using the NLTK tokenizer (Bird et al., 2009); (2) removing non-verbal symbols and repeated words due to false starts; (3) keeping the top 10K frequent word types as the vocabulary. The final data have 207,833/5,225/5,481 (c, x) pairs for train/validation/test. Furthermore, a subset of SW was manually labeled with dialog acts (Stolcke et al., 2000). We extracted dialog act labels based on the dialog act recognizer proposed in (Ribeiro et al., 2015). The features include the unigrams and bigrams of the utterance, and the contextual features of the last 3 utterances. We trained a Support Vector Machine (SVM) (Suykens and Vandewalle, 1999) with a linear kernel on the subset of SW with human annotations. There are 42 types of dialog acts and the SVM achieved 77.3% accuracy on held-out data. The rest of the SW data were then labeled with dialog acts using the trained SVM dialog act recognizer.

Training

We trained with the following hyperparameters (selected according to the loss on the validation dataset): the word embedding has size 200 and is shared everywhere. We initialize the word embedding from the Glove embedding pre-trained on Twitter (Pennington et al., 2014). The utterance encoder has a hidden size of 300 for each direction. The context encoder has a hidden size of 600 and the response decoder has a hidden size of 400. The prior network and the MLP for predicting y both have 1 hidden layer of size 400 and tanh non-linearity. The latent variable z has a size of 200. The context window k is 10. All the initial weights are sampled from a uniform distribution [-0.08, 0.08]. The mini-batch size is 30. The models are trained end-to-end using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001 and gradient clipping at 5. We selected the best models based on the variational lower bound on the validation data.
Finally, we use the BOW loss along with KL annealing over 10,000 batches to achieve the best performance. Section 5.4 gives a detailed argument for the importance of the BOW loss.

Experiments

Setup

We compared three neural dialog models: a strong baseline model, CVAE, and kgCVAE. The baseline model is an encoder-decoder neural dialog model without latent variables, similar to (Serban et al., 2016a). The baseline model's encoder uses the same context encoder to encode the dialog history and the meta features as shown in Figure 3. The encoded context c is directly fed into the decoder networks as the initial state. The hyperparameters of the baseline are the same as the ones reported in Section 4.2, and the baseline is trained to minimize the standard cross-entropy loss of the decoder RNN without any auxiliary loss. Also, to compare the diversity introduced by the stochasticity of the proposed latent variable versus the softmax of the RNN at each decoding step, we generate N responses from the baseline by sampling from the softmax. For CVAE/kgCVAE, we sample N times from the latent z and only use greedy decoders so that the randomness comes entirely from the latent variable z.

Quantitative Analysis

Automatically evaluating an open-domain generative dialog model is an open research challenge. Following our one-to-many hypothesis, we propose the following metrics. We assume that for a given dialog context c, there exist M_c reference responses r_j, j ∈ [1, M_c]. Meanwhile a model can generate N hypothesis responses h_i, i ∈ [1, N]. The generalized response-level precision/recall for a given dialog context is precision(c) = (1/N) Σ_{i=1}^{N} max_{j∈[1,M_c]} d(r_j, h_i) and recall(c) = (1/M_c) Σ_{j=1}^{M_c} max_{i∈[1,N]} d(r_j, h_i), where d(r_j, h_i) is a distance function which lies between 0 and 1 and measures the similarity between r_j and h_i.
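The response-level precision/recall defined above can be sketched directly; `response_precision_recall` is our illustrative name, and `d` is any similarity function in [0, 1].

```python
def response_precision_recall(refs, hyps, d):
    """Generalized response-level precision/recall for one dialog context.
    Each hypothesis is credited with its best-matching reference
    (precision) and each reference with its best-matching hypothesis
    (recall); `d(r, h)` scores similarity in [0, 1]."""
    precision = sum(max(d(r, h) for r in refs) for h in hyps) / len(hyps)
    recall = sum(max(d(r, h) for h in hyps) for r in refs) / len(refs)
    return precision, recall
```

With exact-match as `d`, a model that outputs the same safe response twice scores full precision but only partial recall, which mirrors the baseline's behavior discussed below.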
The final score is averaged over the entire test dataset and we report the performance with 3 types of distance functions in order to evaluate the systems from various linguistic points of view:

1. Smoothed Sentence-level BLEU (Chen and Cherry, 2014): BLEU is a popular metric that measures the geometric mean of modified n-gram precision with a length penalty (Papineni et al., 2002; Li et al., 2015). We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to a 0 to 1 scale.

2. Cosine Distance of Bag-of-word Embedding: a simple method to obtain sentence embeddings is to take the average or extrema of all the word embeddings in the sentences (Forgues et al., 2014; Adi et al., 2016). The d(r_j, h_i) is the cosine distance of the two embedding vectors. We used the Glove embedding described in Section 4 and denote the average method as A-bow and the extrema method as E-bow. The score is normalized to [0, 1].

3. Dialog Act Match: to measure the similarity at the discourse level, the same dialog-act tagger from Section 4.1 is applied to label all the generated responses of each model. We set d(r_j, h_i) = 1 if r_j and h_i have the same dialog acts, and d(r_j, h_i) = 0 otherwise.

One challenge of using the above metrics is that there is only one, rather than multiple, reference responses/contexts, which impacts the reliability of our measures. Inspired by (Sordoni et al., 2015), we utilized information retrieval techniques (see Appendix A) to gather 10 extra candidate reference responses/contexts from other conversations with the same topics. The 10 candidate references are then filtered by two experts, which serve as the ground truth to train the reference response classifier. The result is 6.69 extra references on average per context. The average number of distinct reference dialog acts is 4.2. The proposed models outperform the baseline in terms of recall in all the metrics with statistical significance.
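The A-bow variant (cosine similarity of averaged word embeddings) can be sketched as below; the embedding table is passed in as a plain dict, and the function names are ours.

```python
import math

def avg_embedding(words, emb):
    """Average the embeddings of the words found in the lookup table `emb`."""
    dim = len(next(iter(emb.values())))
    vec, n = [0.0] * dim, 0
    for w in words:
        if w in emb:
            n += 1
            for i, x in enumerate(emb[w]):
                vec[i] += x
    return [x / n for x in vec] if n else vec

def a_bow(ref, hyp, emb):
    """A-bow similarity: cosine of the averaged word embeddings of the
    reference and the hypothesis sentences."""
    u, v = avg_embedding(ref, emb), avg_embedding(hyp, emb)
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0
```

The E-bow variant replaces the average with the element-wise extremum of the word vectors.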
This confirms our hypothesis that generating responses with discourse-level diversity can lead to a more comprehensive coverage of the potential responses than promoting only word-level diversity. As for precision, we observed that the baseline has higher or similar scores than CVAE in all metrics, which is expected since the baseline tends to generate the most likely and safe responses repeatedly in the N hypotheses. However, kgCVAE is able to achieve the highest precision and recall in 4 metrics at the same time (BLEU-1 to 4, A-bow). One reason for kgCVAE's good performance is that the predicted dialog act label in kgCVAE can regularize the generation process of its RNN decoder by forcing it to generate more coherent and precise words. We further analyze the precision/recall of BLEU-4 by looking at the average score versus the number of distinct reference dialog acts. A low number of distinct dialog acts represents the situation where the dialog context places a strong constraint on the range of the next response (low entropy), while a high number indicates the opposite (high entropy). Figure 4 shows that CVAE/kgCVAE achieve significantly higher recall than the baseline in high-entropy contexts. It also shows that CVAE suffers from lower precision, especially in low-entropy contexts. Finally, kgCVAE achieves higher precision than both the baseline and CVAE across the full spectrum of context entropy. Table 2 shows the outputs generated by the baseline and kgCVAE. In example 1, caller A begins with an open-ended question. The kgCVAE model generated highly diverse answers that cover multiple plausible dialog acts. Further, we notice that the generated text exhibits dialog acts similar to the ones predicted separately by the model, implying the consistency of natural language generation based on y. On the contrary, the responses from the baseline model are limited to local n-gram variations and share a similar prefix, i.e. "I'm".
Example 2 is a situation where caller A is telling B stories. The ground truth response is a back-channel and the range of valid answers is more constrained than in example 1, since B is playing the role of a listener. The baseline successfully predicts "uh-huh". The kgCVAE model is also able to generate various ways of back-channeling. This implies that the latent z is able to capture context-sensitive variations, i.e. modeling lexical diversity in low-entropy dialog contexts and discourse-level diversity in high-entropy ones. Moreover, kgCVAE is occasionally able to generate more sophisticated grounding (sample 4) beyond a simple back-channel, which is also an acceptable response given the dialog context.

Qualitative Analysis

In addition, past work (Kingma and Welling, 2013) has shown that the recognition network is able to learn to cluster high-dimensional data, so we conjecture that the posterior z output by the recognition network should cluster the responses into meaningful groups. Figure 5 visualizes the posterior z of responses in the test dataset in 2D space using t-SNE (Maaten and Hinton, 2008). We found that the learned latent space is highly correlated with the dialog act and length of responses, which confirms our assumption.

Table 2: Generated responses from the baseline and kgCVAE in two examples. kgCVAE also provides the predicted dialog act for each response. The context only shows the last utterance due to the space limit (the actual context window size is 10).

Results for Bag-of-Word Loss

Finally, we evaluate the effectiveness of the bag-of-word (BOW) loss for training a VAE/CVAE with an RNN decoder. To compare with past work (Bowman et al., 2015), we conducted the same language modelling (LM) task on Penn Treebank using a VAE. The network architecture is the same except that we use a GRU instead of an LSTM. We compared four different training setups: (1) standard VAE without any heuristics; (2) VAE with KL annealing (KLA); (3) VAE with BOW loss; (4) VAE with both BOW loss and KLA. Intuitively, a well-trained model should lead to a low reconstruction loss and a small but non-trivial KL cost. For all models with KLA, the KL weight increases linearly from 0 to 1 in the first 5000 batches. Table 3 shows the reconstruction perplexity and the KL cost on the test dataset. The standard VAE fails to learn a meaningful latent variable, having a KL cost close to 0 and a reconstruction perplexity similar to that of a small LSTM LM (Zaremba et al., 2014). KLA helps to improve the reconstruction loss, but it requires early stopping since the models fall back to the standard VAE after the KL weight becomes 1. Lastly, the models with BOW loss achieved significantly lower perplexity and a larger KL cost. Figure 6 visualizes the evolution of the KL cost. We can see that for the standard model, the KL cost crashes to 0 at the beginning of training and never recovers. On the contrary, the model with only KLA learns to encode substantial information in the latent z while the KL cost weight is small. However, after the KL weight is increased to 1 (after 5000 batches), the model once again decides to ignore the latent z and falls back to the naive implementation. The model with BOW loss, however, consistently converges to a non-trivial KL cost even without KLA, which confirms the importance of the BOW loss for training latent variable models with an RNN decoder. Last but not least, our experiments showed that the conclusions drawn from the LM task using a VAE also apply to training CVAE/kgCVAE, so we used BOW loss together with KLA for all previous experiments.

Conclusion and Future Work

In conclusion, we identified the one-to-many nature of open-domain conversation and proposed two novel models that show superior performance in generating diverse and appropriate responses at the discourse level.
While the current paper addresses diversifying responses with respect to dialog acts, this work is part of a larger research direction that targets leveraging both past linguistic findings and the learning power of deep neural networks to learn better representations of the latent factors in dialog. In turn, the output of this novel neural dialog model will be easier for humans to explain and control. In addition to dialog acts, we plan to apply our kgCVAE model to capture other linguistic phenomena, including sentiment, named entities, etc. Last but not least, the recognition network in our model will serve as the foundation for designing a data-driven dialog manager, which automatically discovers useful high-level intents. All of the above suggest a promising research direction.

Acknowledgements

This work was funded by NSF grant CNS-1512973. The opinions expressed in this paper do not necessarily reflect those of NSF.
A Source-Initiated On-Demand Routing Algorithm Based on the Thorup-Zwick Theory for Mobile Wireless Sensor Networks

The unreliability and dynamics of mobile wireless sensor networks make it hard to perform end-to-end communications. This paper presents a novel source-initiated on-demand routing mechanism for efficient data transmission in mobile wireless sensor networks. It explores the Thorup-Zwick theory to achieve source-initiated on-demand routing with time efficiency. It is able to find the shortest routing path between source and target in a network and transfer data in linear time. The algorithm is easy to implement and perform in resource-constrained mobile wireless sensor networks. We also evaluate the approach by analyzing its cost in detail. It can be seen that the approach is efficient in supporting data transmission in mobile wireless sensor networks.

Introduction

As a kind of wireless technology, wireless sensor networks (WSNs) [1,2] are systems that comprise large numbers (usually hundreds or thousands) of wirelessly connected heterogeneous sensor nodes that are spatially distributed across a large field of interest. There is a wide range of applications where WSNs are extensively used, and their adoption in other applications is still growing. Mobile wireless sensor networks (MWSNs) are a particular class of WSN in which mobility plays a key role in the execution of the application. In many cases, MWSNs suffer from link breakages and frequent changes of network topology. For example, a sensor node with a limited battery life may sleep periodically in order to conserve energy. A sensor node may also be blocked by data packets from its neighbours at some time or jammed by malicious nodes. Hence, even a normal sensor node can cause denial of service in those situations. Moreover, intermediate nodes are often required to carry out end-to-end communications since the transmission range of sensor nodes is also limited.
Therefore, the intrinsic features of MWSNs make it hard to perform end-to-end communications, especially for large-scale data transmission [3,4]. In this paper, we propose a novel approach of source-initiated on-demand routing [5][6][7] for MWSNs. We explore the Thorup-Zwick theory [8] to achieve efficient end-to-end communications in MWSNs. The remainder of the paper is organized as follows. Section 2 illustrates the network model and problem statement for the approach. In Section 3, we present an efficient algorithm for source-initiated on-demand routing in MWSNs. We evaluate the algorithm from the point of view of cost and complexity in Section 4 and discuss the simulation results in detail. Section 5 gives an overview of related works. Section 6 concludes the paper with an outlook on future research directions.

Network Model and Problem Statement

In this paper, we consider a relatively simple MWSN model. We denote the set of sensor nodes by N = {1, 2, . . . , n} and assume a MWSN with n nodes. We assume the whole network consists of three tiers (see Figure 1). At the bottom, there are a number of sensor nodes. Each node has a unique identity i (i ∈ N) in the network. Each node in the network is battery-powered and has limited computation and wireless communication capabilities. We assume that the locations of the sensor nodes are relatively static, rather than moving. Without confusion, we will also use i to denote the location of a sensor node i, i ∈ N. There is a sink node at the top level. We assume that the sink is a center equipped with sufficient computation and storage capabilities. Although the sink is able to communicate with each sensor node directly, direct communication between the sink and a sensor node is time consuming and energy consuming. For example, if a sensor node sends a large file (e.g., a video file) to the sink, the energy of the sensor node will soon be exhausted. Therefore, we will minimize this kind of direct communication.
Instead, the sink will gather information from the covering nodes in a timely manner. There are several covering nodes in the middle level, which are similar to cluster heads in a clustering hierarchy or relay nodes in a flat hierarchy. These nodes are only used to collect status information from sensor nodes, without any further processing or computation. Each covering node covers a part of the network with a number of sensor nodes. The placement of the covering nodes ensures that all the sensor nodes in the network are covered. We also ensure that there are no more than h hops between a covering node and a sensor node that it covers (usually h ≤ 3). Covering nodes are able to communicate with the sink directly. Let p and q be two points in the Euclidean plane; then [p, q] denotes the line segment connecting p and q, and |p, q| denotes the Euclidean distance between p and q. Two sensor nodes u and v can communicate with each other if |u, v| < r, where r is the communication range of a sensor node in the MWSN. The major task of a sensor node in the network is to communicate with other nodes and transmit data to others by routing. As we have mentioned before, intermediate nodes are often required to carry out end-to-end communications. Therefore, routing in this network should provide a path from source to destination, and the path itself should be as short as possible. In this work, we do not consider the situation when there are selfish nodes in the network. We assume that each node is willing to cooperate with its neighbours.

Source-Initiated On-Demand Routing Algorithm

We attempt to find a short path for a source node in a MWSN by using some routing algorithm. The major difficulty of designing the routing algorithm is the cost of path construction. Due to the link breakages and frequent changes of network topology, a source node has to update its routing paths frequently, which is obviously time consuming.

Algorithm 1: The source-initiated on-demand routing algorithm for MWSNs.
The situation becomes worse as the network grows in size. In this work, we explore the Thorup-Zwick theory to solve the problem and achieve efficient routing in MWSNs.

Algorithm Overview. An overview of the source-initiated on-demand routing algorithm is given in Algorithm 1. Given a time t, the network topology of a MWSN can be denoted by a weighted undirected graph G = (V, E). V is the set of sensor nodes in the network and E is the set of connection statuses among the nodes. When node u attempts to send data to node v, it first checks its local cache. If there is no existing routing path between them, u will send a routing request to the sink (s). The sink will then query its local database that contains the data structure. Here the data structure is generated by preprocessing G. The sink will send back a notification to u, which contains the shortest path from u to v. Finally, u sends data to v using the returned path. The algorithm is straightforward. The key point is how to perform the path query from u to v. We give a detailed explanation in Sections 3.3 and 3.4.

Status Allocation. As we have mentioned in Section 2, sensor nodes are not encouraged to communicate with the sink directly. However, the sink requires some basic information from the sensor nodes in order to support source-initiated on-demand routing. This information contains the location of a sensor node as well as its current connection status. The algorithm for this process of status allocation is illustrated in Algorithm 2. The algorithm in Algorithm 2 is trivial. Each sensor sends a status vector to its covering node. The status vector contains the factors that have an impact on data communication.
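The cache-then-query flow of Algorithm 1 can be sketched as below. This is an illustrative stand-in: `ToySink` replaces the sink's Thorup-Zwick structure with a lookup table, and `route` plays the role of the source node's logic.

```python
class ToySink:
    """Stand-in for the sink; a real sink would answer path queries from
    the preprocessed Thorup-Zwick data structure in O(k) time."""
    def __init__(self, paths):
        self.paths = paths      # precomputed shortest paths, keyed by (u, v)
        self.queries = 0        # counts how often the sink is contacted

    def query_path(self, u, v):
        self.queries += 1
        return self.paths[(u, v)]

def route(u, v, cache, sink):
    """Algorithm 1 sketch: reuse a cached path if one exists; otherwise ask
    the sink for the shortest path and cache the answer for later sends."""
    if (u, v) not in cache:
        cache[(u, v)] = sink.query_path(u, v)
    return cache[(u, v)]
```

Repeated sends between the same pair then hit the local cache and never contact the sink again until the path is invalidated.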
The status vector of a sensor node i can be formally represented by V_i = ⟨E_i, PRR_i, L_i, C_i⟩, where E_i is the value of available energy at i, PRR_i denotes the packet reception ratio (PRR) at i, which is a metric for evaluating link quality, L_i is the load of i, and C_i denotes the connection status of i (the direct neighbors of the node). For each window of Δ received packets at i, PRR_i is computed as PRR_i = Num_rp / Num_sp, where Num_rp denotes the number of successfully received packets, while Num_sp denotes the number of transmitted packets. For a given timeframe Δt, the load L_i is computed from Num_rdp, the number of relayed data packets (not locally generated), and Num_lgp, the number of locally generated packets. After collecting the information, the covering node then forwards it to the sink. Moreover, there are different cases for status allocation in a MWSN. If we set h to 3, there are three kinds of cases for status allocation. Take the subnetwork shown in Figure 2 for example: (a) the sensor nodes (v_3 and v_5) directly adjacent to a covering node are able to send status vectors to it; (b) the sensor nodes (v_2, v_4 and v_6) directly adjacent to the ones in the first case can send status vectors to the covering node in two hops; (c) the rest of the sensor nodes (v_1) have to send status vectors in three hops.

Graph Construction. After collecting status from the distributed sensor nodes, the sink is able to get the overall information of the network. Given a time t, the network topology of a MWSN can be denoted by a weighted undirected graph G = (V, E). Assume |V| = n and |E| = m. Each element in V denotes a sensor node in the MWSN and each element in E denotes a link between two nodes. For all (u, v) ∈ E, we have e(u, v) = w(u, v) if |u, v| ≤ r, and e(u, v) = ∞ otherwise. It means that the distance between any two nodes in the graph is a weighted value. If the Euclidean distance between two nodes is greater than the communication range r, we just set the distance value to ∞ in the graph.
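The two per-node statistics can be sketched as follows. The PRR ratio follows directly from the text; the load formula is elided in the source, so combining relayed and locally generated packets by summation is our assumption.

```python
def packet_reception_ratio(num_received, num_sent):
    """PRR over a window of packets: successfully received / transmitted."""
    return num_received / num_sent if num_sent else 0.0

def node_load(num_relayed, num_local):
    """Load over a timeframe. The paper's exact formula is not recoverable
    from the text; summing relayed and locally generated packets is one
    plausible reading."""
    return num_relayed + num_local
```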
The key to the graph construction is fixing the weight value for each edge in the graph. The weight is formally defined as w(u, v) = α·E(u, v) + β·Q(u, v) + γ·L(u, v). The weight depends on several factors. E(u, v) denotes the energy status of the two nodes; its value is calculated from E_u and E_v, the values of available energy for u and v, and the energy required for an operation of data transmission. Q(u, v) denotes the link quality between u and v. Here we use software-based link quality estimators [9][10][11][12][13] to evaluate the link quality. We integrate the ETX estimator [14] to get an estimate of the link quality: Q(u, v) = 1 / (PRR_uv × PRR_vu), where PRR_uv reflects the uplink quality from u to v, while PRR_vu reflects the downlink quality from v to u. L(u, v) denotes the load status of the two nodes and is calculated from the loads of u and v. α, β, and γ are coefficients for the weight, and we have α + β + γ = 1.

Graph Preprocessing. In order to perform efficient path queries on the graph of a MWSN, we need to preprocess the weighted undirected graph first. Assume |V| = n and |E| = m. Thorup and Zwick in [8] have proposed an approach for preprocessing such a graph in O(k·m·n^(1/k)) expected time. Having allocated the status information, the sink is able to get the topology of the MWSN. Therefore, we can use the Thorup-Zwick theory directly to preprocess the graph structure of the MWSN (see Algorithm 3).

Path Query. After preprocessing the graph structure of the MWSN, the sink is able to answer a path query in linear time. The algorithm for path query is given in Algorithm 4. Here we make use of the Thorup-Zwick theory to perform the path query on the data structure returned by the preprocessing algorithm.

Evaluation

In this section, we mainly evaluate the performance of the proposed algorithm by analyzing its complexity and cost against existing routing algorithms for MWSNs. The cost of the proposed algorithm is mainly generated by four activities: status allocation, graph construction, graph preprocessing, and path query. The first three activities are preprocessing ones.
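The edge-weight computation can be sketched as below. The ETX formula follows the standard estimator; the per-term energy and load normalizations are elided in the source, so the terms passed in here are illustrative stand-ins, and only the α + β + γ = 1 combination is from the text.

```python
def etx(prr_uplink, prr_downlink):
    """ETX link-quality estimate: expected number of transmissions for a
    successful round trip, 1 / (uplink PRR * downlink PRR)."""
    return 1.0 / (prr_uplink * prr_downlink)

def edge_weight(energy_term, prr_uv, prr_vu, load_term,
                alpha=0.4, beta=0.3, gamma=0.3):
    """Weighted combination of energy, ETX link quality and load, with
    alpha + beta + gamma = 1 as stated in the text. The energy and load
    terms are assumed to be precomputed, normalized quantities."""
    return alpha * energy_term + beta * etx(prr_uv, prr_vu) + gamma * load_term
```

A perfect symmetric link (both PRRs equal to 1) contributes an ETX of 1; as either direction degrades, the edge grows heavier and is avoided by shortest-path queries.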
We try to evaluate the cost of these four activities by analyzing the time complexity. We evaluate the cost of status allocation first. According to Section 3.2, we have n = n_1 + n_2 + n_3, where n_1, n_2, and n_3 denote the number of sensor nodes in the three cases of status allocation, respectively. Assume that the one-hop cost (sensor node to sensor node, or sensor node to covering node) for status allocation is Δ_1, and the one-hop cost between a covering node and the sink is Δ_2. Then the total cost Δ for status allocation is Δ = n_1·Δ_1 + 2n_2·Δ_1 + 3n_3·Δ_1 plus the covering-node-to-sink transmissions, each of cost Δ_2. Since n_1 + n_2 + n_3 = n and the number of covering nodes is smaller than n, the total cost is bounded by a linear function of n. Therefore, the cost of status allocation is O(n). We have to compute the weight for each edge in graph construction; therefore, the cost of graph construction is O(m). According to [8], the cost of graph preprocessing is O(k·m·n^(1/k)) and the approximate cost of answering a path query is O(k). Finally, we get the preprocessing cost as O(n) + O(n) + O(m) + O(k·m·n^(1/k)) = O(2n + m + k·m·n^(1/k)) = O(n + m + k·m·n^(1/k)) and the query cost as O(k). We can see that the activities in our algorithm have linear cost, except graph preprocessing. If we set k to be a large integer, then the cost of graph preprocessing is also not very high and is acceptable for MWSNs.

Related Works

Generally, existing routing protocols for WSNs fall into two categories: table-driven and on-demand routing [7], based on when and how the routes are discovered. For table-driven routing protocols, consistent and up-to-date routing information for all the sensor nodes is maintained at each mobile host. It has been shown in [15] and stated in [16] that on-demand routing protocols can perform better than table-driven protocols in WSNs. There have been many ongoing research efforts in on-demand routing for WSNs or wireless networks. For example, the ad hoc on-demand distance vector routing (AODV) [17] is an improvement of the destination-sequenced distance-vector (DSDV) algorithm.
AODV minimizes the number of broadcasts by creating routes on demand, as opposed to DSDV, which maintains a list of all the routes. The dynamic source routing protocol (DSR) [18] is another on-demand routing protocol. A sensor node maintains route caches containing the source routes that it is aware of. The mobile host updates the entries in the route cache as soon as it learns about new routes. The temporally ordered routing algorithm (TORA) [19] is a highly adaptive, efficient, and scalable distributed routing algorithm based on the concept of link reversal. TORA is proposed for highly dynamic mobile and multihop wireless networks. It is a source-initiated on-demand routing protocol. However, none of these algorithms considers how to perform efficient routing in a WSN with link breakages and frequent changes of network topology. Some routing algorithms just enhance the above-mentioned ones with fault-tolerant or energy-balancing mechanisms [20][21][22][23]. However, these algorithms are not able to provide a short path for routing, or they do not provide an efficient way of constructing a short path. Moreover, there are also a few algorithms for shortest path routing [24,25]; however, these algorithms fall short in efficiency due to high cost or large complexity. Compared with existing works in this field, our approach uses a novel graph-based mechanism that makes full use of the Thorup-Zwick theory to improve end-to-end communication in MWSNs. The algorithm is time-efficient and its overhead is acceptable for large-scale MWSNs. The advantage of our approach is that we can still achieve efficient routing even as the number of nodes in a MWSN grows.

Conclusion

In this study, we mainly present a novel source-initiated on-demand routing algorithm for efficient data transmission in MWSNs. We explore the Thorup-Zwick theory to achieve source-initiated on-demand routing with time efficiency.
With this algorithm, we are able to find the shortest routing path between source and target in a network and transfer data in linear time. The algorithm is also easy to implement and perform in resource-constrained MWSNs. We also evaluate the algorithm by analyzing its time complexity in detail. It can be seen that the approach is efficient in supporting end-to-end data communication in MWSNs. Compared with existing works in this field, our approach is time-efficient and its overhead is acceptable for large-scale MWSNs. The advantage of our approach is that we can still achieve efficient routing even as the number of nodes in a MWSN grows. Future works may include: (1) improving the efficiency of the algorithms to reduce the operations of graph preprocessing; (2) considering a more complex MWSN model to implement and evaluate the approach; and (3) considering the security problem of routing in MWSNs.
Association between Cu/Zn/Iron/Ca/Mg levels and cerebral palsy: a pooled-analysis

It was well documented that macro/trace elements were associated with neurodevelopment. We aimed to investigate the relationship between copper (Cu)/zinc (Zn)/iron/calcium (Ca)/magnesium (Mg) levels and cerebral palsy (CP) by performing a meta-analysis. We searched the PubMed, Embase, Cochrane and Chinese WanFang databases from January 1985 to June 2022 to yield studies that met our predefined criteria. Standard mean differences (SMDs) of Cu/Zn/Iron/Ca/Mg levels between CP cases and healthy controls were calculated using the fixed-effects model or the random-effects model, in the presence of heterogeneity. 95% confidence intervals (CI) were also computed. Sensitivity analysis was performed by omitting each study in turn. A total of 19 studies were involved in our investigation. CP cases showed markedly lower Cu, Zn, iron and Ca levels than those in controls among overall populations (SMD = −2.156, 95% CI −3.013 to −1.299, P < 10^−4; SMD = −2.223, 95% CI −2.966 to −1.480, P < 10^−4; SMD = −1.092, 95% CI −1.513 to −0.672, P < 10^−4; SMD = −0.757, 95% CI −1.475 to −0.040, P = 0.038) and Asians (SMD = −2.893, 95% CI −3.977 to −1.809, P < 10^−4; SMD = −2.559, 95% CI −3.436 to −1.683, P < 10^−4; SMD = −1.336, 95% CI −1.807 to −0.865, P < 10^−4; SMD = −1.000, 95% CI −1.950 to −0.051, P = 0.039). CP cases showed a markedly lower Zn level than that in controls among Caucasians (SMD = −0.462, 95% CI −0.650 to −0.274, P < 10^−4). No significant differences of Cu, iron and Ca levels between CP cases and controls among Caucasians (SMD = −0.188, 95% CI −0.412 to 0.037, P = 0.101; SMD = −0.004, 95% CI −0.190 to 0.182, P = 0.968; SMD = 0.070, 95% CI −0.116 to 0.257, P = 0.459) were observed.
No marked difference of Mg level between CP cases and controls was noted among overall populations (SMD = −0.139, 95% CI −0.504 to 0.226, P = 0.455), Asians (SMD = −0.131, 95% CI −0.663 to 0.401, P = 0.629), or Caucasians (SMD = −0.074, 95% CI −0.361 to 0.213, P = 0.614). Sensitivity analysis did not change the overall results significantly for Cu, Zn, iron and Mg. CP cases demonstrated significantly lower levels of Cu/Zn/iron/Ca than those in healthy controls, particularly in Asians. The decreasing trend of Cu/Zn/iron/Ca levels merits attention, particularly in populations with high susceptibility to CP. Frequent monitoring and early intervention may be needed.

Trace element status is closely associated with immune system function via its effects on many biological processes, while a well-functioning immune system requires micronutrients to participate in cell metabolism and replication. For instance, leukocyte proliferation induced by acute infection is impaired by an insufficient supply of trace elements, including iron, zinc, magnesium and manganese 4. Trace elements also exert effects on the cellular transfer and the levels of other important nutrients 5. For example, iron is an important constituent of hemoglobin, which carries oxygen and participates in energy metabolism. It has also been shown that certain trace elements affect chemical synaptic transmission in the brain and the peripheral nervous system 6. Cu and Zn play an important role in the activation of enzymes that are involved in catecholamine transmission. On the other hand, macro-elements such as Ca and Mg play an important role in physical development. Ca and Mg exert effects on the transmission of neural stimuli.
Based on the fact that CP is essentially a neurological disorder, we speculated that macro/trace element levels may be associated with CP. Some available evidence showed that deficiencies of certain trace elements, such as copper (Cu), were correlated with learning and behavior disorders 7. Meanwhile, a markedly lower level of zinc (Zn) was observed in severe CP compared with that in controls 8. A previous review showed a high rate of malnutrition in children with CP, with hypocalcemia and reduced serum levels of Zn, Cu and vitamin D being reported the most 9. Mg sulfate given antenatally in threatened preterm labor is associated with a reduction in the risk of CP at 2 years of age 10. The administration of vitamin D and Ca produced a large, nonsignificant effect on bone mineral density in the lumbar spine 11. An in-depth understanding of the relationship between alterations of macro/trace elements and CP is helpful for CP prevention and intervention.

Meta-analysis is a good way to pool the available evidence from single studies to produce a comparatively more robust result, which increases the statistical power significantly. Therefore, we conducted a meta-analysis with the aim of clarifying the differences in Cu/Zn/Iron/Ca/Mg levels between CP and healthy controls in children.
Search strategy

We performed the literature search according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines 12. We searched for papers reporting the levels of Cu/Zn/iron/Ca/Mg in both CP cases and healthy controls from January 1985 to June 2022 using the PubMed, Embase, Cochrane and Chinese WanFang databases. We used the following search terms: (1) macro/trace element, micronutrient, magnesium, Mg, calcium, Ca, iron, zinc, Zn, copper and Cu; (2) urine, serum and plasma; and (3) cerebral palsy, CP. We also reviewed the references of the extracted literature. If the same subjects were recruited in more than one study, the paper with the larger number of participants was enrolled. Our preprint "Association between Cu/Zn/Iron/Ca/Mg levels and Chinese children with cerebral palsy" (https://doi.org/10.21203/rs.3.rs-703495/v1) was stored on websites (researchgate.net/publication/353611059_Association_between_CuZnIronCaMglevels_and_Chinese_children_with_cerebral_palsy; doc.taixueshu.com/search?sourceTye=all&keywordTyPe=1&keyword=Association+between+Cu/Zn/Iron/Ca/Mg+levels+and+cerebral+palsy&resultSearch=0). We cited this preprint. This preprint has not been published in whole or in part in any formal journal elsewhere.

Data extraction

We collected the data on the mean and standard deviation (SD) of Cu/Zn/iron/Ca/Mg levels. We also extracted the study characteristics from the enrolled investigations. Data were recorded as follows: first author's last name; year of publication; ethnicity; numbers of cases and controls; confounding factors; and testing method for Cu/Zn/iron/Ca/Mg levels.
Statistical analyses

We used the standard mean difference (SMD) to test the differences in Cu/Zn/iron/Ca/Mg levels between CP cases and controls across studies. Heterogeneity of SMDs across studies was tested using the Q statistic (significance level at P < 0.05). The I2 statistic, a quantitative measure of inconsistency across studies, was also calculated. The SMDs were calculated using either fixed-effects models or, in the presence of heterogeneity (Q test, P < 0.05), random-effects models. Sensitivity analysis was performed by omitting each study in turn. Potential publication bias was assessed by Egger's test at the P < 0.05 level of significance if the number of recruited studies was more than 10. Trim and fill analysis was used to identify funnel plot asymmetry caused by publication bias.

Study characteristics

The characteristics of the nineteen enrolled studies are shown in Table 1. They were published between 1989 and 2022. Fourteen studies were about Cu, sixteen for Zn, twelve for iron, nine for Ca, and nine for Mg. Seventeen studies adjusted for confounding factors. The participants were Asians and Caucasians.

Cu level in CP and controls

A total of 1394 CP cases and 1133 controls were included for testing the Cu level. Atomic absorption spectroscopy (AAS) was used for testing the Cu level in five studies, with anodic stripping (AS) in two studies and inductively coupled plasma atomic emission spectrometry (ICP-AES) in two studies. CP cases showed a markedly lower Cu level than controls among overall populations (SMD = −2.156, 95% CI −3.013 to −1.299, P < 10−4, Table 2, Fig. 2) and Asians (SMD = −2.893, 95% CI −3.977 to −1.809, P < 10−4, Table 2, Fig. 2). No significant difference in Cu status between CP and controls among Caucasians (SMD = −0.188, 95% CI −0.412 to 0.037, P = 0.101, Table 2, Fig.
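The SMD calculation and random-effects pooling described above can be sketched in Python. This is a minimal illustration using the usual Hedges' g and DerSimonian-Laird formulas, not the authors' actual analysis code, and the function names are ours:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standard mean difference (Hedges' g) and its variance
    from group means, SDs and sample sizes."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                          # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)             # small-sample correction
    g = j * d
    var = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, var

def pool_random_effects(effects):
    """DerSimonian-Laird random-effects pooling (needs >= 2 studies).
    effects: list of (g, variance) pairs.
    Returns (pooled SMD, Cochran's Q, I^2 in percent)."""
    w = [1.0 / v for _, v in effects]
    fixed = sum(wi * g for wi, (g, _) in zip(w, effects)) / sum(w)
    q = sum(wi * (g - fixed) ** 2 for wi, (g, _) in zip(w, effects))
    k = len(effects)
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)          # between-study variance
    w_star = [1.0 / (v + tau2) for _, v in effects]
    pooled = sum(wi * g for wi, (g, _) in zip(w_star, effects)) / sum(w_star)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return pooled, q, i2
```

When heterogeneity is present (Q large, tau2 > 0), the random-effects weights shrink toward equality, so large studies dominate less than under the fixed-effects model.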
2) was observed. Sensitivity analysis did not change the overall results significantly (95% CI −3.610 to −0.531). Publication bias was observed (P < 10−4; funnel plot in Supplemental Material 1). Trim and fill analysis showed that the addition of 4 virtual studies still yielded significant heterogeneity without changing the overall result markedly.

Iron level in CP and controls

A total of 1292 CP cases and 1071 controls were included for testing the iron level. AAS was used for testing the iron level in three studies, with AS in two studies, ICP-AES in one study and the turbidimetric method (TM) in one study. CP cases showed a markedly lower iron level than controls among overall populations (SMD = −1.092, 95% CI −1.513 to −0.672, P < 10−4, Fig. 4) and Asians (SMD = −1.336, 95% CI −1.807 to −0.865, P < 10−4, Fig. 4). No significant difference in iron status between CP and controls was observed among Caucasians (SMD = −0.004, 95% CI −0.190 to 0.182, P = 0.968, Table 2, Fig. 4). Sensitivity analysis did not change the overall results significantly (95% CI −1.666 to −0.500). Publication bias was observed (P = 0.023; funnel plot in Supplemental Material 3). Trim and fill analysis showed that no virtual studies needed to be added.

Ca level in CP and controls

A total of 1081 CP cases and 845 controls were included for testing the Ca level. AAS was used for testing the Ca level in three studies, with AS in two studies. CP cases demonstrated a significantly lower Ca level than controls among overall populations (SMD = −0.757, 95% CI −1.475 to −0.040, P = 0.038, Fig. 5) and Asians (SMD = −1.000, 95% CI −1.950 to −0.051, P = 0.039, Fig. 5). No significant difference in Ca status between CP and controls was observed among Caucasians (SMD = 0.070, 95% CI −0.116 to 0.257, P = 0.459, Table 2, Fig. 5). Sensitivity analysis changed the overall results slightly (95% CI −1.713 to 0.204). Publication bias was not observed (P = 0.346).
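The Egger regression used for the publication-bias checks above can be sketched as follows. This is a simplified illustration assuming NumPy/SciPy, not the software the authors used:

```python
import numpy as np
from scipy import stats

def egger_test(effects, variances):
    """Egger's regression test for funnel-plot asymmetry (a sketch).
    Regresses each study's standard normal deviate (effect/SE) on its
    precision (1/SE); a non-zero intercept suggests small-study effects
    such as publication bias. Needs at least 3 studies."""
    se = np.sqrt(np.asarray(variances, dtype=float))
    y = np.asarray(effects, dtype=float) / se   # standard normal deviates
    x = 1.0 / se                                # precisions
    n = len(y)
    X = np.column_stack([np.ones(n), x])        # intercept + slope design
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)                # residual variance
    cov00 = s2 * np.linalg.inv(X.T @ X)[0, 0]   # variance of the intercept
    t_int = beta[0] / np.sqrt(cov00)
    p = 2 * stats.t.sf(abs(t_int), df=n - 2)
    return beta[0], p
```

A funnel plot that is symmetric yields an intercept near zero; the trim-and-fill step then imputes the "missing" studies implied by any asymmetry and re-pools.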
Mg level in CP and controls

A total of 1041 CP cases and 843 controls were included for testing the Mg level. AAS was used for testing the Mg level in two studies, with AS in another two. No marked difference in Mg level between CP cases and controls was noted among overall populations (SMD = −0.139, 95% CI −0.504 to 0.226, P = 0.455, Fig. 6), Asians (SMD = −0.131, 95% CI −0.663 to 0.401, P = 0.629, Fig. 6), or Caucasians (SMD = −0.074, 95% CI −0.361 to 0.213, P = 0.614, Fig. 6). Sensitivity analysis did not change the overall results (95% CI −0.595 to 0.310). Publication bias was not observed (P = 0.984).

Discussion

CP, one of the most common developmental disabilities of childhood and throughout the lifespan, is a clinical syndrome characterized by a motor disorder. CP has attracted much attention from doctors and patients' parents because of its harm to the neurological and motor systems of children. Identification of potential risk factors for CP susceptibility is helpful for early prevention and treatment of CP. Adequate micronutrient supply in the early postnatal period may be an important tool for neuroprotection. Cu, iron and Zn are shown to play a significant role in proper neurodevelopment and brain functioning. Our meta-analysis showed that CP cases demonstrated significantly lower levels of Cu, Zn, iron and Ca than controls among overall populations and Asians, indicating that deficiency of Cu/Zn/iron/Ca deserves more attention in populations with higher susceptibility to CP. The homeostasis of Cu/Zn/iron/Ca may be very important for neuroprotection. Early monitoring and intervention may be helpful for CP prevention and treatment. Several facts may account for our findings. CP is a neurological disorder usually induced by preterm birth or infection. Metal ions are closely associated with the normal functioning of the human body 31. Trace element deficiency is likely to cause immune dysfunction, resulting in an increased risk of
infection. Cu is a key cofactor for various enzymes, such as Cu/Zn superoxide dismutase, which plays an important role in neurological development 32. Cu is also involved in redox reactions 33. CP cases are prone to Cu deficiency 34. Suboptimal Cu status has been shown to be associated with poor motor performance 35. Cu deficiency is also known to be associated with higher susceptibility to traumatic brain injury 36. Cu deficiency also affects the role of other cellular constituents involved in antioxidant activities, such as iron and selenium, and plays an important role in diseases in which oxidative stress is elevated. Oxidative stress is involved in brain injury; hence, a disorder of Cu may cause brain dysfunction. For example, a higher level of Cu was associated with a decreased risk of Parkinson's disease, the second most common neurodegenerative disease 37.

Zn is necessary for the survival of various types of cells. Many enzymes exert their effects by creating bonds with Zn ions 38. Zn plays a role in cell proliferation as an element of transcription factors and enzymes of DNA replication, and Zn deficiency leads to a decline in Th1 immunity and promotes inflammatory reactions. Zn is also present throughout the central nervous system, playing a role in synaptic transmission, neuroregulation and neuroprotection 39. Zn also promotes spinal cord injury recovery by upregulating Zn transporter-1 and brain-derived neurotrophic factors 40. Zn inhibits free radicals by promoting metallothionein production. Meanwhile, Zn is crucial for retinol-binding protein synthesis and vitamin A mobilization. Zn plays a role in removing heavy metals such as Pb, As and Hg from the body, which are implicated in the pathophysiology of Parkinson's disease 41. Given the comprehensive role of Zn in the body, Zn disorder may result in unpredictable injuries, including neurological lesions.
Iron is an important constituent of hemoglobin, which transfers oxygen. Iron deficiency leads to anemia. Thus, iron regulates and influences the activity of various organs as well as the whole organism. Iron also exerts effects in the catalysis of enzymatic reactions 42. Th cell maturation is impaired in children with iron-deficiency anemia and is restored by iron supplementation. Iron participates in neurodevelopment 43. Iron deficiency lowers the chances of recovery of the central nervous system and influences children's adaptation ability. Hence, iron homeostasis is important for neuroprotection.

Ca, an important constituent of bones, plays a vital role in muscle contraction and relaxation, and also regulates the electrical conduction system of the heart 44. Ca also regulates the function of enzymes and is associated with the metabolism of other trace elements 45. Intracellular calcium concentration is an important regulator of several signaling mechanisms, which regulate various kinds of biological processes 46. Alterations in calcium concentration play a vital role in muscle contraction and relaxation 47. Dysregulated calcium levels have been observed in several muscular dystrophies, including Duchenne muscular dystrophy 48. Metabolic bone disease is characterized by impaired Ca and P balance 49. This previous evidence shows that Ca is closely associated with motor disorders and is a potential therapeutic target for CP.
Mg, another important constituent of bones, is an antagonist of Ca and prevents excessive acetylcholine release and stimulation at the neuromuscular junction. Notably, we observed no difference in Mg status between CP and controls, which may be because Mg is not directly associated with neurodevelopment. Notably, Mg sulfate is commonly applied in obstetrics to prevent eclamptic seizures 50, which may also affect Mg status in neonates. Further studies with larger numbers are needed to validate our findings.

Our findings support the idea that nutritional status influences neurodevelopment, neurocognitive performance, and later-life health outcomes. An appropriate nutritional diet is important for lowering adverse health consequences. Also, compared with the previously published single studies included here, our study was a pooled investigation with more robust significance. Although the association between Cu/Zn/iron/Ca and CP provides novel insight for CP prevention and therapy, several limitations should be considered. First, between-study heterogeneity may distort the final results, although the random-effects model decreased the influence of the heterogeneity; moreover, the sensitivity analysis did not change the overall results, which indicates that our conclusion is comparatively robust. The participants were largely Asians, which may limit the generalization of our results; more studies from different ethnicities should be recruited in the future for more robust results. Second, publication bias was noted for the association between Cu/Zn/iron and CP, but trim and fill analysis did not change the overall results, indicating that our results are comparatively solid. Finally, despite the significant differences in Cu/Zn/iron/Ca between CP and controls, the cause-effect relationship between trace element levels and CP risk remains inconclusive. The enrolled participants were all children with a
lower age; we speculate that trace element deficiency may precede CP onset. Due to the lack of specific ages in our investigation, further studies with larger numbers should be performed to allow a meta-regression analysis of age. Further exploring the influence of trace elements on CP occurrence will have greater clinical value. In terms of our findings, the following issues should be addressed: (1) time-series analysis of the alteration of trace elements in CP, (2) longitudinal observation of the association between trace element levels and CP progress, (3) clarification of the cause-effect relationship between trace element status and CP risk in prospective studies.

Zn level in CP and controls

A total of 1871 CP cases and 1784 controls were included for testing the Zn level. AAS was used for testing the Zn level in seven studies, with AS in two studies and ICP-AES in two studies. CP cases showed a markedly lower Zn level than controls among overall populations (SMD = −2.223, 95% CI −2.966 to −1.480, P < 10−4, Fig. 3).

Figure 1. Flow diagram of study selection. Exclusion reasons included: lack of detailed numbers of cases and controls; multiple publications of the same data; lack of detailed data on Cu/Zn/Iron/Ca/Mg levels.
Figure 3. Difference of Zn level between CP and control.
Figure 5. Difference of Ca level between CP and control.
Figure 6. Difference of Mg level between CP and control.

Scientific Reports | (2023) 13:18427 | https://doi.org/10.1038/s41598-023-45697-w

Table 1 (columns: Study, Study design, Ethnicity, Case/Control, Adjustment for confounding factors, Method of testing, n, Trace element).
Characteristics of studies enrolled in this meta-analysis. CC, case-control; Cu, copper; Zn, zinc; Ca, calcium; Mg, magnesium; ICP-AES, inductively coupled plasma atomic emission spectrometry; AAS, atomic absorption spectroscopy; AS, anodic stripping; TM, turbidimetric method.

Table 2. Association between the status of trace elements and cerebral palsy. Cu, copper; Zn, zinc; Ca, calcium; Mg, magnesium; SMD, standard mean difference.
Competence in metered dose inhaler technique among community pharmacy professionals in Gondar town, Northwest Ethiopia: Knowledge and skill gap analysis

Background
Compared with systemic administration, inhalers, if used correctly, deliver a small dose of the drug directly to the site of action in the lungs, with a faster onset of effect and reduced systemic availability that minimizes adverse effects. However, health professionals' and patients' use of the metered dose inhaler is often poor.

Objective
This study aimed to explore community pharmacy professionals' (pharmacists' and druggists') competency in metered dose inhaler (MDI) technique.

Method
A cross-sectional study was conducted among pharmacy professionals working in community drug retail outlets in Gondar town, northwest Ethiopia, from March to May 2017. The evaluation tool was adapted from the National Asthma Education and Prevention Program of America (NAEPP) step criteria for the demonstration of a metered dose inhaler to score knowledge/proficiency in using the inhaler.

Result
Among 70 community pharmacy professionals approached, 62 (32 pharmacists and 30 druggists/pharmacy technicians) completed the survey, a response rate of 85.6%. Only three (4.8%) respondents were competent, demonstrating the vital steps correctly. Overall, only 13 participants scored seven or above, but most of them had missed one or more of the essential steps (steps 1, 2, 5, 6, 7 or 8). There was a significant difference (P = 0.015) in competency in demonstrating adequate inhalational technique between respondents who had taken training on basic inhalational techniques and those who had not.

Conclusion
This study showed that community pharmacy professionals' competency in MDI technique was very poor.
To better incorporate community pharmacies into future asthma disease management and optimize the contribution of pharmacists, interventions should emphasize improving the overall competence of community pharmacy professionals through establishing and providing regular educational programs.

Background

Asthma is a chronic inflammatory condition of the airways that affects roughly 358 million people [1]. It is a serious global health problem with an increasing prevalence worldwide. Approximately 300 million people worldwide have asthma, and about 10% of the adult population over the age of 40 years may have a diagnosis of chronic obstructive pulmonary disease (COPD) [2]. All age groups are affected by this chronic airway disease, with a high burden of disability [3]. Short-acting beta2 agonists are the first-line management [4]. Currently, corticosteroids are the most effective treatment available for long-term asthma control, and inhaled corticosteroids are typically used in long-term management [5]. A metered dose inhaler (MDI) device is usually used, so proper inhaler technique and adequate adherence are indispensable. Concerning the technique, explicit steps and excellent coordination are required for the appropriate use of this device. Currently, beclomethasone and salbutamol inhalations are obtainable in all private, state procurement center and hospital pharmacies in Ethiopia.
Poor asthma management is largely a result of misdiagnosis and improper or inadequate treatment [6]. Thus, a good level of adherence to these medications is the cornerstone of long-term asthma management, as non-adherence or inhaler mishandling increases mortality, morbidity, and hospital encounters [7,8]. Studies have described that around 90% of asthma and COPD patients misuse their inhalers [9,10]. Therefore, patients prescribed an MDI should be appropriately educated by experts about inhaler technique, as this will lead to improved adherence and management of the disease [11]. Community pharmacists are an indispensable part of the healthcare taskforce in patient counselling. However, in most studies conducted around the globe, pharmacists have suboptimal knowledge and skill in MDI technique. A recently published article emphasized that merely 7% of healthcare providers could demonstrate all the proper steps of MDI use [12]. In 2014, the National Review of Asthma Deaths (NRAD) pointed out that a low level of understanding and incorrect use of inhalers was assumed to have contributed to a large share of the 195 asthma deaths [13]. Incorrect asthma inhaler device use is linked with poor asthma control and more frequent emergency department visits, and it is associated with many risk factors, including poor education/instruction of patients [14]. Many papers have found comparable results, with doctors, nurses and pharmacists having poor knowledge of the optimal use of different inhalers [15]. Appropriate MDI use among clients and medical experts is still deficient [16]. Another simulated study done to assess pharmacists' skill in making proper use of MDI inhalers showed that, overall, pharmacists performed poorly in properly addressing all the steps while using an inhaler.
In addition, it found that pharmacy professionals' age could influence their level of knowledge, as could job experience, which was found to increase their level of knowledge of using the inhaler correctly [17]. A study done in Nepal on nurses, physicians and pharmacists concluded that healthcare professionals' MDI use was poor before intervention, and that the intervention played a major role in changing the existing trend towards better practice [18]. A simulated-client study done in Nigeria concluded that community pharmacists lack the knowledge and skill to demonstrate the basic steps in proper MDI use [19]. In a study done in Mekelle, Ethiopia, the MDI technique of pharmacy professionals was very limited [20]. The National Institute for Health and Care Excellence (NICE) guideline for people with COPD and the British Thoracic Society guideline for asthma suggest sufficient training and education on correct inhaler use for patients prior to providing the MDI [21]. It may not be shocking that patients use inhalers wrongly, because professionals' understanding of the right use of these devices is also poor. Therefore, it is highly suggested to frequently assess healthcare experts' competency in demonstrating proper MDI use, with the final goal of improving inhalational treatment outcomes [15]. Although Gondar has many drug stores and pharmacies with many professionals [22], this paper is the first to explore the competence of community pharmacy professionals in correctly demonstrating MDI use in Gondar town, northwestern Ethiopia. As there are no studies published in Gondar town or Amhara regional state as a whole, this study will fill the existing literature gap in the area of community pharmacy professionals' competence in MDI use.
Furthermore, the findings will inform many stakeholders, including governmental and nongovernmental organizations and academic institutions, about potential gaps and barriers to patient counseling for better practice.

Study setting and design

An interview-based cross-sectional survey was conducted to evaluate community pharmacy professionals' competence in MDI technique. It was done in Gondar town, which is located about 750 km northwest of Addis Ababa. According to the 2007 Ethiopian population and housing census report, the town had an estimated population of 206,987. Gondar town has 42 community pharmacies and 20 drug stores. The study was undertaken from March to May 2017. All community pharmacy professionals (CPPs) who were working in Gondar town community pharmacies and drug stores during the study period were targeted.

Sample size and sampling procedure

A convenience sampling technique was used, and a total of 62 community pharmacy professionals (pharmacists and druggists/pharmacy technicians) were included in this study.

Data collection technique and management

The evaluation (data collection) tool was adapted from the National Asthma Education and Prevention Program of America (NAEPP) step criteria for demonstration of an MDI to score the competency of MDI use by healthcare providers [23], and checked for suitability (each step with respect to the culture, religion and other aspects of the study setting) and applicability (in terms of the finance, materials and other resources needed to execute each step). As described in detail in a previous study done in a different region of Ethiopia [20], in this study CPPs were approached and provided with an informed consent form to confirm their agreement to participate.
As soon as we received their consent, they were given a sample beta2 agonist (a salbutamol puff), the most common MDI in the study area, and requested to demonstrate the correct technique to the interviewer as if he/she were an asthmatic patient seeking instruction on medication use. The interviewer was one of the investigators acting as a simulated medication user, although the participants knew he was not a real patient. Alongside the interviewer, a very senior, experienced pharmacist and principal investigator stood aside in the pharmacy/drug store and carefully observed the dispenser's skill and knowledge during the demonstration, then immediately scored the 11 criteria (Table 1) at the pharmacy or drug store; the score was kept secret (never disclosed to the pharmacist or druggist under evaluation). Scores were categorized as correctly demonstrated, incorrectly demonstrated, or skipped steps. According to the NAEPP criteria, CPPs' competency in inhalational technique was determined based on their capability to demonstrate all the essential steps (steps one, two, five, six, seven and eight) with a total score of 7 or above; those who did not demonstrate all the essential steps correctly or who scored <7 were considered as having poor competency in demonstrating inhalational technique.

Data entry and analysis

Data were edited, cleaned, coded, entered, and then analyzed using the Statistical Package for the Social Sciences (SPSS) version 20 for Windows. Categorical data were described as frequencies and percentages. Practice values were categorized by a data transformation tool into two groups, poor and adequate, based on the final scores. A chi-square test of association was planned to identify factors associated with competency in MDI technique, but we used the Fisher exact test to show predictors in the table since our data did not fulfill the assumptions for Pearson's chi-square.
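The scoring rule above (all essential steps correct plus a total score of at least 7) can be expressed as a small helper function. This is an illustrative sketch, not the authors' analysis code, and the function name is ours:

```python
# Essential steps per the NAEPP-based criteria used in this study
ESSENTIAL_STEPS = {1, 2, 5, 6, 7, 8}

def classify_competency(correct_steps):
    """Classify one 11-step MDI demonstration.
    correct_steps: collection of step numbers (1-11) demonstrated correctly.
    'adequate' requires every essential step correct AND a total
    score (number of correct steps) of 7 or above; otherwise 'poor'."""
    steps = set(correct_steps)
    if ESSENTIAL_STEPS <= steps and len(steps) >= 7:
        return "adequate"
    return "poor"
```

Note that both conditions are required: a demonstrator who performs nine steps correctly but misses steps 7 and 8 is still classified as poor.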
Ethical considerations

The study was ethically approved by the Institutional Review Committee of the School of Pharmacy, University of Gondar, with approval number UoG-SoP-130/2017. The data collected were kept anonymous and recorded in such a way that the pharmacy professionals involved could not be identified. Moreover, the evaluation results were not disclosed to the pharmacy professionals under evaluation.

Operational definitions

In our study:

Pharmacy. A drug shop with the mandate to hold any medicine and medical equipment. The professional who dispenses inside a pharmacy is a pharmacist; no one else is allowed to dispense, according to the Food, Medicine, Healthcare Administration and Control Authority (FMHACA) of Ethiopia.

Drug store. Unlike a pharmacy, a drug store is a drug shop where the medicines that may be dispensed are restricted; it is not legal to hold all medications in this type of medicine retail outlet. For instance, it is not allowed to hold medications such as psychotropic/narcotic drugs. The professional who dispenses inside a drug store is a druggist.

Pharmacists. In Ethiopia, these are professionals with a bachelor's degree from a private or government university who have taken all the courses a medication expert needs. Their course lasted four years, and nowadays lasts five years.

Druggists. This name can be used interchangeably with 'pharmacy technician'. In Ethiopia, these are professionals holding a diploma from a college (not university level). Their courses last only three years and are not as comprehensive as those of pharmacists.

Results

Among the 70 community pharmacy professionals approached, 62 (32 pharmacists and 30 druggists) completed the survey, a response rate of 85.6%. Eight community pharmacy professionals were not willing to be involved in the study for various reasons, such as being busy or refusing to respond.
Among the respondents, 39 (62.9%) were males, with a mean age of 33.9 (SD ± 10.05) years. The majority of respondents, 43 (69.4%), had a work experience of <5 years. The detailed demographics of respondents are shown in Table 2. Of the 62 participants, only 5 had received training on MDI use. In this study, only three (4.8%) respondents were competent enough to demonstrate the essential steps correctly. Respondents were evaluated on their competence in MDI technique. Accordingly, step 2 (removing the cap) was the most frequently demonstrated step (80.6%), followed by step 1 (shake the content well) (72.6%). On the other hand, step 4 (tilt the head back slightly) and step 7 (begin to breathe in slowly and deeply through the mouth and actuate the canister once) were among the least demonstrated steps (9.7% and 22.6%, respectively). The frequency with which respondents demonstrated each MDI technique step is given in Table 1. This study revealed that only 13 participants scored seven or above, but most of them missed one or more of the essential steps (1, 2, 5, 6, 7 and 8) (Fig 1). The Fisher exact test revealed a significant difference (P = 0.015) in competency in demonstrating adequate inhalational technique between respondents who had taken training on basic inhalational techniques and those who had not. Unlike training, other variables such as educational status, work experience and working sector had no significant association with competency in delivering sufficient inhalational technique (Table 3).

Discussion

The changing role of community pharmacy professionals from their traditional dispensing responsibilities to a greater contribution to population health is being accepted across the world [24]. To the best of our knowledge, this is the first survey to explore the competency of community pharmacy professionals in MDI technique in northwest Ethiopia.
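The training-versus-competency comparison can be reproduced in outline with SciPy's Fisher exact test. The 2×2 counts below are hypothetical (the paper reports only the marginal totals: 5 trained professionals and 3 adequate demonstrators), so the resulting P value is illustrative only:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = trained / untrained,
# columns = adequate / poor competency. The counts are illustrative,
# not the study's raw cross-tabulation.
table = [[2, 3],     # trained:   2 adequate, 3 poor
         [1, 56]]    # untrained: 1 adequate, 56 poor

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
```

With expected cell counts this small, the Fisher exact test is preferred over Pearson's chi-square, as the authors note.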
In this study, the community pharmacy professionals' demonstration of correct MDI use was poor. The poor demonstration seen in this study concurs with findings reported from other countries. A recent study suggested that around 25% of patients had not received any verbal instructions for the use of their prescribed inhaler [27]. When given, instructions were often hurried, of poor quality and not reinforced. Merely an estimated 11% of patients received follow-up assessment and education about their device use technique [28]. One study concluded that training in basic MDI use techniques could improve the skills of patients and providers [14]. In our study, very poor performance in MDI use techniques among CPPs was observed; this could be partially explained by the country's pharmacy education curriculum, which focuses mainly on theoretical topics rather than practical sessions in the community. In addition, the lack of frequent supervision of CPPs' performance, which would allow appropriate interventions, might be a reason for the poor competency of CPPs in correctly demonstrating MDI use to their clients. According to the findings of this study, only 13 of the 62 participants involved in the survey scored seven or above. Yet most of them missed the essential steps (1, 2, 5, 6, 7, and 8), with only three professionals capable of addressing the essential steps, and no one got all steps right in this study, unlike the finding from Oman, where 15% of the respondents performed all the steps correctly [29]. The study conducted in Mekelle reported that only two respondents had adequate competency in demonstrating MDI use [20]. It is perhaps not surprising that patients frequently use their device(s) wrongly, since healthcare professionals' understanding of the proper use of these devices is also poor. A recent study conducted in the UK stressed that only 7% of healthcare providers, including pharmacists, could demonstrate all the accurate steps of MDI use [12].
Several studies have uncovered similar results, with doctors, nurses, and pharmacists having poor knowledge of the optimal use of different inhalers. It is essential that those providing patient training are themselves capable of demonstrating the steps correctly [27]. Among the essential steps, "begin breathing in slowly and actuate the canister once" was the most frequently skipped in our findings, which is quite comparable to the Ethiopian study in Mekelle [20]. In the Nigerian study, however, this step was usually demonstrated by the practicing pharmacists (90.2%) [19], while a study conducted in Iran revealed that "depressing the canister" was the most frequently occurring error [3]. Of all steps, the most frequently skipped and/or incorrectly demonstrated was "tilt the head back", unlike the study done in Mekelle [20], where "exhale and wait one minute before the second dose" was the most frequently skipped and incorrectly demonstrated step. A similar result was also observed in the study done in Nepal [18]. According to Fisher's exact test, there was a significant association (P = 0.015) between competency in demonstrating adequate inhalational technique and having taken training on basic inhalational techniques at the Gondar town health office. However, the significance seen in our study could partially result from the small number of trained pharmacy professionals. In contrast, the influence of gender on competency was found to be insignificant, which is similar to the result of a study done by Chafin et al. [30].

Strengths and limitations

This survey highlights an area of community pharmacy practice where literature is lacking in Ethiopia. Yet the survey has some limitations that should be noted when interpreting the results. As the study was a cross-sectional survey conducted in Gondar town, caution should be exercised when generalizing to other cities and regions in Ethiopia.
Moreover, our direct visits to community pharmacy professionals at their workplaces could have affected the responses through respondent bias, which could have been reduced had our study used a simulated-patient approach. Even with the above limitations, this survey has significant implications for improving the active engagement of community pharmacy professionals in health promotion and disease state management for patients with asthma; its results can serve as input for recommending that the town health office arrange regular capacity-building sessions for CPPs to improve their knowledge and skills in MDI use techniques.

Conclusion

Community pharmacy professionals' competency in MDI use technique was poor. Despite the involvement of all participants in patient counseling on inhalers, none of them was able to execute all steps correctly, which shows that patients who visited those CPPs in the town were not adequately instructed. In our study, no significant association was found between educational status, work experience, or work sector and competence in MDI technique. However, a significant difference (P = 0.015) was noted in competency at demonstrating adequate inhalational technique between respondents who had taken training on basic inhalational techniques and those who had not, bearing in mind that this significant difference might be a result of the low number of trained participants.

Implications

To strongly integrate community pharmacies into future asthma care and optimize the contribution of pharmacy professionals, interventions such as establishing and providing regular capacity-building education to CPPs should be put into action by all stakeholders. Follow-up studies of community pharmacy professionals' involvement, using mixed methods including cross-sectional and simulated-patient methodology, may also be needed nationally to identify barriers and better inform regulatory bodies.
Use of microRNAs as Diagnostic, Prognostic, and Therapeutic Tools for Glioblastoma

Glioblastoma (GB) is the most aggressive and common type of cancer within the central nervous system (CNS). Despite the vast knowledge of its physiopathology and histology, its etiology at the molecular level has not been completely understood. Thus, attaining a cure has not been possible yet, and it remains one of the deadliest types of cancer. Usually, GB is diagnosed when some symptoms have already been presented by the patient. This diagnosis is commonly based on a physical exam and imaging studies, such as computed tomography (CT) and magnetic resonance imaging (MRI), together with or followed by a surgical biopsy. As these diagnostic procedures are very invasive and often result only in the confirmation of GB presence, it is necessary to develop less invasive diagnostic and prognostic tools that lead to earlier treatment to increase GB patients' quality of life. Therefore, blood-based biomarkers (BBBs) represent excellent candidates in this context. microRNAs (miRNAs) are small, non-coding RNAs that have been demonstrated to be very stable in almost all body fluids, including saliva, serum, plasma, urine, cerebrospinal fluid (CSF), semen, and breast milk. In addition, serum-circulating and exosome-contained miRNAs have been successfully used to better classify subtypes of cancer at the molecular level and make better choices regarding the best treatment for specific cases. Moreover, as miRNAs regulate multiple target genes and can also act as tumor suppressors and oncogenes, they are involved in the appearance, progression, and even chemoresistance of most tumors. Thus, in this review, we discuss how dysregulated miRNAs in GB can be used as early diagnosis and prognosis biomarkers as well as molecular markers to subclassify GB cases and provide more personalized treatments, which may have a better response against GB.
In addition, we discuss the therapeutic potential of miRNAs, the current challenges to their clinical application, and future directions in the field.

Glioblastoma

Gliomas are derived from glial cells. In addition to being the most lethal and representing 30% of all primary tumors in the adult central nervous system (CNS), gliomas also encompass 75-80% of all malignant tumors within the CNS [1,2]. Glioma hallmarks include a rapid growth rate, high invasion and metastasis capacities, and resistance to treatment. Based on their histological and immunobiological characteristics, gliomas have largely been classified into astrocytomas, brain stem gliomas, ependymomas, oligoastrocytomas (mixed gliomas), oligodendrogliomas, and glioblastomas [3]. In 2016, the World Health Organization (WHO) proposed a classification of brain tumors from grades I to IV, depending on whether tumors present nuclear atypia, microvascular proliferation, mitotic activity, and necrosis [1,3]. Gliomas can then be classified as low-grade (I and II) or high-grade (III and IV) based on their growth and invasion potential. More recent research has allowed the WHO to better categorize gliomas while considering not only histological but also molecular features [4]. By following this approach, prognostic and therapeutic benefits have been observed, particularly in cases in which histologically identical grade tumors have different responses to treatment and, therefore, distinct survival outcomes.
In adults, glioblastoma (GB), also referred to as grade IV astrocytoma or glioblastoma multiforme (GBM), represents the most aggressive and common type of cancer within the CNS, as it rapidly grows and invades the brain and the spinal cord [5,6]. Many of the histological and functional hallmarks of GB have been described, as has the participation of genetic factors such as genomic rearrangements and point mutations that lead to the inactivation or altered functioning of genes, including isocitrate dehydrogenase 1 (IDH1) [1,3,7], the canonical tumor suppressor gene tumor protein P53 (TP53) [7,8], phosphoinositide 3-kinase (PI3K)/protein kinase B (AKT)/mechanistic target of rapamycin kinase (MTOR) [7], phosphatase and tensin homolog (PTEN) [9], retinoblastoma (RB) [1,7,8,10], epidermal growth factor receptor (EGFR) [7,11], and many genes in their downstream signal transduction pathways. Nevertheless, less than 5% of GB cases are explained by genetic syndromes, mutations, or single-nucleotide polymorphisms (SNPs) in these genes [1,2]. Thus, most of the nongenetic molecular mechanisms involved in the etiology and progression of most GB cases are scarcely known.

Within the nongenetic mechanisms of gene expression control, both epigenetics and non-coding RNAs (ncRNAs) have been described to play essential roles in physiological and pathological conditions by modifying gene expression at different levels. Among them, the altered expression of a small class of ncRNAs known as microRNAs (miRNAs) has been reported to occur in every type of cancer, including GB [6,12]. Moreover, the involvement of miRNAs in providing cancer with its typical intra- and intertumoral heterogeneity has also been reported for these tumors. However, the roles of miRNAs in the etiology, chemo-resistance, immune-evasion capacity, and overall malignancy of GB are not fully understood.
The need for novel diagnostic, prognostic, and therapeutic options for GB patients is pressing. With an average 2-year survival of only about 20% and a 5-year survival of less than 5-8%, it is one of the deadliest cancers [13]. One of the reasons for this poor prognosis may be related to the fact that diagnosis often occurs once the tumor has reached a significant size and the patient presents clear neurological symptoms, often because of a high grade of malignancy. Diagnosis typically involves the use of imaging techniques, such as positron emission tomography (PET) scans, computed tomography (CT) scans, and nuclear magnetic resonance (NMR), and, in addition, it often requires a biopsy for pathological confirmation. These are expensive and risky procedures, and it is not surprising that they are only applied once the physician has a strong suspicion of brain cancer; thus, these methods are not feasible to use as preventive or early detection methods. Therefore, the use of less invasive biomarkers that can be acquired in a more accessible manner and are easy to study either from tissue or from plasma or sera would greatly enhance our ability to detect GB early. In this regard, miRNAs are excellent candidates.
miRNAs

miRNAs are small non-coding RNAs with a length of 21-23 nucleotides (nts) [14]. miRNAs are endogenous to organism genomes, and they are generated by a step-by-step biogenesis process (Figure 1). Firstly, miRNA genes are transcribed by RNA polymerase II (POLII) to produce a primary transcript (pri-miRNA) of around 1000 nts. Pri-miRNAs are then processed by the "microprocessor complex", consisting of the double-stranded RNA-binding protein DiGeorge syndrome critical region 8 (DGCR8) and the RNAse III enzyme DROSHA, giving rise to stem-loop precursor miRNAs (pre-miRNAs), which usually have an approximate size of 70-100 nts, a two-nucleotide overhang at their 3′ ends, and a 5′ phosphate group as a result of being processed by DROSHA [14]. The next step is guided by the GTP-dependent protein Exportin-5, and, once in the cytoplasm, pre-miRNAs are recognized by another RNAse III called DICER. DICER removes the loop part of the pre-miRNA, and the resulting RNA duplex consists of the passenger RNA strand (previously known as miRNA*) and the future mature miRNA strand, which normally exerts negative control over gene expression. This duplex is then recognized by the RNA-induced silencing complex (RISC) and the miRISC complex is formed. Depending on its sequence, the mature miRNA binds to complementary sequences found in the 3′ untranslated region (3′ UTR) of its target messenger RNAs (mRNAs). When the mature miRNA is almost fully complementary to its target sequence, the mRNA is degraded by a cut made by Argonaute (AGO). However, when only the miRNA's seed sequence is complementary to its target mRNA, translational inhibition is promoted [15][16][17]. Although most miRNAs negatively regulate gene expression, either by mRNA degradation or translational inhibition, some miRNAs that bind to the 5′ UTR of mRNAs have been reported to induce transcription [18,19] and mRNA stability and, thus, enhanced translation [20]. Therefore, gene expression control by miRNAs depends on
both the degree of sequence complementarity between miRNAs and their target sequences and the mRNA region recognized by the miRNA [15][16][17][21].

Figure 1. The pri-miRNA is processed to generate a precursor miRNA (pre-miRNA) with a size of 70-100 nucleotides (nts). The pre-miRNA has a stem-loop structure and is transported from the nucleus to the cytoplasm by the GTPase EXPORTIN 5. Once in the cytosol, the loop region of the pre-miRNA is removed by another type III RNAse named DICER, resulting in the generation of an RNA duplex with an approximate size of 21-23 nts. One strand of the duplex will constitute the mature miRNA, while the other one will be degraded. Once the mature miRNA is released, it recruits the RNA-induced silencing complex (RISC) and, depending on the degree of sequence complementarity and the recognized region within the miRNA target messenger RNA (mRNA), gene expression will be either positively or negatively regulated by different mechanisms of action: transcriptional or translational induction (if the miRNA binds to the 5′ untranslated region (UTR) of the mRNA with imperfect sequence complementarity), mRNA degradation (if the miRNA binds to the 3′ UTR of the mRNA with perfect/almost perfect sequence complementarity), and translational repression (if the miRNA binds to the 3′ UTR of the mRNA with imperfect sequence complementarity).

miRNAs can act as either oncogenic (oncomiRs) or tumor-suppressing molecules depending on the cellular context. Thus, it is not surprising that altered miRNA expression profiles have been described for different cancer types, and GB is not the exception [1,5,[22][23][24][25][26][27][28][29]. Moreover, as previously mentioned, a single miRNA can regulate the expression of many target genes and one gene can be targeted by multiple miRNAs [30], which results in intricate regulatory networks that can differ according to the analyzed condition [31]. It is also known that, depending on the cancer type, the same miRNA can not only promote or inhibit tumor growth but can also regulate different target genes in each condition. miR-7, for example, can be an oncomiR or a tumor suppressor gene, and it also regulates distinct genes in a cellular context-dependent manner [32][33][34][35]. Notably, miR-7 expression levels present dynamic changes both in normal CNS development [36][37][38][39] and in some CNS
disorders [40,41], including GB [32,[42][43][44][45][46][47][48], and, as these facts are true for miR-7, they have also been reported for most of the miRNAs involved in tumorigenesis. Thus, as miRNAs can act as oncogenic or tumor suppressor molecules, it is not surprising that aberrant miRNA expression profiles have been reported to occur in every type of cancer, including gliomas [5,22,26,29,49].

miRNAs Function in Glioblastoma

Particularly in GB, the expression pattern of several miRNAs has been detected to be altered not only due to transcriptional defects or errors in their multistep biogenesis process but also due to genomic rearrangements, such as chromosomal translocations, insertions, and deletions involving genomic loci in which there are sometimes miRNA genes [1,12,50]. As a result, several studies have applied genomics and deep sequencing techniques to analyze miRNA expression patterns in GB; in fact, a vast number of miRNAs have been described as either upregulated or downregulated, mostly in in vitro models of GB, including several human and rat GB-derived cell lines [1,5,[22][23][24][25][26][27]29,51]. This has been achieved by combining different experimental methods, including quantitative PCR (qPCR), microarrays, and deep sequencing. In this sense, miR-9, miR-10b, miR-15a, miR-16, miR-17, miR-19a, miR-20a, miR-21, miR-25, miR-28, miR-93, miR-130b, miR-140, and miR-210 have been reported as upregulated in both GB cell lines and clinical samples of grade II gliomas progressing to grade IV secondary GB [5,52] (Figure 2). On the other hand, miR-7, miR-29b, miR-32, miR-34, miR-181 family members, miR-184, and miR-328 are among the downregulated miRNAs in GB progression [5,52] (Figure 2). In additional studies, some of these and other miRNAs have been detected to significantly change and directly influence GB hallmarks, such as proliferation, angiogenesis, immune evasion, resistance to cell death, metastasis, and appearance of chemo-resistance
by the target genes they regulate in this context [1,6,12]. Here, we summarize the main role and some of the known target genes of miRNAs whose expression is significantly deregulated either in vitro or in GB tissue samples. Moreover, we discuss how these changes in both miRNAs and their target genes can influence GB progression and invasiveness.

miR-21

miR-21 is deregulated in all described solid tumors, and GB is not the exception. miR-21 is considered an oncomiR, as its levels increase as tumorigenesis advances, and it downregulates the function of several cellular and molecular pathways by directly targeting genes such as the insulin-like growth factor (IGF)-binding protein-3 (IGFBP3) [58], reversion-inducing cysteine-rich protein with kazal motifs (RECK) [59], and TIMP metallopeptidase inhibitor 3 (TIMP3) [59], resulting in enhanced cell migration and invasiveness and reduced apoptosis of tumor cells. Moreover, it was the first miRNA reported as being deregulated in this class of CNS tumor, and its overexpression has been detected not only in brain tissue of GB patients but also in their cerebrospinal fluid (CSF), plasma, serum, and within serum- and CSF-derived exosomes [60], suggesting that this miRNA can be used as a less invasive diagnostic, prognostic, and even monitoring molecule of a patient's response at different treatment stages.
3.1.3. miR-25

miR-25 was corroborated to act as an oncomiR in GB cell lines, as its overexpression promoted cell proliferation and invasion, while its inhibition caused the opposite effects. Moreover, miR-25 was shown to directly target the mRNA of the neurofilament light polypeptide (NEFL) gene [61]. In addition, miR-25 upregulation and NEFL downregulation were demonstrated to occur in human grade IV astrocytoma (GB) clinical specimens [61], suggesting an essential role for miR-25 in the development of GB malignancy and its potential as a therapeutic target for GB treatment. However, as discussed before, one miRNA can have a dual role in cancer progression; accordingly, miR-25 overexpression was shown to inhibit cell growth in an in vivo mouse model of GB by negatively regulating the expression of the p53 inhibitor mouse double minute 2 (MDM2) gene, thus promoting p53 accumulation in GB cells [51]. These findings point to miR-25 as a positive regulator of p53, underscoring a new tumor suppressor role for it in GB tumorigenesis.
miR-33a

The existence of cancer stem cells (CSCs) has been reported as an important characteristic of tumor growth, as these cells possess high proliferation and self-renewal rates [62]. In the context of GB, the presence of glioma-initiating cells (GICs) has also been demonstrated, and miR-33a expression promotes their growth and self-renewal by directly interacting with the mRNAs encoding the phosphodiesterase 8A (PDE8A) and UV radiation-resistance-associated gene (UVRAG) genes, which are known to negatively regulate the cAMP/PKA and NOTCH pathways, respectively [63]. Notably, the activation of the NOTCH pathway commonly occurs in many cancers, as it promotes CSCs' self-renewal and growth [64]. Thus, the miR-33a-mediated reduction in UVRAG promotes the growth and self-renewal of GICs by enhancing NOTCH activity [63]. In this regard, the sole overexpression of miR-33a in non-GICs results in them acquiring GIC characteristics [63]. Moreover, it has been observed that elevated levels of miR-33a are associated with a poor prognosis for GB patients [63]. Thus, it seems that miR-33a has an oncogenic role in GB progression by promoting the activity of the NOTCH pathway. Additionally, miR-33a-5p was shown to negatively regulate the expression of the PTEN tumor suppressor gene in an in vitro model of GB [65]. Thus, in a more immunotherapeutic approach to treating GB, an inhibitor of programmed death ligand 1 (PD-L1) has been demonstrated to enhance the radiosensitivity of GB cancer cells not only by causing DNA damage but also by downregulating miR-33a-5p expression and, therefore, elevating PTEN levels [65].
miR-93

miR-93 is a member of the oncogenic miR-106b-25 cluster of miRNAs. By itself, in vitro overexpression of miR-93 in GB-derived cells promotes cell survival, growth, and sphere-formation capabilities, while, in an in vivo model of GB, miR-93 induces angiogenesis [66]. In addition, miR-93 negatively regulates the expression level of the integrin beta-8 (ITGB8) gene, which is an inducer of cancerous cell apoptosis [66]. Thus, increased miR-93 levels result in reduced apoptosis of tumor cells and, therefore, in enhanced GB growth [66]. Additionally, when miR-93-overexpressing GB cells are co-cultured with endothelial cells, the latter spread, grow, and migrate more [66], suggesting that miR-93 induces tumor growth, at least in part, by promoting angiogenesis.

miR-125b

Despite miR-125b-2 being transcribed from a different genomic locus than miR-125b-1 (miR-125b), at the level of mature miRNAs they share the same sequence and have the same regulatory effects within cells [67]. Thus, we will indistinctly call them miR-125b. Usually, in the pro-neural subtype of GB, the activity of the canonical wingless-type (WNT)/β-catenin signaling pathway is exacerbated, leading to higher cell proliferation, increased formation of spheres, and inhibition of apoptosis [68]. Moreover, GB malignancy is enhanced as downregulation of its natural inhibitor, the frizzled class receptor 6 (FZD6) gene, is exerted by the GB-overexpressed miR-125b [69]. In addition, miR-125b was detected as being overexpressed in GB tissues, and, in this same study, miR-125b inhibition in GB stem cells (GBSCs) enhanced the apoptotic effect of temozolomide (TMZ) on them [70]. Even though a direct interaction between miR-125b and the mRNA of the antiapoptotic B-cell CLL/lymphoma 2 (BCL2) gene was not validated, decreased BCL2 levels were observed when miR-125b was inhibited in GBSCs, suggesting that miR-125b overexpression might confer GBSCs' resistance to TMZ by significantly reducing the levels of BCL2 [70].
It was also observed that the inhibition of miR-125b combined with the use of a PI3K inhibitor enhances GBSCs' sensitivity to TMZ through targeting of the WNT/β-catenin signaling pathway [71]. Another study showed that oncogenic miR-125b confers TMZ resistance on GB cells by targeting both the TNF alpha-induced protein 3 (TNFAIP3) and the NF-κB inhibitor-interacting Ras-like 2 (NKIRAS2) genes, as these cells also present increased NF-κB activity and upregulation of antiapoptotic and cell cycle genes [72]. In contrast, inhibiting miR-125b results in cell cycle arrest, increased apoptosis, and increased sensitivity to TMZ, indicating that endogenous miR-125b is sufficient to control these processes [72]. Most importantly, high levels of miR-125b were clearly associated with shorter overall survival of GB patients treated with TMZ [72], suggesting that this miRNA is an important predictor of patients' response to treatment.

miR-141-3p

Overexpression of miR-141-3p was observed in GB tissues compared to healthy brain. Moreover, a significant inverse correlation was also observed between miR-141-3p expression and P53 protein level [73]. After different experiments were performed in vitro, it was demonstrated that miR-141-3p directly targets the mRNA of the TP53 gene; therefore, it is not surprising that GB cells that present abnormally high levels of this miRNA proliferate more and present reduced cell cycle arrest and apoptosis. In addition, miR-141-3p overexpression also induces TMZ resistance of GB cells in vitro [73]. In an orthotopic mouse model of human GB, inhibition of miR-141-3p reduced tumor growth within the brain and significantly increased mouse survival [73]. Thus, miR-141-3p might potentially serve as a new diagnostic marker and therapeutic target for GB treatment.
miR-155-3p

miR-155 is one of the most studied miRNAs within the immune system and inflammatory processes. Thus, it is not surprising that alterations in its expression profile promote the occurrence of cancer-related processes, such as cell proliferation, cell cycle progression, apoptosis, and immune system evasion [74][75][76][77]. In the GB context, miR-155-3p has been shown to negatively regulate the expression of several genes related to tumorigenesis and the development of TMZ resistance, including sine oculis homeobox homolog 1 (SIX1) [78] and protocadherin 7 (PCDH7) [55], the latter being a tumor suppressor that inhibits the WNT/β-catenin pathway. In this sense, miR-155-3p overexpression is oncogenic, as it induced cell proliferation and inhibited TMZ-induced apoptosis [78], and both oncogenic phenotypes were reversed by SIX1 overexpression and miR-155-3p inhibition. In addition, miR-155-3p inhibition reduced GB cell growth and proliferation in the brain of a mouse model and increased the survival of tumor-bearing mice [78]. Even though, in the context of GB, miR-155-3p target genes involved in regulating inflammatory pathways have not been extensively reported, it is feasible that this miRNA represents a good candidate for the development of future anti-GB RNA-based therapies [79][80][81][82][83][84][85][86].
3.1.9. miR-182

miR-182 is an oncogenic miRNA that is positively regulated by different factors, such as the signal transducer and activator of transcription 3 (STAT3) [56] and transforming growth factor beta (TGF-β) [87] transcription factors (TFs), which are aberrantly overexpressed in GB. Interestingly, miR-182 was shown to negatively regulate the protocadherin-8 (PCDH8) gene, which results in higher proliferation and invasion rates of GB cells [56]. In addition, miR-182 promotes a proinflammatory microenvironment within GB tumors by inhibiting the expression of the CYLD gene, a negative regulator of the nuclear factor kappa B (NF-κB) signaling pathway. Thus, as this miRNA is related to the regulation of the immune response, it is possible that future GB therapeutic tools could be based on inhibiting its expression. In contrast, it has been observed that miR-182 sensitizes GB cells to TMZ treatment by inducing more apoptosis through the negative regulation of genes including BCL2-like 12 (BCL2L12), MET proto-oncogene (MET), and hypoxia-inducible factor 2α (HIF2A) in a GB model of intracranial tumors [88]. Therefore, the modulation of miR-182 levels could also be used as an RNA-based therapy against GB.
miR-210-3p

Hypoxia refers to a lack of oxygen, which typically occurs at the center of tumors, where regular vessels cannot supply it. In this sense, GB tumors present hypervascularization and necrosis, both caused by a hypoxic microenvironment. The hypoxia-inducible factor 1 subunit alpha (HIF1A) is the main transcription factor (TF) activated by hypoxia, and it regulates the transcription of multiple target genes, including miRNAs. In this sense, miR-210-3p, together with miR-1275, miR-376c-3p, miR-23b-3p, miR-193a-3p, and miR-145-5p, was found to be upregulated by hypoxia in GB tissue, while miR-92b-3p, miR-20a-5p, miR-10b-5p, miR-181a-2-3p, and miR-185-5p were downregulated by hypoxia, and some of them present HIF1A binding sites within their promoter regions [22]. Additionally, miR-210-3p was found to promote the hypoxic survival and chemo-resistance of GB cells by negatively regulating hypoxia-inducible factor 3 subunit alpha (HIF3A), a negative regulator of the hypoxic response. In contrast, in the rat GB cell line C6, miR-210-3p acts as a tumor suppressor, as it inhibits both cell proliferation and migration by negatively regulating the iron-sulfur cluster assembly enzyme (Iscu) gene [89], which makes the miR-210-3p/Iscu axis a potential target for the treatment of this type of glioma.
A summary of upregulated miRNAs and their targets is given in Table 1.

miR-7

miR-7 is one of the most expressed miRNAs within the CNS. Some of its regular functions include the control of neural precursor proliferation and both neuronal and glial differentiation. miR-7 has been reported to act as an oncomiR or as a tumor suppressor molecule, depending on the type of cancer. Moreover, miR-7 is downregulated in both tumoral tissue [29,44,49,90] and serum [25] of GB patients compared to healthy controls. Particularly in GB, miR-7 was shown to act as a tumor suppressor miRNA, as it directly targets the epidermal growth factor receptor (EGFR) gene and negatively regulates the AKT signaling pathway through the direct inhibition of its upstream regulator, the insulin receptor substrate 2 (IRS2) [44]. Moreover, in this same study, the delivery of miR-7 into GB cell lines decreased both their viability and their invasion capacity [44]. In another study, miR-7 was shown to negatively regulate the T-box 2 (TBX2) gene, whose expression is upregulated in GB tissue; thus, miR-7 downregulation and TBX2 overexpression in GB result in both EMT induction and increased cell invasion of GB in vitro [90]. miR-7 was also shown to target the special AT-rich sequence-binding protein 1 (SATB1) gene, the expression of which promotes GB cell migration and invasion [48], whereas focal adhesion kinase (FAK) downregulation by miR-7 results in decreased invasion of GB cells [47]. Notably, miR-7-5p suppresses stemness and enhances the TMZ sensitivity of drug-resistant GB cells by targeting the yin and yang 1 (YY1) TF [43]. Moreover, in murine xenograft GB, miR-7 was capable of inhibiting tumor angiogenesis and growth by directly targeting the O-linked N-acetylglucosamine (GlcNAc) transferase (OGT) gene [32], and these same phenotypes were achieved when miR-7 targeted the Raf-1 proto-oncogene (RAF1) in GB cell lines [45]. As observed, miR-7 regulates different hallmarks of GB progression, which makes this miRNA a very
promising target for the development of future GB therapies. Additionally, a previous report showed that the RNA-binding protein quaking isoforms 5 (QKI-5) and 6 (QKI-6) regulate the miR-7 biogenesis process [50]. Moreover, it has been reported that the processing of pri-miR-7 into its mature miR-7 form is altered in GB cell lines [44], although the mechanism that regulates this process was not studied. It is thus possible that regulation by QKI-5 and QKI-6 represents one of the mechanisms by which miR-7 processing is controlled in GB, making it an additional target for future GB therapies.

miR-9

The tumor suppressor miRNA miR-9 is typically downregulated in GB patient samples [91], although one contrasting study found miR-9 expression to be induced by hypoxia in GB cells [93]. One of the mechanisms that downregulates miR-9 in the GB context is epigenetic transcriptional silencing mediated by enhancer of zeste homolog 2 (EZH2) [92]. In contrast to most miRNAs, both arms, miR-9-5p and miR-9-3p (previously known as miR-9 and miR-9*, respectively), are expressed and functional [91]. Both miR-9-5p and miR-9-3p were shown to induce the proliferation of GBSCs by reducing the expression of the calmodulin-binding transcription activator 1 (CAMTA1) gene, which normally induces the expression of the antiproliferative cardiac hormone natriuretic peptide A (NPPA) [94]. In GB cells, miR-9 overexpression suppresses mesenchymal differentiation by downregulating the expression of Janus kinases 1 (JAK1), 2 (JAK2), and 3 (JAK3), resulting in the inhibition of the STAT3 signaling pathway [27], while it inhibits the proliferation of GB cells by targeting the cyclic AMP response element-binding protein (CREB) [95]. Moreover, the proliferation and aerobic glycolysis of GB cells are suppressed by miR-9 overexpression, which results in the downregulation of lactate dehydrogenase A (LDHA) [96]. In contrast to its usual tumor suppressor activity, miR-9 induces angiogenesis and cell migration
by repressing sphingosine-1-phosphate receptor 1 (S1PR1) [92] and neurofibromin 1 (NF1) [95], respectively. Remarkably, CREB induces the transcription of both miR-9 and NF1, forming a regulatory feedback loop. Moreover, miR-9 influences GB cell proliferation by regulating stathmin 1 (STMN1) [97], which controls microtubule dynamics during cell division. On the other hand, miR-9 overexpression reduces the invasion and migration of GB cells: by negatively regulating both mitogen-activated protein kinase 14 (MAPK14) and MAPK-activated protein kinase 3 (MAPKAP3), it blocks the formation of the MAPK14/MAPKAP3 complex and alters the regulation of the actin cytoskeleton [94]. Moreover, miR-9 overexpression promotes the apoptosis of GB cells in vitro by suppressing the expression of the structural maintenance of chromosomes 1A (SMC1A) gene [98]. Overexpression of miR-9 also inhibits cell proliferation by directly targeting forkhead box P2 (FOXP2) [92]. Additionally, miR-9 is involved in the acquisition of chemo-resistance by GB cells through the negative regulation of the patched homolog 1 (PTCH1) gene, and the inhibition of miR-9 in resistant GB cells sensitizes them to chemotherapy [97]; however, most of its target genes during this process are currently unknown. In addition, the sex-determining region Y-box 2 (SOX2) TF, a direct target of miR-9-3p, is overexpressed in GB patients, leading to increased chemo-resistance, self-renewal, and tumorigenicity of GBSCs within GB tumors [97]. Thus, modulating miR-9 levels may represent a promising therapeutic strategy to diagnose and treat GB in the future.
miR-29a

The miR-29 family of miRNAs has tumor suppressor functions, as its members usually promote the apoptosis of cancerous cells by targeting the cell division cycle 42 (CDC42) gene [99,100]. In GB tumors, miR-29 expression is usually decreased due to hypermethylation of its promoter region. Moreover, miR-29 overexpression has been found to inhibit the proliferation, migration, and invasion of GB cells by negatively regulating genes including the DNA methyltransferases 3 alpha (DNMT3A) and 3 beta (DNMT3B) [101], TNF receptor-associated factor 4 (TRAF4) [100], and QKI-6 [102], with the latter interaction being a possible regulator of miR-7 levels in this context. As previously mentioned, GBSCs play an important role in GB progression; thus, gene expression in these cells is also crucial for tumor growth [103]. In this sense, miR-29a downregulation in GBSCs results in the overexpression of its target genes, platelet-derived growth factor subunits A (PDGFA) and C (PDGFC) [104], which leads to less apoptosis and higher proliferation, migration, and invasion rates of GB cells [100,104]. Thus, targeting miR-29a levels might be useful for GB treatment.

miR-30a

miR-30a has been found in GB-cell-derived exosomes. Overexpression of this miRNA increases the chemosensitivity of GB cells to TMZ by directly targeting Beclin 1 (BECN1) and thereby inhibiting autophagy [105]. Notably, miR-30a was demonstrated to negatively regulate the expression of brain-derived neurotrophic factor (BDNF) in a GB cell line treated with paroxetine, a typical antidepressant drug [106]. Thus, even though the role of the miR-30a:BDNF interaction has not been evaluated in the context of GB progression [106], it may have an important function in promoting tumor growth, as BDNF is known to induce cell survival.
miR-34a

It is not surprising that miR-34a acts as a tumor suppressor miRNA, as it is a direct transcriptional target of P53. In GB tumors, TP53 is typically silenced or mutated and, therefore, its targets, including miR-34a, are also downregulated. Compared to normal brain tissue, miR-34a is downregulated in GB and, accordingly, its overexpression in GB cells inhibits cell proliferation and induces apoptosis by directly inhibiting the BCL2 [28] and NAD+-dependent sirtuin 1 (SIRT1) [28] genes. In an independent study, the platelet-derived growth factor (PDGF) signaling pathway was shown to repress miR-34a expression in GB. The platelet-derived growth factor receptor alpha (PDGFRA) gene is, in turn, directly targeted by miR-34a in this cancer type, constituting a negative feedback regulatory loop that results in GB progression [107]. Another study showed that miR-34a directly targets several oncogenes in GB cells, including notch receptors 1 (NOTCH1) [108,109] and 2 (NOTCH2) [108] and cyclin-dependent kinase 6 (CDK6) [108]; accordingly, miR-34a overexpression inhibits GB cell proliferation, cell cycle progression, survival, and invasion by targeting these three genes [108]. Moreover, the miR-34a expression level is inversely correlated with Met proto-oncogene (MET) levels in human GB tumors [108]; however, whether MET is directly targeted by this miRNA was not evaluated. Another direct target of miR-34a that contributes to GB tumorigenesis by regulating translation is Musashi RNA-binding protein 1 (MSI1); miR-34a overexpression reduces MSI1 protein levels, resulting in decreased cell proliferation [109]. The RPTOR-independent companion of mTOR complex 2 (RICTOR) gene has also been reported as a direct target of miR-34a and, through its downregulation, GB malignancy increases via the indirect activation of both the AKT/mTOR and WNT signaling pathways [28]. miR-34a was also
demonstrated to directly target the TF YY1, which results in EGFR overexpression and GB tumor growth [110]. Additionally, in p53-mutated GB tumors, the loss of miR-34a expression results in higher levels of its direct target gene, WNT ligand 6 (WNT6), which, in turn, activates WNT signaling and eventually promotes WNT-mediated chemo-resistance of GB to TMZ. Thus, TMZ treatment, together with the inhibition of miR-34a, induces drug sensitivity in p53-mutant GB cells and extended survival in xenograft mice in vivo [111]. Notably, by analyzing publicly available genomic data from GB patients, an integrative in silico study uncovered the potential regulation of the TGF-β signaling pathway, which is usually overexpressed in GB [87], via an SMAD family member 4 (SMAD4) transcriptional network mainly orchestrated by miR-34a, which was also observed to be a good discriminator of the proneural and mesenchymal GB subtypes [112]. These results indicate that published, as well as future, bioinformatic analyses of the available genomic data of GB patients [28] can provide a more comprehensive panorama of the intricate gene regulatory networks acting throughout GB progression, potentially enhancing current and future diagnostic, prognostic, and therapeutic tools.

miR-101-3p

miR-101-3p is considered a tumor suppressor miRNA in many cancers [113]. It has many targets involved in cell proliferation and immune control and is thus downregulated in several cancer types [113]. In the context of glioblastoma, miR-101-3p inhibits several tumor hallmarks, such as invasion, proliferation, migration [114,115], metastasis [116], and chemo-resistance [117], and it regulates many genes. It has been shown that, while the prostaglandin-endoperoxide synthase 2 (PTGS2) gene, which encodes the COX2 protein, is upregulated in GB vs.
normal tissue, miR-101-3p is markedly downregulated. PTGS2 regulates the conversion of arachidonic acid into prostaglandin E2 (PGE2), which, in turn, enhances the activity of regulatory T cells (T-regs), an anti-inflammatory cell type that dampens the immune response. miR-101-3p directly downregulates PTGS2 and drastically reduces invasion and proliferation [115]. miR-101-3p also inhibits GB proliferation, migration, and invasion in vitro, as well as tumor growth in vivo, at least in part by directly downregulating the SRY-box transcription factor 9 (SOX9) TF, which, in turn, promotes the activity of the AKT, WNT, and BMI1 pathways [114]. In addition, miR-101-3p downregulates the expression of the tripartite motif-containing 44 (TRIM44) gene. The TRIM44 TF is a known regulator of the EMT process, and its inhibition by miR-101-3p reduces cell migration and proliferation [116]. Finally, it has also been shown that miR-101-3p is significantly downregulated in both TMZ-chemo-resistant GB cell lines and patient-derived samples. This chemo-resistant phenotype is reversed by miR-101-3p overexpression, in part through the downregulation of the glycogen synthase kinase 3 beta (GSK3B) gene, which encodes a protein kinase involved in cell metabolism.
miR-124-3p

miR-124-3p is one of the most abundantly expressed miRNAs in neural tissue and has important functions in neurodevelopment and neural cell differentiation [118]. Several studies have found that miR-124-3p is downregulated in GB, which alters tumor cell growth, survival, migration, and chemo-resistance [119-122]. The mRNA of Ras homolog family member G (RHOG), a small GTPase involved in cell migration, has been shown to be directly targeted by miR-124-3p in the GB context. When miR-124-3p is downregulated, RHOG upregulation promotes cell proliferation and migration, whereas the overexpression of miR-124-3p significantly decreases cell migration and survival and increases apoptosis [119]. miR-124-3p also downregulates SOS Ras/Rac guanine nucleotide exchange factor 1 (SOS1), which activates the RAS/MAPK pathway by interacting with RAS and promoting the conversion of guanosine diphosphate (GDP) to guanosine triphosphate (GTP), resulting in enhanced cell survival. When miR-124-3p is downregulated in GB cells, SOS1 expression increases and promotes cell growth [121]. Another notable miR-124-3p target gene is FOS-like 2, AP-1 transcription factor subunit (FOSL2), which encodes the Fos-related antigen-2 (FRA2) protein, a TF involved in EMT. FOSL2 is upregulated in GB in response to miR-124-3p downregulation, and its knockdown results in decreased cell proliferation, migration, and invasion [122]. miR-124-3p also negatively regulates the Aurora kinase A (AURKA) gene, which encodes a mitotic serine/threonine kinase involved in cell cycle control. AURKA is upregulated in GB, its overexpression correlates with poor patient survival, and it enhances cell survival and chemo-resistance. In turn, AURKA downregulation through miR-124-3p overexpression suppresses cell growth and increases chemosensitivity [120].
miR-128-3p

miR-128-3p is a tumor suppressor miRNA that is highly expressed in the mammalian brain and is frequently downregulated in many cancer types, including GB [123]. In GB, miR-128-3p has been shown to regulate proliferation, migration, tumor formation, and chemo-resistance, and it may be involved in GBSC biology [124-126]. miR-128-3p is particularly downregulated in GBSCs compared to regular tumor cells. Notably, treatment with DNA methylation inhibitors enhances miR-128-3p expression, with stronger effects in GBSCs, suggesting that miR-128-3p may be silenced by epigenetic mechanisms. Furthermore, miR-128-3p upregulation leads to decreased cell proliferation, migration, and invasion in vitro, and its overexpression decreases tumor growth in vivo. These effects seem to be mediated, at least in part, by the miR-128-3p-mediated downregulation of both the BMI1 proto-oncogene polycomb ring finger (BMI1) gene, which forms part of the epigenetic polycomb repressor complex 1 (PRC1), and the E2F transcription factor 3 (E2F3) gene, which is involved in the RB pathway [126]. Moreover, like miR-101-3p, miR-128-3p downregulates PTGS2 expression, thereby suppressing the proliferation of GB cells [124]. Additionally, miR-128-3p is involved in the acquisition of chemo-resistance by GB cells. While miR-128-3p downregulation correlates with high malignancy in vivo and in vitro, its overexpression has the opposite effect and enhances chemosensitivity to TMZ treatment. In this context, miR-128-3p seems to downregulate both the platelet-derived growth factor receptor alpha (PDGFRA) and MET proto-oncogene, receptor tyrosine kinase (MET) genes, which are involved in EMT, thus enhancing the effect of TMZ. Accordingly, the overexpression of MET abrogates this effect, whereas its silencing using small interfering RNAs (siRNAs) phenocopies miR-128-3p overexpression [125].
miR-142-3p

miR-142-3p is downregulated in GB. It regulates proliferation, migration, invasion, and chemo-resistance and may play a considerable role in immunosuppression [127-131]. miR-142-3p regulates the EGFR pathway by directly inhibiting the AKT serine/threonine kinase 1 (AKT1) gene, thus decreasing cell proliferation when overexpressed [130]. It also mediates cell migration and invasion by directly downregulating Rac family small GTPase 1 (RAC1), a small GTPase involved in synaptic function and the regulation of matrix metalloproteinases [129]. miR-142-3p is also involved in regulating the immune response: the proinflammatory interleukin 6 (IL-6) promotes DNA methylation of the miR-142-3p promoter, silencing it through epigenetic mechanisms, while the IL6 gene is itself a direct target of miR-142-3p, forming a regulatory loop. miR-142-3p also targets the high-mobility group AT-hook 2 (HMGA2) gene, an activator of the transcription of SOX2, a well-known stemness marker. Thus, when miR-142-3p is downregulated, both its direct target genes, IL6 and HMGA2, and its indirect target gene, SOX2, are highly expressed, which correlates with poor patient survival. Conversely, upregulating miR-142-3p in in vivo models decreases IL6, HMGA2, and SOX2 levels, leading to less tumor growth [128]. Notably, it has also been found that miR-142-3p is markedly downregulated in tumor-infiltrating macrophages, which correlates with an M2 anti-inflammatory phenotype, although the precise mechanisms by which miR-142-3p may regulate this phenotype need further investigation [131]. Finally, miR-142-3p is involved in chemo-resistance, as the O-6-methylguanine-DNA methyltransferase (MGMT) gene is one of its direct targets. The MGMT protein acts in DNA repair by counteracting alkylating agents, such as TMZ, the most widely used chemotherapy agent against GB. Thus, miR-142-3p downregulation enhances MGMT expression and resistance to TMZ, whereas its
overexpression reverts this effect [127].

miR-146a-5p/miR-146b-5p

The miR-146 family of miRNAs comprises two members, miR-146a-5p and miR-146b-5p, which share an identical seed sequence and, therefore, possess similar targets. Both miRNAs are negative regulators of the immune response and are highly expressed in microglia and other immune cells [132,133]. Although tightly linked, the two miRNAs are usually studied individually and have distinct targets, although there is evidence that one can rescue the function of the other [132]. Both miR-146a-5p and miR-146b-5p are downregulated in GB and have targets involved in cell proliferation, migration, invasion, stemness, and chemo-resistance [134-141]. miR-146a-5p has been shown to directly downregulate NOTCH1, an important modulator of cell stemness and a pathway that enhances EGFR signaling, one of the key gliomagenesis pathways. The knockdown of miR-146a-5p promotes tumorigenesis in astrocytes, while its overexpression in GB cells inhibits proliferation, migration, tumor growth, and GBSC formation and enhances apoptosis [134,136]. Moreover, miR-146a-5p negatively regulates POU class 3 homeobox 2 (POU3F2) and SWI/SNF-related, matrix-associated, actin-dependent regulator of chromatin, subfamily A, member 5 (SMARCA5), two TFs involved in stemness. Upregulation of miR-146a-5p reduces POU3F2 and SMARCA5 levels and increases chemosensitivity to TMZ treatment. Furthermore, low levels of miR-146a-5p correlate with worse patient outcomes [135]. miR-146b-5p downregulates the EGFR pathway, which is frequently upregulated in GB. Moreover, apart from epigenetic silencing, the loci harboring miR-146a/b are frequently lost in GB patients. Overexpression of miR-146b-5p reduces cell migration, invasion, and AKT phosphorylation, a hallmark of tumorigenesis [141]. Another target gene regulated by miR-146b-5p is TNF receptor-associated factor 6 (TRAF6). TRAF6 is an
adaptor protein that positively regulates both the PI3K/AKT and mitogen-activated protein kinase kinase kinase 7 (MAP3K7) pathways, which enhance cell survival. miR-146b-5p expression is negatively correlated with both TRAF6 expression and tumor grade in patient biopsies [140]. miR-146b-5p is also downregulated during the acquisition of TMZ chemo-resistance, while TRAF6 expression increases; miR-146b-5p overexpression reverts this phenotype [139]. Low miR-146b-5p levels, or its loss due to genomic rearrangements, also correlate with increased migration and invasion capacities, at least partially through the regulation of the matrix metallopeptidase 16 (MMP16) gene, which encodes a matrix metalloproteinase. Upregulation of miR-146b-5p markedly reduces MMP16 expression and the migration and invasion capacities of GB cells [137,138]. Although most reports have shown that the downregulation or loss of miR-146a/b indicates a poor prognosis for patients [134,139,140], a recent report found miR-146b-5p to be upregulated in recurrent GB [142]. The reported cohort is small, however, and further investigation is needed to assess whether miR-146b-5p can act as an oncogene in this context.
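The reason miRNAs that share a seed sequence, such as miR-146a-5p and miR-146b-5p, are predicted to hit overlapping targets can be illustrated with a short sketch. The convention modeled here is the standard one (the seed is nucleotides 2-8 of the mature miRNA, and a target 3'UTR carries the reverse complement of that seed); the sequences below are illustrative stand-ins, not validated annotations.

```python
# Minimal sketch of seed-based target prediction. Assumes the standard
# 7-nt seed definition (positions 2-8 of the mature miRNA); sequences
# are illustrative, not curated database entries.

def seed(mirna: str) -> str:
    """Return the 7-nt seed region (positions 2-8) of a mature miRNA."""
    return mirna[1:8]

COMPLEMENT = str.maketrans("AUCG", "UAGC")

def seed_match_site(mirna: str) -> str:
    """Reverse complement of the seed: the motif searched for in a 3'UTR."""
    return seed(mirna).translate(COMPLEMENT)[::-1]

def has_seed_site(utr: str, mirna: str) -> bool:
    """True if the 3'UTR contains a perfect match to the miRNA's seed site."""
    return seed_match_site(mirna) in utr

# Two miRNAs with an identical 5' end share the same seed ...
mir_a = "UGAGAACUGAAUUCCAUGGGUU"  # miR-146a-5p-like illustrative sequence
mir_b = "UGAGAACUGAAUUCCAUAGGCU"  # miR-146b-5p-like illustrative sequence

# ... so any UTR carrying the complementary site is a candidate target of both.
utr = "AAAAGGAAUUCAGUUCUCAAAA"  # invented 3'UTR fragment with a seed site
print(seed(mir_a), has_seed_site(utr, mir_a), has_seed_site(utr, mir_b))
```

Because only the 5' seed drives this match, the two family members are interchangeable at this level of prediction, even though their 3' ends differ, which is consistent with each also having experimentally distinct targets.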
miR-181a/b/c/d

The miR-181 family is a conserved family of four miRNAs produced from four genomic loci: one containing miR-181a-1 and miR-181b-1, a second containing miR-181a-2, a third containing miR-181b-2, and the last containing the miR-181c and miR-181d genes. They all share the same seed sequence and may thus regulate a similar set of genes [143]. The whole family is downregulated in GB, and this downregulation correlates positively with tumor stage [144]. All miR-181 members have been found to negatively regulate the RAP1B member of the RAS oncogene family (RAP1B) gene, which is involved in cytoskeleton remodeling. Decreased RAP1B levels correlate with TMZ chemo-resistance, and the upregulation of miR-181a/b/c/d reverses this phenotype [145]. miR-181a-5p has also been found to directly downregulate F-box protein 11 (FBXO11), part of the Skp, Cullin, and F-box (SCF) ubiquitin ligase complex that negatively regulates P53. In GB, low miR-181a-5p levels correlate with a higher migration ability of cells, whereas its overexpression correlates with lower migration and invasion rates, as well as with higher apoptosis rates, concomitant with lower FBXO11 levels. Furthermore, miR-181a-5p overexpression increases TMZ chemosensitivity [146]. miR-181b-5p suppresses GB growth by inhibiting the Sp1 transcription factor (SP1). GB cells with low levels of miR-181b-5p have increased levels of SP1 and show increased levels of both the glucose transporter type 1 (GLUT1) and pyruvate kinase M1/2 (PKM2) genes, which are regulators of glucose metabolism and predicted targets of SP1 [147]. Furthermore, increased levels of miR-181b-5p decrease tumor growth in vivo, an effect that is reversed by SP1 overexpression [147]. Lastly, it has been shown that miR-181d-5p directly downregulates MGMT expression (similarly to miR-142-3p; see above). As expected, miR-181d-5p downregulation correlates with poor patient survival when patients undergo TMZ treatment but not when
patients are untreated. This suggests that the miR-181d-5p-mediated regulation of MGMT affects TMZ chemo-resistance and that low levels of miR-181d-5p predict a poor drug response [148].

A summary of downregulated miRNAs and their targets is given in Table 2.

miRNAs as Biomarkers in Liquid Biopsies

Previously, we summarized some of the most representative miRNAs with a key role in the appearance and progression of GB. Some of them have been detected not only in vitro and in postmortem GB samples but also in the serum, plasma, and even circulating exosomes of GB patients [6]. This suggests that, as previously described for other types of cancer, circulating miRNAs are promising candidates to serve as noninvasive biomarkers for the early diagnosis and prognosis of GB. Notably, it has been demonstrated that circulating miRNAs, whether contained within exosomes or not, possess high stability and are practical and straightforward to detect in almost any kind of biofluid, including breast milk, blood, serum, saliva, urine, semen, and cerebrospinal fluid (CSF). Some of the circulating miRNAs reported in the biofluids of GB patients are listed in Table 3. Beyond their use as GB biomarkers, some miRNAs have recently been proven to serve as good molecules for monitoring the response of GB patients to treatment. Thus, the use of miRNAs for the development of future RNA-based GB therapies is very promising.
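In practice, a first-pass liquid-biopsy screen of the kind described above often reduces to comparing the expression of candidate circulating miRNAs between patient and control fluids and flagging those beyond a fold-change cutoff. The sketch below illustrates only that arithmetic; all miRNA names, expression values, and the cutoff are invented for the example and are not measurements from any cited study.

```python
import math

# Hypothetical sketch of a liquid-biopsy screen: compare the mean expression
# of candidate circulating miRNAs between GB sera and healthy-control sera
# and flag those beyond a log2 fold-change cutoff. All numbers are invented.

def log2_fold_change(case_vals, control_vals):
    """log2 of the ratio of mean case expression to mean control expression."""
    return (math.log2(sum(case_vals) / len(case_vals))
            - math.log2(sum(control_vals) / len(control_vals)))

# Invented normalized expression values: (GB sera, control sera).
serum = {
    "miR-21-5p": ([8.1, 7.6, 9.0], [2.0, 2.2, 1.9]),  # oncomiR-like: up in cases
    "miR-7-5p":  ([0.9, 1.1, 0.8], [3.8, 4.1, 4.4]),  # suppressor-like: down
    "miR-x":     ([2.0, 2.1, 1.9], [2.0, 2.2, 2.1]),  # unchanged placeholder
}

CUTOFF = 1.0  # |log2 FC| >= 1, i.e., at least a two-fold difference
candidates = {m: round(log2_fold_change(cases, controls), 2)
              for m, (cases, controls) in serum.items()
              if abs(log2_fold_change(cases, controls)) >= CUTOFF}
print(candidates)
```

With these invented values, the upregulated and downregulated candidates pass the cutoff while the unchanged one is dropped; a real screen would of course add replicates, normalization, and statistical testing.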
Modulation of miRNA Expression Levels as a Therapeutic Strategy in Glioblastoma

As mentioned before, many miRNAs are up- or downregulated in cancer, mainly due to changes in the levels of the TFs and epigenetic factors involved in the control of their transcription [24,159]. Since mature miRNAs exert their regulatory function in the cytoplasm only after processing, the steps of the canonical miRNA biogenesis pathway (Figure 1) might themselves be considered targets for the development of specific therapeutic tools. However, miRNA processing in GB has been scarcely studied and, therefore, no current strategies have been developed to modify any step of this pathway. Moreover, because the proteins involved in miRNA biogenesis are responsible for the processing and generation of all cellular miRNAs, it is complicated to develop tools that alter the production of a particular miRNA without altering the levels of many others. Thus, research in this area will be needed in the near future, first, to mechanistically explain cases such as the reported alteration of pri-miR-7 to pre-miR-7 processing in GB [44] and, second, to develop therapeutic strategies to produce the desired amount of a particular mature miRNA.
Because cancer in general, and GB in particular, is a complex disease, it is to be expected that many genes, naturally including miRNAs, may be deregulated [1,6,12]. However, there is mounting evidence that miRNA deregulation may be a causal factor of the disease and not a simple consequence of it. Such evidence has been obtained in both in vitro and in vivo models (see above), and distinct miRNA signatures have also been found in large cohorts of patient samples [23,52,160]. In addition, many miRNAs have been shown to be part of key pathways involved in different aspects of GB biology, such as tumor formation, immune evasion, rapid growth, vascularization, cell diversity, formation of stem-cell-like cells, and chemo-resistance [24,159]. Moreover, miRNAs are recognized for their pleiotropic effects: one miRNA can modulate many targets, and one target gene may be modulated by many miRNAs at the same time and in the same cell. This property has been proposed as a tool for finding key miRNAs in cancer whose modulation would have a significant effect on the transcriptional regulatory network [31].

Many strategies have been proposed to regulate the function of mature miRNAs, and two general classes of miRNAs can be identified in cancer. First, oncomiRs can be defined as miRNAs that promote tumorigenesis, cell growth, cell survival, immune evasion, and related processes. OncomiRs are often upregulated in GB (see Table 1 above) [1,6,12,161] and, as a therapeutic strategy, should be blocked or downregulated. On the other hand, tumor suppressor miRNAs are involved in processes such as differentiation, cell death, cell cycle control, and immunogenicity, among other mechanisms that inhibit tumor growth and formation (see Table 2 above) [1,6,12,161]. These miRNAs are downregulated in GB, and overexpressing them would have a positive therapeutic effect.
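The pleiotropy argument above (one miRNA, many targets; one gene, many miRNAs) is essentially a statement about a bipartite miRNA-target network, where "key" miRNAs are high-degree nodes. A minimal sketch of that ranking idea follows; the miRNA-target map is a toy subset loosely echoing interactions discussed in this review, not a curated database.

```python
from collections import Counter

# Toy bipartite miRNA -> target-gene map (illustrative subset, not curated).
targets = {
    "miR-34a": {"BCL2", "SIRT1", "NOTCH1", "NOTCH2", "CDK6", "MSI1", "YY1", "WNT6"},
    "miR-7":   {"EGFR", "IRS2", "TBX2", "SATB1", "FAK", "YY1", "OGT", "RAF1"},
    "miR-9":   {"CAMTA1", "JAK1", "JAK2", "JAK3", "CREB"},
    "miR-30a": {"BECN1", "BDNF"},
}

# Degree of each miRNA: how many targets it touches (pleiotropy, side 1).
mirna_degree = {m: len(ts) for m, ts in targets.items()}

# Degree of each gene: how many miRNAs converge on it (pleiotropy, side 2).
gene_degree = Counter(g for ts in targets.values() for g in ts)

# "Key" candidates: highest-degree miRNAs and genes hit by more than one miRNA.
ranked = sorted(mirna_degree, key=mirna_degree.get, reverse=True)
shared = [g for g, d in gene_degree.items() if d > 1]
print(ranked, shared)
```

Even in this tiny example the two notions diverge usefully: miR-34a and miR-7 rank as the broadest regulators, while YY1 stands out as a convergence point targeted by both, exactly the kind of node the network-based approach in [31] proposes to prioritize.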
As mentioned before, a single miRNA can behave as an oncomiR or as a tumor suppressor, depending on its cellular context, and it is not surprising that the same miRNA may be classified as oncogenic in one cancer type or subtype and as a tumor suppressor in another. Thus, finding good miRNA therapeutic targets requires a good understanding of the underlying biology and of the effects that a particular miRNA has on its targets in a particular cell context [162,163]. Many of the aforementioned findings may therefore not be generalizable to all cell contexts (e.g., tumor stage, tumor type, whether the tumor is being or has been treated with chemotherapy agents, the genetic ancestry of the patient, etc.), and it is likely that many therapies will have to be developed for specific tumor contexts. Therefore, more basic research needs to be carried out to enhance our understanding of miRNA (de)regulation in GB formation, growth, invasion, immune evasion, response to chemotherapy, and beyond.

That said, two general strategies can be followed to alter miRNA expression [162,163]. Because miRNAs are small molecules that, in principle, may cross the blood-brain barrier, synthetic mimics are often used to enhance their expression [161,164]. Like the RNA-based therapeutic approaches used in COVID-19 vaccines, miRNA mimics can be delivered directly as double-stranded or single-stranded RNA molecules that are loaded directly into the RISC (Figure 1); these molecules often carry modified nucleosides to enhance their stability. Alternatively, miRNAs can be delivered in vectors, either as complete genes including the sequence of the natural pre-miRNA or as synthetic miRNA-like precursors, which combine a known precursor with the sequence of the miRNA whose overexpression is desired. In both cases, the precursor is delivered as DNA and needs to be transcribed in the nucleus and processed like naturally occurring miRNAs. Both strategies pose advantages and disadvantages that are not
dissimilar to those of COVID-19 mRNA/vector vaccines.

To decrease miRNA expression, two popular strategies have been followed [162,163]. The first is the use of antagomiRs as therapeutic tools. AntagomiRs (also called antisense oligonucleotides, or ASOs) are RNA molecules with modified nucleosides that dramatically enhance their stability [165]. These molecules are designed to be complementary to the miRNA sequence and, because of their enhanced stability, can sequester miRNA molecules without being degraded. Since they are small (virtually the same size as miRNAs), they are, in principle, easy to transport and deliver. The second strategy involves the use of miRNA sponges to decrease miRNA activity. Naturally occurring miRNA sponges are circular RNA molecules with dozens or even hundreds of miRNA binding sites [166]. Like antagomiRs, miRNA sponges can sequester miRNA molecules, allowing the expression of the original miRNA targets. The advantage of sponges over antagomiRs is that one can potentially target several miRNAs with the same sponge if the target sequences of different miRNAs are included in its sequence. One major disadvantage is that generating circular RNAs is technically more demanding than generating small single-stranded RNAs, and their delivery may be more difficult depending on their size.
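At the sequence level, the two inhibition designs just described are simple to state: an antagomiR is the reverse complement of the mature miRNA, and a sponge strings together several such binding sites. The sketch below captures only this sequence logic (spacer length, repeat count, and the absence of chemical modifications are simplifying assumptions); it is an illustration, not a validated therapeutic design.

```python
# Sketch of antagomiR and sponge sequence design. Spacer and repeat count
# are arbitrary illustrative choices; real designs add modified nucleosides
# (e.g., for stability) that plain sequence strings cannot represent.

RC = str.maketrans("AUCG", "UAGC")

def antagomir(mirna: str) -> str:
    """Full-length reverse complement that can base-pair with the miRNA."""
    return mirna.translate(RC)[::-1]

def sponge(mirnas, repeats=3, spacer="AAUA"):
    """Concatenate repeated binding sites for one or more miRNAs."""
    sites = [antagomir(m) for m in mirnas for _ in range(repeats)]
    return spacer.join(sites)

mir = "UAGCUUAUCAGACUGAUGUUGA"  # illustrative miR-21-5p-like sequence
anti = antagomir(mir)
multi = sponge([mir], repeats=3)
print(anti)
print(len(multi))  # 3 sites of 22 nt joined by two 4-nt spacers
```

The sponge function also shows the multiplexing advantage mentioned above: passing several miRNAs in `mirnas` yields one construct carrying binding sites for all of them, something a single antagomiR cannot do.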
Finally, with the advance of epigenetic editing tools, many of which are based on CRISPR/Cas systems, one may envision a future in which no external miRNA/anti-miRNA molecules need to be added [167]. Endogenous miRNA genes could be either silenced or overexpressed by modulating their epigenetic marks. One of the main advantages of such an approach is that the silencing or enhancing effect may be maintained by epigenetic mechanisms, potentially making the effects long-lasting and reducing the need for repeated doses. Independently of the method of choice, one of the main challenges for the realization of RNA-based therapeutics is the so-called delivery problem, which is discussed further in the section on the future of RNA-based therapies for GB.

Current FDA-Approved RNA-Based Therapies to Treat Cancer

COVID-19 was both a great challenge for humanity and clear proof of the power of RNA-based therapies. The quick development of many RNA-based vaccines, based on either mRNAs or viral vectors, has shown the great promise of RNA medicine. However, such success would have been impossible without decades of research in RNA biology and therapeutics [168]. Although there are currently no approved RNA-based therapies to treat any type of cancer, this will very likely change in the near future.
In general terms, four types of RNA molecules have been approved for therapeutic use: mRNAs, ASOs, siRNAs, and aptamers [169,170]. Of these, siRNAs and ASOs are relevant for this review. Eight ASO-based therapies have been approved by the FDA so far, mainly for the treatment of rare genetic diseases. Fomivirsen was the first approved ASO therapy; approved in 1998, it was discontinued in 2002 due to secondary effects and risk concerns [169,170]. A second generation of ASOs was then developed: Mipomersen was approved in 2013, followed in 2016 by the successful Nusinersen, the first approved RNA-based therapy for spinal muscular atrophy. Several further ASO treatments, mainly for Duchenne and other muscular dystrophies, were approved between 2016 and 2021. Moreover, between 2018 and 2022, four siRNA-based therapies (which act very similarly to endogenous miRNAs) were approved [169,170].

Although most of the approved therapies target very specific and somewhat rare diseases, the massive application of mRNA-based vaccines has dramatically increased our knowledge and advanced RNA-based technologies. This has boosted interest in ongoing clinical trials as well as in starting new ones. Several ASO-based therapies are being investigated for different cancer types in phase II clinical trials. Likewise, some siRNA-based therapies have been proposed, including an inhibitor of the Cbl proto-oncogene B (CBLB), which is being investigated in several cancers, including brain cancer, and is currently in phase I [171]. Additionally, some miRNA-based strategies are being investigated for the treatment of other diseases, including miR-34a for the treatment of melanoma [171].
As is the case with many technologies based on nucleic acids and other molecular therapies, delivering the molecule to the correct place at the correct time is fundamental for therapeutic success and safety. Thus, investigating technologies for the efficient and safe release of RNA molecules will be key for the advancement of the field, together with a keen understanding of the (epi)genetic networks underlying disease states.

The Future of RNA-Based Therapies for Glioblastoma

Because the brain is largely isolated from the rest of the body by the blood-brain barrier, treating brain-related diseases has proved challenging in general [172], and RNA-based therapies are no exception. Because miRNAs are molecules that exquisitely fine-tune gene expression in both time and space [118,173], any therapy that modulates them must also be very time- and space-specific to avoid unintended and potentially dangerous side effects [174]. The fact that, as discussed previously, the same miRNA can be either oncogenic or tumor-suppressive depending on the cellular context highlights the importance of achieving some degree of specificity for the target organ, preferably at the cell-type level [162,163]. Although it has been demonstrated that the blood-brain barrier becomes compromised in GB [175], it remains a relevant barrier that has to be overcome to achieve a sufficient molecule concentration for a therapeutic effect with minimal off-target exposure in other organs.
For this, two main strategies may be envisioned, which are not mutually exclusive. One is to develop small devices that can be surgically implanted in the brain near the tumor mass and that release the RNA molecule (or a mix of RNAs within lipid-based or other carrier particles as a cell-delivery tool) either constantly or at fixed intervals [176,177]. Advanced models may even modulate the amount of RNA delivered based on other metrics that the same device could measure, such as the production of specific biomarkers, perhaps even other miRNAs or other RNA types. Such a strategy has the advantage of potentially achieving a high concentration of the therapeutic RNA in the tumor while minimally affecting adjacent regions or other organs [176]. Moreover, the combination with other biomarkers and the possibility of fine-tuning or even personalizing the dose depending on the patient's response would be ideal. However, besides the surgical risks inherent to such approaches, the device would likely be expensive and would require highly trained personnel for implantation, monitoring, and removal, making it impractical as a first-line treatment [176,177].
Another strategy is to develop a delivery method that allows higher specificity without the need for external hardware. Such strategies may use pseudo-viral or virus-like capsids, nanoparticles of various types, or lipid-based carriers. The advantages and disadvantages of these strategies have been reviewed extensively elsewhere [178,179], but we would like to point out a few of them. Strategies based on virus-like capsids take advantage of the natural tropism of several viruses. For example, delivery systems based on the rabies virus, which has a high and specific brain tropism, have been proposed as an alternative for reaching the brain without compromising other organs [180]. On the other hand, nanoparticles and nanotechnology in general are extremely promising. It is difficult to specify the characteristics of the technology, as each nanoparticle may have significantly different properties depending on its composition. In general, however, the goal is a nanoparticle that allows specific delivery to the brain, perhaps even to the tumor, while avoiding release of its cargo in other parts of the body. The material must be safe; it should not be immunogenic (at least not outside the context of the tumor) and, preferably, it should be relatively easy and cheap to fabricate. It is likely that such materials will be developed in the coming years as the field of nanotechnology advances quickly.
Much hope is placed in the development of lipid-based particles [181]. Their main advantage is that, as they are naturally generated and can have natural tropisms for specific organs and even particular cell types, they can achieve the goals of being specific, potent, and safe [181]. In fact, mRNA vaccines are delivered in lipid-based particles and, while these are not yet very specific, they have proven to be safe [182]. Lipid-based particles can be generated artificially, which allows a very homogeneous preparation that is relatively easy to characterize, fabricate, store, and administer [181]. However, we still lack the knowledge to achieve high organ specificity with such particles. Naturally occurring extracellular vesicles, many of which contain RNA molecules, in particular miRNAs, may be an attractive alternative [183]. It has been shown, for example, in in vivo models of GB, that vesicles derived from irradiated cells can be used to enhance tumor immunogenicity [184]. Such strategies have the advantage of exploiting cellular mechanisms that, while not fully understood, still achieve potent antitumor activity. The disadvantage is that their preparation may be complex, and their molecular effects need to be explored before they are used as therapies in humans.
Finally, CRISPR/Cas technologies may also provide new avenues for treatment and research. Because direct gene editing may be difficult to control and is ethically controversial, epigenetic editing may be an attractive alternative [167]. The epigenetic editing of specific miRNAs may achieve considerable changes in the transcriptional network with only a few edits. The same delivery problems that apply to RNA molecules also apply to CRISPR/Cas systems, with the added complexity of delivering a relatively large protein (Cas). However, given the current research intensity in the field, it is likely that better and more potent CRISPR/Cas-based therapeutic strategies will be developed soon. Although several roadblocks remain to be resolved, RNA-based therapies, particularly those that use or modulate miRNAs, seem very promising for developing successful, potent, and safe treatments.

Conclusions

Although this review has focused on highlighting miRNAs' potential as diagnostic, prognostic, and therapeutic tools for GB, another class of non-coding RNAs, longer than 200 nts and named long non-coding RNAs (lncRNAs), has also been very useful in the study of cancer biology, including GB. Thus, it is not surprising that combining all knowledge regarding these two types of ncRNAs with future ncRNA-based therapies may make GB biology more understandable and its treatment more promising.
It is evident that more basic research is needed to increase our understanding of the relevant biology, particularly the effects that miRNAs have on the transcriptional network and how they relate to the remarkable ability of GB to grow aggressively while evading the immune system. Moreover, there is increasing information about the contributions of both the tumor microenvironment and immune cells, including monocytes/macrophages/microglia and T cells, to GB progression [7]. Therefore, targeting not only tumor cells but also blood vessels and immune cells should be considered when designing better clinical trials aimed at developing improved molecular-based treatments for GB. Such knowledge would allow us to discover diagnostic, prognostic, and therapeutic miRNAs to more effectively combat this devastating disease.

Finally, very little is known about the possible genomic variation in miRNAs between different populations. Although it would be desirable to find miRNAs that are always affected in GB independently of the context, perhaps a more realistic approach would be to consider the genetic and environmental differences that may affect the expression of specific miRNAs in different populations and even among different individuals. Thus, profiling both mRNAs and miRNAs in populations with diverse genetic, ethnic, cultural, and environmental backgrounds would be key to enhancing our understanding of miRNAs and their application as diagnostic, prognostic, and therapeutic tools.

Figure 1. miRNA biogenesis pathway. miRNA biogenesis begins with the transcription of a miRNA gene by RNA polymerase II (POLII), which results in the production of a long transcript called primary miRNA (pri-miRNA) that is recognized by the microprocessor complex consisting of the type III RNAse, DROSHA, and the DiGeorge syndrome critical region gene 8 (DGCR8) protein. The pri-miRNA is then processed to generate a precursor miRNA (pre-miRNA) with a size of 70−100 nucleotides (nts). The pre-miRNA has a stem-loop structure and is transported from the nucleus to the cytoplasm.

Figure 2. Downregulated (left) and upregulated (right) miRNAs in GB. Their related functions are depicted in boxes. Enhanced cellular functions are labeled with an upward-pointing arrow, while reduced cellular functions are labeled with a downward-pointing arrow.

Table 3. Circulating miRNAs in GB biofluid samples potentially serving as diagnostic and prognostic biomarkers.
Efficacy Studies against PCV-2 of a New Trivalent Vaccine including PCV-2a and PCV-2b Genotypes and Mycoplasma hyopneumoniae When Administered at 3 Weeks of Age

This study aimed to evaluate the efficacy of a new trivalent vaccine containing inactivated Porcine Circovirus 1-2a and 1-2b chimeras and a Mycoplasma hyopneumoniae bacterin administered to pigs around 3 weeks of age. This trivalent vaccine has already proven efficacious in a split-dose regimen but has not been tested in a single-dose scenario. For this purpose, a total of four studies, comprising two pre-clinical and two clinical studies, were performed. Globally, a significant reduction in PCV-2 viraemia and faecal excretion was detected in vaccinated pigs compared to non-vaccinated animals, as well as lower histopathological lymphoid lesion plus PCV-2 immunohistochemistry scorings and a lower incidence of PCV-2 subclinical infection. Moreover, in field trial B, a significant increase in body weight and in average daily weight gain was detected in vaccinated animals compared to the non-vaccinated ones. Circulation of PCV-2b in field trial A and of PCV-2a plus PCV-2d in field trial B was confirmed by virus sequencing. Hence, the efficacy of this new trivalent vaccine against a natural PCV-2a, PCV-2b or PCV-2d challenge was demonstrated in terms of reduction of histopathological lymphoid lesions and PCV-2 detection in tissues, serum and faeces, as well as improvement of production parameters.

Introduction

Porcine circovirus 2 (PCV-2) is a single-stranded DNA (ssDNA) virus ubiquitous in swine and the causative agent of the so-called Porcine Circovirus Diseases (PCVD), which may be manifested as subclinical or clinical infections [1]. The subclinical infection (PCV-2-SI) is the most common outcome of PCV-2 infection [1], causing a loss of average daily weight gain (ADWG) without evident clinical signs and, consequently, significant economic losses [2,3].
PCV-2 systemic disease (PCV-2-SD) is characterised by loss of weight, digestive signs, paleness of the skin and dyspnoea in pigs mainly between six and eleven weeks of age. This is the most severe outcome, causing significant economic impact on the swine industry worldwide [4,5]. Vaccination is the primary tool to reduce PCV-2 infection. In fact, PCV-2a vaccines have successfully decreased the prevalence and severity of PCV-2 infections [21]. Nowadays, PCV-2 vaccines in Europe are based on the PCV-2a genotype or a combination of the PCV-2a and PCV-2b genotypes [22–24]. Interestingly, some studies showed a closer genetic and antigenic relation between PCV-2b and PCV-2d sequences than between PCV-2a and PCV-2d sequences [20,23]. Hence, a bivalent vaccine containing the PCV-2a and PCV-2b genotypes may be a relevant option against the several PCV-2 genotypes circulating under field conditions [25,26].

In addition to PCV-2, Mycoplasma hyopneumoniae (M. hyopneumoniae) is an important pathogen that usually circulates in pigs during the post-weaning period. M. hyopneumoniae is the main causative agent of enzootic pneumonia (EP) and one of the main contributors to the porcine respiratory disease complex (PRDC), a multimicrobial and multifactorial condition in which different bacterial and viral agents are involved, including PCV-2 [27–30]. Therefore, the combination of PCV-2 and M. hyopneumoniae in one ready-to-use vaccine is a relevant option to reduce the handling of the animals and, consequently, the associated stress and management costs [31–33]. In fact, the number of combined PCV-2/M. hyopneumoniae vaccines has increased in the last few years, with seven such vaccines on the market nowadays.
The current work aimed to elucidate the efficacy, in pigs experimentally or naturally infected with PCV-2a or PCV-2b, of a new trivalent vaccine containing inactivated porcine circovirus 1 (PCV-1)/PCV-2a and PCV-1/PCV-2b chimeras (cPCV-1/2a, cPCV-1/2b) as well as M. hyopneumoniae (CircoMax Myco®), administered in one dose at three weeks of age. Different PCV-2 vaccination regimens have already been studied and commercialised. In fact, this chimeric vaccine has already shown efficacy in pre-clinical and clinical trials when applied in a split-dose immunisation regime at three days of age and three weeks later [26]. This latter regime is especially interesting in those farms where PCV-2 infection is detected at early stages of life (lactating or early nursery periods) or when herd immunity is poor. However, in those farms where PCV-2 infection is detected later, a single-dose vaccination should be enough to prevent or reduce a PCV-2 disease outbreak. Moreover, further advantages of a single-dose regime include reduction of management and vaccination costs.

Materials and Methods

A total of two pre-clinical and two clinical studies were conducted. Each of these studies was independently performed and evaluated by different regulatory agencies (the pre-clinical studies by the U.S. Department of Agriculture (USDA) and the clinical ones by the European Medicines Agency (EMA)). For this reason, requirements for efficacy parameters differed, and some laboratory analyses were conducted with different methodological approaches and distinct interpretation criteria. Efficacy variables considered in the pre-clinical studies were PCV-2 antibody levels, PCV-2 viraemia and faecal shedding, PCV-2 detection in lymphoid tissues and microscopic lymphoid lesions.
In the case of the clinical trials, the primary efficacy variable was PCV-2 viraemia as determined by real-time quantitative PCR (qPCR), and the following were considered secondary variables: PCV-2 antibody levels, body weight, ADWG, the correlation between PCV-2 antibody levels before vaccination and ADWG, mortality, PCV-2 faecal shedding, PCV-2 detection in lymphoid tissues, microscopic lymphoid lesions and PCV-2-SD or PCV-2-SI diagnoses.

Pre-Clinical Studies

Two experimental studies were carried out including vaccinated and non-vaccinated pigs challenged with PCV-2a or PCV-2b. Both were approved by the corresponding institutional animal care and use committees (IACUC) from Zoetis (KZ-3226e-2017-06-tkh for the PCV-2a study and PJ017 for the PCV-2b study).

PCV-2a Challenge Study

The study design is described in Table 1. A total of 25 clinically healthy animals with maternally derived antibodies (MDA) complied with the inclusion criteria and were enrolled in the non-vaccinated group, and 23 in the vaccinated one. Vaccinated animals were treated with one dose of the Investigational Veterinary Product (IVP) containing Porcine Circovirus Vaccine, cPCV-2a and cPCV-2b killed viruses, and M. hyopneumoniae bacterin adjuvanted with 10% SP Oil (equivalent to Fostera Gold® and CircoMax Myco®) in a single dose of 2 mL by the intramuscular (IM) route. The non-vaccinated group received a placebo containing a M. hyopneumoniae bacterin adjuvanted with 10% SP Oil in the same volume and by the same administration route. Following vaccination and until challenge, blood was collected from pigs at weekly intervals and tested by ELISA for PCV-2 antibody detection. All pigs were challenged with a PCV-2a field strain (4 mL dose; 2 mL IM and 2 mL intranasally (IN)) five weeks post-vaccination (SD35), when the S/P ratio of the mean MDA titre was ≤0.2 in the non-vaccinated group, ensuring maximal susceptibility to viral infection.
The PCV-2a field strain (isolate 40895; GenBank accession number AF2640) diluted 1:2 in Optimem (Gibco) was used for the PCV-2a challenge. The diluted stock material had a titre of 10^5.6 TCID50/mL. Following challenge, blood and faecal swabs were collected twice weekly to determine PCV-2 viraemia and shedding by qPCR. PCV-2 antibodies were also measured by ELISA at weekly intervals post-challenge. Pigs were euthanised and necropsied at 21 days post-challenge (SD56). Lymphoid tissues (tracheobronchial, mesenteric and superficial inguinal lymph nodes and tonsil) were collected and processed for histopathology and PCV-2 immunohistochemistry (IHC).

PCV-2b Challenge Study

The study design is described in Table 1, following the same schedule indicated for the PCV-2a challenge experiment. A total of 19 clinically healthy animals with MDA in the non-vaccinated group, and 16 in the vaccinated one, complied with the inclusion criteria and were enrolled in the study. When non-vaccinated pigs had a mean MDA titre corresponding to S/P ≤0.2 (approx. SD41), a PCV-2b challenge (4 mL dose; 2 mL IM and 2 mL IN) was performed. The PCV-2b strain (isolate FD07; GenBank accession number GU799576) diluted 1:2 in Optimem (Gibco) was used for the challenge, with a final titre of 10^5.3 TCID50/mL. All pigs were euthanised at SD61-62, and the same procedures and sampling were performed at necropsy as indicated for the PCV-2a challenge study.

Farm Selection

A total of two field trials were conducted in two different Spanish commercial farms. The criteria for farm selection were the existence of problems with PCVD or a history of PCVD in the last two-and-a-half years. Farm A was a two-site commercial farm (breeding and gestation plus nursery) with 2660 sows and a weekly farrowing batch system. Piglet weaning was carried out around four weeks of age. The sow farm was seropositive against M.
hyopneumoniae, Porcine reproductive and respiratory syndrome virus (PRRSV) and seronegative to Aujeszky's disease virus (ADV). Gilts and sows were crossbred (Duroc x Landrace). The sow and gilt vaccination programme of the farm included PRRSV, Porcine parvovirus, Erysipelothrix rhusiopathiae, Swine influenza virus (SIV), Actinobacillus pleuropneumoniae and PCV-2 (the piglets at weaning, the gilts at 6 months of age and the sows post-partum) immunisations. At the fattening facilities, pigs were vaccinated twice against ADV. Farm B was a farrow-to-finish commercial farm with 10,500 sows and a weekly farrowing batch system. Piglet weaning was conducted at approximately 25 days of age. The sow farm was seropositive against M. hyopneumoniae and PRRSV and seronegative to ADV. Gilts and sows were of the Pietrain breed. The sow and gilt vaccination programme included immunisation against PRRSV, SIV, Porcine parvovirus, Erysipelothrix rhusiopathiae, Escherichia coli, Clostridium perfringens type C, atrophic rhinitis, ADV, M. hyopneumoniae and PCV-2 (at 3 and 6 weeks of age). Gilts were also vaccinated against PCV-2 at two-and-a-half, six and seven months of age. Piglets were vaccinated against PRRSV before weaning and against ADV, PRRSV and SIV at fattening.

Study Design

The design of these field studies was blinded, randomised and controlled. A total of 4076 male and female pigs (2037 vaccinated and 2039 non-vaccinated) were enrolled in two trials: A and B (Table 2). The sample size used for each variable was calculated by a biometrician using data from field safety and efficacy studies previously performed [34]. The number of animals in each batch was determined by the number of clinically healthy pigs available in the particular week of study initiation. Thus, field trial A required recruitment of pigs from three different batches, while for field trial B one batch was enough.
Selection of the pigs included in the study and their distribution (blocked by gender) into vaccinated and non-vaccinated groups were conducted between SD-3 and SD0, SD0 being the vaccination day. Study animals were clinically observed daily throughout the study. A single vaccination was performed at approximately three weeks of age with 2 mL of a trivalent vaccine containing inactivated cPCV-1/2a and cPCV-1/2b chimeras and M. hyopneumoniae bacterin (CircoMax Myco®, Zoetis Inc., Lincoln, NE, USA) by the IM route in the neck. Non-vaccinated pigs received 2 mL of phosphate buffered saline (PBS). Pigs from each treatment group were housed commingled in the maternity and nursery phases, but males and females were separated by pen at fattening (each pen held vaccinated and non-vaccinated animals of the same gender). Blood samples and faecal swabs from piglets were collected at approximately 7, 11, 16, 20 and 25 weeks of age. Blood samples were also collected at three weeks of age (just before vaccination). Sera were analysed by a validated in-house PCV-2 antibody ELISA and by a qPCR assay, and faecal swabs were analysed by qPCR. Moreover, body weight was recorded before vaccination, at 16 weeks of age and before slaughter for approximately 400 animals per treatment group (a minimum of 328 and a maximum of 438, as indicated in Supplementary Material Table S1). The animals weighed during the study were not the same at each timepoint due to deviations occurring during the study (animal deaths or animals not found at the time of weighing), as indicated in Supplementary Material Table S1. Thus, extra animals from the same treatment group were selected for weighing when any animal selected for this action was missing. Dead animals and pigs euthanised for welfare reasons from weaning until the slaughterhouse were examined post-mortem to determine the cause of death.
Tissue samples collected at each necropsy (tracheobronchial, mesenteric and superficial inguinal lymph nodes and tonsil) were processed for histopathology and PCV-2 IHC for PCVD diagnosis, performed by a pathologist blinded to the treatment status. Moderate and severe histological lesions together with a moderate or high amount of PCV-2 antigen in lymphoid tissues were diagnosed as PCV-2-SD [1]. When a PCV-2-SD diagnosis was confirmed in the studied herd, 60 animals (30 per treatment group) were selected and necropsied to obtain the above-mentioned lymphoid tissue samples. These samples were analysed by histopathology and PCV-2 IHC. The Cap gene (ORF2) from the 20 serum samples with the highest PCV-2 viral load (6.3-8.3 log10 DNA copies/mL) belonging to the non-vaccinated groups was sequenced to determine the PCV-2 genotype(s) circulating on the farms. The clinical studies were approved by the Olot Animal Welfare Committee (ID PJ023) and performed according to the Guidelines on Good Clinical Practices.

PCV-2 Genotyping

Total DNA was extracted from serum samples, amplified and sequenced. Then, a phylogenetic analysis was performed and a phylogenetic tree was edited. All these procedures were performed as described in Pleguezuelos et al. [26].

DNA Extraction and PCV-2 qPCR

DNA from pre-clinical serum and faecal samples was extracted and qPCR-analysed with a non-commercial in-house qPCR as indicated in Mancera-Gracia et al. [6]. No threshold was applied; therefore, all detected values were reported as positive. In the case of the clinical studies, serum and faecal samples were extracted and qPCR-analysed with a commercial kit (LSI VetMAX™ Porcine Circovirus Type 2 Quantification, Applied Biosystems, Lissieu, France) [26]. The limit of detection (LOD) of the technique was 4 × 10^3 DNA copies/mL in serum samples and 1 × 10^4 DNA copies/mL in faecal swabs. The limit of quantification (LOQ) in serum samples and faecal swabs was 1 × 10^4 DNA copies/mL.
Log10 transformation of the qPCR results was conducted, and the results were interpreted as follows:
- Negative results or values below the LOD were given a value equal to half of the LOD (log10 3.3 copies/mL for serum samples and log10 3.7 copies/mL for faecal swabs).
- Values between the LOD and LOQ were considered positive and were given a value equal to the LOQ (log10 4.0 for serum samples and faecal swabs).
- Values over the LOQ were considered positive and were given the log10 of the qPCR result obtained.

PCV-2 Serology

Pre-clinical and clinical PCV-2 antibodies were detected using a validated in-house PCV-2 antibody ELISA [26]. Sera samples with a sample/positive control (S/P) ratio ((OD sample − OD negative control)/(OD positive control − OD negative control)) of ≥0.5 were considered positive.

Histopathology and PCV-2 IHC

Lymphoid samples collected at necropsy (tracheobronchial lymph node, mesenteric lymph node, superficial inguinal lymph node and tonsil) were fixed by immersion in 10% buffered formalin and examined for lesions compatible with PCV-2, including lymphocyte depletion (LD) and histiocytic replacement (HR). Moreover, another section was cut for PCV-2 antigen detection by IHC [5]. LD, HR and the amount of PCV-2 antigen were scored from 0 (no lesions/no staining) to 3 (severe lesions/widespread antigen distribution) for each lymphoid tissue collected. In the field trials, dead or euthanised pigs from weaning age onwards were classified as PCV-2-SD or PCV-2-SI following the diagnostic criteria indicated below: presence of at least one of the following clinical signs: wasting, weight loss, paleness of the skin, dyspnoea, diarrhoea, jaundice and/or superficial inguinal lymphadenopathy (only applicable to PCV-2-SD cases).

Statistical Analyses

Pre-clinical and field trial statistical analyses were carried out using SAS/STAT software (version 9.4 or higher, SAS Institute, Cary, NC, USA).
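The qPCR censoring rules and the ELISA S/P classification described above can be sketched in Python. This is a minimal illustration, not the study's actual pipeline: the function names are ours, the serum LOD/LOQ values are the ones stated in the text, and the mapping of "moderate" to score 2 on the 0-3 histopathology scale is our assumption.

```python
import math

# Serum qPCR limits stated in the text (DNA copies/mL)
SERUM_LOD = 4e3   # limit of detection
SERUM_LOQ = 1e4   # limit of quantification

def log10_serum_titre(copies_per_ml):
    """Censor a raw serum qPCR reading per the study's interpretation rules."""
    if copies_per_ml is None or copies_per_ml < SERUM_LOD:
        return 3.3                     # half of the LOD, on the log10 scale
    if copies_per_ml < SERUM_LOQ:
        return 4.0                     # between LOD and LOQ: positive, set to LOQ
    return math.log10(copies_per_ml)   # quantifiable: report log10 value

def elisa_sp_ratio(od_sample, od_neg, od_pos):
    """S/P ratio; samples with S/P >= 0.5 are considered seropositive."""
    return (od_sample - od_neg) / (od_pos - od_neg)

def is_pcv2_sd(lesion_score, ihc_score):
    """PCV-2-SD case rule: moderate/severe lymphoid lesions together with a
    moderate/high amount of PCV-2 antigen (we assume 'moderate' = 2 on 0-3)."""
    return lesion_score >= 2 and ihc_score >= 2
```

For example, a reading of 5 × 10^3 copies/mL falls between the LOD and LOQ and is reported as log10 4.0, while a reading of 10^7 copies/mL is reported as log10 7.0.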
A logarithm transformation was applied to the data before statistical analyses were conducted when needed. Comparisons were performed between treatment groups (vaccinated vs. non-vaccinated) in each trial. A general linear repeated-measures mixed model was used to analyse the following variables from the pre-clinical and field studies: sera and faecal qPCR results, ELISA S/P values and body weight. Linear functions of the least squares means for body weights were used to calculate estimates of the ADWG for each period. Moreover, a Pearson correlation coefficient was calculated to evaluate the correlation between PCV-2 antibodies before vaccination and the ADWG during the whole study. A generalised linear mixed model was used to analyse the following variables from the pre-clinical and field studies: ever positive (detected positive on at least one sampling point) for viraemia/shedding, mortality, LD, HR and IHC results separately, and diagnosis of PCV-2-SD or PCV-2-SI. When the mixed model did not converge, Fisher's exact test was used. Additionally, the effect of MDA on the seroconversion of vaccinated piglets in the field trials was evaluated by calculating a Pearson correlation coefficient between PCV-2 antibodies before vaccination and the increase in PCV-2 antibodies at seven weeks of age (Delta value). The significance level (α) was set at p ≤ 0.05 for all statistical analyses.

PCV-2a Challenge

Clinical Evaluation

No clinical signs or mortality due to treatment administration or challenge were recorded in any studied group. One animal from the non-vaccinated group was found dead at SD31 due to oedema disease. Additionally, one animal from the vaccinated group showed left hind leg lameness at SD5. After several days of anti-inflammatory treatment, no response was observed, and it was removed from the study.

PCV-2 Antibody Detection

Mean ELISA S/P ratio results are represented in Figure 1A.
The least squares mean of the S/P ratio decreased from SD0 to SD21 in both experimental groups. After challenge, S/P ratios in the vaccinated group were significantly higher (p ≤ 0.05) than in the non-vaccinated group at every timepoint tested post-challenge.

PCV-2 Viraemia and Faecal Shedding

All pigs included in the study were negative by PCV-2 qPCR before the challenge (SD39). PCV-2 viraemia was initially detected seven days post-challenge. From SD42 until SD49, the viral load in serum in the vaccinated group was significantly lower (p ≤ 0.01) than in the non-vaccinated one (Figure 1B). Moreover, the percentage of ever-viraemic pigs was significantly lower (p ≤ 0.01) in the vaccinated group compared to the non-vaccinated one (Table 3). Faecal shedding was in all cases of low viral load and was initially detected in the non-vaccinated group four days after challenge, continuing through the end of the study (Figure 1C). Additionally, from SD46 until the end of the study (SD56), faecal shedding was significantly lower (p ≤ 0.01) in the vaccinated group compared to the non-vaccinated one. The percentage of pigs ever positive for faecal shedding was significantly lower (p ≤ 0.01) in the vaccinated group than in the non-vaccinated group (Table 3).

PCV-2 Detection in Lymphoid Tissues and Microscopic Lymphoid Lesions

PCV-2 antigen was detected in lymphoid tissues by IHC in only very few animals from the non-vaccinated group, with no significant differences among treatment groups. Additionally, the percentages of pigs with LD and HR were not significantly different among treatment groups; the percentage of non-vaccinated pigs with lesions was very low (Table 4). HR and LD were considered in a combined analysis, as HR accompanies LD in cases of PCV-2-SD, but there were no significant differences among treatment groups (Table 4).

PCV-2b Challenge

Clinical Evaluation

No clinical signs or mortality were recorded in any studied group due to treatment or after challenge.
PCV-2 Antibody Detection

The mean PCV-2 ELISA S/P ratio results obtained during the study are represented in Figure 2A. All the pigs included in the MDA-positive groups had moderate levels of PCV-2 antibodies (ELISA S/P ratios ranging from 0.5 to 1.3). After challenge, a boost in PCV-2 antibody values was observed in the vaccinated group starting at SD48 (seven days post-challenge). The levels of PCV-2 antibodies detected in vaccinated pigs were significantly (p ≤ 0.01) higher than those in non-vaccinated ones at all timepoints after challenge (SD48, SD55 and SD61/62).

PCV-2 Viraemia and Faecal Shedding

The viral load detected in serum was significantly lower (p ≤ 0.05) in the vaccinated group compared to the non-vaccinated one throughout the post-challenge period (from SD48 to SD61/62), except for SD45 (Figure 2B). Additionally, the amount of PCV-2 detected in faecal swabs was significantly lower (p ≤ 0.01) in the vaccinated group compared to the non-vaccinated one at SD59 and SD61/62 (Figure 2C). No significant differences were detected among groups in the percentage of pigs ever positive in serum and faeces (Table 3).

PCV-2 Detection in Lymphoid Tissues and Microscopic Lymphoid Lesions

The percentage of pigs with positive IHC scores in any of the lymphoid tissues evaluated was significantly higher (p ≤ 0.05) in non-vaccinated animals compared to vaccinated ones. No significant differences were detected between vaccinated and non-vaccinated pigs in the percentage of animals with PCV-2-associated lesions in any of the lymphoid tissues evaluated (Table 4).

Field Trials

Clinical Evaluation

Body weight results and the ADWG are represented in Table 5. No significant differences in terms of body weight and ADWG were observed between the vaccinated and non-vaccinated groups of field study A at any time.
In field study B, a significantly higher (p ≤ 0.05) body weight was observed in the vaccinated group at 16 and at 24-27 weeks of age (one to five days before slaughter) compared to the non-vaccinated one. Additionally, in study B, the ADWG of vaccinated animals was significantly higher (p ≤ 0.05) than that of the non-vaccinated group over all three periods (from three to 16 weeks of age, from 16 to 24-27 weeks of age, and from three to 24-27 weeks of age). No significant differences in mortality were detected between treatment groups in either field trial. No significant correlation between PCV-2 antibody levels before vaccination and ADWG was detected in the vaccinated groups of either field trial, indicating that the ADWG of vaccinated pigs was independent of the ELISA S/P titres at vaccination.

PCV-2 Antibody Detection

No significant differences between treatment groups in mean PCV-2 S/P ratios before vaccine/placebo administration were found in either field trial (Figures 3A and 4A). In field trial A, piglets from the vaccinated group had significantly higher (p ≤ 0.05) mean PCV-2 antibody levels from 7 to 16 weeks of age compared to those of the non-vaccinated one (Figure 3A). In field trial B, piglets from the vaccinated group had significantly higher (p ≤ 0.05) mean S/P values at 11 and 16 weeks of age; in contrast, a significantly lower (p ≤ 0.05) mean PCV-2 S/P ratio was detected at 25 weeks of age compared to the non-vaccinated group (Figure 4A). The correlations between the PCV-2 ELISA S/P values of vaccinated animals before immunisation and their increase at seven weeks of age (Delta value) are represented in Figure 5A,B.
A significantly negative (p ≤ 0.05) correlation between IgG ELISA S/P values and PCV-2 antibody levels at seven weeks of age was detected in vaccinated groups from both field studies, indicating that the higher the PCV-2 S/P of the mother before vaccination, the lower the increase in PCV-2 S/P values observed at seven weeks of age. No significant correlation was obtained for the non-vaccinated groups in both field trials (data not shown).
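The negative correlation reported between pre-vaccination S/P values and the seven-week Delta value is a standard Pearson correlation. As a minimal sketch, the coefficient can be computed from scratch; the five piglet values below are invented solely to mirror the direction of the reported trend.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical piglets: higher maternally derived S/P at vaccination
# paired with a smaller Delta (S/P increase at seven weeks of age).
sp_at_vaccination = [0.2, 0.5, 0.8, 1.1, 1.3]
delta_at_7_weeks = [1.4, 1.1, 0.9, 0.5, 0.3]
r = pearson_r(sp_at_vaccination, delta_at_7_weeks)
```

A strongly negative r (close to -1) for such data corresponds to the pattern shown in Figure 5A,B; the trials would additionally test r against zero at p ≤ 0.05.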
PCV-2 Viraemia

All tested pigs (n = 204) were PCV-2 qPCR-negative before vaccination. A significantly lower (p ≤ 0.05) PCV-2 load and percentage of viraemic pigs was observed in vaccinated pigs from both field trials from 16 to 25 weeks of age compared to the non-vaccinated groups (Figures 3B and 4B and Table 6).

Table 6. Proportion and percentage of PCV-2 qPCR-positive pigs (>3.3 log10 DNA copies/mL) in at least one sampling point for each experimental group and field trial, with the proportion (%) of pigs detected viraemic per sampling point and the total proportion (%) of ever-viraemic pigs per study group. Different letters indicate significant differences among experimental groups (p ≤ 0.05) for each field trial. WOA: weeks of age. * Negative animals with a missing value at any of the timepoints were excluded from the analysis.

In field trial A, the percentage of positive pigs peaked at 20 weeks of age in the non-vaccinated group (36/47 (76.6%)) and at 16 weeks of age in the vaccinated one (13/45 (28.9%)). The peak of viraemia (maximum viral load in serum) was observed at 16 weeks of age for both groups. In field trial B, the percentage of positive pigs increased to a maximum of 100% (61/61) at 16 weeks of age in the non-vaccinated group; in the vaccinated group, the maximum was reached at seven weeks of age (28/48 (58.3%)) and decreased afterwards. The peak of viraemia was observed at 7 and at 16 weeks of age in the vaccinated and the non-vaccinated groups, respectively. The percentage of ever-viraemic pigs (detected positive at at least one sampling point) was also significantly lower (p ≤ 0.05) in the vaccinated group compared to the non-vaccinated one in both field trials (Table 6).

PCV-2 Faecal Shedding

PCV-2 faecal shedding results from field trials A and B are summarised in Figures 3C and 4C, respectively. In field trial A, significantly lower (p ≤ 0.05) PCV-2 faecal shedding was observed in vaccinated pigs at 25 weeks of age compared to non-vaccinated pigs.
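The "ever viraemic" figure in Table 6 counts pigs exceeding the qPCR positivity cut-off (>3.3 log10 DNA copies/mL, taken from the table caption) at at least one sampling point. A minimal sketch of that tally, with hypothetical serial qPCR results:

```python
THRESHOLD_LOG10 = 3.3  # qPCR positivity cut-off (log10 DNA copies/mL), per Table 6

def ever_viraemic_fraction(loads_per_pig):
    """Fraction of pigs positive (load above threshold) at at least one
    sampling point. `loads_per_pig` maps pig id -> serial log10 viral loads."""
    ever = sum(
        any(load > THRESHOLD_LOG10 for load in loads)
        for loads in loads_per_pig.values()
    )
    return ever / len(loads_per_pig)

# Hypothetical serial qPCR results for three pigs (illustration only).
loads = {
    "pig1": [0.0, 4.1, 5.2],  # clearly viraemic
    "pig2": [0.0, 0.0, 0.0],  # never positive
    "pig3": [0.0, 3.0, 3.4],  # crosses the threshold once
}
frac = ever_viraemic_fraction(loads)
```

Per the table's footnote, pigs with a missing value at any timepoint that were otherwise negative would be excluded before this fraction is computed.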
In field trial B, a significantly lower (p ≤ 0.05) PCV-2 load in faecal swabs was also detected in vaccinated pigs from 16 to 25 weeks of age than in non-vaccinated ones. In field trial A, the peak of faecal shedding (maximum viral load in faeces) was observed at 20 weeks of age for both groups; in field trial B, it was observed at 16 weeks of age for both groups. Regarding the percentage of faecal swabs positive at at least one sampling point, no statistical differences were detected in either study between the vaccinated pigs (45/46 (97.8%) and 63/63 (100.0%) from field trials A and B, respectively) and the non-vaccinated ones (45/45 (100.0%) and 61/61 (100.0%) from field trials A and B, respectively).

PCV-2 Genotyping

To determine the main PCV-2 genotype(s) circulating in the farms during the study periods, a total of 20 PCV-2 qPCR-positive samples with the highest viral loads (6.3-8.3 log10 DNA copies/mL), 10 per field trial and all belonging to non-vaccinated groups, were sequenced. A phylogenetic tree relating the ORF2 sequences obtained in these studies to reference strains was built to determine the predominant genotypes present (Supplementary Material Figure S1). In field trial A, genotype PCV-2b was found in nine out of ten serum samples; one serum sample failed to be sequenced. In field trial B, genotype PCV-2a was found in two serum samples and PCV-2d in four, and no sequence was obtained from the remaining four sera.

Histopathology and PCV-2 IHC

Histopathology and IHC results of the field trials are summarised in Table 7.
Proportion of animals with histopathology (histiocytic replacement (HR) and lymphoid depletion (LD)) and immunohistochemistry (IHC) scores > 0 in at least one of the four lymphoid tissues evaluated (mesenteric lymph node, superficial inguinal lymph node, tracheobronchial lymph node and tonsil), corresponding to pigs which died or were euthanised during the study. Different letters indicate significant differences among experimental groups (p ≤ 0.05) within each field trial.

Non-vaccinated animals from both field studies showed significantly higher (p ≤ 0.05) HR and positive PCV-2 IHC scores compared to vaccinated ones. Moreover, in field trial A, a significantly higher (p ≤ 0.05) incidence of PCV-2-associated lymphoid lesions (HR and LD together) was detected in non-vaccinated pigs than in vaccinated ones.

Discussion

PCVDs cause considerable economic losses to the swine industry [1]. Vaccination of piglets against PCV-2 is the main control method used to prevent PCVD in swine farms worldwide [35]. In general, combined vaccination against PCV-2 and M. hyopneumoniae at around three weeks of age is one of the main strategies to reduce the impact of these two diseases [31][32][33]. PCV-2 vaccine benefits have been reported in terms of reduction in mortality [34], PCV-2 viraemia and lymphoid lesions [36,37], and the frequency of co-infections, as well as improvement of the ADWG [36][37][38][39][40] in PCV-2-SD scenarios. Moreover, improvements in ADWG, percentage of runts, body condition and carcass weight have also been detected in the case of PCV-2-SI [3]. Interestingly, most PCV-2 vaccines on the market are based on the PCV-2a genotype, because of the high degree of cross-protection between the major circulating genotypes worldwide (PCV-2b and PCV-2d) [22,[41][42][43][44][45]].
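The significance letters attached to the proportions in Tables 6 and 7 imply a test on group proportions; the specific test is not named in this excerpt, so the sketch below uses a two-sided Fisher's exact test (a common choice for small 2x2 lesion-count tables) implemented from the hypergeometric distribution, on invented counts.

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table [[a, b], [c, d]]:
    sums the probabilities of all tables (with the same margins) whose
    probability does not exceed that of the observed table."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d

    def table_p(x):
        # Hypergeometric probability of seeing x in the top-left cell.
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = table_p(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    return sum(table_p(x) for x in range(lo, hi + 1) if table_p(x) <= p_obs + 1e-12)

# Hypothetical counts (NOT the trial's data): pigs with / without
# PCV-2-associated lymphoid lesions in each treatment group.
p_value = fisher_exact_p(2, 28, 12, 18)   # vaccinated vs non-vaccinated
significant = p_value <= 0.05
```

With 2/30 vaccinated versus 12/30 non-vaccinated pigs affected, the difference comes out significant at the paper's p ≤ 0.05 level; equal counts in both groups would give p = 1.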
However, PCV-2 vaccines do not eliminate virus replication and transmission, and it has been speculated that broader-spectrum genotype-based vaccines may help to better control the infection under field conditions [21]. Hence, the aim of the present study was to evaluate the efficacy of a new trivalent vaccine containing inactivated cPCV-2a, cPCV-2b and M. hyopneumoniae, administered to piglets at around three weeks of age. The efficacy of this vaccine has been previously demonstrated in a regime of split-dose immunisation at three days and three weeks of age [26], but it was important to ascertain its efficacy in a single-shot regime at the most common timing of vaccination against those two pathogens (around weaning). Both vaccination regimens for piglets are of interest to the swine industry, and the selection of one or the other should depend on the dynamics of PCV-2 infection detected on the farm and the level of herd immunity. A split-dose regime can help prevent early PCV-2 infections and provide solid immunity earlier in life. In farms with late PCV-2 infection of piglets and significant herd immunity, a one-dose regime should be enough to counteract the detrimental effects of PCV-2 infection. In addition, a single-shot regime adds practical advantages, such as reduced pig handling and stress, as well as economic ones, such as less need for labour and human resources. To accomplish this objective, four studies were carried out, including two pre-clinical studies performed in the USA and two clinical ones in the EU. Different regulatory agencies evaluated these studies, and the requirements of each agency were also different; because of this, qPCR data were expressed differently in the pre-clinical and clinical studies. However, the use of different qPCR techniques did not interfere with the analysis of the variables or with the global efficacy assessment of the vaccine.
Additionally, the interpretation of the results was not altered, since comparison between pre-clinical and clinical studies was not the goal of the present work. Improvements in clinical variables such as signs compatible with PCVDs, body weight evolution, ADWG or mortality are the usual claims of PCV-2 vaccines. However, such differences are unlikely to be detected under experimental settings with a limited number of animals, given that the outcome of PCV-2 infection is usually subclinical; therefore, these claims are mostly demonstrated under field conditions by means of large trials. In the present case, significantly greater body weight at 16 and at 24-27 weeks of age (one to five days before going to the slaughterhouse) and higher ADWG over the three periods (3-16 weeks of age, 16 weeks of age to slaughter, and three weeks of age to slaughter) were observed in vaccinated pigs compared to non-vaccinated ones in field trial B. These differences in body weight were not statistically significant in field trial A; however, they showed a remarkable tendency towards improvement, of approximately 0.8 kg live weight at 16 weeks of age and 1.7 kg at slaughter, which is an appreciable improvement from an economic perspective [46]. These results are similar to those of several studies in which a bivalent vaccine against M. hyopneumoniae and PCV-2 was evaluated in pigs vaccinated at three weeks of age, showing a greater ADWG during the finishing period [2,[47][48][49][50][51]] or from vaccination to slaughter [50,51]. Remarkably, no correlation between MDA and ADWG was observed in vaccinated animals, suggesting that ADWG was independent of the MDA present at the time of vaccination, as already observed in other studies [36,52], and indicating no evidence of interference with vaccine efficacy by the MDA levels of the pigs from the tested herds.
High mortality was detected in field trial B compared to the historical mortality on the farm, probably related to an outbreak of Streptococcus suis or Glaesserella parasuis infection, since gross lesions associated with these pathogens (fibrinous polyserositis, fibrinous pericarditis and/or polyarthritis) were observed in a high number of necropsied pigs. However, no significant effect of the vaccine on mortality was found in any of the studies, in agreement with studies in which the PCV-2-M. hyopneumoniae combined vaccine was administered to three-week-old pigs [2,47,49,50,52]. These results contrast with other studies in which significantly lower mortality was observed in vaccinated animals [48,53] compared to non-vaccinated ones. It is noteworthy that the present field studies were designed with vaccinated and non-vaccinated pigs commingled within the same pens; globally, the benefits to vaccinated pigs could therefore be underestimated and the detriments to non-vaccinated ones ameliorated, owing to an overall increase of infectious pressure for vaccinated animals and a lower one for non-vaccinated ones [53]. Vaccination of pigs with one dose of the trivalent vaccine at three weeks of age reduced the IHC scores in vaccinated animals significantly (PCV-2b pre-clinical and both clinical trials) or numerically (PCV-2a pre-clinical trial). Additionally, a significantly lower percentage of pigs with lymphoid lesions (when HR and LD were analysed together and when HR was analysed alone) was detected in the field trials. These results are in concordance with those observed with the split-dose vaccination at three days of age and three weeks later with the same trivalent vaccine used in this work [26]. Additionally, in the study of Park et al. [32], where a PCV-2-M. hyopneumoniae combined vaccine was administered at three weeks of age and a challenge three weeks later with PCV-2 and M.
hyopneumoniae was performed, a reduction in the percentage of animals with lymphoid lesions and PCV-2-positive cells in their lymph nodes was demonstrated in vaccinated pigs compared to non-vaccinated ones. Additionally, the incidences of PCV-2-SD and PCV-2-SI in both field studies (A and B) were numerically and statistically higher, respectively, in non-vaccinated groups compared to vaccinated ones, further indicating that vaccination reduces the clinical and subclinical impact of PCV-2 infection. Vaccination generated a higher level of IgG antibodies after an experimental PCV-2a or PCV-2b challenge (in pre-clinical studies) or after natural infection (in field studies), resulting in a faster humoral immune response upon infection. This response paralleled a reduction in PCV-2 loads in serum, faecal excretion, the percentage of PCV-2 viraemic pigs (evaluated in the clinical studies) and the percentage of ever-viraemic animals (except in the PCV-2b challenge pre-clinical study). These results agree with several studies under experimental and field conditions in which piglets were injected with a combined PCV-2-M. hyopneumoniae vaccine or placebo at different ages (three days of age plus three weeks later, or three or four weeks of age) [2,26,32,33,[48][49][50]54] and PCV-2 viraemia and/or faecal excretion were significantly reduced in the vaccinated group compared to the placebo group. Levels of MDA are very important for the success of the piglet immune response upon vaccination [55], and potential MDA interference with vaccine efficacy has not yet been demonstrated under field conditions [56], except in very particular situations with extremely high antibody values at vaccination [57,58]. In both field trials, a statistically significant negative correlation was detected between PCV-2 IgGs before vaccination and antibody values at seven weeks of age in all vaccinated animals, indicating that the PCV-2-elicited antibody response of the vaccine is dependent on MDA titres.
These results indicate that lower PCV-2 S/P ratio levels should, ideally, ensure a seroconversion response after vaccination. It has been widely demonstrated that MDA do interfere with vaccine seroconversion [52,57,[59][60][61]], although not in all studies [26,36,40]. Importantly, this negative MDA effect on the vaccine-elicited humoral immune response is apparently not related to a reduction in vaccine efficacy, as observed in the present and other studies [50,56,60]. However, it is also evident that vaccine efficacy cannot be measured by vaccine seroconversion alone, since not only the humoral response but also the cell-mediated response is involved in protection against PCV-2 [2,50,57,60,62].

Conclusions

According to the results obtained globally in the pre-clinical and field studies, a single immunisation at approximately three weeks of age with the novel PCV-2a/PCV-2b/M. hyopneumoniae vaccine was effective against PCV-2 infection (PCV-2a, PCV-2b or mixed PCV-2a/PCV-2d) by reducing productive losses, viral load and shedding, and histopathological lymphoid lesions.
Characterization of the Use of Low Frequency Ultrasonic Guided Waves to Detect Fouling Deposition in Pipelines

The accumulation of fouling within a structure is a well-known and costly problem across many industries. The build-up depends on the environmental conditions surrounding the fouled structure. Many attempts have been made to detect fouling accumulation in critical engineering structures and to optimize the application of power ultrasonic fouling removal procedures, e.g., flow monitoring, ultrasonic guided waves and thermal imaging. In recent years, the use of ultrasonic guided waves has been identified as a promising technology to detect fouling deposition/growth. This technology also has the capability to assess structural health, an added value to the industry. The use of ultrasonic guided waves for structural health monitoring is established, but fouling detection using ultrasonic guided waves is still in its infancy. The present study focuses on the characterization of fouling detection using ultrasonic guided waves. A 6.2-m long 6-inch schedule 40 carbon steel pipe has been used to study the effect of (Calcite) fouling on ultrasonic guided wave propagation within the structure. Parameters considered include frequency selection, number of cycles and dispersion at incremental fouling thickness. Under the studied conditions, a 0.5 dB/m drop in signal amplitude occurs for a fouling deposition of 1 mm. The findings demonstrate the potential to detect fouling build-up in long pipes and to quantify its thickness from the reduction in amplitude found in further numerical investigation. This variable can be exploited to optimize the power ultrasonic fouling removal procedure.

Introduction

Fouling formation is a major problem for many industries, including the offshore industry [1]. It is an important factor contributing to the assessment of service lifetime and the safety of marine facilities [2].
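The abstract's headline figure, roughly 0.5 dB/m of extra attenuation per 1 mm of fouling, ties an amplitude ratio to a thickness estimate. The sketch below converts a measured amplitude drop to decibels (20·log10 of the amplitude ratio) and inverts the paper's linear relation; treating the relation as exactly linear over any thickness is an assumption for illustration.

```python
import math

def amplitude_drop_db(a_ref, a_meas):
    """Signal drop in decibels from a baseline amplitude to a measured one."""
    return 20.0 * math.log10(a_ref / a_meas)

# Empirical figure quoted in the abstract: ~0.5 dB/m per 1 mm of fouling.
DB_PER_M_PER_MM = 0.5

def estimated_fouling_mm(a_ref, a_meas, path_length_m):
    """Rough fouling-thickness estimate, assuming the linear relation above
    holds over the whole monitored path length."""
    return amplitude_drop_db(a_ref, a_meas) / (DB_PER_M_PER_MM * path_length_m)

# Example: over the paper's 4-m transmitter-to-receiver path, a 4 dB drop
# would correspond to roughly 2 mm of fouling under this assumption.
thickness = estimated_fouling_mm(1.0, 10 ** (-4 / 20), 4.0)
```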
Large sums of money are spent on cleaning and preventative measures to keep offshore structures operational and efficient. Current fouling removal methods include hydraulic, chemical and manual processes. The most common fouling mechanisms in offshore structures are the deposition of hard scale and the growth of marine organisms, accumulating in engineering structures such as pipes and ship hulls. The type of fouling depends on the environmental conditions surrounding the structure. Current removal methods can be costly and time consuming due to necessary halts in production. One successful method of fouling removal is the use of chemicals [3], which achieves up to 100% de-fouling but has the disadvantage of a negative environmental impact due to the release of chemicals after use, as well as requiring down-time of the facility. Another promising method that has recently emerged is the use of ultrasound. Currently, ultrasonic baths are used for cleaning specific, individual parts of the offshore plant.

Fundamentals of Ultrasonic Guided Waves

Compared to conventional Ultrasonic Testing (UT), UGW is an emerging technique and requires understanding of the elastic wave propagation within the structural boundaries to obtain a reliable assessment of the structural health [15]. Navier's equation of motion for an isotropic elastic unbounded medium is as follows (Equation (1)):

(λ + μ) ∇(∇ · u) + μ ∇²u = ρ ∂²u/∂t²  (1)

where λ and μ are the Lamé constants, u is the three-dimensional displacement vector, ∇² is the three-dimensional Laplace operator and ρ is the material density. Using Helmholtz decomposition and substituting into Navier's equation gives Equations (2) and (3), where c_l and c_s are the velocities of longitudinal and shear waves respectively:

c_l = √((λ + 2μ)/ρ)  (2)

c_s = √(μ/ρ)  (3)

From this derivation, there are two types of elastic waves that can propagate in solids (longitudinal waves and shear waves), and these can travel in any direction.
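Equations (2) and (3) can be evaluated directly. The sketch below does so for carbon steel, with the Lamé constants derived from typical handbook elastic properties (E = 210 GPa, ν = 0.30, ρ = 7850 kg/m³, assumed rather than taken from the paper):

```python
import math

def bulk_wave_speeds(lam, mu, rho):
    """Longitudinal and shear bulk wave speeds, Equations (2) and (3):
    c_l = sqrt((lambda + 2*mu) / rho), c_s = sqrt(mu / rho)."""
    c_l = math.sqrt((lam + 2 * mu) / rho)
    c_s = math.sqrt(mu / rho)
    return c_l, c_s

# Assumed typical carbon steel properties (not stated in the paper).
E, nu, rho = 210e9, 0.30, 7850.0
mu = E / (2 * (1 + nu))                      # shear (second Lamé) constant
lam = E * nu / ((1 + nu) * (1 - 2 * nu))     # first Lamé constant
c_l, c_s = bulk_wave_speeds(lam, mu, rho)
```

For these values c_l comes out near 6000 m/s and c_s near 3200 m/s; the latter also matters here because the fundamental torsional mode T(0,1) used in the study propagates non-dispersively at the shear speed.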
To identify the different wave modes, a nomenclature for guided waves was introduced by Silk and Bainton [16]. The vibration modes in a cylindrical structure can be denoted as follows (Equation (4)):

X(n, m)  (4)

where X is the vibration mode type (torsional T, longitudinal L or flexural F), n is an index identifying the harmonic variation of displacement around the circumference, and m is an index identifying the vibration complexity within the wall of the pipe. The variation in wave velocity relative to the operating frequency is known as dispersion [15]. This causes spreading of the signal as it propagates through a structure, which is an undesirable phenomenon when using UGW inspection as it makes the data interpretation complex. Pavlakovic et al. [17] developed the commercial software DISPERSE, which has been used to generate dispersion curves. Another promising software code for plotting dispersion curves is the open-source core code based on Semi Analytical Finite Element methods (SAFE) known as GUIGUW (Graphical User Interface for Guided Ultrasonic Waves) [18]. This code has been developed in MATLAB and is a stand-alone software. Its advantages are enhanced numerical stability, computational efficiency, and the ability to investigate multiple layers. For the current paper, it is useful to investigate the effects of the addition of a fouling layer when generating dispersion curves. Figure 1 displays the dispersion curves generated by the GUIGUW software as dashed lines. The graph shows reasonable agreement between both dispersion plotting codes.

Current State of the Art of Ultrasonic Guided Waves

The commercialization of Ultrasonic Guided Wave (UGW) systems began in the late 1990s. Current commercial UGW systems are listed in Table 1 [19]. Depending on the UGW system, pipelines that are coated, insulated, buried or operating at high temperatures can be inspected. UGW systems can inspect not only pipelines but also tanks, bridges and offshore structures.
Primarily, the method has been used to detect anomalies in engineering assets where they can lead to catastrophic bursts and failures. There has been recent work on applying the UGW technique to fouling detection [9], specifically for food industry applications carrying food/liquids. Lohr & Rose [14] used a 2.62 MHz piezoelectric transducer on an angled Plexiglas wedge to produce the non-leaky longitudinal wave S0 through a stainless steel pipe. The results showed that the amplitude of the L(0,5) mode decreases with the addition of the fouling layer (tar) and with increasing fouling thickness. Hay & Rose [13] also investigated the use of Ultrasonic Guided Waves for fouling detection using a comb sensor operating at 2.5 MHz attached to a stainless steel pipe. The longitudinal mode L(0,4) showed high sensitivity to the addition of fouling. Both investigations [13,14] operated at a higher frequency range (MHz) and studied only longitudinal waves; this limits the length of fouling detection from one location due to the higher level of attenuation. The current study focuses on the use of a lower frequency range (kHz) and torsional wave modes to achieve prolonged coverage from a single location.
UGW was also used to detect fouling in a duct using the acoustic hammer technique [20][21][22] and an ultrasonic transducer wedge at 500 kHz [21]; however, that research focused on signal processing aspects of the received signal to detect fouling.
The application of an acoustic hammer is inadvisable for industrial use, as the hammer impact may be inconsistent and vary in amplitude, making it difficult to separate amplitude changes due to the accumulation of fouling from those due to the impact of the hammer itself. The transducer wedge application was operated at a high frequency, and the wave mode used in the investigation was not specified. UGW research on fouling detection has shown the technique to be sensitive to changes in the material and thickness of layers [9]. The method itself is non-invasive and can be used while fouling removal is being carried out to monitor the cleaning. Another area that has not been investigated is the application of UGW to long-range fouling detection. Recent investigations have focused on smaller samples, which may explain why longitudinal mode excitation has been used [13,14]: although it is dispersive, it was applied only to short specimens because of the excitation in the MHz range [15]. Low frequency UGW has been used to inspect tens of meters of pipe for over two decades (e.g., the commercial system Teletest [23]) owing to its inherently low attenuation; furthermore, the benefits of using low frequency UGW over conventional UT for the inspection of elongated structures have been reported in the literature [24]. The current study investigates the use of the fundamental torsional mode T(0,1), chosen for its non-dispersive characteristics over the operating frequency range of UGW (20-100 kHz), for long-range detection.

Finite Element Analysis

A review of numerical modelling methods has been discussed in depth by Wallhäußer et al. [9], where various research studies have attempted to model and predict fouling. The benefit of modelling is the ability to predict the amplitude drop, attenuation and other parameters and to relate these to the presence of fouling.
This allows the development of fouling to be predicted, so that it can be removed before the structure reaches a detrimental condition resulting in pipe blockage, bursts and human casualties. More specifically, predictive models can be used for comparison when monitoring real-time data from a structure; the collected data can be cross-referenced with the predictive model to determine the extent of fouling build-up. The SAFE method is commonly used for generating dispersion curves for different structures. An example of this is the modelling of hollow cylinders with coatings to generate dispersion curves and attenuation characteristics of axisymmetric and flexural modes [25]. 3D hybrid models have also been investigated, combining both SAFE and Finite Element Analysis (FEA), to model UGW interaction with non-axisymmetric cracks in elastic cylinders [26], allowing the technique to be used on defects with complex shapes. FEA methods have been applied to model UGW propagation within a structure for more specific applications. For example, a 2D FE model was used for UGW propagation in complex geometries and proved to be more effective than analytical solutions [27]. UGW propagation in bones has been modelled using FEA of the fracture callus and the healing course within a three-stage process [28]. 3D numerical simulations have been carried out on UGW for non-destructive inspection of CFRP rods with delamination [29]. The software code ABAQUS has been used to model UGW propagation for long-range defect detection in railroad tracks [30]. ABAQUS has also been used to model longitudinal and torsional wave propagation in a cylinder [31]; the optimal excitation mode was selected using signal processing algorithms, and the reflection coefficient was used for defect sizing. Although ABAQUS can successfully model UGW propagation, COMSOL Multiphysics has recently become more popular due to its multiphysics and post-processing capabilities.
For example, COMSOL has been used to model UGW propagation in the frequency domain, later converted to the time domain using a Fast Fourier Transform (FFT) [32], which reduces computational time.

UGW Inspection

Laboratory experiments were conducted to investigate UGW propagation within a 6.2-m long 6-inch schedule 40 carbon steel pipe. This study was conducted to characterize the change in UGW propagation caused by the presence of fouling on the pipe wall. Two Teletest® UGW collars were used to collect data in a pitch-catch configuration to ease the data interpretation. Each collar consists of 24 transducers evenly spaced around the circumference of the 6-inch schedule 40 carbon steel pipe. The transmission collar is placed 1 m from the pipe end and the receiving collar is placed 4 m from the transmission collar, as shown in Figure 2. A tool lead is connected to both collars to synchronize the data collection. Baseline data were collected from the clean pipe before generating hard-scale fouling of the type known as Calcite on the inner wall of the pipe. Data collection was implemented by transmitting a torsional wave mode from the transmission collar and monitoring the transmitted signal at the receiving collar. The data collection was performed over a frequency range of 30-80 kHz in 1 kHz increments. Furthermore, different numbers of cycles for the input signal were also considered, in order to determine the optimum number to use for detecting fouling with higher sensitivity.
For example, COMSOL has been used to model UGW propagation in the frequency domain, later converted to the time domain using a Fast-Fourier Transform (FFT) [32] which reduces computational time. UGW Inspection Laboratory experiments were conducted to investigate the UGW propagation within a 6.2-m long 6-inch schedule 40 carbon steel pipe. This study was conducted to characterize the change in UGW propagation as an effect of the presence of fouling within the pipe wall. Two Teletest ® UGW collars were used to collect data in pitch-catch configuration to ease the data interpretation. Each collar consists of 24 transducers evenly spaced around the circumference of the 6-inch schedule 40 carbon steel pipe. The transmission collar is placed 1 m away from the pipe end and the receiving collar is placed 4 m away from the transmission collar as shown in Figure 2. A tool lead is connected to both collars to synchronize the data collection. Baseline data are collected from the clean pipe before generating hard-scale fouling on the inner wall of the pipe of the type known as Calcite. Data collection is implemented by transmitting a torsional wave mode from the transmission collar and monitoring the transmitted signal from the receiving collar. The data collection was performed over a frequency range of 30-80 kHz in 1 kHz increments. Furthermore, different number of cycles for the input signal was also considered to state the optimum number to use in order to detect fouling higher sensitivity. The excitation signal applied is a sine wave modulated using the Hann window function (refer Equation (5)). where t is time, f is the central frequency and n is the number of cycles. 
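The Hann-windowed excitation of Equation (5) is straightforward to generate in software. The sketch below assumes the standard form of an n-cycle Hann-modulated sine; the 1 MHz sampling rate is an illustrative choice, not a value from the paper:

```python
import numpy as np

def hann_toneburst(f, n, fs):
    """n-cycle sine at centre frequency f (Hz), modulated by a Hann
    window and sampled at fs (Hz) -- the excitation of Equation (5)."""
    duration = n / f                       # n cycles of the carrier
    t = np.arange(0.0, duration, 1.0 / fs)
    envelope = 0.5 * (1.0 - np.cos(2.0 * np.pi * f * t / n))  # Hann window
    return t, envelope * np.sin(2.0 * np.pi * f * t)

# 5-cycle toneburst at 45 kHz, as used in the experiments
t, u = hann_toneburst(45e3, 5, 1e6)
```

The envelope is zero at both ends of the burst, which limits the spectral side lobes of the transmitted pulse.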
Fouling Generation

To obtain data for comparison with the COMSOL model, fouling was generated on the inner wall of the pipe by heating the outer wall up to 120 °C whilst spraying a highly concentrated calcium carbonate solution on the inner wall, as illustrated in Figure 3. A Cooper heating system [33] was used to drive four heating mats wrapped around the pipe, which were further wrapped with carbon fiber insulation, as shown in Figure 3. Three thermocouples were placed between the mats to monitor the temperature, allowing the heating system to reach and maintain the target temperature. Fifty liters of deionized water solution were prepared with 1.5 g/L of Calcium Chloride and 1.5 g/L of Sodium Bicarbonate. This highly concentrated mixture was placed in a manual pressure sprayer connected to a 3.2-m telescopic pressure sprayer lance.

Figure 4a displays the inner pipe wall before undergoing fouling; Figure 4b shows the successful generation of hard-scale fouling, along with some corrosion on the inner wall. The fouling procedure thus achieved a layer of Calcite on the inner pipe wall. After creating this layer, the UGW Teletest collars were placed at their original locations to collect further data for analysis.

Experimental Results

The laboratory experiment was conducted over a frequency range of 30-80 kHz. The maximum amplitude of the monitored pulse is plotted in Figure 5. For the studied case, there is an 80% drop in amplitude at 50-80 kHz and, therefore, this frequency range was neglected in this study (marked with a black dashed line). Based on these results, the frequency range of interest is 30-45 kHz. There was a reduction in sensitivity to the Calcite layer at the lower end of the frequency range (<40 kHz) due to the comparatively larger wavelength.
Therefore, 45 kHz was selected in this study for further analysis. The signals obtained by exciting a 5-cycle torsional mode at 45 kHz are compared for the baseline and fouled pipe in Figure 6. There is a 2 dB drop in amplitude over 4 m (0.5 dB/m) due to the presence of fouling. The number of cycles is also compared in Figure 7, which shows approximately a 20% drop in signal amplitude at 5, 10 and 15 cycles.

The frequency bandwidth of the input signal can be calculated as follows:

F_bw = f (1 ± 2(k_i + 1)/n),    (6)

where F_bw is the frequency bandwidth, k_i is the index of the desired lobe (k_i = 0 for the main lobe, k_i = 1 for the first side lobe), and f and n are the central frequency and number of cycles as in Equation (5). At a given frequency, the bandwidth of the excited pulse depends on the number of cycles [34]. The frequency bandwidth (main lobe) of the 5-cycle input signal spans 27-63 kHz, whereas that of the 15-cycle input signal spans 39-51 kHz. As shown in Figure 5, the amplitude response is higher at lower frequencies due to the larger wavelength. This behavior is asymptotic, but the amplitude variation over different numbers of cycles is low and can be neglected due to the low attenuation and non-dispersive characteristics of the T(0,1) mode. However, this behavior can be detrimental for the excitation of longitudinal modes.

Numerical Investigation

To aid understanding of the wave propagation along a pipeline with and without fouling accumulation, an FEA model was created in COMSOL Multiphysics 5.3. The model followed the geometry of the 6.2-m, 6-inch schedule 40 carbon steel pipe and replicated the geometry and placement of the transmission and receiving transducers in pitch-catch configuration, as shown in Figure 2a. Transmission points were placed 1 m from one end of the pipe to simulate the 24 transducers used in the experiment.
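Returning briefly to the excitation signal: the main-lobe limits quoted for Equation (6), 27-63 kHz for the 5-cycle signal and 39-51 kHz for the 15-cycle signal, can be reproduced with a few lines. The lobe-limit formula below is written out from those quoted values and should be read as a sketch:

```python
def lobe_limits(f, n, k=0):
    """Lower and upper frequency limits of lobe k of an n-cycle
    Hann-windowed toneburst centred at f (k = 0: main lobe,
    k = 1: first side lobe), following Equation (6)."""
    half_width = 2.0 * (k + 1) * f / n
    return f - half_width, f + half_width

print(lobe_limits(45e3, 5))   # (27000.0, 63000.0) -- 5-cycle main lobe
print(lobe_limits(45e3, 15))  # (39000.0, 51000.0) -- 15-cycle main lobe
```

More cycles narrow the main lobe, which is why the 15-cycle burst concentrates its energy closer to the 45 kHz centre frequency.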
For ease of computation, symmetry was invoked to analyse 1/48th of the complete model, permitting just one point load to be applied. The point load was applied in a direction dependent on the wave mode being excited; for a torsional mode, it is applied perpendicular to the length of the pipe. The pressure point load is a 5-cycle sine wave modulated by the Hann window function (refer to Equation (5)). The receiving point is placed 4 m away from the transmission point.

A dynamic transient simulation mapping the propagation of the wave requires an optimal mesh, and the time stepping within the solver must complement the meshing to yield an accurate solution. The mesh requires a minimum of 8 second-order elements per wavelength. The maximum allowed element size h_o is calculated as [14,35]:

h_o = c / (N f_0),    (7)

where c is the velocity, N is the number of elements per wavelength and f_0 is the center frequency.

The fouling model was created in the same manner; however, a 1-mm solid layer was modelled on the inner wall of the pipe to represent the expected thickness of fouling attached to the pipe during experimentation. The properties of this layer are given in Table 2. The COMSOL model investigated the torsional mode over 30-45 kHz in 5 kHz steps. After selecting a frequency, the model was used to investigate the addition of a Calcite layer of 1-mm, 3-mm and 5-mm thickness. To validate the model, the Time of Arrival was calculated using the group velocity of the torsional mode found in Figure 1 [35]:

ToA = x / c_t,    (8)

where x is the distance from transmitter to receiver and c_t is the group velocity of the torsional wave mode at the operating frequency.

Numerical Results and Discussions

The fouling detection experiment was conducted using the Teletest® system on the 6.2-m, 6-inch diameter schedule 40 carbon steel pipe (baseline and fouled).
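The quantities defined in Equations (7) and (8) above are simple to compute. In the sketch below, the 3260 m/s wave speed is an illustrative typical value for the torsional mode in steel, not the paper's Table 3 figure:

```python
def max_element_size(c, f0, n_per_wavelength=8):
    """Maximum allowed mesh element size h_o = c / (N * f0), Equation (7):
    at least N second-order elements per wavelength at centre frequency f0."""
    return c / (n_per_wavelength * f0)

def time_of_arrival(x, ct):
    """Expected Time of Arrival ToA = x / c_t, Equation (8), for a pulse
    travelling a distance x at group velocity ct."""
    return x / ct

c_t = 3260.0  # m/s, illustrative torsional-mode group velocity for steel
print(max_element_size(c_t, 45e3) * 1e3)  # element-size cap in mm (~9 mm)
print(time_of_arrival(4.0, c_t) * 1e3)    # arrival time in ms (~1.23 ms)
```

A finer mesh (larger N) or a higher centre frequency both tighten the element-size cap, which is why the transient solver's time step must be chosen together with the mesh.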
Experimental results were compared to the FEA results and achieved good correlation regarding the effects of the additional fouling layer. The COMSOL model investigated 30-45 kHz signals at 5 kHz increments; 45 kHz was selected due to the signal having a shorter pulse length, as shown in Figure 8. At this frequency, the addition of the Calcite layer was investigated at 1-, 3- and 5-mm fouling thickness on the inner pipe wall. Compared to the baseline model in Figure 9, the received pulse shows a drop in amplitude with increasing Calcite layer thickness.

There is also a shift in Time of Arrival in Figure 9 with incremental thickness of the Calcite layer: as the Calcite thickness increases, the pulse of interest arrives earlier, potentially due to the change in the velocity of the T(0,1) mode. The GUIGUW [18] code was used to plot the dispersion curves for each incremental fouling layer, tabulated in Table 3. Adding a fouling layer to the pipe increases the velocity of the torsional mode; although the increase is small, it can be concluded to be the cause of the shift in Time of Arrival. Using the velocity found for each Calcite thickness, the Time of Arrival can be calculated (refer to Table 3). The Time of Arrival is measured using the peak-to-peak values of the signal, and there is a 1% error between the theoretical and numerical Time of Arrival.
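Since the amplitude drop, rather than the Time of Arrival, is the more usable indicator of fouling, converting baseline and fouled peak amplitudes into a dB-per-metre figure is the key post-processing step. A minimal helper, with the reported 2 dB / 4 m (0.5 dB/m) case as a check; the example amplitudes are constructed, not measured data:

```python
import math

def amplitude_drop_db(a_baseline, a_fouled):
    """Amplitude change in dB between baseline and fouled measurements."""
    return 20.0 * math.log10(a_fouled / a_baseline)

def drop_per_metre(a_baseline, a_fouled, path_m):
    """Average attenuation in dB/m over the transmitter-receiver path."""
    return amplitude_drop_db(a_baseline, a_fouled) / path_m

# A 2 dB total drop over the 4-m path corresponds to a fouled/baseline
# amplitude ratio of 10**(-2/20) and an average 0.5 dB/m attenuation.
ratio = 10.0 ** (-2.0 / 20.0)
print(round(drop_per_metre(1.0, ratio, 4.0), 6))  # -0.5
```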
The difference in Time of Arrival between the cases is, however, immaterial: for the small Calcite thicknesses considered here, it would be harder to use for interpreting the fouling thickness than the amplitude drop.

Conclusions and Future Works

The present paper investigates the capability of Ultrasonic Guided Waves for the detection of hard-scale fouling in pipelines. A 6.2-m long 6-inch schedule 40 carbon steel pipe was used in this investigation. For comparison, the pipe underwent an accelerated fouling generation treatment prior to data acquisition, which showed a 0.5 dB/m drop in signal for the addition of a 1-mm thick Calcite layer. An experimentally validated FEA model was used to study the effect of incremental fouling thickness on UGW propagation.
With increasing thickness, the signal amplitude decreased, with the largest reduction shown by the 5-mm Calcite case. The shift in Time of Arrival due to the fouling thickness has been discussed, but this shift is small and would be unsuitable for characterizing the fouling thickness, given the sensitivity required and the fact that other features that build up within a pipe wall, such as corrosion, can affect such a small shift in time. This work demonstrates the potential of using UGW for long-range fouling detection in pipelines, based on the 0.5 dB/m amplitude reduction due to a 1-mm Calcite layer. The numerical investigation further shows that the amplitude drop can be used to characterize the thickness of the fouling, given the significant drop relative to fouling thickness. This variable can be used to support the optimization of the power-ultrasonic fouling removal procedure in future work. Furthermore, the experimentally validated numerical simulation can be used to optimize the fouling detection capabilities in future developments of this technology; the sensitivity and level of attenuation should also be compared against conventional UT in future studies for a performance evaluation.

Author Contributions: H.L. is the lead author and main contributor to the paper. She conducted the experimental investigation and created the FEA methodology. She also wrote the original draft of the manuscript and carried out the literature review and formal analysis of the data collected. P.S.L. contributed to the conceptualization of this investigation as well as supervision of the research, project management and coordination of the research activity, visualization of the manuscript, and reviewing and editing of the original manuscript. T-H.G.
has contributed through supervision of the PhD research, reviewing of the original paper, and acquiring funding for the HiTClean project to carry out the current investigation. L.C.W. has contributed as the academic supervisor of the PhD research, towards funding acquisition, and through further reviewing and editing of the original paper. J.K. has contributed towards project management, coordination and supervision, as well as funding acquisition and reviewing of the original manuscript. Funding: This research was funded by Innovate UK, grant number 102491.
The Resurgent Taliban: Implications for the Geo-Strategic Scenario in South Asia : The seizure of power by the Taliban and its ally, the ill-famed Haqqani Network, directly brought to the fore the face of the real masters of the Taliban - the Inter-Services Intelligence (ISI). As the newly established Islamic Emirate – which is what the Taliban prefers to call the state of Afghanistan – was taking its first steps, it faced stubborn resistance from the very same place where the Taliban had failed to establish its control in the past too – the region of Panjshir. At the same time, reports abounded about the deepening of internal conflicts between the Taliban and the Haqqani Network, prompting the ISI chief Lieutenant Gen. Faiz Hameed to fly to Kabul in order to set the house in order. The catastrophic implications of the rise of a narco-terrorist state in India's neighbourhood can hardly be overlooked. The link between organised crime and narco-terrorism is well established, and even countries with the best of intentions and abilities fail to turn the tide fuelled by such an unholy nexus. In Afghanistan, where the cash-strapped Islamic Emirate is still trying to establish a semblance of order in the country, the regime has neither the intention nor the ability to disrupt the trade. Therefore, Afghanistan under the Taliban is emerging as a major narco-empire. Pakistan's double game with those fighters in Afghanistan and Kashmir stands exposed to the entire world. The whole of South Asia and a greater part of Central Asia are facing the consequences of this vicious double game. Quite like in Kashmir, Islamabad's compulsive disposition of stoking fundamentalist flames in Afghanistan is no secret today. It can be traced back to Zulfiqar Ali Bhutto's proxy war in the mid-1970s against the regime of Mohammad Daoud Khan, the first President of the Afghan republic, much before Soviet tanks could enter Kabul.
Bhutto's man for the task, Major-General Naseerullah Khan Babar, who was elevated to Interior Minister by Bhutto's daughter Benazir in 1993, is known to have proudly called the Taliban 'his boys'. When a sizable chunk of these mujahids (guerrillas) dialled their primordialism several notches higher and repackaged themselves as talibs (students) by the mid-1990s, Pakistan's state agents continued to generously nurture them. Most talibs, after all, were moulded at radicalization schools run by Pakistan along its porous frontiers with Afghanistan. A fallout of the US' War on Terror after 9/11 was the regrouping of the militants fleeing Afghanistan under the banner of the Tehreek-e-Taliban Pakistan (TTP), which quickly began biting the hand that fed it. Hence, in view of the troubles currently brewing in our immediate neighbourhood, this paper focuses on India's approach as a regional power in countering these challenges not through military means but through its ancient philosophies of Vasudhaiva Kutumbakam (the world is one family) and Sarve Bhavantu Sukhinaha (may everyone be happy) and the more recently enunciated political principle of Sabka Saath, Sabka Vikas, Sabka Vishwas (together, for everyone's growth, with everyone's trust).

The Rise of the Taliban

To analyze the rise of the Taliban, one has to first understand the socio-geopolitical situation of Afghanistan in which they emerged. The 1970s were a turbulent decade not only for the Afghans but for the whole of West Asia. The first incident that upset the tranquility prevailing in the landlocked Kingdom of Afghanistan was the 1973 coup d'état by Mohammad Daoud Khan, who deposed his cousin King Mohammad Zahir Shah. Zahir Shah's four-decade-long, peaceful reign had witnessed a slew of progressive reforms that sought to modernise Afghan society, including egalitarian measures aimed at the socio-economic empowerment of women through the promotion of female literacy and vocational training.
Daoud Khan started enthusiastically championing the issue of Pashtunistan, an irredentist concept which claimed the Pashtun-inhabited Pakistani territories, the North West Frontier Province (now Khyber Pakhtunkhwa) and the Federally Administered Tribal Areas (FATA), to be part of Afghanistan. After the loss of East Pakistan (now Bangladesh) in 1971, Pakistan took steps to counter a perceived threat from India and growing Pashtun nationalism. This was reflected in the increased Islamization of society through the rapid proliferation of madrassas and increased assistance for Islamist groups that could be used as proxies in Kashmir and Afghanistan. The ISI still looks at Afghanistan through the outdated prism of a potential Afghan-Indian pincer movement (Sirrs, 2016). The issue of Pashtunistan and the dispute over the Durand Line made Pakistan take note of the Afghan situation more seriously than before. Pakistan had suffered one of the most humiliating defeats in the history of warfare at the hands of its arch-nemesis India in the 1971 Bangladesh Liberation War, in which it lost half of its territory (including some of the most fertile soil in the world in East Pakistan) and in which 93,000 of its soldiers surrendered to the Indian Army and were taken as Prisoners of War. Pakistan thereafter started perceiving itself as surrounded by enemies on both sides: on the East by India, and on the West by an irredentist Afghanistan, the only country in the world that had refused to even recognize the very existence of Pakistan at the UN in 1947 and was now passionately advocating the irredentist Pashtun ethno-nationalist cause of a Greater Afghanistan, which threatened to cause a second partition of Pakistan.
Perceiving itself to be surrounded by enemies on both sides, Pakistan became obsessed with the idea of acquiring strategic depth, a territory to which it could fall back in case of an attack by India, which it sought in the form of Afghanistan (Arni and Tandon, 2014). The Soviet invasion of Afghanistan and the US entry into the conflict gave Pakistan the perfect opportunity to fulfil its long-cherished dream of acquiring strategic depth vis-à-vis India, and opened up a bonanza of military and economic aid for the newly established Zia regime as the US sought to wage a proxy war via Pakistan. The US propagated Pakistan as a frontline state, agreeing to provide the Zia-ul-Haq-led military junta with a massive USD 3.2 billion aid package and lifting the embargo on arms supplies. Zia-ul-Haq reaped maximum benefit from the United States' strategic compulsions. The US also started funding the anti-Communist mujahideen in Afghanistan through Pakistan, and after 1982 the Zia regime received assistance worth USD 5 billion, including advanced weapon systems, to train the mujahideen in Afghanistan. The Islamic Revolution in Iran cost the US one of its most important bases in the Middle East and also led to a rise in Sunni-Wahhabi radicalism as a response to Shia radicalism. The Red Army's invasion of Kabul (ostensibly to stabilize the newly established and faltering Communist regime) resulted in millions of Afghan civilians fleeing to Pakistan across the porous Durand Line. Hundreds of madrassahs sprang up in the Afghanistan-Pakistan border areas, funded by the CIA and Saudi oil sheikhs. Wahhabism, as mentioned above, is an ultraconservative movement within Sunni Islam, named for the 18th-century Saudi theologian Muhammad ibn Abdul Wahhab, and is the version of Islam enshrined in Saudi law and practiced there today. After the Iranian Revolution in 1979, Saudi Arabia worried that the Muslim world would come to be dominated by a Shia country, Iran.
So they started funding Sunni-majority Pakistan to run these madrassas on the Afghan border, and slowly Wahhabi culture crept into Deobandi Islam. In these madrassahs the Afghan refugees were radicalised and indoctrinated into an extreme form of Wahhabi Islam radically different from the syncretic Sufi Islam they had hitherto practiced. Dar al-Uloom Haqqania in Akora Khattak became the most prominent of these madrassahs, where young children were taught a deliberately distorted form of radical Wahhabi Islam and were exhorted to engage in warfare (jihad) against non-believers. Wahhabi influence grew in Pakistan and Afghanistan throughout the 1980s, when the CIA and Saudi Arabia both funnelled arms to mujahideen guerrilla groups fighting the Soviet occupation during the Cold War. Over time, different strains of Deobandi Islam were shaped by the different politics of Afghanistan and Pakistan, and the Wahhabi-infused strain practiced by the Taliban started attacking more moderate Muslims and people of other faiths, departing from the original Deobandi strain. Thus the true Quranic meaning of jihad, which is concerned more with inner spiritual salvation, was entirely misappropriated to propagate a Wahhabi petrodollar Islam, with serious implications for the entire world in the years to come.

Taliban's Rise: Threat to Global Peace and our National Security

The Taliban's comeback to power has led to the mushrooming of radical Islamic terrorist organisations in the region and has rejuvenated the morale of beleaguered jihadists from Bosnia to Bangladesh. A few days after retaking Kabul, Taliban spokesperson Suhail Shaheen said that his organisation intends to 'raise voice for Muslims' in Indian-administered Kashmir. Is it just a coincidence that, since the regime change in Afghanistan, Jammu and Kashmir has witnessed a spate of brutal terrorist attacks specifically targeting the non-Kashmiri working population in the state?
The number of infiltration attempts has also increased manifold, along with attacks on our security forces. The Taliban regime therefore poses a huge challenge to Indian national security, as visible in the sudden increase in insurgency in the Valley. The ISI seems to be following the same strategy of using Afghan mujahideen belonging to different organisations to promote insurgency in Kashmir through proxies such as Hizbul Mujahideen and JeM. No one can forget the catastrophic consequences of the Pakistan-backed insurgency of the 1990s, which led to the genocide and expulsion of hundreds of thousands of Kashmiri Pandits from the valley, many of whom are still living as refugees in their own country. To prevent a recurrence of that catastrophe, we first need to be aware of the true nature of the Taliban-led Islamic Emirate of Afghanistan. Despite claiming to have changed and pretending to adopt a more progressive outlook on subjects such as female education, the Taliban in reality remains the same herd of regressive mullahs hell-bent on taking Afghanistan back to the stone ages. Since the Taliban's ascent to power, multiple incidents have been reported of public executions of the regime's dissidents, with the corpses then put on public display by hanging them from cranes and helicopters. These are done to instil terror in the minds of the common people and coerce them into following the Pashtunwali laws that the Taliban has imposed on the entire Afghan people. No civilized regime can condone such brutality, and the fact that such incidents are the order of the day in the new Afghanistan further exposes the true face of the Taliban and establishes that they have not changed at all. Only their public relations skills have improved, thanks to the ISI's training. Another threat emanating from Afghanistan is the resurgence of the Islamic State through its regional wing, Islamic State-Khorasan (IS-K).
The IS-K is an amalgamation of fighters from the former Taliban, al-Qaeda, Tehrik-e-Taliban Pakistan and other smaller jihadist groups coming together under the established brand-name of the Islamic State, challenging the Taliban on their home turf. The Haqqani Network, a loyal proxy of Pakistan's ISI, has been an ally of the Taliban, with the group's founder's son Siraj Haqqani now serving as the interior minister in the Taliban government. However, the Haqqanis have also in the past partnered with the Islamic State, providing technical assistance for carrying out attacks, including the brutal attack on the Gurudwara Har Rai Sahib in Kabul in March 2020, which killed 25. While the Haqqani Network remains extremely close to the Taliban, who in turn are at odds with the Islamic State, the rationale for the Haqqanis assisting the Islamic State has been to provide cover to the Taliban as the chosen counter-terrorism experts and "keepers of the peace". Pakistan has reportedly encouraged the Haqqani Network to build on its ties with the IS-K in order to retain its leverage in Afghanistan and to ensure that the IS-K can front attacks for the Haqqanis or LeT while Pakistan claims plausible deniability. This complicated dynamic between the Taliban, IS-K and Haqqani Network ensures that, whatever the situation, the Pakistani deep state maintains a strong degree of influence in Afghanistan (Pant and Shah, 2021). The very fact that the Pakistani deep state wields enormous influence over the newly re-established Islamic Emirate of Afghanistan should be a cause of concern for India. The Haqqani Network's links with the Islamic State-Khorasan, and the fact that they form a significant part of the Taliban regime, is a cause of concern not only for South Asia but for the world at large.
The conflict between the Islamic State-Khorasan and the Taliban also reflects the deep divides within Sunni Islam. China's Role China's conspicuous romance with the Taliban also poses significant threats to Indian strategic interests in the region. China has compelling reasons to work with the Taliban. Firstly, it does not want the newly established Islamic Emirate to provide safe havens or propaganda support to the East Turkestan separatists claiming to represent the persecuted Uyghur Muslim population of China's Xinjiang province. Secondly, it has plans to extend its ambitious Belt and Road Initiative (BRI) to Afghanistan by constructing a passage linking Afghanistan to Pakistan through the Wakhan corridor. In the long term, China also wants to ensure its own access to Afghanistan's significant mineral resources, including its vast copper deposits. What perturbs India the most is Beijing's ability to expand its political and diplomatic footprint in Afghanistan with the return of a Taliban regime (Grossman, 2021). China remains intractably hostile toward India and is closely allied with its adversary Pakistan. Through symmetric and asymmetric means, China has been relentlessly trying to occupy Indian territory in Ladakh, Uttarakhand and Arunachal Pradesh. It already occupies more than 38,000 sq. km of Indian territory in Aksai Chin and has in recent times been trying to infiltrate through the Depsang Plains and Gogra Valley in Ladakh and into Arunachal Pradesh (where it has even built an entire village in Upper Subansiri district) using a salami-slicing strategy. With its deep pockets, China will actively work to limit any Indian influence in a Taliban-run Afghanistan; the Taliban's own reservations about India will only help facilitate Beijing's ability to keep New Delhi at bay. Rise of a Narco-Terrorist State The Taliban is gradually turning Afghanistan into a narco-terrorist state, as narcotics kingpins now occupy senior positions in the Afghan government. 
Afghanistan accounts for 85% of the global acreage under opium cultivation, making the Pakistan-reared Taliban the world's largest drug cartel. It controls and taxes opioid production, oversees exports and shields smuggling networks. This is essential to its survival. So reliant are the Taliban on narcotics trafficking that their leaders have at times fought among themselves over revenue-sharing. In India, which is located between the world's two main opium-producing centers, the Pakistan-Afghanistan-Iran "Golden Crescent" and the Myanmar-Thailand-Laos "Golden Triangle", seizures of Afghan-origin heroin have increased. In recent years, with the resurrection of the Taliban, Afghanistan has drastically expanded its production of methamphetamine. The Taliban uses several smuggling routes to move opiates, the most prominent among them being the south-eastern route, which snakes through Pakistan (Sopko, 2021). By allowing the Taliban to enrich and sustain themselves with drug profits during the two-decade-long war in Afghanistan, the US contributed to its own humiliating defeat at the hands of a narco-terrorist organization. It is not too late for the US, the EU and international bodies such as the UN's International Narcotics Control Board to start targeting the Taliban and its allies as drug cartels through federal courts. The global community needs to understand that the rise of a narco-terrorist state will have serious consequences for the US, Europe and the region. After all, Afghan-origin opioids have resulted in high rates of drug addiction and deaths around the world, from the US and Europe to Africa and Asia. The South Asian region, and especially India, has been at the receiving end of this catastrophe, which has ruined the lives of future generations in India's border states such as Punjab, where Afghan-origin opioids are smuggled in large quantities through Pakistan. 
Given that Afghanistan's economy is in dire straits, the Taliban has a strong incentive to ramp up production and trafficking. The Sunni Pashtun narco-terrorist organisation ruling Afghanistan has also signed a deal with an Australian company to set up a cannabis processing plant. All of this poses serious threats not only to India's national security but to the entire South Asian region and the world, as the smuggling of narcotics across the region has increased by leaps and bounds since the fall of Kabul on the 15th of August 2021. A lucid understanding of the nexus between Islamist terrorism and the global narcotics trade holds the key to crushing the Islamabad-backed Taliban's primary source of income, through measures such as blocking shipments and seizing illicit profits, often parked in banks and real-estate investments abroad. Multilateral cooperation will play a crucial role in this process. India's Response The rise to power of the Taliban in Afghanistan poses serious threats not only to the regional security of South Asia but to the entire world. Despite the Taliban's promise not to let its territory be used by terrorist organisations, Afghanistan under the Islamic Emirate can easily become a safe haven for radical Wahhabi jihadists who can pose a serious threat to our national security as well as to global security. India, through its civilizational value of Vasudhaiva Kutumbakam, which considers the entire world as one family, can provide a counterpoise to this rising tide of Islamic radicalism. India has pursued a "soft power" strategy towards Afghanistan, sticking to civilian rather than military matters. Indian assistance has focused on building human capital and physical infrastructure, improving security, and helping agriculture and other important sectors of the country's economy. India has been building roads, providing medical facilities and helping with educational programs in an effort to develop and enhance long-term local Afghan capabilities. 
India's involvement, from multifarious infrastructure development projects, to assisting the Afghan people with necessary medicines and vaccines to tackle the COVID-19 pandemic, to training Afghan National Army troops, to providing food grains via Iran's Chabahar port, has earned India immense goodwill and respect across all sections of the Afghan political and social spectrum. India's response to countering terrorism in Afghanistan, as within its own territory, has been through development. India's invaluable contribution to rebuilding its war-torn neighbour can hardly be overlooked. From constructing the Salma Dam, with a water storage capacity of 640 million cubic metres irrigating 200,000 acres of farmland, to building the Afghan parliament, to building multiple hospitals, including the largest pediatric hospital in Afghanistan, the Indira Gandhi Institute for Child Health, India has always played a significant role in the reconstruction and rehabilitation process in Afghanistan. The preferential trade agreement signed by India and Afghanistan gives substantial duty concessions to certain categories of Afghan dry fruits entering India, with Afghanistan allowing reciprocal concessions to Indian products such as sugar, tea and pharmaceuticals. India also piloted the move to make Afghanistan a member of the South Asian Association for Regional Cooperation (SAARC), in the hope that with Afghanistan's entry into SAARC, issues relating to transit and the free flow of goods across borders in the region could be addressed, thereby leading to greater economic development of Afghanistan and the region as a whole. India has played an important role by laying the foundations for sustainable economic development in its north-western neighbour (Pant, 2012). As a consequence, India has come to enjoy considerable soft power in Afghanistan. Indeed, ordinary Afghans appear to have welcomed Indian involvement in development projects in their country. 
Indian films and television programs are extremely popular among the local Afghan populace. Despite being aware of the national security threats emanating from the Taliban regime and of the Taliban's barbarity towards its own people, India once again proved that it is committed to tackling extremism through development, guided by its all-encompassing philosophy of Sarve Bhavantu Sukhinah, by agreeing to send 50,000 metric tonnes of wheat through Pakistan to Afghanistan to help its people cope with one of the worst food shortages the country has ever faced. India has a fundamental interest in ensuring that Afghanistan emerges as a stable and economically integrated state in the region. New Delhi has from time to time reiterated its commitment to an Afghan-led and Afghan-owned peace, calling for the formation of a truly inclusive government that represents all the major ethnic and political groups of Afghanistan. India has also been a vocal advocate for a peaceful, secure, united, sovereign, stable, prosperous and inclusive Afghanistan that exists in harmony with its neighbors. According to UN reports, one in two Afghans faces emergency levels of acute food insecurity, and more than 3 million children under five are expected to face acute malnutrition by the end of the year. The effects of the COVID-19 pandemic, along with the humanitarian crisis, have wreaked havoc across the country as a direct result of Taliban and US policy decisions. With each side quick to blame the other, the onus falls on regional nations, including India, to attempt to mitigate a worsening disaster. Given that a politically and economically stable Afghanistan is a strategic priority for India, India maintains that the ongoing effort to help Afghanistan emerge from war, strife and privation is its responsibility as a regional power. Conclusion The only possible solution to the Afghan catastrophe lies in a legitimate, responsible, empowered, and inclusive government in Kabul. 
The economic collapse of the Afghan state and the evolving humanitarian crisis must be prevented at all costs. Reaching out to the Afghans and amplifying their voices in seeking a government that is legitimate and acceptable to them would be the first step in the right direction. While the Delhi Regional Security Dialogue on Afghanistan did try to reach out to the regional countries, India should look for new alliances in Central, West, and South Asia to stitch together a coalition of the willing. It is time for New Delhi to step up and reach out to larger sections of Afghan society, including women and civil society groups, political leaders and business groups, who are looking for assistance in having a legitimate, representative and inclusive leadership in their country. A failed state in the neighbourhood, combined with narco-terrorism, cannot be ignored, as it will have serious consequences for India's security in the days to come. The Indian government, as part of its policy of "Sabka Saath, Sabka Vikas, Sabka Vishwas", has successfully shown through socio-economic development in Jammu and Kashmir how terrorism can be countered through inclusive development. India's response to the humanitarian crisis in Afghanistan, in which the Indian Air Force rescued thousands of desperate Afghans irrespective of their religious affiliations and provided them safe refuge in India, has further enhanced India's global image as a responsible power and a net security provider.
Deteriorating water quality state on the structural assemblage of aquatic insects in a North-Western Nigerian River ABSTRACT Benthic aquatic insects receive the most direct impact when surface waters are perturbed. However, scarce data and understanding about human activities' effects on surface water ecosystems remain a critical challenge for water resource managers and policymakers in tropical regions. In this study, we surveyed the implications of deteriorating physical and chemical parameters on aquatic insects' structural assemblage to ascertain the ecological health of River Hadejia in North-Western Nigeria. We sampled aquatic insects and physicochemical parameters over six months at three stations influenced by various land-use activities, such as informal settlements and agriculture. Two-way analysis of variance (ANOVA) revealed that physicochemical parameters such as transparency, depth, and nitrate were not significantly affected by the sites' land-use activities (p > .05) over the six months sampled. However, mean electrical conductivity was lowest in Station 3 (104.3 ± 8.04 µS/cm). The dissolved oxygen (DO) and five-day biochemical oxygen demand (BOD5) values recorded portray a relatively perturbed water system. We recorded four aquatic insect orders belonging to 11 families and taxa. Dytiscus sp. was the most abundant taxon in the study area. Totals of 44, 37, and 35 individual aquatic insects were recorded in stations 1, 2, and 3, respectively. Post hoc tests on all the diversity indices showed no significant differences between the studied stations (p > .05). Canonical correspondence analysis (CCA) revealed a poor relationship between the physicochemical parameters and the aquatic insects. However, Gyrinus sp. was positively affected by increased water depth, showing a strong positive association with that variable. 
Cluster analysis revealed that aquatic insects' assemblage structures were grouped mainly by temporal factors (months) rather than by spatial differences between the sites. Overall, this study provides further insights and understanding regarding land-use impacts on the ecological health of the River Hadejia, and we recommend more stringent regulations to control human pressure on the river systems within the studied area, to enable surface waters in the area to sustain the provision of desired and valued ecosystem services. Most freshwater bodies in Nigeria, River Hadejia inclusive, have been subjected to increasing human disturbance, resulting in changes in environmental variables and consequently affecting the structural and functional ecology of freshwater systems (Garba, Ekanem, & Garba, 2017; Umar, Ramli, Aris, Jamil, & Tukur, 2019). Hence, biomonitoring methods that employ aquatic insects' structural assemblage can provide good insights for the environmental management of freshwater ecosystems, allowing quality decisions toward accurate and justifiable actions regarding their ecological status (Abowei & Sikoki, 2005; Asonye, Okolie, Okenwa, & Iwuanyanwu, 2007). The pollution of freshwater bodies caused by domestic and industrial effluents is a common anthropogenic impact on watercourses globally, including freshwater ecosystems in Nigeria. Pollution modifies the physicochemical properties of water, affecting aquatic insect community distribution in a given water body. Lately, River Hadejia has had its fair share of anthropogenic disturbances due to incessant defecation, washing, urination, dumping of refuse, and runoff of fertilizers from nearby farming settlements, among others. This pollution problem is exacerbated by high population growth occasioned by migration from neighboring communities surrounding Hadejia. 
Hadejia town is the headquarters of one of the five emirates in Jigawa State, Nigeria, with a high rate of rural-urban migration due to its commercial status. This rural-urban migration has implications for the ecological health of the river system and its constituent aquatic communities. For example, previous studies in the study area have reported severe deterioration of water quality in the Hadejia River system and the surrounding wetlands, with deleterious effects on aquatic biodiversity (Ahmed, Agodzo, Adjei, Deinmodei, & Ameso, 2018; Umar, Ramli, Aris, Jamil, & Abdulkareem, 2018; Umar et al., 2019). The degradation of the river system's ecological health and biodiversity because of human activities has affected the sustainable delivery of desired ecosystem services for livelihoods and wellbeing, drawing us back from achieving the global goals of clean water and sanitation for all. Rural-urban migration has been reported to be increasing exponentially in sub-Saharan Africa (Edegbene, Arimoro, & Odume, 2019; Parienté, 2017), and such migratory activities grossly and negatively affect the structural and functional assemblage and diversity of aquatic biota, including plankton, macroinvertebrates and fish (Gieswein, Hering, & Lorens, 2019). In this study, we aimed to ascertain the implications of deteriorating physical and chemical parameters on the aquatic insect structural assemblage in order to determine the health of the Hadejia River. This study is pertinent due to the continued anthropogenic disturbance along the River Hadejia catchments. The study will unravel the present health status of the river, enabling river managers and other appropriate authorities to manage the river sustainably. The study area The River Hadejia is a tributary of the Yobe River, situated in Jigawa State within the administrative boundaries of the north-western part of Nigeria. 
The river and surrounding wetlands cover a catchment area of about 3,500 km² at an altitude of 152-305 m above sea level (BirdLife International, 2016). The River Hadejia flows through major cities, including Hadejia and Nguru, with various land uses lying on or near its banks (Abubakar, 2009); as such, the river lies within the local municipality of Hadejia and its environs. It lies between latitude 12°27′12.49″N and longitude 10°02′28.14″E in the north-eastern corner of Jigawa State (Figure 1). The climate in the study area is semi-arid, with average annual rainfall ranging between 600 mm and 762 mm, and humidity ranging between 25% and 41% (Abubakar, 2009; Edegbene, 2020). Temperature in the Hadejia area varies substantially, between 12°C in December and January and 40°C in March and April. The geology of the area is underlain by rock and younger sediments of the Chad Formation. Vegetation in the area falls within the Sudan Savannah, with extensive open grassland and a few scattered trees (Abubakar, 2009). Sampling sites We selected three study sites from the study area based on accessibility and human activities around the area (i.e. community location). Station 1 was located at the Aguyaka community, Hadejia Local Municipal Area (Plate 1). Human activities here include washing of clothes, bathing, and subsistence fishing; farming is the major occupation of the populace. The biotopes are characterized by sand and loamy soil. Station 2 was located in the Yan Wanki quarters of Hadejia Local Government (Plate 2), about 2 km away from Station 1. Human activities include agriculture, washing of clothes, bathing, and fishing. The substrates here are mainly loam and clay, with a sparse distribution of stones and boulders. Station 3 was located at Bakin Gada, very close to the bridge that connects Hadejia to Bulangu in the Kafin Hausa Local Government Area of Jigawa State (Plate 3). 
This station is about 1 km away from Station 2. Subsistence fishing and cattle grazing are the main land uses at this station. Other land uses include sand dredging, bathing, defecation, and washing of cars, clothes, and other household items. These activities represent the leading anthropogenic disturbances affecting this site. Physicochemical analysis We measured temperature using a mercury-in-glass thermometer, whereas transparency was determined using a Secchi disc with black and white paint. Depth was measured in centimeters with a calibrated rod. Three locations were strategically marked out for the measurement of water flow velocity by timing a float (average of three trials) as it moved over a distance of 10 m; the flow velocity was computed by dividing the distance by the time (Gordon, McMahon, & Finlayson, 1994). We determined turbidity in nephelometric turbidity units (NTU) with a Portable Turbidity meter model WGZ-B, and pH with a HANNA HI 9828 multi-probe meter (HANNA Instruments). Electrical conductivity (EC) and total dissolved solids (TDS) readings were taken with a DDSJ-308A conductivity meter. Dissolved oxygen (DO), five-day biochemical oxygen demand (BOD5) and nutrient variables, including phosphate and nitrate, were analyzed following APHA (American Public Health Association) (1998). Sampling of aquatic insects Aquatic insect samples were collected early in the morning in January and February and then from April to July 2018; sampling was not done in March 2018 due to logistic problems. The kick-net method was employed for the collection of aquatic insects. The kick net used is a square-shaped instrument with a long metal handle (Merritt & Cummins, 1998). During sampling, the kick net was inserted in the littoral zone of the river and moved upstream to disturb the substrate for sample collection. 
Five kicks were made at each station on every sampling expedition. After kicking, each sample was placed in a white enamel tray for sorting using forceps. Identification and preservation of the samples The specimens collected in the field were preserved in 10% formalin before being taken to the laboratory for identification and enumeration. In the laboratory, samples were placed on a slide and viewed under a binocular microscope for proper identification, following the pictorial guide of Javier, David, and Rafael (2011). Afterwards, voucher samples of the aquatic insects were preserved in 40% formalin for future reference. Data analyses We computed the descriptive statistics of the physicochemical parameters, including range, mean and standard error for each station, and two-way analysis of variance (ANOVA) was used to test for significant differences between stations and months. We then used a post hoc (honestly significant difference; HSD) test to determine which stations differed from each other. The descriptive statistics, ANOVA and HSD were calculated using the PAST statistical package (Hammer, Harper, & Ryan, 2001). The structural assemblage of aquatic insects is presented in a table. We used ANOVA to test for statistically significant differences in biological metrics, including abundance, number of taxa, Shannon diversity index, evenness, Simpson dominance, and Margalef's index, between the sampled stations, and the post hoc HSD test to indicate which metrics differed statistically; these tests were conducted in the PAST software package (Hammer et al., 2001). Canonical correspondence analysis (CCA) evaluated the relationships between aquatic insect communities and the measured environmental variables, also in PAST (Hammer et al., 2001). We log(x + 1) transformed the physical and chemical parameter dataset used for the CCA model to prevent undue influence of extreme values on the final CCA model. 
The statistical significance of the CCA model was assessed by a Monte Carlo permutation test with 999 permutations (Jckel, 1986). Cluster analysis based on the Bray-Curtis similarity index ascertained whether spatial (station) or temporal (month) factors primarily influenced aquatic insects' assemblage distribution in the study area. We ran the cluster analysis on log(x + 1) transformed aquatic insect abundance data in the PAST statistical package (Hammer et al., 2001). Physical and chemical parameters of River Hadejia Means and standard errors of physical and chemical parameters in River Hadejia are presented in Table 1. Two-way analysis of variance (ANOVA) showed that transparency, depth, and nitrate were not significantly different (p > .05) among the months sampled, while air temperature and DO differed significantly (p < .05) across the sampled months. However, mean values of the physicochemical parameters did not differ among the stations sampled (p > .05). Mean electrical conductivity was lowest in Station 3 (104.3 ± 8.04 µS/cm). The DO and BOD5 values portray a relatively perturbed water state. DO ranged from 1.1 to 5.7 mg/l, with the highest mean values of DO (3.44 ± 0.8 mg/l) and BOD5 (1.04 ± 0.3 mg/l) recorded in Station 3. Nutrients (nitrate and phosphate) were relatively low during the study period. The river was slightly alkaline, as revealed by mean pH values of 7.27, 7.77, and 7.82 for Stations 1, 2, and 3, respectively. Except for DO and BOD5, all the parameters were within the permissible limits of the Nigerian Federal Environmental Protection Agency (FEPA) and the Standards Organisation of Nigeria (SON). Aquatic insect community structure of River Hadejia Four orders of aquatic insects belonging to 11 families and taxa were recorded during the entire study period. Dytiscidae (Dytiscus sp.) was the most abundant taxon in the study area. 
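The diversity indices named in the data-analysis section (Shannon-Wiener, Simpson dominance, Margalef richness, evenness) and the Bray-Curtis index underlying the cluster analysis all follow standard formulas. A minimal illustrative sketch in Python is given below; this is not the authors' PAST implementation, and the station abundance vectors are hypothetical:

```python
import math

def diversity_indices(counts):
    """Standard diversity indices from a vector of taxon abundances."""
    counts = [c for c in counts if c > 0]
    n = sum(counts)                                      # total individuals
    s = len(counts)                                      # number of taxa
    ps = [c / n for c in counts]                         # proportional abundances
    shannon = -sum(p * math.log(p) for p in ps)          # Shannon-Wiener H'
    simpson = sum(p * p for p in ps)                     # Simpson dominance D
    margalef = (s - 1) / math.log(n) if n > 1 else 0.0   # Margalef richness d
    evenness = shannon / math.log(s) if s > 1 else 1.0   # Pielou evenness J'
    return {"H": shannon, "D": simpson, "d": margalef, "J": evenness}

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors (0 = identical)."""
    num = sum(abs(a - b) for a, b in zip(x, y))
    den = sum(a + b for a, b in zip(x, y))
    return num / den if den else 0.0

# Hypothetical abundances for the same taxon list at two stations
station_1 = [12, 8, 5, 3, 1]
station_2 = [10, 9, 4, 2, 2]
print(diversity_indices(station_1))
print(bray_curtis(station_1, station_2))  # small value -> similar assemblages
```

Under the Lenat, Smock, and Penrose (1980) thresholds cited later in the discussion, a Margalef value below 1 from such a computation would point to a polluted reach.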
Pollution-sensitive species were sparingly represented, with only two Ephemeroptera taxa (Baetidae and Ephemerellidae). Totals of 44, 37, and 35 individual aquatic insects were recorded in stations 1, 2, and 3, respectively. Generally, a sparse distribution and abundance of aquatic insects was noticed in River Hadejia. Ecological indices of aquatic insects in River Hadejia Mean diversity indices of aquatic insects are presented in Table 2. Mean number of taxa, Simpson dominance, Shannon-Wiener index, and Margalef index (taxa richness) were significantly higher in Station 3. Stations 1, 2, and 3 showed no significant difference in mean evenness. Abundance (number of individuals) was higher in Station 1 (7.83 ± 2.96). Post hoc tests on all the diversity indices showed no significant differences between stations (p > .05). Relationship between aquatic insects and physical and chemical parameters in River Hadejia The CCA revealed little or no relationship between the physical and chemical parameters and the aquatic insects. However, the first canonical axis explained over 70% of the variation in the aquatic insect dataset, indicating a good ordination model. The eigenvalues for axes 1 and 2 were 0.126 and 0.054, respectively (Table 3). The Monte Carlo permutation test performed on the first two canonical axes showed no significant difference (p > .05). Dytiscus sp. and Platycnemididae were associated with axis 1, while the remaining aquatic insects were associated with axis 2, except Naucoris sp. and Hydrophilus sp., which were located at the center of the CCA triplot (Figure 2). Gyrinus sp. was positively affected by increased water depth. Biochemical oxygen demand had a relatively strong correlation with Nepa sp. From the CCA triplot, Stations 1, 2 and 3 had only Dytiscus sp., Naucoris sp. and Nepa sp. 
in common, while the other aquatic insects were not linked to any specific station, as seen in the scattered distribution of the biota in the triplot (Figure 2). Generally, no physical or chemical parameter showed a correlation with the aquatic insects collected during the study, except for TDS, which showed a slight association with Noterus sp., and BOD5, which slightly affected the distribution of Ephemerella sp. and Nepa sp. Cluster analysis indicated that aquatic insect clustering was mainly influenced by month rather than by station, with insects collected in the same month more closely associated than those collected from the same stations (Figure 3). Physical and chemical parameters Deteriorating physical and chemical parameters in a given water body are reported to have a debilitating effect on the distribution and abundance of aquatic macroinvertebrates (Sundermann, Gerhardt, Kappes, & Haase, 2013). The high EC and BOD5 in the present study indicate a distressed watercourse. This, no doubt, may be occasioned by the incessant human influences on the river. Earlier studies elsewhere in southern Nigeria have reported a similar occurrence in river courses and their catchments (Andem, Okorafor, Eyo, & Ekpo, 2014; Arimoro et al., 2015). A study of selected rivers in Tanzania revealed perturbed water quality due to the debilitating effect of anthropogenic activities on the water systems (Kaaya et al., 2015). The reduced DO concentration in all the stations sampled is also a pointer to the river's heavy alteration. This may be caused by the influx of fertilizer runoff from nearby farmlands, as northern Nigeria is known for extensive commercial farming; most farmers in the area use fertilizer and other chemicals to cultivate their crops. 
Generally, it may be concluded that the deteriorating water state of River Hadejia is a result of uncontrolled human disturbance of the river, owing to unenforced or poorly enforced regulations guarding water bodies in Nigeria; hence the ravaged state of rivers and other water bodies in the country. Structural assemblage of aquatic insects Aquatic insects have been used in various quarters as biomonitoring tools for assessing the ecological health of water bodies (Barman & Gupta, 2015; Edegbene, Arimoro, Odoh, & Ogidiaka, 2015). In this study, Dytiscidae (Dytiscus sp.) was the most predominant taxon and was well represented in the three stations sampled. This may hinge on favorable environmental or other conditions that enhance this group of aquatic insects in River Hadejia. Studies have suggested structural-assemblage changes due to geomorphological factors and other instream destruction of the physical habitat as factors that contribute to the abundance and distribution of some macroinvertebrates (Barman & Gupta, 2015; Selvakumar, Sivaramakrishnan, Janarthanan, Arumugam, & Arunachalam, 2014). Pollution-sensitive insect species were sparingly represented in the present study, an indication of the perturbed ecological health of the river. Various authors have reported that Ephemeroptera, Plecoptera and Trichoptera (EPT) indicate moderately disturbed to clean water (Adakole & Anunne, 2003; Akamagwuna et al., 2019b), depending on their species composition and abundance. Hemiptera was the second most abundant aquatic insect order in the study area. Studies elsewhere have reported a similar trend in the preponderance of this group of macroinvertebrates (Huang, Lock, Chi Dang, De Pauw, & Goethals, 2010; Naranjo, Riviaux, Moreira, & Court, 2010; Takhelmayum, Gupta, & Singh, 2013). This they ascribed to the ability of hemipterans to utilize atmospheric oxygen as they skate on the surface of the water in the face of deteriorating DO concentrations. 
This may be why Hemiptera was fairly represented in River Hadejia even though the DO concentration was very low. It can be inferred from the weak structural assemblage of aquatic insects in River Hadejia that the water is fast deteriorating. Ecological indices and diversity of aquatic insects The diversity indices computed confirmed the studied reaches of the river to be perturbed. The mean Margalef index (taxa richness) for the three stations was less than 3. Lenat, Smock, and Penrose (1980) earlier proposed that a Margalef index value greater than 3 indicates clean water, while a value less than 1 portrays polluted water. In the present study, the taxa richness values for the three stations are less than 1, further confirming the devastating effect of the deteriorating water state on the aquatic biota and the ecological health condition of the river. Recently, a study of a dam in northern Nigeria reported a closely similar trend in taxa richness (Edegbene, 2020), which was attributed to poor environmental conditions in the dam occasioned by the menace posed by Typha grass and other human activities. Relationship between aquatic insects and environmental variables The canonical correspondence analysis (CCA) constructed for this study revealed little or no association between the aquatic insects and the physical and chemical parameters. The eigenvalues of axes 1 and 2 derived from the CCA triplot were less than 1.0. The eigenvalue associated with each axis equals the correlation coefficient between species and station scores (Gauch, 1982; Pielou, 1984). Thus, an eigenvalue close to 1 represents a high degree of correlation between species and stations (or any other variable), while an eigenvalue close to zero indicates little correlation (Palmer, 1993). For instance, in the present study, the CCA triplot showed that axis 1 had an eigenvalue of 0.126 while axis 2 had 0.054. 
This indicates a very low correlation between the aquatic insects, the physical and chemical variables, and the sampled stations. Axis 2 in the CCA triplot was weakly associated with aquatic insects, as revealed by its near-zero eigenvalue. We suggest Nepa sp. as an indicator of deteriorating ecological health conditions of freshwater systems, as it was influenced by increased BOD5 concentration. At the same time, Gyrinus sp. was positively affected by increased water depth; hence, Gyrinus sp. may be affirmed to be a deep-water dweller.

Conclusion and recommendation

This study serves as a baseline survey on the use of aquatic insects in assessing the health of River Hadejia. The river's ecological health has been compromised by the various factors listed above, ranging from poor environmental variables to the poor structural assemblage of aquatic insects. For example, air temperature and DO differed significantly between the months sampled. Seasonal differences were also more influential in affecting the water quality of the Hadejia riverine system than spatial differences between sites. Sensitive insect species of the orders Ephemeroptera, Plecoptera and Trichoptera (EPT) were poorly represented in the study area, with Baetidae and Ephemerellidae (order Ephemeroptera) being the predominant EPTs, further indicating poor water quality conditions. Hence, it can be concluded that human activities have a debilitating effect on the health of River Hadejia. We recommend that further studies be conducted involving multiple sites and rivers within the Hadejia emirate and its environs to confirm the results of this study and to better understand the effects of pollution on the functionality of river systems within the emirate.
We further recommend more stringent regulations to control human pressure on the river systems within the studied area, to enable surface waters in the area to sustain the provision of desired and valued ecosystem services. EO wrote the initial draft of the manuscript. FCA drew the study area map. AOE, FCA and KHN reviewed and polished the final manuscript. AOE supervised the entire research project. All authors read and approved the final manuscript.
Enhancement of The Sappanwood Extract Yield by Aqueous Ultrasound-Assisted Extraction Using Water Solvent

Sappanwood (Caesalpinia sappan L) is a member of the Leguminosae family, popular as a natural source of red dye, and has traditionally been used to prepare food and beverages in Southeast Asia. From the pharmacological point of view, the heartwood extract of this plant exhibits various biological activities, such as antibacterial, anti-photoaging, anti-allergic, anti-inflammatory and antioxidant effects. Brazilin, the main antioxidant compound, can be efficiently extracted by the ultrasound-assisted extraction (UAE) method. This study aimed to evaluate the effect of temperature (30 to 60°C), time (5 to 25 minutes), and solid-liquid ratio (1:5 to 1:8 g/mL) on the batch ultrasound-assisted extraction of antioxidant compounds from sappanwood heartwood. The results showed that a feasible extraction process was obtained at 30°C using a solid-liquid ratio of 1:5 g/mL for 15 minutes, giving an extract yield of 3.0%. This yield was 1.5 times that of conventional extraction without ultrasound. Increases in temperature and solvent volume resulted in higher yields. However, considering the energy required for extraction and product purification, extraction at ambient temperature with minimum solvent volume was favorable. Meanwhile, the pseudo-second-order mass transfer model exhibited good statistical parameters, with R2 higher than 0.99 and lower RMSD values. This model can be used to describe the kinetics of UAE of antioxidant compounds from sappanwood and is meaningful for determining an efficient extraction time with reasonable antioxidant performance.

I. INTRODUCTION

The use of local plant biodiversity for many purposes, specifically in food, beverage, and traditional medicine, has been steadily increasing in today's society.
Sappanwood (Caesalpinia sappan L), or Brazilwood in English, can easily be found in the scrub jungle and limestone hills of Southeast Asia [1]. In Indonesia, this plant is called Kayu Secang and has played a significant role among popular agricultural commodities for centuries. It is used by the folk of Central Java to prepare a traditional drink (Wedang Secang) to boost human health, by boiling or steeping dried sappanwood pieces, ginger, cinnamon, cloves, and lemongrass in hot water [2]. In addition, sappanwood, belonging to the Caesalpiniaceae family, is well known as a natural source of coloring agents in the food industry. This water-soluble natural dye consists mainly of brazilin and is generally obtained through water extraction of sappanwood heartwood. Brazilin has been reported to exhibit various pharmacological properties, such as antioxidant, antibacterial, anti-acne, anti-inflammatory, anticancer, and hypoglycemic activities [3]-[6]. Depending on the pH, brazilin appears amber to red in color, where the red color is obtained under basic conditions (pH > 7) [7]. Exposure to light, pH changes, and air may induce the oxidation of brazilin to form brazilein due to the alteration of a hydroxyl group to a carbonyl group, which is responsible for the intense red color of the sappanwood extract [7], [8]. Based on his study of the storage of sappanwood water extract at 4°C and 25°C under neutral and basic conditions (pH 7.0, 8.0, and 9.0), Sinsawasdi confirmed that exposure to light at a given pH gave a more pronounced degradation of brazilin to brazilein than exposure to higher temperature [9]. Brazilin can be obtained via aqueous extraction of sappanwood heartwood, by which other water-soluble materials will also be coextracted. Currently, brazilin extraction is commonly performed by maceration, and the resulting extract can be used for its antibacterial, lipase-inhibitory, and antioxidant activities [10].
However, this method requires a long extraction time. Another common sappanwood extraction method uses distilled water in a Soxhlet apparatus [11], with which the extraction time can be shortened. However, conventional extraction with the Soxhlet apparatus is sometimes conducted at a higher temperature that may degrade ingredients such as bioactive compounds. The ultrasound-assisted extraction method is a promising green technology to replace the aforementioned conventional extraction methods. Indeed, it offers advantages over the former by minimizing solvent and energy consumption, enhancing the extraction rate, and achieving a high extraction yield in a short extraction time. This is because the ultrasonic wave generates a cavitation effect that disrupts the raw material's cell walls, leading to a remarkable increase in mass transfer of the target compounds into the solvent [12]-[14]. While temperature, time, solid-liquid ratio, and solvent selection are reported to be the main parameters affecting UAE performance, polar solvents have been reported to be suitable for the extraction of phenolic compounds like brazilin. Based on their comparison of several solvents for extracting brazilin from sappanwood heartwood, namely water, methanol, ethanol, acetonitrile, and acetone, Xia et al. reported that methanol and water are the most suitable solvents [15]. Unfortunately, methanol is toxic, more expensive, and may cause serious environmental issues. Therefore, using water as the solvent to extract brazilin from sappanwood heartwood is preferable because it is easy to obtain, cheap, reusable, and environmentally benign. Previous research on brazilin extraction from sappanwood studied the brazilin yield [1], [15] and its pharmacological properties [1], [7].
Studies of the mass transfer process during extraction, involving the diffusion of brazilin into the solvent and the extraction rate, are scarce. These aspects are important for estimating extraction process parameters, which are useful for finding an effective extraction time, a favorable sappanwood-to-solvent ratio, and a reasonable brazilin yield. Parameter estimation for the extraction process was successfully developed for Vernonia cinerea leaf extract using different extraction methods and times [16]. The present study aimed to investigate the effect of temperature, time, and sappanwood heartwood-water ratio on the yield of antioxidant compounds obtained from sappanwood heartwood by ultrasound-assisted extraction (UAE). In doing so, the pseudo-first-order and pseudo-second-order mass transfer models were compared to find the most suitable kinetic model for UAE of sappanwood heartwood.

A. Materials

The sappanwood (Caesalpinia sappan L.) heartwood powders were obtained from a local herbal market in Yogyakarta, Indonesia, with a moisture content of about 2.14% (Figure 1). The powders were passed through 80 and 100 mesh sieves to obtain heartwood particles with an average diameter of 0.1635 mm for further use in ultrasound-assisted extraction (UAE) experiments with distilled water as solvent. Distilled water was selected since it is an edible solvent that dissolves the sappanwood extract and is environmentally benign.

B. Methods

This study was conducted through a series of experimental steps, including sappanwood heartwood powder preparation, ultrasound-assisted extraction (UAE) of antioxidant compounds (BUC 65L, B-One Ultrasonic Cleaner, China), filtration, oven-drying, and UAE mass transfer kinetics model analysis. The kinetics model was then used to determine the optimum extraction time at various temperatures and solid-liquid ratios.

C.
Sappanwood Antioxidant Compounds Extraction

The extraction of total antioxidant compounds from sappanwood heartwood powder was carried out using 250 mL of distilled water as solvent in an ultrasonic extraction system (Figure 2), in which ultrasonic wave irradiation (40 kHz) was used as the agitation mode. The solid-liquid ratios were varied at 1:5, 1:6, 1:7, and 1:8 (g/mL), whereas the studied temperatures were 30°C, 40°C, 50°C, and 60°C. The extraction was performed for 5, 10, 15, 20, and 25 min to enable evaluation of the extraction kinetics. At the end of sonication, the suspension was equilibrated to room temperature and filtered through a vacuum filtration process employing a Buchner funnel connected to a filtering flask with a side tube connected to a vacuum pump. Upon completing the filtration process, the filtrates were dried in an electric oven at 105°C until a constant weight was attained. The total yield of the antioxidant extract from sappanwood was calculated using equation 1.

D. Extraction Kinetic Model

The mass transfer model is important to describe the physical phenomena of the aqueous ultrasound-assisted extraction (UAE) of brazilin from sappanwood heartwood powders. This model was previously used to represent the kinetics of solid-liquid extraction of Tilia sapwood [17]. Here, pseudo-first-order and pseudo-second-order mass transfer models were compared in order to find the proper model. The pseudo-first-order model was derived as expressed in equations 2 and 3, while the pseudo-second-order model was developed as seen in equations 4 and 5.

E. Statistical Analysis

The kinetic parameters were evaluated by linear regression. The validation of the mass transfer kinetics model used in this study was performed by evaluating the value of R2 and the Root Mean Square Deviation (RMSD).
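As a minimal sketch (not from the paper; function names are illustrative), the two goodness-of-fit measures used here can be computed for measured versus model-predicted concentrations as:

```python
import math

def r_squared(measured, predicted):
    # Coefficient of determination: 1 - SS_res / SS_tot
    mean_m = sum(measured) / len(measured)
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    ss_tot = sum((m - mean_m) ** 2 for m in measured)
    return 1.0 - ss_res / ss_tot

def rmsd(measured, predicted):
    # Root mean square deviation between data and model predictions
    n = len(measured)
    return math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, predicted)) / n)
```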
An R2 value close to 1 and a low RMSD value were considered indicators of the model's suitability to represent the kinetics of ultrasound-assisted extraction of antioxidant compounds from sappanwood heartwood.

F. DPPH Analysis

Antioxidant activity based on DPPH (1,1-diphenyl-2-picrylhydrazyl) analysis was measured to observe the antioxidant capacity of the sappanwood extract according to the method developed by Anggraini et al. [18] with some modification. The DPPH analysis was performed by diluting 0.25 mL of sappanwood extract in 4.75 mL of methanol. A carefully measured 0.2 mL of the diluted solution was then added to 6 mL of DPPH solution. The mixture was incubated for 30 minutes at ambient temperature and then analyzed using a UV-Vis spectrophotometer. The color absorbance values of each sample were analyzed to obtain the percentage (%) discoloration as an indication of the antioxidant activity of the sappanwood extract.

A. The Effect of Extraction Temperature on Yield of Sappan Extract

This study investigated the influential operating parameters of the ultrasound-assisted extraction of sappanwood total antioxidant extract, including the extraction temperature and the ratio of sappanwood powder mass to solvent. The effect of extraction temperature versus solid-liquid ratio is presented in Figure 3, which confirms that all parameters affected the concentration of total antioxidant extract. The total antioxidant extract concentration increased with extraction temperature at all solid-liquid ratios. Figure 3 shows that the extraction rate was slower at lower temperatures (30°C to 40°C). As expected, the extraction rate accelerated significantly at temperatures beyond 40°C and achieved the highest antioxidant extract concentration at 60°C.
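The DPPH discoloration percentage described earlier is commonly computed from the control and sample absorbances; the paper does not give the formula explicitly, so this is the standard form as a sketch:

```python
def dpph_discoloration(abs_control, abs_sample):
    """Percent discoloration (radical scavenging) of the DPPH solution.

    abs_control: absorbance of the DPPH solution without extract
    abs_sample:  absorbance after incubation with the extract
    """
    return (abs_control - abs_sample) / abs_control * 100.0
```

A larger drop in absorbance means stronger scavenging; for example, a control absorbance of 0.8 falling to 0.2 corresponds to 75% discoloration.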
Several aspects influence the yield of extract compounds from plant materials, such as the extraction method, particle size, storage conditions, and the presence of interfering substances [19], [20]. Therefore, extracts of plant materials still contain a mixture of various phenolic compounds and other substances soluble in the solvent. The heartwood contains water-soluble flavonoid compounds, namely brazilin, protosappanin, and hematoxylin [1]. Brazilin is readily oxidized by oxygen from the air to form brazilein. When sappanwood heartwood is extracted quickly with water, the extract is enriched in brazilin; when the extraction process is delayed, part of the brazilin is oxidized and the brazilein content increases appreciably [7]. Figure 3 also demonstrates that the effect of extraction temperature was significant at each solid-liquid ratio. Theoretically, the yield of total antioxidant extract continues to increase as the extraction temperature is raised, because a higher temperature induces a higher driving force for dispersing the extracted material in the solvent due to an increase in the number of collisions between the extracted materials and the solvent, accelerating the rate of mass transfer of the solute [20]-[23]. However, bioactive compounds like brazilin may undergo degradation at temperatures higher than 60°C. Therefore, it is plausible that 60°C was previously reported as the optimum aqueous extraction temperature for brazilin [15]. Meanwhile, in this research, the highest extraction yield was obtained at a solid-liquid ratio of 1:5 (g/mL) and an extraction temperature of 60°C. However, ultrasound-assisted extraction still performed well at room temperature (around 30°C), with an extract yield of around 3.0%. This is higher than that of a previous study using conventional extraction without ultrasound, which gave an extract yield of around 2% [2].
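The yields quoted above follow from the gravimetric definition referenced as equation 1 (not reproduced in this text); the form below, dried-extract mass over dry powder mass, is an assumption consistent with the reported values:

```python
def extract_yield_percent(dry_extract_g, dry_powder_g):
    """Extract yield (%) as dried-extract mass over dry sappanwood powder mass.

    Assumed form of the paper's equation 1, which is not reproduced here.
    """
    return dry_extract_g / dry_powder_g * 100.0

# Illustrative numbers: 1.5 g of dried extract from 50 g of powder gives
# a 3.0% yield, the order of magnitude reported in the study.
yield_pct = extract_yield_percent(1.5, 50.0)
```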
Figure 4 shows the effect of extraction time on the yield of the sappanwood extract. In all cases, prolonging the extraction time increased the yield of the extract; however, after 15 minutes the effect was limited. The sappanwood extract is bound in the tissue of the heartwood matrix, and it takes time to release it into the solvent, since the bound extract needs more energy to be broken free [19], [24]. Initially, the energy breaks the bonds holding the extract in the solid matrix; the extract then diffuses into the solvent via the pores of the sappanwood layers. With ultrasound, more vibrational energy can be provided [24], [25]. As a result, more of the sappanwood extract trapped in the solid matrix can be cracked free and diffuse through the particle layers into the water solvent. For comparison, conventional sappanwood extraction without ultrasound yielded 2.0% extract in 15 minutes [2], whereas at the same extraction time the yield was enhanced up to 3.5% with ultrasound assistance; thus, the improvement was close to double.

B. The Effect of Extraction Time on Yield of Sappan Extract

When the extraction time was prolonged beyond 15 minutes, the increase in sappanwood extract yield was no longer significant. This is because the remaining extract in the wood is still bound in the matrix of the wood particles. These bonds can be broken using other solvents such as ethyl acetate or Diaion resin, giving a higher extract yield [26]. However, solvent residues in the extract can pose risks to human health, requiring extra purification processes. Ethyl acetate exposure above the threshold limit can cause irritation (nose, eyes, and throat) and even unconsciousness [27], while the use of Diaion resin requires a higher purification cost [26].

C.
Fitting Model to Experiment

The kinetics of the overall ultrasound-assisted extraction (UAE) process was developed according to the pseudo-first-order and pseudo-second-order mass transfer models (Figures 5 and 6). The models were validated with the experimental data at a solid-solvent ratio of 1:5 and various extraction temperatures. In Figure 5 part (a), the models were fitted to the experimental data obtained at 30°C, while the extraction temperatures of 40, 50, and 60°C are presented in Figure 5 parts (b), (c), and (d), respectively. In all cases, the pseudo-second-order model showed better accuracy in describing the extraction process: the values predicted by the model were close to the experimental data at every extraction time, whereas the pseudo-first-order model was precise only in the first 5 minutes. Figure 6 demonstrates the model fitting at different solid-solvent ratios at an extraction temperature of 30°C. In all cases, it was clear that after 15 minutes, extending the extraction time did not significantly increase the concentration of sappanwood extract. With less solvent, the sappanwood extract can be more concentrated; in contrast, with excess solvent, the sappanwood concentration becomes lower, which causes problems in extract purification. Based on the experiments and models, a favorable extraction time of about 15 minutes with a solid-solvent ratio of 1:5 can be recommended. Figure 6 part (a) presents the concentration of sappanwood extract dissolved in the solvent at each extraction time, obtained by the pseudo-first-order model, the pseudo-second-order model, and experiment at a solid-solvent ratio of 1:5. The results showed that the extract concentration obtained by the pseudo-second-order model was close to the experiment at every extraction time, whereas the pseudo-first-order model was accurate only in the first five minutes.
The same results were also obtained at the various solid-solvent ratios depicted in Figure 6 parts (b), (c), and (d) for ratios of 1:6, 1:7, and 1:8, respectively. Table 1 presents the pseudo-first-order kinetic parameters and goodness of fit at temperatures of 30°C to 60°C and solid-liquid ratios of 1:5 to 1:8 g/mL. The R2 and RMSD values for the pseudo-first-order mass transfer model exhibited good conformity between experimental data and calculated values. Table 1 also shows that the k values increased significantly (p < 0.05) as the temperature rose, indicating an increase in the extraction rate. Based on R2, the pseudo-first-order model was good enough to describe the transport process in sappanwood extraction, especially in the first 5 minutes; after that, the model's accuracy was lower.

D. Parameter Estimation of Kinetic Models

Meanwhile, the pseudo-second-order kinetic model showed more consistent results, as depicted in Table 2. For all conditions and extraction times, the R2 value was higher than 0.99 with lower RMSD values. Considering those values and the yield profiles illustrated in Figures 5 and 6, the pseudo-second-order model was more favorable for describing ultrasound-assisted sappanwood extraction. The phenomena can be described in two aspects. Firstly, the ultrasound forces intensively vibrate the linkage between brazilin and the solid extract components at the surface of the wood particle matrix during extraction. The vibration of the ultrasound wave breaks the extract bound at the surface; as a result, the brazilin and solid extract are released and diffuse quickly into the solvent (water). The introduction of ultrasound was very meaningful since, unlike other organic solvents, distilled water as a solvent has limited capability to break the matrix and dissolve the wood extract [26]. More sappanwood extract or brazilin can be dissolved with ultrasound vibration compared with conventional extraction without ultrasound [2].
Secondly, the vibration attacks the sappanwood extract in the inner part of the solid particle, where the linkage is stronger, involving intermolecular components. With this intensive force, the matrix is cracked and broken over a longer time, so the brazilin and solid extract components can be released into the water. However, the diffusion of the extract is slow since it passes through the multi-component layers in the tissue of the solid particles. After 15 minutes, the amount of sappanwood extract dissolved into the solvent did not increase significantly. The remaining extract is located in the deeper layers, close to the core of the particles, requiring more ultrasound power with a shorter wavelength for breaking. Smaller wood particle sizes could perhaps thin these layers and thereby speed up the diffusion of the extract. Based on the above phenomena, and in comparison with other extraction processes such as that of Tilia sapwood, the pseudo-second-order model was the better option to represent the phenomena in antioxidant extraction from sappanwood [17]. The pseudo-first-order model was suitable for quick extraction only; it performed well in the first 5 minutes, where the surface extraction of brazilin and solid wood extract occurred.

E. Effect of Extraction Temperature and Solid-Liquid Ratio on The Antioxidant Activity of Total Sappanwood Extract

The total antioxidant activity of the sappanwood extract at various solid-solvent ratios and extraction temperatures is tabulated in Table 3. The results showed that increasing the extraction temperature from 30°C to 40°C led to an increase in the total antioxidant activity of the sappanwood extracts. This was due to a higher extraction of freely accessible brazilein, the oxidative product of brazilin, in the heartwood of sappanwood.
A further increase in extraction temperature to 50°C caused a reduction in the antioxidant activity of the sappanwood extract, since less residual brazilein remained in the inner part of the heartwood and thermal degradation of brazilein as a phenolic compound was possible [15]. However, as the extraction temperature was increased to 60°C, a higher total antioxidant activity was observed. This was likely due to the extraction of brazilin, a stronger antioxidant compound located in the inner part of the sappanwood heartwood. Brazilin has been reported to exhibit higher DPPH (1,1-diphenyl-2-picrylhydrazyl) radical scavenging and ferric reduction activities than standard vitamin E, brazilein, sappanchalcone, and protosappanin B and C [28]. In this study (with ultrasound-assisted extraction), lowering the temperature and the amount of solvent is suggested to obtain a better-quality extract.

IV. CONCLUSION

Ultrasound-assisted extraction (UAE) of antioxidant compounds from sappanwood heartwood powder was successfully performed. A lower solid-solvent ratio and a higher temperature resulted in the highest concentration of total antioxidant compounds extracted from sappanwood. Considering the extract yield, antioxidant activity, and the energy required for heating, a reasonable UAE process condition was reached at 30°C with a solid-solvent ratio of 1:5 g/mL for 15 minutes. A higher extraction temperature increases the extract yield, but it also implies a higher energy cost for heating. Meanwhile, excess solvent can produce more extract, but it is costly in solvent separation, since much water must be evaporated to obtain dry sappanwood extract. The pseudo-second-order model was a suitable approach to represent the batch solid-liquid extraction of brazilin from sappanwood heartwood powder, and it was meaningful for determining the effective extraction time at various operational temperatures.
In brief, a shorter extraction time was adequate to obtain a high extract yield in ultrasound-assisted extraction of brazilin from sappanwood heartwood. Implementation of this finding on a commercial scale will increase the antioxidant extraction efficiency, reduce the processing cost and subsequently increase the economic value of sappanwood as an agricultural commodity.

NOMENCLATURE

C: concentration of total extract at a given extraction time, mg/L
k: kinetic constant, 1/min
T: extraction temperature, °C
t: extraction time, min
w0: dry weight of the sappanwood powders, kg
w: weight of the total dried extract of sappanwood, kg

Subscripts: e, equilibrium; 1, pseudo-first-order; 2, pseudo-second-order
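The pseudo-second-order fit favored above can be sketched through its common linearization, t/C = 1/(k2·Ce^2) + t/Ce, so a straight-line fit of t/C against t yields the equilibrium concentration Ce and rate constant k2. This is a generic illustration with hypothetical data, not the paper's actual values:

```python
def fit_pseudo_second_order(times, concentrations):
    """Return (Ce, k2) from a least-squares line through (t, t/C).

    Linearized pseudo-second-order model: t/C = 1/(k2*Ce^2) + t/Ce,
    so slope = 1/Ce and intercept = 1/(k2*Ce^2).
    """
    x = list(times)
    y = [t / c for t, c in zip(times, concentrations)]
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    ce = 1.0 / slope                      # equilibrium concentration
    k2 = 1.0 / (intercept * ce ** 2)      # pseudo-second-order rate constant
    return ce, k2

def predict(t, ce, k2):
    """Pseudo-second-order concentration: C(t) = Ce^2*k2*t / (1 + Ce*k2*t)."""
    return ce ** 2 * k2 * t / (1.0 + ce * k2 * t)
```

With data generated from the model itself (for example Ce = 5, k2 = 0.1), the fit recovers the parameters exactly, which makes this a convenient self-check before applying it to measured concentrations.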
Comparison of the Tribological Properties of the Thermal Diffusion Zinc Coating to the Classic and Heat Treated Hot-Dip Zinc Coatings

The presented studies focus on the wear resistance and friction coefficient changes of the thermal diffusion (TD) zinc coating deposited on steel. The aim of the research was to evaluate the variation in coating properties during dry friction as a result of the method of preparation of the basis metal. The measured properties were compared to those obtained after classic hot-dip (HD) zinc galvanizing, both heat treated and untreated. Thermal diffusion zinc coatings were deposited under industrial conditions (according to EN ISO 17668:2016-04) on disc-shaped samples. The results obtained during the tribological tests (T11 pin-on-disc tester) were analysed on the basis of microscopic observations (using optical and scanning microscopy), EDS (point and linear) analysis and microhardness measurements. The obtained results were similar to the effects observed after heat treatment of the HD zinc coating. The conducted analysis proved that the method of initial steel surface preparation results in changes in the coating's hardness, friction coefficient and wear resistance.

Introduction

Thermal diffusion galvanizing (sherardizing) is a diffusion zinc coating method increasingly used as an alternative to hot-dip zinc galvanizing for the corrosion protection of various small elements (fasteners, wires, bolts, screws, nails, springs, etc.). Due to its important advantages (an environmentally friendly process, no chromate treatment, a surface ready for varnishing and vulcanization, no risk of hydrogen embrittlement), this method is constantly being developed and improved [1][2][3][4][5]. For example, in [3,4] an innovative solution was proposed: forced recirculation of the reactive atmosphere. Structural elements are made of a wide range of materials that require various types of corrosion protection.
For example, fasteners are manufactured from different metallic materials ranging from common steel, alloy steel, and stainless or corrosion-resistant steel to aluminium alloys and titanium [6]. Pressure to reduce production costs means that structural elements are more and more often made of less advanced materials that guarantee only the appropriate mechanical properties. Additional functional properties, such as corrosion resistance and wear resistance, are obtained by applying appropriate coatings, whose thicknesses vary over a wide range, from nanometers [7] to several hundred micrometers [8]. To increase wear resistance, ever harder coatings are applied, with hardness up to 1700 HV [9,10]; hardness greater than 40 GPa has been reported for systems based on TiN/NbN, TiN/VN and TiN/ZrN layers [11]. Zinc is one of the cheapest elements among those traditionally used in the production of anticorrosion coatings (Zn, Cu, Ni, Cr) [12]; moreover, Zn coating deposition processes are very simple and do not require large financial outlays [13]. Generally, zinc coatings are applied to different elements via four methods: hot-dip galvanizing, electro-galvanizing, zinc lamella coating and sherardizing (thermal diffusion) [14][15][16]. In the case of some structural elements, where a very good surface representation is necessary, the requirements concern limiting the coating thickness. This applies, among others, to bolts designed for joining structural elements [14]. In addition to corrosion resistance, an important parameter of fasteners is the friction coefficient. If the friction value is too low, there is a potential risk of self-loosening of the joint. If the friction coefficient is too high, there is a risk of too low clamping forces, resulting in joint failure due to incomplete tightening or complete fracture of the bolt.
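The clamping-force trade-off described above can be illustrated with the widely used short-form torque-tension relation T = K·F·d, where K is the empirical nut factor that lumps thread and under-head friction. This sketch and its numbers are illustrative, not taken from the paper:

```python
def preload_from_torque(torque_nm, nut_factor, diameter_m):
    """Estimate bolt preload F (N) from tightening torque via T = K * F * d."""
    return torque_nm / (nut_factor * diameter_m)

# Illustrative numbers: an M10 bolt (d = 10 mm) tightened to 40 N*m.
# A low-friction coating (K ~ 0.12) reaches a much higher preload than a
# high-friction surface (K ~ 0.30) at the same applied torque.
f_low_friction = preload_from_torque(40.0, 0.12, 0.010)   # ~ 33.3 kN
f_high_friction = preload_from_torque(40.0, 0.30, 0.010)  # ~ 13.3 kN
```

The factor-of-two-plus spread in preload at identical torque shows why the coating's friction coefficient matters as much as its corrosion performance for fasteners.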
The requirement of a proper thread match between the bolt and nut limits the application of hot-dip zinc galvanizing of bolts, especially those with a small diameter. However, in some cases high-temperature (about 535 °C) hot-dip zinc galvanizing [17], which allows removal of excess zinc from the surface of the bolts, is applied; such a treatment temperature, however, can cause issues with the bolt material: steel tempering and loss of mechanical properties. As an alternative, electro-galvanizing, lamellar or thermal diffusion processes can be applied. According to Kania [14], the corrosion resistance of electro-galvanized bolts decreases quickly due to the small coating thickness. Moreover, the application of this method may result in contamination of the natural environment and potential hydrogen embrittlement of the steel [18][19][20]. Although lamella zinc technology is increasing its market share, especially in the area of fasteners (bolts, screws, nuts, springs, etc.), comparative tests of coatings (hot-dip, galvanic and lamellar) conducted in SO2 and NaCl environments showed that the hot-dip galvanized coating has the best anticorrosion properties [21]. Thus, because better results are reported even when zinc coatings on bolts are applied by sherardizing [14], in this paper the properties of hot-dip and thermal diffusion zinc coatings are compared. The anticorrosion and tribological properties of hot-dip and sherardized zinc coatings depend on the microstructure observed on the coating cross section (Figure 1).
Figure 1. The microstructure of the zinc galvanizing coatings in relation to the Fe-Zn system: (a) hot-dip [22], (b) thermal diffusion [23], (c) Fe-Zn equilibrium system [24,25].

According to the Fe-Zn diagram [24][25][26] (Figure 1c), there are three phases occurring in the hot-dip zinc coating, Г (Fe3Zn10), δ (FeZn10, FeZn7) and ζ (FeZn13), as well as a solid solution of iron in zinc, η, which is formed on the outer surface as the element is pulled out of the bath (Figure 1a). The current model [22] suggests that the sequence of the zinc coating growth is as follows: first, the Г1 phase is observed; next, within a few seconds, a sublayer of the compact phase δc and the palisade phase δp is created. There are many factors that can influence the reactivity of steel (the quality/roughness of the galvanized surface [27][28][29][30][31], the kind of galvanized material [32][33][34], the alloying elements added to the zinc bath [34][35][36][37][38][39], the metallurgical process parameters [40][41][42]) and thereby change the microstructure of the zinc coating. The coating microstructure obtained after thermal diffusion is similar to that of the hot-dip zinc coating, but there is no η phase (Figure 1b) [43,44]. However, there is also some controversy regarding the coating structure. Evans [23] claims that the outer layer is a mixture of ζ and zinc, with 7-10% iron content in the form of FeZn7. The second alloy layer, δ, contains 25% of iron in the form of Fe11Zn40. The inner layer forms a Г phase with 50% iron content. According to Jiang [1], the sherardized coatings are composed of a loose outer layer (ζ-FeZn13 phase) and a dense inner layer (δ-FeZn7 phase) with higher hardness. Konstantinov [45] reports a two-phase structure (Г + δ). On the other hand, Wortelen [44] stated that after sherardizing the coating structure is composed of Г, Г1, δ1 and ζ.
Furthermore, an investigation conducted by Kania [14] confirmed the presence of the Г1 (Fe11Zn40) and δ1 (FeZn10) phases, although according to the Fe-Zn equilibrium system, the Г and ζ phases are also stable. The tribological properties of the zinc coating are closely correlated with its microstructure and result from the properties of the phases visible in the coating cross section. The chemical formulae and the hardness values available in the literature for the Fe-Zn intermetallic phases of the hot-dip and thermal diffusion zinc coatings are presented in Table 1. Zinc coatings (hot-dip, galvanic, lamellar, sherardized) show considerable differentiation of hardness [16,48,49]; the lowest values (50 HV) are measured after hot-dip galvanizing. Tribological properties are in direct correlation with the hardness and microstructure of the applied coating. Thus, different methods are used to improve the wear resistance of zinc coatings by increasing their hardness. In article [50], heat treatment was applied to increase the wear resistance of a hot-dip (HD) zinc coating. The coating structure formed after the conducted experiment was similar to that observed in the thermal diffusion coating, i.e., there was no pure η phase and the created coating was composed of the δ and ζ phases. As a result of the structure changes, the hardness of the coating increased fivefold, to values close to those measured in the case of thermal diffusion (TD) coatings. Considering the above analysis, the aim of this paper was to compare the wear resistance of the TD zinc coating to classic and heat treated HD coatings. Additionally, the experiment was focused on determining the relation between the coating's microstructure and the measured friction coefficient value, and on the possibilities of adjusting it to the requirements.
Materials and Methods

During the investigation, the pin-on-disc test was applied to measure the changes of the instantaneous and average values of the friction coefficient on the zinc coating cross section in the friction pair (zinc coating/steel pin). The applied test also allowed the rate of wear of the tested coatings to be determined [51,52]. The tribological investigations with the T11 device consisted of testing the steel pin/zinc coating couple in dry friction conditions and calculating the friction coefficient. To conduct the experiment, the surfaces of the tested disc-shaped samples were subjected to friction with a Ø 4 steel rod, under a constant load of F = 9.8 N, which moved in circles on the surface of the samples at a rate of n = 45 rotations/min for a duration of 30 min. The friction coefficient was measured every 0.2 s. The thermal diffusion process was conducted in industrial conditions according to EN ISO 17668:2016-04 [53], in a mixture of zinc powder (99% Zn, 0.009% Pb, 0.006% Cd, <0.005% Fe, average grain size 3-4 µm), in rotary chambers rotating at a rate of 5-10 turns per minute, at a temperature of 400 °C, for a period of 4 h. Before TD zinc deposition, the surfaces of the disc-shaped samples were prepared in different ways: ground with sandpaper of gradation 30 (TD30), 60 (TD60), 120 (TD120) or 240 (TD240), sandblasted (SB) or turned (T). The samples used for comparison were hot-dip galvanized (marked as untreated, HDUT) according to EN ISO 10684 [54]: a process of etching in 12% HCl, fluxing and dipping in a Zn bath with Al (0.002%), Bi (0.055%) and Ni (0.058%), at a temperature of 460 °C for 1.5 min, followed by cooling in water. In addition, heat treated HD galvanized samples were used for comparison (HDHT; temperature T = 430 °C, τ = 7 min [50]).
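A few quantities follow directly from the test parameters above (sampling every 0.2 s for 30 min at 45 rotations/min). The short sketch below derives them; note that the wear-track radius is not stated in the text, so the value used for the sliding distance is purely hypothetical:

```python
# Sketch: derived quantities for the pin-on-disc test described above.
# The wear-track radius is NOT given in the paper; r_track below is a
# hypothetical value used only for illustration.
import math

rotation_rate = 45           # disc rotations per minute
duration_s = 30 * 60         # 30 min test, in seconds
sampling_interval_s = 0.2    # friction coefficient sampled every 0.2 s

n_samples = int(duration_s / sampling_interval_s)
n_revolutions = rotation_rate * duration_s // 60

r_track = 0.012  # m, hypothetical wear-track radius (not stated in the paper)
sliding_distance_m = 2 * math.pi * r_track * n_revolutions

print(n_samples)       # 9000 friction-coefficient readings per test
print(n_revolutions)   # 1350 revolutions
print(round(sliding_distance_m, 1))
```

With these parameters, each test yields 9000 instantaneous readings, from which the average friction coefficient can later be computed.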
The following parameters were analysed during the investigations: the wear resistance (disc-shaped samples' weight loss) and the friction coefficient (T11 pin-on-disc tester); the microstructure of the zinc coating and steel, using an Axiovert 100 A optical microscope (Zeiss Group, Oberkochen, Germany) and an EVO 25 MA Zeiss scanning electron microscope with an EDS attachment (Zeiss Group, Oberkochen, Germany); and microhardness changes in the cross section of both the coating and the subsurface layer of steel (Vickers HV 0.02, Mitutoyo Micro-Vickers HM-210A device 810-401 D, Mitutoyo Corporation, Kanagawa, Japan). Additionally, the surface roughness was measured using an optical Phase View ZeeScan system (Phase View, Paris, France). The test samples were carefully prepared in order to avoid overheating and spalling during cutting (hand cutting, hot embedding, grinding and polishing).

Metallographic Observations and Microhardness Distribution

The zinc coating thickness measured during the microscopic observations was verified through measurements in a wider range with the use of the magnetic induction method (an electronic PosiTector 6000 tester, DeFelsko Corporation, Ogdensburg, NY, USA). The thickness of the coating on the disc-shaped samples, after hot-dip and thermal diffusion galvanizing, was in the range of 45-55 µm. The TD zinc coating morphology presented in Figure 2 is in accordance with the literature data [22][23][24]. It is very difficult to distinguish between the different phases in the coating using SEM observation- Figure 2a. Only the linear and point EDS analysis shows the existence of two areas- Figure 2b, Table 2. The outer layer has a higher Zn content than the inner one adjoined to the basis metal-steel.
Taking into account that the first point of the EDS analysis was several microns away from the outer surface, and the trend in the course of the Zn linear analysis, which was clearly downward near the surface, it may be assumed that a δ phase (FeZn7 or FeZn10) was present in the outer layer [16,45,46]. The chemical composition of the zone close to the basis metal suggests that a Г1 phase is located in this area [14,16,46]. The tribological properties (weight loss, friction coefficient) of the TD coating were compared to analogical data concerning HD and heat treated HD coatings. The classical HD coating structure is composed of four phases, whereas in the structure after heat treatment, three phases are visible (the η phase is missing). The typical microstructure of a tested HD zinc coating, formed by the phases η, ζ, δ and Г1 [8], is shown in Figure 3a.
Most of the data [22,55] confirm that there are only three phases in a HD zinc coating after heat treatment: Г (23.5-28.0 wt% Fe), δ (7.0-11.5 wt% Fe) and ζ (6.0-6.2 wt% Fe)- Figure 4 [46,56]. During the heat treatment, the δ and Γ phases grow at the expense of the ζ phase [57], and at higher temperatures, the ζ layer disappears and in its place the δ phase grows, reaching to the surface of the coating [24].
The conditions for the growth of the individual phases here are similar to the TD process, but the Zn amount in the coating is constant and the coating thickness is stable during the treatment [50]. Microscopic examinations (both optical- Figure 5-and scanning microscope- Figures 2a and 4a) showed that the outer sublayer of the TD coating was slightly cracked and porous to a depth of 10 micrometres, whereas there were no discontinuities, porosities, cracks or surface degradation visible as a result of the conducted heat treatment of the hot-dip zinc coating.
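The phase composition ranges quoted above can be turned into a simple lookup. The sketch below is only an illustration of that idea, using the Fe wt% ranges given for a heat treated HD coating; the `classify_phase` helper and its boundary handling are assumptions for illustration, not part of the paper:

```python
# Sketch: classifying a Fe-Zn phase from a local Fe content (wt%), using the
# composition ranges quoted above for a heat treated HD zinc coating.
PHASE_RANGES = {
    "Г": (23.5, 28.0),  # gamma phase
    "δ": (7.0, 11.5),   # delta phase
    "ζ": (6.0, 6.2),    # zeta phase
}

def classify_phase(fe_wt_percent):
    """Return the phase whose quoted Fe-content range contains the value."""
    for phase, (lo, hi) in PHASE_RANGES.items():
        if lo <= fe_wt_percent <= hi:
            return phase
    return None  # outside all quoted ranges

print(classify_phase(25.0))  # Г
print(classify_phase(9.0))   # δ
print(classify_phase(6.1))   # ζ
```

Such a mapping could, for instance, be applied point by point along an EDS line scan to label the sublayers of the coating cross section.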
The profiles of the hardness changes on the cross sections of the zinc coating and subsurface steel area are presented in Figure 6. Analysis of the obtained results showed that there are no essential differences between the measured hardness values of TD coatings deposited on steel surfaces with various surface conditions. The hardness values in the coatings' outer layer were in the range 370-385 HV 0.02, while the values measured in the inner layer were in the range 325-345 HV 0.02.
The downward trend in hardness changes was observed over the entire cross section of the coatings. The highest average hardness values were measured in the outer coating layer for the surfaces ground using sandpaper with gradation 30 and turned (385 and 383 HV 0.02). The lowest hardness values over the entire cross section of the coating were measured for the steel surface ground with sandpaper with gradation 240 and sandblasted. The presented hardness changes are due to the changes in the microstructure (Г1 + δ, suggested by the results of the EDS analysis), caused by diffusion of iron from the steel surface into the coating (Figure 2, Table 2). The hardness values measured by Pokorny [25] show that the δ phase is generally about 10% harder than the Г phase; the obtained hardness values of the δ phase were even in the range 330 to 460 HV. According to data [16,46], the δ phase in the TD coating is harder by about 15% than the Г phase. In the analysed results of the current study, the difference is within the range 10-12%, but the coating micro-cracks may affect the measured hardness values and can have a decisive importance here. For the compared samples (hot-dip galvanized: untreated (UT) and heat treated at 430 °C (HT430 °C)), the results are consistent with the literature data concerning the individual phases that were obtained (Figure 6b) [50,58]. The outer area of the TD coating is 300 HV 0.02 harder than the analogical layer of the HD UT sample and 90 HV 0.02 harder than the HD HT430 °C coating. The structure observed in the compared coatings (Figures 2-5) deposited on the tested disc-shaped samples corresponds well with the measured microhardness distribution in the coatings and the cross section of the subsurface steel layers.
The steel area close to the zinc/steel surface is slightly softer (in comparison to the UT sample), as a result of overheating. At a distance of 75 µm from the steel surface, the measured average hardness values were as follows: 195 (for HD UT samples), 165 (TD samples) and 175 HV 0.02 (HD HT430 °C samples).
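As a rough cross-check of the relations quoted above, the midpoints of the reported hardness ranges can be compared directly. This is only an illustrative estimate (the use of midpoints is an assumption; the paper's 10-12% figure presumably comes from paired measurements):

```python
# Sketch: rough cross-checks of the hardness relations quoted above,
# using midpoints of the reported ranges (HV 0.02).
outer_td = (370 + 385) / 2   # TD outer layer (delta-rich), midpoint
inner_td = (325 + 345) / 2   # TD inner layer (gamma-rich), midpoint
eta_hd_ut = 55               # soft eta phase of the untreated HD coating

# relative hardness difference between outer and inner TD layers
rel_diff = 100 * (outer_td - inner_td) / inner_td
print(round(rel_diff, 1))    # ~12.7%, near the upper end of the quoted 10-12%

# difference between TD outer layer and the HD UT eta phase
print(outer_td - eta_hd_ut)  # 322.5, consistent with "300 HV 0.02 harder"
```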
Friction Coefficient Measurements

The coatings showed higher abrasion resistance with increasing initial steel surface roughness, which was reflected by a reduction in weight loss (Table 3, Figure 7). The difference was particularly significant in the case of the TD30 and TD60 samples. The difference in weight loss between the heat treated HD and TD samples was very small (max. 0.004 g), whereas the weight loss of the HD UT zinc galvanized samples was 4-7 times higher with reference to both the TD and HD HT samples.

Table 3. The roughness and friction coefficient measured on the surface of the disc samples (TD30, TD60, TD120, TD240, SB, T and HD).

The base steel surface roughness also exerts an influence on the formed thickness of the zinc coatings. The coating thickness increases with the increase of steel surface roughness as a result of the higher reactivity of the basis metal (Figure 7b). The highest increment of the coating thickness was observed when comparing the TD240 and TDSB coatings (the biggest difference in roughness). The comparison of the investigated coatings' appearance observed after the "pin-on-disc" test is presented in Figure 8. The assessment of the external appearance of the TD coatings on the "macro" scale revealed that there are no visible cracks and discontinuities on the surface of the samples (before the friction test). The presence of cracks in the upper part of the coating was confirmed only via microscopic observations (Figures 2a and 5a), but the occurrence of transverse cracks is characteristic of the intermetallic Fe-Zn phases [14]. The coatings' colour (dark grey) is similar to that seen on the HD HT samples- Figure 8a,d. As a result of the friction test, a regular groove was rubbed over the entire circuit of the tested coatings. The friction products formed on the coating surface during the test had a "coarse powder" shape, with granularity depending on the steel surface development (higher roughness-coarser grains).
The HDUT coating was much lighter and the rubbed-away particles were shaped like flakes, up to 0.4 mm long (Figure 8c). This confirmed the higher plasticity of this coating in comparison to the TD and HDHT coatings.

Figure 7. The disc-shaped samples' weight loss after the pin-on-disc friction test (a) and comparison of the thickness of the TD and HD zinc coatings (b).

Figure 9 shows the course of changes in the instantaneous values of the friction coefficient of the TD zinc coatings. In the process of friction, three main stages can be distinguished. In the initial period of cooperation, the friction coefficient increased rapidly and, after forming a contact, dropped down to the value 0.18-0.24. In the second stage, its value gradually increased and finally stabilized (third stage) in the range of 0.27-0.43. The average values of the obtained friction coefficient were in the range 0.20-0.39 (Table 3). The measured friction coefficient values correspond well with the microhardness and weight loss trends presented in Figures 6 and 7. The TD coating layer was composed of a mixture of the δ and Г phases in different proportions. It is probable that increasing the degree of development of the base steel surface and its roughness causes both an increase in the thickness and the hardness of the coating. The above changes may be caused by the increase of the steel reactivity and, in consequence, the extended range of occurrence of the harder δ phase.
Therefore, the friction coefficient of the coatings TDT, TD30 and TDSB shows lower values (0.18-0.33) for an extended period of time than the coefficient determined for the coatings TD120, TD240 and SB, which constantly shows a strong upward trend and stabilizes only after about 1000 s, at the level of 0.42-0.44.
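The average friction coefficient values reported in Table 3 are, in principle, simple statistics over the instantaneous readings (one every 0.2 s). The sketch below illustrates the idea on a short synthetic series; the `mu` values are placeholders, not the paper's data, and a moving average of this kind is merely one plausible way to smooth the curves in Figure 9 before reading off the stabilized level:

```python
# Sketch: averaging and smoothing instantaneous friction-coefficient readings.
# The mu series below is SYNTHETIC, shaped loosely like the three stages
# described in the text (initial peak, drop, gradual rise, stabilization).
def moving_average(values, window):
    """Simple moving average used to smooth instantaneous mu readings."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

mu = [0.30, 0.20, 0.22, 0.25, 0.28, 0.31, 0.33, 0.34, 0.34, 0.35]  # synthetic

mu_avg = sum(mu) / len(mu)        # the kind of average reported in Table 3
mu_smooth = moving_average(mu, 3)  # smoothed curve, as in Figure 9

print(round(mu_avg, 3))
print([round(m, 2) for m in mu_smooth])
```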
In the case of the reference samples (HDUT and HDHT), the tendency of the hardness change along the cross section of the coating is quite different (in comparison to the TD coating- Figure 6b): the hardness increases from the outer surface into the coating. In the HD UT coating, the η phase was relatively soft (about 55 HV 0.02) and was probably partially removed during the first stage of friction (grinding-in of the pin-and-disc sample contact). The appearance of a mixture of the ζ and δ phases resulted in a reduction in the coefficient of friction from the value of 0.29 (η) to approximately 0.25 (ζ + δ)- Figure 10. After the HT (430 °C), the subsurface coating layer was composed of a mixture of the η and ζ phases [53]. Therefore, the value of the friction coefficient was lower-0.28-and decreased slightly to 0.21 as the layers closer to the steel surface were rubbed.
Conclusions

(1) The method of base steel surface preparation affects the friction coefficient value, thickness and wear resistance of the TD zinc coating.
(2) In the applied test conditions, the value of the friction coefficient of the TD coating varied within the range 0.20-0.39, with a coating thickness of 44.5 to 47.5 µm, respectively.
(3) The measured friction coefficient values correspond well with the microhardness profile determined on the cross section of the TD coating and with the weight loss trend obtained during the "pin-on-disc" test. With increasing coating hardness, both the TD coating's coefficient of friction and its weight loss are reduced.
(4) Lower values of the friction coefficient were measured for samples with higher roughness of the base steel surface. The observed changes may be caused by the increase of the steel reactivity and, in consequence, the extended range of occurrence of the harder δ phase.
(5) The changes in the properties of the compared coatings are due to the differentiation in the microstructure (verified by the results of the EDS analysis), caused by the specific growth or diffusion conditions during individual coating formation.
The TD coating was composed of the δ (outer) and Г (inner) phases. The microstructure of the tested HD zinc coating was formed by the phases η, ζ, δ and Г1, whereas in the HD zinc coating after heat treatment, only three phases occurred: ζ, δ and Г.
(6) The TD coating (δ + Г) showed higher abrasion resistance (in comparison to the HD UT coating, η + ζ + δ), which was expressed in a reduction in weight loss measured during the tribological test. In the conducted test, the HD zinc coating weight loss was four times greater.
(7) The abrasion resistance of the TD zinc coating (δ + Г) is similar to that of the HD HT coating (ζ + δ + Г): the measured difference in weight loss was a maximum of 0.004 g.
Parental self-medication with antibiotics for children promotes antibiotic over-prescribing in clinical settings in China

Self-medication with antibiotics (SMA) is one of the most dangerous forms of inappropriate antibiotic use. This study aims to investigate the impact of parental SMA for children before a consultation on the doctor's subsequent antibiotic prescribing behavior, including intravenous (IV) antibiotic use, in the clinical setting of China. A cross-sectional survey was conducted between June 2017 and April 2018 in three provinces of China. A total of 9526 parents with children aged 0–13 years were investigated. Data from 1275 parents who had self-medicated their children and then visited a doctor in the past month were extracted and analyzed. One-third (410) of the studied children had received parental SMA before the consultation, and 83.9% of them were subsequently prescribed antibiotics by doctors. Children with parental SMA were more likely to be prescribed antibiotics (aOR = 7.79, 95% CI [5.74–10.58]), including IV antibiotics (aOR = 3.05, 95% CI [2.27–4.11]) and both oral and IV antibiotics (aOR = 3.42, 95% CI [2.42–4.84]), than children without parental SMA. Parents with SMA behaviors were more likely to request antibiotics (aOR = 4.05, 95% CI [2.59–6.31]), including IV antibiotics (aOR = 2.58, 95% CI [1.40–4.76]), and to have the request fulfilled by doctors (aOR = 3.22, 95% CI [1.20–8.63]). Tailored health education for parents is required in both community and clinical settings to discourage parental SMA for children. Doctors should not prescribe unnecessary antibiotics that reinforce parents' SMA behaviors. We recommend expanding the current IV antibiotics ban in outpatient settings of China to cover outpatient pediatrics.
Introduction
Antimicrobial resistance (AMR) is recognized as one of the biggest threats facing global health; inappropriate use of antibiotics, including antibiotic misuse and overuse in both community and clinical settings, is a major contributor to AMR [1,2]. Self-medication with antibiotics (SMA) is one of the most dangerous and prevalent forms of inappropriate antibiotic use, with a particularly high prevalence in low- and middle-income countries (LMICs) [3-5]. At the same time, antibiotics are the most commonly prescribed medicines for children [6], especially in LMICs, where inappropriate antibiotic prescribing is widespread [7]. A few studies have examined the association between SMA before a consultation and doctors' practices in clinical settings. A qualitative study conducted across nine European cities revealed that, on noticing that patients had started using antibiotics before the consultation, doctors would advise them to complete the course, even when they considered antibiotics unnecessary [8]. A quantitative study conducted in a Polish city indicated that doctors were more likely to prescribe antibiotics (aOR = 4.11) when patients had self-medicated with antibiotics before the consultation [9]. However, previous studies on this topic were each conducted in a single country where the prevalence of SMA and inappropriate antibiotic prescribing was relatively low, and none specifically targeted children. Our study examines the impact of parental SMA for children on antibiotic prescribing in clinical settings in China, where both SMA and inappropriate antibiotic prescribing are prevalent. In China, antibiotics are pervasively used for children in both community and clinical settings. In the community setting, 59.4% of urban and 62% of rural parents reported having self-medicated their children with antibiotics in the past year [10,11].
The antibiotics that parents used to self-medicate their children came mainly from leftover antibiotics from previous prescriptions (63.1%), and also from over-the-counter purchases (35.3%) [12]. Although the Chinese government officially banned non-prescription dispensing of antibiotics in 2004 [13], consumers in community and online pharmacies are still able to obtain antibiotics for self-medication without prescriptions [14,15]. This is mainly attributable to fierce competition in the pharmacy market, consumers' irrational expectations, the Food and Drug Administration's limited supervisory capacity, and minimal penalties for violating the regulation [16]. In the clinical setting, the average antibiotic prescription rate for children was estimated at 67.8% [17]. Moreover, among children prescribed antibiotics for upper respiratory infection (URI), 52.9% were given intravenous (IV) antibiotics [18]. Antibiotic over-prescribing is mainly attributed to doctors' perverse economic incentives, lack of knowledge, and inadequate training [19-21]. However, patient-side factors have rarely been studied. Two previous studies found that patients with better knowledge of antibiotic use were less likely to be prescribed antibiotics when seeing a doctor [22,23]. Few studies, however, have investigated the impact of parental SMA for children before a consultation on the doctor's subsequent prescribing behavior. Consequently, this study investigates the impact of parental SMA for children before a consultation on antibiotic prescribing behaviors, including IV antibiotic use, in China.

Study design and population
The data reported in this study come from a cross-sectional survey conducted between June 2017 and April 2018 in China.
Parents with children aged 0-13 years were recruited across three purposefully selected Chinese provinces (Zhejiang, Shaanxi, and Guangxi) representing different geographic areas and stages of economic development. Detailed methods, including sampling and recruitment, have been published elsewhere [12]. A representative sample of parents was obtained with a multistage stratified random cluster sampling procedure conducted in four stages: provinces, prefecture-level cities, and urban and rural areas. The sampling sites were primary schools for parents of children aged 6 to 13 years, kindergartens for parents of children aged 3 to 5 years, and vaccination sites of community health centers for parents of children aged 0 to 2 years. The total sample size was 9526; among these respondents, 1275 parents had self-medicated their children and then visited a doctor in the past month.

Measures
The items used in this study covered three main parts: 1) the sociodemographic characteristics of parents and children: parents' gender, education level, location of residence, medical education background, and average household income, and children's gender and age; 2) parental self-medication behavior (with or without antibiotics) for their children; and 3) children's clinical consultation outcomes: whether children were prescribed antibiotics, the route of antibiotics given if prescribed, whether parents requested antibiotics, and, if so, whether their requests were fulfilled by doctors.

Statistical analysis
Descriptive analyses were used to present weighted frequencies and percentages of the factors of interest. Chi-square tests were conducted to examine differences in hospital consultation experience between parents who self-medicated their children with antibiotics (SMA parents) and parents who self-medicated their children without antibiotics (non-SMA parents).
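The chi-square comparison described above can be sketched in pure Python. The 2x2 counts below are illustrative, reconstructed approximately from the percentages reported later in the paper; they are not the study's raw data, and the sketch ignores the survey weights the authors applied.

```python
# Pearson chi-square test for a 2x2 table (no continuity correction),
# comparing prescription outcomes between SMA and non-SMA groups.
# Counts are illustrative approximations, not the study's weighted data.

def chi_square_2x2(table):
    """table = [[a, b], [c, d]]; returns the Pearson chi-square statistic."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, observed in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Rows: SMA parents / non-SMA parents; columns: prescribed / not prescribed.
observed = [[344, 66], [349, 516]]
stat = chi_square_2x2(observed)
print(round(stat, 1))
```

With counts of this magnitude the statistic lands far above the 3.84 cutoff for p < 0.05 at one degree of freedom, consistent with the strong group difference the paper reports.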
Multivariable logistic regression was adopted to examine the impact of parental SMA behavior for children on the outcomes of their hospital consultation after controlling for sociodemographic factors. SPSS version 21.0 was used for the statistical analyses, with the significance level (type 1 error rate) set at 0.05.

Results
As shown in Table 1, a total of 1275 children's parents (451 from Guangxi, 438 from Shaanxi, 386 from Zhejiang) had self-medicated their children and then visited a medical institution in the preceding month. Half of the sampled children (51.6%) were boys, with an average age of 5.1 years (SD = 3.1), and 11.9% had a parent with a medical background. Most of the respondents were mothers (82.1%); 46.3% had a junior college or higher education level; half had a monthly average household income over 5000 RMB (US$769); and 58.2% lived in urban areas. Of all the children, 410 (32.2%) had been self-medicated by their parents with antibiotics before the consultation. Table 2 shows that a total of 693 (54.4%) children were prescribed antibiotics by doctors. Of the 1275 children, 448 (35.1%) were prescribed oral antibiotics only, 76 (6.0%) IV antibiotics only, and 169 (13.3%) both oral and IV antibiotics. One hundred (7.8%) parents asked for antibiotics (4.2% for oral and 3.7% for IV antibiotics) during a consultation, and 71% of them had this request fulfilled. After controlling for sociodemographic characteristics, our logistic regression yielded the adjusted odds ratios discussed below.

Discussion
To our knowledge, this is the first study to investigate the impact of parental SMA for children on doctors' antibiotic prescribing in clinical settings in China. Our results demonstrated that children with parental SMA prior to a consultation were more likely to be prescribed antibiotics, both oral and IV, during a clinical visit.
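The crude (unadjusted) association behind the headline odds ratio can be checked with back-of-the-envelope arithmetic. The counts below are reconstructed from the reported percentages (344 of 410 SMA children prescribed, and 693 − 344 = 349 of 865 non-SMA children prescribed); the paper's aOR = 7.79 additionally adjusts for sociodemographic covariates, so this is only an approximate cross-check, not a reproduction of the authors' model.

```python
import math

# 2x2 counts reconstructed approximately from the reported percentages:
a, b = 344, 66    # SMA parents: prescribed / not prescribed
c, d = 349, 516   # non-SMA parents: prescribed / not prescribed

# Crude odds ratio: (odds of prescription in SMA group) / (odds in non-SMA group).
odds_ratio = (a * d) / (b * c)

# Wald 95% confidence interval, computed on the log-odds-ratio scale.
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"crude OR = {odds_ratio:.2f}, 95% CI [{lower:.2f}, {upper:.2f}]")
```

With these counts the crude OR comes out near 7.7, close to the reported adjusted value of 7.79 [5.74–10.58], suggesting that covariate adjustment changed the estimate relatively little.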
Additionally, parents who had self-medicated their children before a consultation were more likely to ask for antibiotics, including IV antibiotics, during the consultation, and their requests were more likely to be fulfilled by doctors. Consistent with the study in Poland [9], our study showed that SMA before a consultation promoted inappropriate antibiotic prescribing in clinical settings. However, the adjusted odds ratio we estimated in China (aOR = 7.79) was nearly double that of the Polish study (aOR = 4.11). This difference might be due to our specific focus on children as the study population and the higher prevalence of SMA in China. The large difference in odds ratios between the Polish study and ours indicates that parental SMA for children before a consultation promotes doctors' over-prescribing; this deserves further study, especially in other LMICs where both the prevalence of SMA and antibiotic prescription rates are high. Our study found that SMA parents were more likely to ask for antibiotics and that their requests were more likely to be fulfilled by doctors, which helps explain how SMA triggers antibiotic prescriptions in clinical settings. Consistent with previous studies, parents who self-medicated their children with antibiotics were more likely to expect antibiotics when seeing a doctor [24]. Given the currently tense doctor-patient relationship in China [25], defensive medicine (medical practice based on fear of legal liability rather than on patients' best interests) [26] has worsened the situation [27], and it is difficult for doctors to refuse patients' requests. A qualitative study suggested that doctors would choose to fulfill patients' or their caregivers' requests for antibiotics to avoid a quarrel, even when antibiotics were unnecessary [28]. In China, IV antibiotics are frequently prescribed for self-limiting conditions [29]; in our study, 35.4% of those who had been prescribed antibiotics received IV antibiotics.
This finding is consistent with previous studies demonstrating that IV antibiotics are frequently prescribed in outpatient pediatrics in China [18]. Our results showed higher prescription rates of IV antibiotics (alone or in combination with oral antibiotics) among children with parental SMA than among children without. However, most of our participants reported self-limiting symptoms, including cold (85.6%) and sore throat (53.9%), with some overlap between symptoms, for which antibiotics were unnecessary [30]. All of these children were self-medicated by their parents prior to a consultation, yet only 32.2% of them had been given antibiotics. Therefore, differences in disease severity cannot fully explain the differences in antibiotic prescription rates. This phenomenon might partly be due to the prevalent belief among parents [31] and doctors [32] in China that IV antibiotics are more effective than oral ones. In addition, we found that requests for IV antibiotics from SMA parents during consultations could also contribute to the high rate of IV antibiotic prescriptions. To address IV antibiotic overuse and misuse in outpatient settings, several provinces and cities have been piloting a ban on IV antibiotics in the outpatient departments of secondary and higher-level hospitals since 2016, but the ban does not cover outpatient pediatrics [33,34]. We propose that, if the ban is expanded nationwide in the future, it should also include pediatrics. Our study has several implications for future policies and interventions. From the doctors' perspective, the finding that parental SMA behaviors influenced prescribing decisions suggests that future training on doctor-patient communication is needed to prevent doctors from being swayed by parental SMA behaviors before the consultation.
In addition, considering that parental antibiotic requests trigger inappropriate prescriptions, we recommend developing better communication practices between doctors and patients and cultivating among doctors an understanding of parents' antibiotic expectations. In our study, 71% of parents who requested antibiotics had their request fulfilled, yet most of the reported symptoms or diseases were self-limiting conditions for which antibiotics are not recommended [30]. Previous studies have shown that parents who expected antibiotics for their children but instead received an explanation and a contingency plan were as satisfied as those whose expectation was met [35]. Moreover, a French antimicrobial stewardship program that encouraged doctors to use the consultation to explain antibiotic safety and to establish a trusting doctor-patient relationship resulted in a reduction in antibiotic prescriptions [36]. A systematic review indicated that parents expect antibiotics because they believe antibiotics are effective for treating their children's illness, relieving their symptoms, reducing the likelihood of re-consultation, and preventing infection of other family members [37]. Thus, it is important to encourage doctors to understand the reasons for parents' antibiotic requests and expectations and to improve doctor-patient communication through tailored antimicrobial stewardship programs. It is admittedly challenging for pediatricians to distinguish a feverish child with a self-limiting viral infection from one with a serious bacterial infection [38]; rather than refusing antibiotics outright, it has been recommended that doctors withhold antimicrobial treatment for non-severe infections in children [39]. There is currently a great opportunity to address this problem, as the Chinese government has launched a zero-mark-up policy aiming to eliminate the economic incentives for over-prescribing [40].
However, China currently faces a pediatrician shortage [41]; pediatrics has been reported as the most crowded department in many hospitals, and pediatricians are under much higher pressure from patients (i.e., children's parents) than doctors in other departments [42]. Consequently, further studies are needed to determine how to improve doctor-patient communication in China. From the patients' perspective, future interventions targeting the factors that trigger SMA and parents' antibiotic requests or expectations would help reduce inappropriate antibiotic use in both community and clinical settings. SMA in LMICs is promoted by easy access to antibiotics over the counter: unlike in high-income countries, antibiotics in LMICs are easily obtained in community settings without a prescription [5,43]. SMA is also promoted by a lack of knowledge about antibiotic use [44], previous recovery experiences [45], and keeping antibiotics at home [46]. Consequently, community campaigns targeting patients and the general public that seek to eliminate these factors would help lower the rate of inappropriate antibiotic prescribing in clinical settings. Moreover, with respect to the doctor-patient relationship, previous prescriptions have been shown to be one of the triggers of SMA [45]; doctors should therefore not prescribe unnecessary antibiotics that reinforce parents' SMA behaviors. Our study revealed that parental SMA behaviors influenced doctors' prescribing decisions. However, to the best of our knowledge, there are no clear clinical guidelines on how doctors should intervene when interacting with patients who overuse or misuse antibiotics at home, leaving doctors in a vulnerable and uncertain position when facing SMA patients or parents.

Limitations
The findings of this study should be evaluated in light of its limitations.
First, we did not ask about the frequency of antibiotic prescription during consultations, so the overall and IV antibiotic prescription rates might be underestimated for children who had been taken to more than one health facility by their parents. Second, we could only determine the number of prescriptions that contained both oral and IV antibiotics, without knowing whether these involved a switch from oral to IV, a switch from IV to oral, or simultaneous oral and IV use. Third, our study relied on self-reporting, which may be subject to recall bias; to limit this, we asked about children's symptoms within the last month, whereas previous studies have asked about symptoms within the last six months or even the last year [10,11,44,45].

Conclusions
Parental SMA for children before a clinical consultation was significantly associated with subsequent antibiotic prescriptions by doctors. Additionally, parents who had practiced SMA for their children were more likely to ask for antibiotics, including IV antibiotics, during consultations. Tailored health education for parents is required in both community and clinical settings to discourage parental SMA for children. Future antimicrobial stewardship programs should target both doctors and parents through tailored health education and the promotion of doctor-patient communication. Doctors should not prescribe unnecessary antibiotics that reinforce parents' SMA behaviors. We highly recommend expanding the ban on IV antibiotics in outpatient settings to cover outpatient pediatrics.

Abbreviations
AMR: antimicrobial resistance; SMA: self-medication with antibiotics; LMICs: low- and middle-income countries; URI: upper respiratory infection; IV: intravenous; SMA parents: parents who self-medicated their children with antibiotics; non-SMA parents: parents who self-medicated their children without using antibiotics; SD: standard deviation; OR: odds ratio; aOR: adjusted odds ratio
Racialised regimes of remembrance: The politics of trivialising and forgetting the murders of Black children in Brazil This article starts from the notion of collective memory as a source of power and meaning and draws on the concept of activist memory to reflect on the existence of a racialised regime of memory in Brazil. Considering the social struggles involving Black people and the decades of fighting for voice and justice, this investigation examines media practices and general public recollection of the deaths of Black children through the lens of Hall's concept of racialised regimes of representation. Employing an online survey and content analysis, this work uncovers evidence of a different set of practices for reporting and remembering the deaths of White and Black children, and considers the impact of those practices by analysing the remembrance rates in the survey.

Memory is the social phenomenon (Halbwachs, 1992) that binds people together. It is, then, more than a mere record of the past, but a 'source of both power and meaning in the present' (Simko, 2016: 458). There are many questions about the preservation of the memories of marginalised and prejudiced groups. Often ignored or depicted negatively by the official narrative, these groups end up creating their own dynamics to 'engage with personal, collective, shared and cultural memories in connective ways in order to preserve their heritage' (Garde-Hansen, 2011: 6), dynamics that eventually become part of that heritage themselves. According to Merrill et al. (2020), although early social and collective memory studies were not concerned with issues involving activism and protest, this scenario began to change in the 1980s.
Scholars in the field started to give greater emphasis to activism, turning to the analysis of counter-memories and eventually evolving to study mnemonic resilience and the 'role that memory and commemoration play within the political processes of conflict transformation, resolution and reconciliation' (Merrill et al., 2020: 3). Recently, memory activism, which we can define as the mnemonic practices involved in the construction of counter-memories by marginalised groups to challenge the status quo and offer a narrative of their own history and memories, has gained influence within the field. Marginalised groups constantly have their history told by the dominant group through narratives marked by misrepresentation, perpetuating ideas and prejudices that favour the maintenance of power, be it economic, political or social. We are now witnessing an explosion of social movements claiming these narratives back for the marginalised groups (Amaral, 2021; Custódio, 2017; de Oliveira Maia, 2017; Souza and Maia, 2016) by means of protests, counter-narratives that gain strength aided by digital tools, and counter-memories that defy the version previously considered official. In Brazil, black movements for black pride and black history, or the massive use of the hashtag #nóspornós, meaning 'us by us' and referring to favela residents, who are mostly black (Meirelles and Athayde, 2014), telling their own narratives of the events that happen inside the favelas, are examples of this desire (and struggle) to take back the narratives about their people, their culture and their home. This article engages with Hall's (1997) reflections on racialised regimes of representation and with the concept of activist memory, aiming to ascertain the existence of a racialised regime of memory and memorialisation in Brazil.
Employing surveys and content analysis of media articles, this work presents evidence of a different set of practices for reporting and remembering the deaths of Black children when compared to White children. Anchored in Hall (1997), it converses with the notion of 'letting disappear' proposed by Denyer Willis (2021), as well as the 'White fear of Black souls' (Chalhoub, 1988) and the optics of 'anti-black cities' (Alves, 2018), to navigate the intricate threads surrounding reporting and remembering the violent loss of Black children's lives in contemporary Brazil. First, presenting robust background research that sheds light on the irrefutable marginalisation of Black people in Brazil, this article goes on to present data collected through a mixed questionnaire with closed and open questions, corroborating the existence of a racialised system so profound that it extends even beyond the deaths of the individuals concerned. Complementing the questionnaire, an extensive content analysis drawing material from different sources provided further evidence for the hypothesis raised by this research, substantiating the existence of a completely different system for reporting and remembering the murders of Black children, and the impacts of these differences.

Black life in Brazil: free from the slavery whip, trapped in the squalor of the favelas
Black people in Brazil exist in a post-slavery society that still resists abandoning rooted prejudices, maintaining the status quo through power relations. Although they represent the majority of the Brazilian population (Brazilian Institute of Geography and Statistics (IBGE), 2020), and 133 years have passed since the signing of the law that freed Black people from slavery in the country, this population still suffers from the ills of an unequal society, marked by racialised regimes of representation, violence, prejudice, social invisibility and the difficulty of social and professional ascension among Black people.
Before diving further into this discussion, it is necessary to better understand how the ethnic-racial characteristics of the Brazilian population are defined. According to the IBGE (Brazilian Institute of Geography and Statistics), the main provider of geographic information and statistics in Brazil, over 56% of the Brazilian population identifies as Black or Brown (IBGE, 2020). Osorio (2003) explains that in Brazil, in most administrative records, such as birth or death records, the identification of racial belonging is done by self-attribution or, when the subject is not yet able to provide this identification (e.g. in records of birth), by hetero-attribution (usually from a family member). For research purposes, IBGE also uses self-attribution in its surveys (or hetero-attribution when a family member is responding to the others). Since the 1980s, the colour or race categories presented to respondents in those surveys are White, Black, Brown, Yellow and Indigenous; in this classification, according to the interviewers' manual, Brown refers to those who have some miscegenation, while Yellow refers to those of Asian descent and Indigenous to those who descend from Brazilian native people. The main critique of that classification is that in a country where Black people still suffer discrimination and have greater difficulty in social mobility, there is a tendency to reject identifying as Black, inflating the numbers of those who declare themselves to be Brown, a group that suffers less discrimination than Blacks in Latin America (Telles and Lim, 1998), or even White: 'in light of the prevailing Whiteness ideal, it is to be expected that people who have fewer black traits in their appearance tend to consider themselves White' (Osorio, 2003: 13). 
On that note, it is important to highlight that, although that reality still prevails, the recent strengthening of black movements in the country has been increasing the identification of racial belonging among Black and Brown people, through black pride movements and the reconnection with their origins and traditions. This change in the perception and identification of the Black population has already been noticed in official records (contributing to the recent growth of these groups in the statistics) and becomes even more evident in the strengthening of a racial identity subjectively constructed and perceived both by the subject and by society. Despite the country's claims to an apparent 'racial democracy', the Black population 'continues to be marginalized economically and socially', suffering from dehumanisation in favour of maintaining the present state of affairs, which perpetuates a 'logic of class and colour' in a society of 'dominant (European Whites) and dominated (Black, Indigenous and mestizo)' (da Silva, 2014: 14). The country's cities then become the place for 'a racial project produced through a dialectic relation of "terror and civility" represented by the Black threat and "endangered civil society"' (Alves, 2018: 3), in which, to defend and maintain the desired order, a 'permanent urban warfare against Black Brazilians' is waged, orchestrated by an 'essentially anti-black' civil society (Alves, 2018). The Black population is, for example, the biggest victim of police violence in the country (over 70% of the victims are Black, according to the last report from the Fórum Brasileiro de Segurança Pública (2019)). Black and Brown people are the majority in the favelas and peripheries (Meirelles and Athayde, 2014) and also account for 75% of the poorest in Brazil, while Whites account for 70% of the richest (IBGE, 2019).
Although 133 years have passed since the end of legal slavery in Brazil, the Black and Brown population still occupies the majority of manual and domestic work positions. They are underrepresented in higher education, corresponding to only 35% of the university student body in Brazil (Bermúdez, 2020), and remain 'under a cultural subjection', suffering, for decades, from a lack of opportunity and voice to narrate their own stories, resulting in 'the deep pain of perceiving themselves marginalised by the history constructed by the dominators' (da Silva, 2014: 15). Recovering the notion that memory is a source of power and meaning (Simko, 2016), the manipulation of collective memory can be considered a tool for maintaining structures of power and domination in modern societies. Foucault argued that memory 'is actually a very important factor in struggle . . . If one controls people's memory, one controls their dynamism'.1 Le Goff (1992) argued that collective memory is an important asset in power struggles and stated that the groups that dominated historical societies were concerned to 'make themselves the master of memory and forgetfulness' (p. 54), and that 'things forgotten or not mentioned by history reveal these mechanisms for manipulation of collective memory' (Le Goff, 1992). Reflecting further on this idea of memory as a source of power and meaning (Simko, 2016) and as a tool for consolidating collective rights (Bizello and Ferreira, 2010), one can question whether the Brazilian national memory, 'synonymous with official history', is in fact representative of 'all social groups' (Bizello and Ferreira, 2010: 259) or whether it operates, in fact, a racialised regime of remembrance. Slavery-based ideals are still very present in the Brazilian social hierarchy. Officially abolished in 1888, the end of slavery did not bring immediate changes to the situation of Black Brazilians.
With the transition from enslaved Black labour to White (mainly European) labour, emancipation meant that the Black population was no longer enslaved without, however, being offered real opportunities for integration into society. Jobless and still seen as inferior by White Brazilians, Black people faced ever greater segregation and social invisibility, being pushed to live on the margins and becoming victims of poverty, criminalisation and violence. When we consider this panorama in analysing the recognition of the importance of Black memory in the country over the years, there is ample evidence of the state's disregard for its preservation. Bizello and Ferreira (2010) offer strong evidence of this neglect when pointing out that the preservation of the memory of Black people in Brazil was anchored in social movements fighting against racism. The memory of Black people on Brazilian soil is practically devoid of written documentation (since the enslaved did not produce documentation), being based almost exclusively on official records in which they appeared as merchandise and the focus was on the transaction (purchase, sale, inheritance, etc.) rather than the individual. The strengthening of groups and associations of Black people across the country, united mainly in the fight for rights, increased document production, and at this point some form of organisation and preservation initiative from the state would be expected. What was observed, however, was that 'the custody and preservation of that same documentation suffer from the neglect of the state' (Bizello and Ferreira, 2010: 262), resulting in the erasure of Black memory in the country. The signs of a system that discriminates by race in deciding who deserves to be celebrated are abundant when we analyse Black history in Brazil. One piece of evidence is the current school curriculum.
It does not contemplate the history of Black Africans prior to their being brought to the country as slaves and is limited to a shallow presentation of the Brazilian slave system, representing Black people as passive figures within it and placing little emphasis on Black resistance and the counter-narratives that already existed at the time (de Oliveira, 2012). Although Law 10.639/2003 mandates the inclusion in textbooks of content about African and Afro-Brazilian history and culture for elementary school students, this inclusion remains quite superficial and is not yet capable of changing the Eurocentric focus of current didactic material. This is clearly observed when we reflect, for example, on the fact that there are many relevant but little-known Black names in the country's history, such as Luís Gama, the poet and journalist responsible for the liberation of more than 500 enslaved people, or Dandara, who participated in the Black resistance at Quilombo dos Palmares alongside her husband Zumbi dos Palmares, one of the rare Black personalities celebrated in the country; yet the celebration of these names is scarce and disproportionate, and holidays celebrating milestones and names in Black history are still exceptions (Bizello and Ferreira, 2010). In the progress of modern society, Black people have been excluded from even the simplest forms of tribute and remembrance. Even in the preservation of urban structures and landmarks of Black history, dozens of points of interest have been completely abandoned by the state. Martins and dos Santos Júnior (2017) argue that 'what is chosen to be preserved is part of a project on identity and collective memory' (p. 37), and in Brazil even street names reveal the neglect of Black history, as Black personalities are systematically forgotten in this form of historical perpetuation.
This systematic forgetting of Black personalities and Black history can be understood through the concept of 'unthinkable history', which Trouillot (2015) develops to discuss history and silence. Drawing on Bourdieu's notion that the unthinkable is 'that for which one has no adequate instruments to conceptualize' (Trouillot, 2015: 82), Trouillot constructs a narrative to explain the general silence about the Haitian Revolution, 'the most important slave insurrection in recorded history' (p. 72). The author argues that at the time a slave insurrection was so unthinkable, so unlikely, that contemporary scholars and philosophers lacked the intellectual resources to deal with those events and thus remained mostly silent: The Haitian Revolution thus entered history with the peculiar characteristic of being unthinkable even as it happened. [. . .] reveal the incapacity of most contemporaries to understand the ongoing revolution on its own terms. They could read the news only with their ready-made categories, and these categories were incompatible with the idea of a slave revolution. (Trouillot, 2015: 73) This inability to comprehend a slave revolution for freedom, according to the author, was anchored in the deeply rooted idea of the inferiority of Black people and their supposedly indisputable obedience. Those who rebelled were seen as exceptions, ill-adjusted specimens, deviants. Accepting the idea of a mass rebellion against slavery, Trouillot explains, was to 'acknowledge the possibility that something is wrong with the system' (Trouillot, 2015: 84), which, of course, would not benefit the planters and slave masters in the Americas. Therefore, the idea of a slave insurrection was not only unthinkable even as it happened but also deemed too damaging to the status quo to be acknowledged, and was thus relegated to silence.
Here we can draw a parallel between the silence discussed by Trouillot and the erasure of Black history and memory in Brazil. As the Black population remains a source of cheap labour today, owing to the difficulty of social ascension linked to structural racism and to cities that are essentially 'anti-black' (Alves, 2018), there is a profound disinterest in telling these stories and celebrating these personalities, since the silence compounds the difficulty of social mobility and benefits the elites. The contemporary elite interested in continuing to exploit the cheap labour of the descendants of the enslaved, who today populate the favelas and suburbs, resembles the planters cited by Trouillot (2015): faced with having to recognise a rebellion for freedom and validate the idea that there was something wrong with the slavery system, they preferred to remain silent. Both recognising the structural racism that plagues the country and lies at the root of problems such as state violence, and making room for Black history by recognising relevant Black personalities in Brazilian national history, would open up opportunities to disturb the social system currently in place, from which these elites largely benefit. Furthermore, it is necessary to consider the state's perspective in (not) recognising the Black genocide occurring in the favelas and suburbs, as recognising these stories (and safeguarding these memories) would also mean acknowledging that there is something wrong with the system itself; just as mundane disappearances are convenient for the state (Denyer Willis, 2021), so is ignoring state violence against Black bodies and relegating it to oblivion. Considering the relevance of collective memory in social and political relations, it is also important to reflect on the process not only of erasing Black heroes from the history of Brazil but also of barbarising Black people in general.
On that note, Amaral (2021) writes that during slavery, and soon after its abolition, the prevailing thinking about life in Africa was of pure barbarism, opposed to the sense of civility attributed to the Americas and Europe: The discourse, to a certain extent used to justify the enslavement of the African Black people (according to Hall (1997) based on the work of Frederickson), was that Black people lived in Africa in complete disorder, cannibalism, and savagery. (Amaral, 2021: 57) Hall (1997) himself pointed out that even the philosopher Hegel declared that 'Africa was "no historical part of the world . . . it has no movement or development to exhibit"' and that in the nineteenth century 'Africa was regarded as "marooned and historically abandoned . . . a fetish land, inhabited by cannibals, dervishes and witch doctors"' (Hall, 1997: 239). This notion reverberates in the dialectical representation of the city described by Alves (2018), with its opposition between 'terror and civility' (in which Black people are the terror; Alves, 2018: 3), and it also echoes Chalhoub (1988), who explains the 'White fear of Black souls' through which the White Brazilian population attributed chaos and danger to Black and poor people and constantly positioned itself as defending civility against their supposed barbarism. Although recent laws aim at an anti-racist education, in practice the school remains a place that perpetuates the view of Black people as descendants of slaves (rather than of enslaved people), represented by 'stereotypes of ugliness, rudeness, ignorance, primitivism and aggressiveness' (Madureira, 2020). This representation maintains social divisions which determine that White lives are worth more than Black lives and that 'some people are more human than others' (Costa, 2020).
The manipulation of collective memory through the neglect of Black history plays a central role in this division, which hinders the social rise of Black people in Brazil. Because people without memory do not know their own history, and risk becoming devoid of meaning and power, this neglect directly affects their ability to enter political and economic sectors that were hitherto typically White, thus serving to maintain a status quo that benefits from cheap labour and unchallenged political power. Having established the racial discrimination in the perpetuation of Brazilian national history, the erasure of Black participation in that history (Moreira and Pereti, 2020), whether as an asset in its construction or as a liability in being its victim, and, finally, the impact of these practices of erasure on social and power relations, we move on to the primary reflection of this article: whether a racialised regime of remembrance is in place in the country. The existence in Brazil of a hegemonic representation structure that reduces non-Whites to stereotypes 'fixed in Nature by a few, simplified characteristics' (Hall, 1997: 247), a practice Hall calls 'racialized regimes of representation' (Hall, 1997), is very clear (Amaral, 2021). This regime employs strategies to establish the 'differences' between Whites and Blacks, to determine what is 'normal and acceptable' and then to 'exclude or expel everything which does not fit, which is different' (Hall, 1997: 258). The existence of such practices of representation in the mainstream media has been addressed recently (Amaral, 2021; Custódio, 2017), and the fact that we are discussing an intrinsically racist society (de Oliveira, 2012: 82) leads this study to propose an extension of Hall's definitions and to argue for the existence of a racialised regime of remembrance in Brazil.
Such hegemonic practices dictate not only who should be celebrated (mostly Whites) but also who deserves even to be remembered, and how they should be remembered (if not completely forgotten). They contribute to the perpetuation of historical violence against Black and poor people, which is never properly addressed and is consequently quickly forgotten along with those who suffer it. This continues a cycle of prejudice, exclusion and abuse that reinforces the social invisibility and dehumanisation of Black and Brown people in Brazil, strengthening legacies of pain and suffering as well as preventing their participation in the democratic arena and hindering their chances of claiming rights (Capriglione, 2015). Police violence, for example, is among the biggest public security crises in Brazil, yet there is a gap in memory studies knowledge related to it. Existing studies on memories of state violence in Brazil focus mainly on the period of the country's military dictatorship (1964-1985); little is produced on memories of police violence during democratic times. Not coincidentally, Black people are currently the overwhelming majority of victims of police violence, raising the question of whether this lack of interest is linked to socio-racial issues, and leading us to ask: how are we remembering the violence of democracy in the anti-black cities (Alves, 2018) of Brazil?

Media and memory - mediating our future recollections

In the present time, it is essential to consider the role of the media in the recording and circulation of memories. Garde-Hansen (2011) argues that 'our engagement with history has become almost entirely mediated' (p. 1) and that 'media and events of historical significance are inseparable' (Garde-Hansen, 2011); we can therefore recognise that the media, in all its forms, is responsible for reporting, archiving and disseminating memories.
The role of the media in building our memories becomes tangible when one thinks of significant events in recent history, such as the 9/11 attacks on the Twin Towers. Most of those old enough to have followed the event as it happened through news coverage will, when referring to it, remember the image of the second plane crashing into the tower. That footage, caught by live cameras, was replayed globally over and over, and even today it remains one of the most striking images of the attack. The power of the media is thus clear: not only to report but, by selecting and prioritising images and information, to imprint the same image on the collective imagination, practically programming collective memories framed by news coverage. We can also consider that this power grows the less global the event is, since local events tend to attract less attention than global ones and fewer people come forward to challenge the media's version, giving it greater power to write history without counter-narratives arising. Counter-narratives are a vital tool for misrepresented and disadvantaged groups (Amaral, 2021), and recently the conviction of the police officer who killed George Floyd raised yet another debate: what if the assault that led to Floyd's death had not been caught on video? Protesters in North America were seen holding this question written on posters during marches for justice for Floyd. Turning the discussion back to Brazil, recent cases have shown the strength of raw footage as evidence in cases of police violence (Amaral, 2021). In 2015, for example, police officers fired without warning at unarmed youths who were playing in the favela of Palmeirinha, in Rio de Janeiro. One of them, Alan Lima, just 15 years old, died; the other, Chauan Cezario, 19, was shot in the chest but survived and was arrested on the spot.
The police officers' version was that the young men were local drug dealers who had shot at the police vehicle, which responded with fire in self-defence (Amaral, 2019). Alan, however, was recording a video with his cell phone and ended up filming his own death. The video, which went viral on social media, shows a group of unarmed youths talking and riding their bikes when, suddenly, the police arrive shooting, killing Alan and wounding Chauan. There is no evidence in the video that the boys were involved in any wrongdoing; they certainly did not attack the police, nor did they have guns. This video became a key part of the investigations, playing a central role in the release of Chauan from police custody, in the redemption of the boys in the mainstream media (which had previously published the official version offered by the police, portraying the victims as criminals) and, finally, in the arrest of the police officers involved. But what if the events had not been caught on video? Statistics clearly show that cases of police violence against Black and poor people in Brazil tend to be ignored by the public, misrepresented by the media and overlooked by the justice system. In Rio de Janeiro, which has one of the highest numbers of police killings in Brazil, only 3.7% of these killings (registered as 'resistance followed by death' or 'homicide resulting from opposition to police intervention') produced a lawsuit in 2015 (Misse, 2011), and 'out of a total of 220 investigations of police killings opened in 2011 in Rio, after four years, only one case led to a police officer being charged' (Amaral, 2019: 168). This contributes to most cases of police violence being forgotten by society, that is, when they reach any visibility at all beyond the movements of mothers who fight for justice and memory.
In a society in which crimes against Black people are routinely overlooked (over 75% of homicide victims in the country are Black, and Black women accounted for 68% of all murdered women (Cerqueira et al., 2020)) and in which ingrained practices and prejudices teach that some lives are worth more than others, it cannot be surprising that the deaths of White middle-class children cause more commotion than the deaths of Black and poor children.

Methodology, data and results

For this study, an online survey was conducted consisting of closed questions with only 'yes' or 'no' answers to the question 'Do you remember (name of the child)?', each followed by an open question in which the respondent was asked to give details of what they remembered about that specific child. The online survey was answered by 301 people from all regions of the country. However, in a country as big as Brazil the concept of local news is very stretched, so the survey focused on respondents from the south and southeast regions, where all the cases covered by this investigation occurred (85.7% of respondents are from the south and southeast regions of Brazil).
Seven cases of children who were brutally killed were selected for the survey, three White children and four Black children: Maicon de Souza Silva, Black, 2 years old, killed by the police in the Acari favela in 1996 while playing with other children in front of his house; João Hélio Fernandes, White, 6 years old, killed in 2007 by criminals who stole his family's car at gunpoint and drove off before his mother had finished freeing him from his seat, leaving João caught by his seatbelt and hanging outside the car, dragged through the streets; Isabella Nardoni, White, 5 years old, killed in 2008 when she was thrown by her father and stepmother from the window of her father's sixth-floor apartment; Bernardo Boldrini, White, 11 years old, killed in 2014 by an intentional medication overdose administered by his stepmother with the aid of his father; Eduardo de Jesus, Black, 9 years old, killed in 2015 by a rifle shot to the head fired by the police while he was sitting at the door of his house in Complexo do Alemão; Marcos Vinícius, Black, 14 years old, gunned down on his way to school during a police operation in Maré in 2018; and Ágatha Félix, Black, 8 years old, killed by a stray bullet while on public transport in Complexo do Alemão in 2019. In the global analysis of the survey results, the first and most relevant finding is that while the three White children were remembered by more than 70% of the respondents (Isabella Nardoni by over 90% of them), none of the Black children reached the 50% recall mark. In fact, except for Ágatha Félix, the most recent death presented (occurring in 2019) and remembered by 40.5% of respondents, none of the three Black boys was remembered by more than 10% of the respondents, while João Hélio, who died in 2007, was remembered by 73.75% of respondents, Bernardo Boldrini by 74.42% and Isabella Nardoni by an impressive 98.34%.
Isabella died in 2008, while Eduardo, who died in 2015, was remembered by only 4.98% of the respondents. Another important point is that the cases of Black children presented all involved police violence and were rarely remembered, as shown in the chart below; the White children, by contrast, were murdered by their parents (Isabella and Bernardo) or, in the case of João Hélio, killed during a robbery. In the case of Maicon, analysis of the answers to the open question asking for more details shows that of the 25 participants who answered 'yes' to the question 'Do you remember Maicon de Souza Silva?', only eight actually remembered the case, that is, only 2.65% of respondents; the others had mistaken it for other cases. Of these eight, seven are in the 30-50 age group and one is aged between 25 and 29, indicating a tendency for cases involving Black and poor children to be forgotten over time: Maicon was murdered in 1996, no respondents below 24 years of age remembered the case, and only 2.8% of respondents between 25 and 29 did. By comparison, for the death of João Hélio, which occurred in 2007, 66.67% of respondents aged 15-24 and 65.71% of respondents aged 25-29 said they remembered him, and Isabella Nardoni, who died in 2008, was remembered by 100% of respondents below 29 years old. This evidence points to a tendency for cases involving White children to survive in the popular imagination even with the passage of time, being remembered even by those who were not yet adults when the events occurred, while the opposite happens with Black children. The open-ended questions, in which participants gave details of what they remembered about the children, also show that respondents remembered more details about the White children.
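As an aside for readers interested in reproducing this kind of analysis, the recall percentages reported above reduce to simple counting of 'yes' answers, overall and within age groups. The sketch below uses invented toy records, not the actual survey data, purely to illustrate the computation:

```python
from collections import defaultdict

# Toy survey records (age_group, child_name, remembered) -- invented
# for illustration only; these are NOT the actual survey responses.
responses = [
    ("15-24", "Isabella Nardoni", True),
    ("15-24", "Eduardo de Jesus", False),
    ("25-29", "Isabella Nardoni", True),
    ("25-29", "Eduardo de Jesus", False),
    ("30-50", "Isabella Nardoni", True),
    ("30-50", "Eduardo de Jesus", True),
]

def recall_rates(records):
    """Percentage of 'yes' answers per child, across all respondents."""
    yes = defaultdict(int)
    total = defaultdict(int)
    for _age, child, remembered in records:
        total[child] += 1
        yes[child] += remembered  # True counts as 1, False as 0
    return {child: 100 * yes[child] / total[child] for child in total}

def recall_by_age(records, child):
    """Percentage of 'yes' answers for one child, per age group."""
    yes = defaultdict(int)
    total = defaultdict(int)
    for age, name, remembered in records:
        if name == child:
            total[age] += 1
            yes[age] += remembered
    return {age: 100 * yes[age] / total[age] for age in total}

print(recall_rates(responses))
print(recall_by_age(responses, "Eduardo de Jesus"))
```

With the real 301-respondent data in this shape, the same two functions would yield the overall recall marks and the age-group breakdowns discussed in the text.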
When asked about the Black victims, except for Ágatha Félix, the most recent death discussed and the one for which people gave slightly more detailed answers, the survey received many vague responses, some consisting of just three or four words, such as 'death by a police officer', 'murdered by the police' and 'stray bullet'. When describing the deaths of White children, however, respondents recalled far more detail about the crimes: the neighbourhood where the crime occurred or where the victim lived, specifics such as the victim having asked for help from family members or from the Child Protective Services, the involvement of politicians in the trial of the culprits, and even how they themselves felt while following the case at the time. Some examples of the answers below show the stark contrast:

Q: What do you remember about João Hélio/Bernardo/Isabella?

A: 'The brutal way he was killed during the car robbery he was in. I was pregnant with my first child at the time, and I was extremely impacted by the violent world in which my child was going to be born . . . I felt a sense of impotence that still disturbs me today'. (41-50 years old)

A: 'A child from the south of the country killed by his stepmother and his father. He was neglected by his family, he was constantly asking neighbours for help, he even went to Child Protective Services and the Public Ministry for help and was ignored'. (30-35 years old)

A: 'Isabella was a 5-year-old child who was spending her days at her father and stepmother's house and was assaulted by both, suffocated and thrown out of the apartment window by her father's hands, who have said at the beginning (of the investigations) that the child had cut the protective net with a pair of scissors and fallen . . .'. (30-35 years old)

A: 'The coldness with which her father and stepmother commented on the crime against her.
And I felt the pain of the girl's mother, even without ever having gone through anything like what she went through'. Two cases that occurred in Rio de Janeiro in 2021, only a few weeks apart, are very illustrative of what has been discussed in this article. In both, children were beaten to death by their guardians; one child was White and one was Black. The first is the boy Henry Borel, 4 years old, beaten to death on 8 March 2021 in the luxurious apartment where he lived with his mother and stepfather. The second is the girl Ketelen Vitória Oliveira da Rocha, 6 years old, who died on 24 April 2021 after spending six days in a coma from the beatings inflicted by her mother and her partner. When reporting the death of Henry, an upper-middle-class White boy, the mainstream media used his name extensively in the headlines. The case came to be called the 'Henry case' (as had happened with similar ones in the past, such as the 'Nardoni case' and the 'Bernardo case'), showing that the first name alone was enough to identify the boy. When reporting the death of Ketelen, a poor Black girl, however, her name was rarely mentioned in the headlines, often replaced by 'dead six-year-old girl', 'tortured 6-year-old girl' or simply 'tortured girl'. On 27 April, a keyword search was carried out using the names of the victims in the main news outlets in the country in order to observe the headlines. The media companies surveyed were 'O Globo' and 'Folha de São Paulo', the largest newspapers in the country, and 'UOL' and 'G1', two of the main online news agencies. The headlines of the first five articles on the list of results were considered for each source. When searching for 'Henry Borel', Henry's name is cited in 14 of the 20 headlines found. When searching for 'Ketelen Vitória', Ketelen's name is mentioned only twice out of a total of 17 headlines found that day and, interestingly, Henry's name is mentioned once (even though the name searched was Ketelen's).
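The mention-counting behind these figures (14 of 20 headlines for Henry versus 2 of 17 for Ketelen) amounts to checking each returned headline for the victim's name. A minimal sketch follows; the headline strings are made-up stand-ins, not the real search results:

```python
# Made-up headlines standing in for the real search results.
headlines_henry = [
    "Henry case: lawyer speaks about Monique's letter",
    "Chamber of Rio starts impeachment of Dr Jairinho",
    "Henry Borel's mother admits lies in a letter",
]
headlines_ketelen = [
    "Dies the 6-year-old girl assaulted in Porto Real",
    "Mother and stepmother to be indicted for torture and death of Ketelen",
    "Tortured girl is buried in Rio",
]

def name_mentions(headlines, name):
    """Count headlines mentioning any part of the victim's name."""
    parts = name.lower().split()
    return sum(any(part in h.lower() for part in parts) for h in headlines)

print(name_mentions(headlines_henry, "Henry Borel"))
print(name_mentions(headlines_ketelen, "Ketelen Vitória"))
```

Run against the actual result lists from the four outlets, a count like this reproduces the disparity the article describes: the White victim is named, the Black victim is described.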
News headlines (table continued; bracketed numbers are the article's reference notes):

Search for 'Henry Borel':
[6] Henry case: Accused of murdering his stepson, Jairinho was violent in his childhood
[8] Henry case: 'We had to give a convincing answer', says secretary of Civil Police
[9] Henry case: 'Fiction play', says Jairinho's lawyer about Monique's letter

Search for 'Ketelen Vitória':
[5] Father of dead girl after aggressions from her mother and stepmother asks for justice
[7] Dies the 6-year-old girl assaulted by her mother and stepmother in Porto Real

Folha de São Paulo - news headlines (first five on 27 March 2021):

Search for 'Henry Borel':
[10] In 2 months, two mothers are arrested on suspicion of crimes against their children
[11] Chamber of Rio starts the process of impeachment of Dr Jairinho
[13] Jairinho's lawyer says that Henry's mother's letter is a fictional play
[15] Anthropologist points out the problem of underreporting of cases of domestic violence against children
[17] Henry's mother says she was drugged by Jairinho and found the boy already in the bed

Search for 'Ketelen Vitória':
[10] In 2 months, two mothers are arrested on suspicion of crimes against their children
[12] Mother and stepmother to be indicted for torture and death of Ketelen
[14] 'I'm on medication', says the father of a 6-year-old girl who died after assaults
[16] Dies the 6-year-old girl who was hospitalised in a serious condition in RJ after suffering aggression

News headlines (continued):

Search for 'Henry Borel':
[30] Henry Borel: see differences between Monique Medeiros' first testimony and the letter written in jail
[32] Henry Borel's mother admits lies and reports attacks by Dr Jairinho in a letter
[34] Henry Borel: letter from Monique admits lies and reports attacks by Dr Jairinho; see main excerpts
[36] Rio City Council's Ethics Council asks for councillor Jairinho's mandate to be revoked

Search for 'Ketelen Vitória':
[29] Mother and stepmother will answer for qualified torture after the death of a 6-year-old girl
[31] Body of 6-year-old girl tortured by mother and stepmother is buried in Rio
[33] The 6-year-old girl tortured by her mother and stepmother in the south of the state was buried this Sunday
[35] A 6-year-old girl, tortured by her mother and stepmother, is buried in Porto Real
[37] 'The heart is heavy', says the father of the 6-year-old child who died after assaults

Conclusion

This study brings to light interesting aspects of racialised regimes of memory and commemoration in Brazil. It is quite clear from the data obtained in the closed questionnaire, the poorly detailed open responses and the media coverage of the recent cases detailed earlier that, no matter what type of death they suffer (whether the result of a crime, police violence or violent parents), Black children receive less attention and cause less commotion, and are therefore less remembered. The fact that all the Black children presented in the survey were victims of police violence also evidences the trivialisation of these deaths by state agents, which is directly connected to the ideological militarisation of public security in Brazil (Valente, 2016). Despite common attempts to justify this normalisation, such as 'parents must love above all, the police provide a service', it is noticeable how closely public opinion is connected to the discourse that trivialises a constant state of exception (Agamben, 2005) in large Brazilian cities. This discourse is offered by the mainstream media, which 'socially organizes and reverberates the language of urban violence' (Palermo, 2018). The state, which holds the monopoly of violence, in a tacit agreement with the media that still has ownership of the public discourse in countries like Brazil, guides this narrative in order to legitimise state violence in the name of the peace and quiet of the 'standard citizen' (opposed to the 'standard suspect' (Amaral, in press)). Police violence becomes trivialised through the discourse that criminalises Black, poor and peripheral people, culminating in the banalisation of the deaths of victims of the police, even children.
Those children often lose the presumption of innocence (a characteristic inherent to their very condition as children), themselves becoming victims of the entrenched discourse that criminalises those groups. Eduardo de Jesus, for example, was a victim of fake news after his death, when photographs of an unidentified boy holding a rifle were circulated in online apps claiming that the boy pictured was Eduardo, implying that he was not an innocent boy killed by the police while sitting in his doorway but a criminal child shot during an exchange of fire. This offers evidence that the discourse criminalising the favela, which casts victims as criminals simply because they are there, is so ingrained that it spares no one, not even the children. Why do we, as a society, feel shocked by the middle-class White child who is brutally murdered by his parents, but not pity the peripheral Black child who is brutally murdered by the police? And even when a Black child is brutally murdered by her parents, as in the case of the girl Ketelen Vitória, why does the media coverage follow a pattern opposite to that reserved for White children? Murdered Black children have no name or face: they become 'beaten boys', 'battered girls', 'tortured children', while White middle-class children become 'the boy Bernardo', 'the boy João Hélio'. The state is interested in forgetting the Eduardos, Marcos, Ketelens and Maicons so that there is no significant popular pressure for accountability, for recognition of the problem by the government, be it the problem of state violence or that of extreme poverty and the inefficiency of child protection services.
Denyer Willis (2021) discusses the existence of a 'letting disappear' policy in Brazil, which the author argues serves the objective of preserving the status quo, as well as the power to use violence (reserved to the state): Better still for that project now, it seems, is the analogous category of fading away: the Inuit, migrant, and the urban poor who disappear. Disappearance offers political order the ability to step back from, on the one hand, having to account for direct lethal violence on these bodies, and, on the other, from the minimalist but costly techniques of maintaining the condition of 'being disturbed'. Power doesn't have to kill, nor bear the price tag of cumulative hospital stays or an 'Indian Residential School' if people cease to be known. Mundane disappearance is convenient. (p. 302) When considering the memory of murdered Black children, one can observe a very similar panorama to that described by Denyer Willis, one of 'letting forget', of ignoring the deaths until these children are forgotten, faded, victims of whatever killed them in the first place but also of the state's neglect. The official discourse is one of negligence and forgetfulness. Through the lack of compassion and interest in these deaths, the government trivialises them, practising the idea of 'letting forget' and setting the tone of the discourse. This discourse is followed by the mainstream media that is finely tuned with the official narrative, changing it only according to the perspective presented by the public security agencies themselves (Castro, 2015) and, in turn, assuming its role as the most dynamic piece in the ideological structure of a class, capable of influencing public opinion (Gramsci, 2001) and setting the agenda of people's conversations, interests and even, as evidenced by this article, compassion. 
The last link in the chain is the people themselves who, by reacting disinterested and anaesthetised to these deaths, by not connecting to the pain of these victims as they connect to the pain of White victims, allow, albeit unintentionally, these children to become mere numbers in a sad statistic, faceless and nameless shortly after their brutal deaths. Necropolitics, the power of the state to decide who may live and who must die (Mbembé, 2003), is here mixed with a guided stupor in which people remain in a mood of acceptance of high levels of violence, of normalisation of the excessive everyday brutality of large Brazilian cities, and of banalisation even of the deaths of those who normally attract absolute presumption of innocence and compassion: children. This leads us, finally, to question the ways democracy is experienced in Brazil. The democracy in which middle- and upper-class Whites live is not the same democracy experienced in the favelas, composed mostly of Black residents. Steeped in violence and tyranny within the heart of the large cities, passing from the rule of the state (during the military dictatorship up to the late 1980s) to the rule of the powerful criminal organisations that command drug trafficking in the country, and recently suffering under the power of the militias, people are born and die in the favelas without ever experiencing democracy in full. In response, grassroots activist memory movements have started to emerge. The documentary 'Our dead have a voice' (2018), by directors Fernando Souza and Gabriel Barbosa, for example, was produced with the support of the movements Rede de Mães e Familiares na Baixada Fluminense and Rede de Comunidades e Movimentos contra a Violência.
Both groups work to support victims of state violence and, although not focused exclusively on memory, their actions aimed at discussing the memory of victims of violence, many of them Black children, are becoming increasingly strong. Initiatives with an exclusive focus on history and memory are also beginning to gain strength. An example is the 'Museu da Maré', a social museum created by residents of the Maré favela to preserve memories and challenge the misrepresentation of the favela and its residents. Also in Maré, the Núcleo de Memórias e Identidades dos Moradores da Maré, a more academic initiative of the non-governmental organisation (NGO) Redes da Maré, aims to consolidate the history and memory of Maré residents through research, publications and seminars. In conclusion, this study presents compelling data arguing for the existence of a racialised regime of memory and memorialisation in Brazil, in which a set of social, ideological and political factors, guided by the mainstream media, the dominant classes and the state, decides not only who will be celebrated but even who will be remembered (and, therefore, who can be forgotten) and how they will be remembered, based on their race, social status or place of residence.

Funding

The author received no financial support for the research, authorship and/or publication of this article.
Tobin's q, RoA, Diversification and Risk This study aims to explain the link between corporate diversification, firm performance and risk. To test the research hypotheses, a sample of 63 companies listed on the Tehran Stock Exchange over the period 2008-2012 was taken. We construct two models using Tobin's q, RoA, size, debt, growth and the standard deviation of stock returns. Analysis of the research models is based on panel (data) analysis. In these models the presence or absence of fixed or random effects is tested, and the best-fitting model is then estimated. Inference is based on the significance level (p-value): a null hypothesis is rejected at the 95 percent confidence level whenever the p-value of the test is less than 0.05. The results indicate that there is no significant relationship between diversification strategy, firm performance and risk. INTRODUCTION Diversification is one significant method that firms use to maintain their competitiveness and enhance their profitability. Firms pursue a diversification strategy in order to achieve value creation through economies of scope, financial economies, or market power (Chen and Yu, 2012). Since the 1970s, academic research has examined the relation between diversification and firm performance (Kahloul, 2010). Previous studies have reported different findings about this relation: some found a negative relation between diversification strategy and performance (e.g., Berger and Ofek, 1995; Wernerfelt and Montgomery, 1988; Martin and Sayrak, 2003), while others found a positive relation (e.g., Maksimovic and Phillips, 2007; Villalonga, 2004). The diversification strategy also constitutes a field of investigation for risk management research (Kahloul, 2010). It is often assumed, by practitioners and academics alike, that corporate diversification always reduces firm risk.
However, the literature contains little empirical evidence on the impact of corporate diversification on firm risk (Anderson et al., 2011). THEORETICAL BACKGROUND The theme of the diversification-performance relation, probably one of the most studied in the literature, is still far from being exhausted (Palich et al., 2000). Since the 1970s, academic research has tried to verify the relation between diversification strategy and firm performance. Nowadays, the evolution of the firm's activity perimeter is an interesting subject in industrial economics, in strategy and in finance. Research shows that several factors drove the trend toward diversification during the 1960s and 1970s. Studies indicate that diversification can reduce internal management costs. The rapid growth of management science promoted the idea that the essence of management is not the application of experiential knowledge of one specific industry, but the application of general tools and principles of management; at a global level, these principles suggest that professional managers can control different companies in financial terms. Subsequent research indicates that during the 1980s and 1990s companies turned back toward focus. Studies show that factors such as increased volatility and turmoil in industries, managers' incentives to increase share value, the need to accommodate growth, and the emergence of new ideas about corporate management caused companies to refocus. The present paper is interested in the dual impact on performance and risk. Lang and Stulz (1994) and Kahloul and Hallara (2010) used an objective continuous measure of the strategic diversity of the firm based on the Herfindahl index. In the present work, we also use the Herfindahl index. Our work concentrates on panel modeling analyses.
LITERATURE REVIEW The literature on corporate diversification, and the puzzle of whether diversification gives rise to a discount or a premium, was previously surveyed by three prominent articles: Martin and Sayrak (2003), Stein (2003) and Maksimovic and Phillips (2007). Martin and Sayrak (2003) survey the literature on corporate diversification through two separate channels: cross-sectional studies of the link between corporate diversification and firm value on one hand, and longitudinal studies of patterns of corporate diversification through time on the other. Their survey suggests that the diversification discount may not be the result of corporate diversification after all; instead, it may result from measurement issues or simply from sample bias. Stein (2003) studies the strand of literature that questions the efficiency of corporate investment in the presence of asymmetric information and agency problems. His focus is mainly on the literature that addresses the efficient allocation of capital across firms through external capital markets and within the firm through its internal capital market. Theoretical literature on the diversification discount argues that firms diversify in order to reduce risk. Mansi and Reeb (2002) argue in their empirical paper that the diversification discount arises from the risk-reducing tendencies of conglomerates. They further argue that diversification reduces shareholder value on the one hand but increases bondholder value due to the reduction in risk. As a result, a larger diversification discount may be expected in firms with debt as compared to all-equity firms. Using the Berger and Ofek (1995) methodology, they find a discount of 4.5% in firms with above-average debt levels, whereas no discount is found for all-equity firms. This result suggests that debt is an important factor in determining firm diversification.
They also show that using book values of debt instead of market values of debt for calculating excess value undervalues diversified firms. Finally, they examine the joint impact of diversification on debt and equity holders. Their results show that diversification reduces shareholder value and increases bondholder value but has no impact on total firm value. Tobin's q became the most common measure of firm performance after Lang and Stulz (1994). They use three different measures of diversification to compare the q ratio of single-segment firms with multi-segment firms for various levels of diversification. The first two measures are Herfindahl indices constructed from sales and assets. The third measure is the number of segments in the firm, since more diversified firms have more segments. Lang and Stulz (1994) use cross-sectional regressions for each year from 1978 to 1990 and a dummy variable to estimate the statistical contribution of diversification to q. However, they argue that since this method does not take industry effects into account, a firm belonging to an industry with low q will automatically have lower q irrespective of diversification. This shortcoming is corrected by using industry-adjusted measures of the discount. Berger and Ofek (1995) use asset and sales multipliers instead of Tobin's q in order to measure the value effect of diversification. To show the possible association between value loss and diversification, they estimate pooled regressions using a multi-segment dummy and control for firm size, profitability and the growth opportunities of the firm. Khanna (2011) reviewed whether corporate diversification decreases or increases the risk of the diversifying firm, an important empirical question, investigating the issue using a sample of diversifying acquisitions and various risk measures.
They find that corporate diversification tends to decrease the risk of some firms but to increase the risk of many others, and that on average corporate diversification does not lower firm risk. These findings call into question the notion that corporate diversification strictly reduces firm risk. Raei et al. (2015) examined the relation between diversification, performance and risk. They used ROE as a proxy of performance and the Herfindahl index as a proxy of diversification, and found no relation between diversification and firm performance or risk. THE PROPOSED STUDY Based on the developments in the literature, several hypotheses are developed. The first hypothesis is stated: H1: There is a significant relationship between diversification strategy and firm performance. The second hypothesis is as follows: H2: There is a significant relationship between diversification strategy and risk. In this paper we use the Herfindahl index as the measure of diversification. The Herfindahl coefficient for firm i in year t is calculated as HERF_{i,t} = Σ_j (SSale_j / Sale)^2, where HERF_{i,t} is the sales-based Herfindahl indicator for firm i in year t, SSale_j is the sales of segment j, and Sale is the firm's total sales (i.e., the sum of segment sales). The HERF variable equals 1 for single-segment companies and is less than 1 for companies with more than one segment; a smaller coefficient therefore indicates a greater extent of corporate diversification. Tobin's q and ROA, as measures of firm performance, are defined as follows: Tobin's q = total market value of the firm / total asset value; ROA = net profit / total assets. The risk (STD): the total risk of the firm is estimated from market data and is measured by the standard deviation of stock returns. Three further variables are control variables. The size of the firm (SIZE): it is essential to control for the size of the sample firms, which is supposed to act on performance.
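The ratios defined above are simple enough to compute directly. A minimal sketch in Python (function names and all figures are illustrative, not taken from the paper's data):

```python
def herfindahl(segment_sales):
    """Sales-based Herfindahl index: sum of squared segment sales shares.

    Equals 1.0 for a single-segment firm; smaller values mean more
    diversification."""
    total = sum(segment_sales)
    return sum((s / total) ** 2 for s in segment_sales)

def tobins_q(market_value, total_assets):
    """Tobin's q as defined in the paper: market value over total asset value."""
    return market_value / total_assets

def roa(net_profit, total_assets):
    """Return on assets: net profit over total assets."""
    return net_profit / total_assets

# hypothetical firm with two equally sized segments
print(herfindahl([50.0, 50.0]))   # 0.5, i.e. more diversified than a
print(herfindahl([100.0]))        # single-segment firm, which gives 1.0
```

The debt and size controls follow the same pattern (total debt / total equity, and the logarithm of total assets, respectively).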
As a control variable, we keep the size of every firm, measured by the logarithm of the total assets of the group. The growth of the firm (GROWTH): the growth of the company is one of the most important explanatory factors of firm performance. This variable is measured by the average variation of turnover over the reporting period. The debt (DEBT): the debt variable is measured as the ratio of total debts to shareholders' equity. We chose 63 firms from the companies listed on the Tehran Stock Exchange. The data used in testing the models are extracted from the TSE and cover 2008 to 2012. A panel model is used in this paper. The results As mentioned, the sample is composed of 63 companies listed on the Tehran Stock Exchange. The methodology for analyzing the data is based on panel modeling. First, the Herfindahl index, based on the distribution of sales by activity, defines the diversification strategy level for each firm. Because the data are both time-series and cross-sectional, we run the regressions using panel data. Table 1 summarizes the descriptive statistics of the variables over the whole period (2008-2012): mean, median, maximum, minimum and standard deviation. The diversification variable is measured by the Herfindahl index (HERF). Performance is evaluated by Tobin's q and ROA. Risk is measured by the standard deviation of stock returns (STD). The log of total assets measures the size of the firm (SIZE). The DEBT variable, corresponding to total debt / total equity, represents the debt of the firm. Growth (GROWTH) is measured by the variation of the turnover of the company. The number of observations is 315. The distribution of some variables in the different years is not normal, but the significance values for the logarithms of these variables in these years are greater than 0.05.
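The fixed-effects panel estimations reported below rest on the "within" (entity-demeaning) transformation, which removes each firm's time-invariant characteristics before fitting the slope. A minimal pure-Python sketch with a single regressor and synthetic data (all names and numbers are hypothetical, not the paper's sample):

```python
from collections import defaultdict

def within_transform(values, entities):
    """Subtract each entity's own mean: the fixed-effects 'within' transformation."""
    sums, counts = defaultdict(float), defaultdict(int)
    for v, e in zip(values, entities):
        sums[e] += v
        counts[e] += 1
    return [v - sums[e] / counts[e] for v, e in zip(values, entities)]

def fixed_effects_slope(y, x, entities):
    """OLS slope on demeaned data: the within (fixed-effects) estimator
    for a single regressor."""
    yt = within_transform(y, entities)
    xt = within_transform(x, entities)
    return sum(a * b for a, b in zip(xt, yt)) / sum(a * a for a in xt)

# two firms with different firm-specific intercepts but a common slope of 2
entities = ["A", "A", "A", "B", "B", "B"]
x = [1.0, 2.0, 3.0, 1.0, 2.0, 3.0]
y = [12.0, 14.0, 16.0, 2.0, 4.0, 6.0]   # y = alpha_firm + 2 * x
print(fixed_effects_slope(y, x, entities))   # recovers 2.0
```

With several regressors the same demeaning is followed by multivariate OLS; in practice, panel packages perform this transformation internally and also report the F-statistic and t-statistics discussed below.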
So their distribution is normal, and the significance value for the ROA variable during the studied years is greater than 0.05. The fixed-effects results show that the probability of the F-statistic is 0.000, meaning the model is significant. The coefficient of determination is 0.07. The t-statistic for HERF is 1.56 (insignificant), for SIZE 0.6 (insignificant), for DEBT -4.37 (significant and negative), and for GROWTH -0.78 (insignificant). The table below shows the results. The results of model 1 indicate that there is no significant relationship between Tobin's q and the Herfindahl index. Model 1, specified to test the hypotheses, is as follows. The fixed-effects results show that the probability of the F-statistic is 0.000, meaning the model is significant. The coefficient of determination is 0.695. The t-statistic for HERF is 1.676 (insignificant), for SIZE -1.3 (insignificant), for DEBT -0.19 (insignificant), and for GROWTH -0.60 (insignificant). Table 6 shows the results. The model specified to test hypothesis 2 is as follows. The random-effects results show that the probability of the F-statistic is 0.901, meaning the model is not significant. The coefficient of determination is 0.003. The t-statistic for HERF is -0.51 (insignificant), for SIZE 0.7 (insignificant), and for DEBT 0.45 (insignificant). The table below shows the results. The results for hypothesis 2 indicate that there is no significant relationship between diversification strategy and risk. CONCLUSION This paper has examined the relationship between diversification strategy, firm performance and risk. To measure firm performance, we used Tobin's q and ROA.
The results show that there is no significant relationship between diversification strategy, firm performance and risk.
Drought Resistance by Engineering Plant Tissue-Specific Responses Drought is the primary cause of agricultural loss globally, and represents a major threat to food security. Currently, plant biotechnology stands as one of the most promising fields when it comes to developing crops that are able to produce high yields in water-limited conditions. From studies of Arabidopsis thaliana whole plants, the main response mechanisms to drought stress have been uncovered, and multiple drought resistance genes have already been engineered into crops. So far, most plants with enhanced drought resistance have displayed reduced crop yield, meaning that there is still a need to search for novel approaches that can uncouple drought resistance from plant growth. Our laboratory has recently shown that the receptors of brassinosteroid (BR) hormones use tissue-specific pathways to mediate different developmental responses during root growth. In Arabidopsis, we found that increasing BR receptors in the vascular plant tissues confers resistance to drought without penalizing growth, opening up an exceptional opportunity to investigate the mechanisms that confer drought resistance with cellular specificity in plants. In this review, we provide an overview of the most promising phenotypical drought traits that could be improved biotechnologically to obtain drought-tolerant cereals. In addition, we discuss how current genome editing technologies could help to identify and manipulate novel genes that might grant resistance to drought stress. In the upcoming years, we expect that sustainable solutions for enhancing crop production in water-limited environments will be identified through joint efforts. INTRODUCTION Today, agriculture is facing an unprecedented challenge. Arable land is being reduced by soil erosion and degradation, desertification, and salinization, destructive processes that are being further accelerated by climate change. 
This could jeopardize global food production, which will need to be maximized to cope with the world's growing population and to match the food security goals established by the United Nations. More than ever, drought is a major threat to agriculture worldwide. The Food and Agriculture Organization (FAO) of the United Nations documented that between 2005 and 2015, drought caused USD 29 billion in direct losses to agriculture in the developing world, with the 2008-2011 drought in Kenya alone accounting for USD 1.5 billion (FAO, 2018). In addition, more than 70% of the world's available fresh water is being used in irrigation (Organization for Economic Cooperation and Development, 2017). To cope with these challenges, plant breeders will need to begin producing novel crop varieties that have increased yield, that are tolerant to abiotic stresses, and that have improved water and nutrient uptake efficiencies (Fita et al., 2015). In agronomy, drought can generally be defined as a prolonged lack of water that affects plant growth and survival, ultimately reducing crop yield. In plant science, the broadest definition of drought stress coincides with the definition of water deficit, which happens when the rate of transpiration exceeds water uptake (Bray, 1997). This could be the result of a lack of water, but also of increased salinity or osmotic pressure. From a molecular biology perspective, the first event during drought stress is the loss of water from the cell, or dehydration. Dehydration usually triggers signals that are osmotic and hormone related, with abscisic acid (ABA) mainly involved in the latter (Blum, 2015). These signals are followed by a response that can be broadly categorized into three main strategies: i) drought escape (DE), ii) dehydration avoidance, and iii) dehydration or desiccation tolerance (Kooyers, 2015; Blum and Tuberosa, 2018). DE is the attempt of a plant to accelerate flowering time before drought conditions hinder its survival.
This response is common to annual plants including the model species Arabidopsis thaliana (Arabidopsis), and is exploited by cereal plant breeders (Shavrukov et al., 2017). In dehydration avoidance, the plant is able to maintain a high relative water content (RWC% = [fresh mass − dry mass]/[water-saturated mass − dry mass] × 100) even during water scarcity. This is achieved by physiological and morphological responses that include the reduction of transpiration via ABA-mediated stomatal closure, the deposition of cuticular waxes, and the slowing down of the plant's life cycle. Dehydration avoidance usually leads to survival by delaying plant growth, and thus senescence and mortality. This strategy evolved as a response to moderate, temporary drought stress in which the plant undergoes a developmental stand-by until the next rainfall (or irrigation). While effective in increasing plant survival rate, dehydration avoidance often comes with growth and yield penalties, which are, of course, major negative traits for crop breeders (Skirycz and Inzé, 2010). On the other hand, in dehydration tolerance, the plant is able to maintain its functions in a dehydrated state, usually by regulating plant metabolism to increase the production of sugars, osmoprotectants, antioxidants, and reactive oxygen species (ROS) scavengers (Hu and Xiong, 2014). These responses are usually activated by gibberellic acid (GA) signaling through the modulation of the GA-signaling molecule DELLA, a pathway that integrates multiple hormone- and stress-related pathways (Vandenbussche et al., 2007; Navarro et al., 2008; Colebrook et al., 2014). Ultimately, drought resistance is determined by how efficiently and promptly a plant senses changing environmental conditions, adopting and combining the aforementioned strategies in response to diminished water availability.
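The relative water content formula quoted above is straightforward arithmetic on three mass measurements. A minimal sketch (function name and sample masses are illustrative, not from the review):

```python
def relative_water_content(fresh_mass, dry_mass, saturated_mass):
    """RWC% = (fresh - dry) / (water-saturated - dry) * 100,
    following the definition in the text. Masses in consistent units (e.g. g)."""
    return (fresh_mass - dry_mass) / (saturated_mass - dry_mass) * 100.0

# hypothetical leaf sample: 0.8 g fresh, 0.2 g dry, 1.0 g fully rehydrated
print(relative_water_content(0.8, 0.2, 1.0))   # 75.0 (% of full hydration)
```

A fully turgid leaf gives 100%; values well below that indicate the dehydration that avoidance strategies are meant to prevent.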
Plant breeders have identified physiological traits that result from drought responses and contribute to the adaptation of plants in water-limited conditions. Understanding the molecular and physiological mechanisms behind these traits is essential for improving crops through biotechnology. In this review, we describe some of the drought resistance traits of the model plant Arabidopsis that have the potential of being transferrable to crops, focusing on strategies that involve the manipulation of cell-and tissue-specific responses. As these strategies open up opportunities to uncouple drought resistance from the commonly associated growth and yield penalties, we will discuss their biotechnological application in cereal species. MAJOR TRAITS CONTRIBUTING TO DROUGHT RESISTANCE Early Flowering and Drought Escape The molecular control of flowering time is complex, and has been highly studied in Arabidopsis (Michaels and Amasino, 1999;Simpson and Dean, 2002) as well as in many other plant species (Corbesier et al., 2007). During the developmental switch from the vegetative to the reproductive stage, the photoperiodic light signal from the environment is perceived by leaves, where the FLOWERING LOCUS T (FT) protein is synthesized. FT is loaded into the phloem and transported to the shoot apical meristem (SAM) where it initiates floral transition (Andrés and Coupland, 2012). It is now known that in the SAM, FT forms a complex with the bZIP protein FD in specific cells beneath the tunica layers in which FD is expressed, with these cells then originating the floral primordia (Abe et al., 2019). When Arabidopsis is exposed to drought conditions, it can activate the DE response. DE is one of the main defense mechanisms against drought in Arabidopsis, and it integrates the photoperiodic pathway with drought-related ABA signaling (Conti, 2019). 
DE has mainly been studied in an evolutionary context in natural populations (McKay et al., 2003;Franks et al., 2007), and the molecular mechanisms that regulate it have only been unraveled recently. It is known that, to trigger DE, the key photoperiodic gene GIGANTEA (GI) needs to be activated by ABA (Riboni et al., 2013;Riboni et al., 2016). A recent breakthrough was the discovery that the ABRE-BINDING FACTORS (ABF) 3 and 4, which act on the master floral gene SUPPRESSOR OF OVEREXPRESSION OF CONSTANS1 (SOC1) in response to drought, are involved in this process. The mutants abf3 abf4 are insensitive to ABA-induced flowering and have a reduced DE response (Hwang et al., 2019). However, the precise molecular mechanisms that link ABA to GI and ultimately to DE are still rather obscure, and different crop species might have evolved unknown pathways that trigger DE in different environments ( Figure 1A). From an agronomic perspective, DE and early flowering varieties with faster life cycles are interesting because an anticipated switch to the reproductive stage might allow grain filling before the onset of seasonal terminal drought. Furthermore, a shorter crop season reduces the need for agricultural inputs (e.g., fertilizers, pesticides) and might facilitate double cropping (i.e., the farming of two different crops in the same field within the same year). On the other hand, crops that switch too early to flowering will have their yield reduced. Despite DE being an emerging research field in crop science, there are not any biotechnologically improved crops that exploit DE as a drought resistance trait. Still, it has been proposed that DE can be used to obtain quick-growing, early-flowering cereal varieties, which would be especially useful in temperate regions like the Mediterranean area where terminal drought is expected to affect plants toward the end of the crop season (Shavrukov et al., 2017). 
Furthermore, it has been recently shown that OsFTL10, one of the 13 FLOWERING LOCUS T-LIKE (FTL) genes annotated in the rice genome, is induced by both drought stress and GA, and when overexpressed in transgenic rice plants confers early flowering and improves drought tolerance (Fang et al., 2019). However, as these transgenic rice lines were not tested in a field trial, it is unknown whether engineering FTL genes could deliver cereal varieties with superior drought performances and good yield in both dry and well-watered conditions. Nonetheless, the manipulation of the DE pathway could be an innovative and valid strategy especially in the context of highly variable water availability. As DE involves specific tissues (leaf, phloem) and cell types (phloem companion cells, FD-expressing SAM cells), it might be possible to devise strategies aimed at developing drought-resistant plants via manipulation of these plant components, adjusting DE to the different environmental conditions. Leaf Traits: Senescence, Stay-Green, and Leaf Area Senescence is a developmental stage of plant leaves that leads to the arrest of photosynthesis, the degradation of chloroplasts and proteins, and the mobilization of nitrogen, carbon, and other nutrient resources from the leaves to other organs. As most cereals are monocarpic annual species, these resources are directed to developing seeds, and senescence therefore plays a relevant role in crop yield. Environmental stresses like temperature, lack of nutrients, and drought might initiate senescence prematurely, affecting seed nutritional composition and crop yield (Buchanan-Wollaston, 1997;Distelfeld et al., 2014). In crops threatened by terminal drought, the ability to sustain photosynthetic activity longer by delaying or slowing down senescence could be an effective strategy to avoid yield losses. As such, leaf senescence has been extensively studied in crops ( Figure 1B). 
Plant breeders commonly refer to the trait that confers extended photosynthetic activity as stay-green, also defined as green leaf area at maturity (GLAM). This trait is well studied in sorghum [Sorghum bicolor (L.) Moench], a dry climate-adapted cereal in which a number of stay-green quantitative trait loci (QTLs) have been identified (Vadez et al., 2011). However, the genes underlying these QTLs have not yet been identified (Harris-Shultz et al., 2019). Stay-greenness in sorghum is a complex trait, and it is also connected with the perennial tendencies of some varieties (Thomas and Howarth, 2000). Other plant species achieve stay-green characteristics via substantially different pathways that include disabling chlorophyll catabolism (like in the case of Gregor Mendel's green peas, Armstead et al., 2007), and altering the responses to plant hormones. Indeed, some stay-green genes have also been identified in Arabidopsis and rice (Hörtensteiner, 2009), notably the Stay-Green Rice (SGR) genes and their homologs in Arabidopsis SGR1, SGR2, and SGR-like (SGRL). The respective molecular pathways have been elucidated, with the phytohormones ethylene, ABA, cytokinin (CK), and strigolactone (SL) having a prominent role in stress-induced leaf senescence (Abdelrahman et al., 2017). The connection between ethylene and leaf senescence is long known (Bleecker et al., 1988;Grbić and Bleecker, 1995), and numerous attempts to improve photosynthetic activity and drought performance by manipulating ethylene biosynthesis have been published in dicots (John et al., 1995) and cereal plants (Young et al., 2004). The first biotechnologically produced plant ever to reach the market with improved drought resistance due to reduced ethylene sensitivity and delayed senescence was produced by Verdeca and named HB4 ® Drought Tolerance Soybeans (Bergau, 2019). HB4 is a modified version of the homeodomain-leucine zipper (HD-zip) transcription factor (TF) HaHB4 from sunflower (Helianthus annuus). 
It is expressed under the control of the native soybean HaHB4 promoter, which is stress inducible (Waltz, 2015). Although HaHB4 does not have conserved homologs in Arabidopsis, upon ectopically expressing HaHB4 in this model species, it was discovered that the TF acts at the intersection between the jasmonic acid and ethylene pathways (Dezar et al., 2005;Manavella et al., 2008). Interestingly, HB4-expressing soybean has increased yield in both water-limited and well-watered conditions. As shown in extensive field trials, this same gene confers similar drought tolerance properties without yield penalties when transferred to bread wheat , with the transgenic wheat having an unaltered quality and nutritional content when compared with its parental nontransgenic variety Cadenza (Ayala et al., 2019, Figure 2C). As such, it is likely that the HB4 cassette could confer drought resistance to other cereals. It is worth pointing out that the success of HB4 is due to the exploitation of drought-responsive promoters rather than of constitutive strong promoters. Using a rather different approach, Monsanto expressed the bacterial cold shock protein B (CSPB) under the control of the constitutive rice ACTIN1 promoter. The expressed CSPB protein bears RNA-binding motifs named cold shock domains (CSDs) that act as RNA chaperones and regulate translational activity. In the analyzed transgenic plants, chlorophyll content and photosynthetic rates were improved (Castiglioni et al., 2008). These transgenic plants were tested in 3-year field trials in two different locations, and yields were on average 6% higher than for the control plants in water-limited conditions ( Figure 2B). Although the molecular mechanisms are not fully understood, improved performances in water-limited conditions have been linked to a transient reduction in leaf area that leads to reduced water use and improved overall water use efficiency (WUE). 
This temporary dehydration avoidance does not negatively affect yield due to an improved ear partitioning, which is probably also a consequence of reduced stress exposure during vegetative growth (Nemali et al., 2015). This work led to the first biotechnologically improved crop for drought tolerance, called Genuity™ DroughtGard™ by Monsanto (Figure 2, event code MON-87460-4, ISAAA, 2019). Even though this result was achieved by the constitutive expression of a bacterial protein, we speculate that leaf-specific or meristem-specific genes expressed in specific developmental stages could lead to similar results. Stomatal-Mediated Drought Responses Stomata, which are openings on the surface of the aerial portion of plants, are enclosed by two specialized guard cells that can open and close the pore by changing their turgor pressure. Stomata are vital for CO2 uptake in photosynthetic organs and are finely regulated by a molecular pathway that allows plants to acquire CO2 while minimizing water loss. Manipulating stomatal number, size, and regulation was one of the earliest strategies adopted by scientists in the attempt to produce drought-resistant plants, and recent advances in Arabidopsis and crops to this effect are thoroughly reviewed in Bertolino et al., 2019 (Figure 1C). The main hormone signal that triggers stomatal closure in water-limited conditions is ABA (Sussmilch and McAdam, 2017). In Arabidopsis, expression of the CLAVATA3/EMBRYO-SURROUNDING REGION-RELATED 25 (CLE25) gene is upregulated in the root vascular tissues upon drought stress. The CLE25 peptide is translocated to the leaves where it binds to BARELY ANY MERISTEM (BAM) receptors, which, in turn, induce ABA accumulation in leaves leading to stomatal closure (Takahashi et al., 2018).

FIGURE 2 | Drought tolerance genes that have been discovered or tested in model species and translated successfully into crop species. All of these genes have been expressed in engineered cereal crops and have been tested in field trials. Major agronomical traits, including yield, have been assessed, and drought performance has been successfully improved without negatively affecting plant growth or crop yield. (A) Hahb4: The sunflower transcription factor Hahb4 was expressed in soybean under the control of the native stress-inducible promoter of a homologous gene. Transgenic plants have reduced ethylene sensitivity, delayed senescence, increased osmoprotectant content, and an increased yield in the presence or absence of drought stress (Waltz, 2015). These plants are currently on the market as Verdeca Drought Tolerance Soybeans HB4®. The same Hahb4 has also been transferred to bread wheat under the control of the constitutive promoter of maize ubiquitin 1 with similar promising results (Ayala et al., 2019). (B) CspA, CspB: Maize plants overexpressing Escherichia coli CspB have high chlorophyll content, an improved photosynthetic rate, and reduced leaf area during vegetative growth. The best performing lines were commercialized as Genuity® DroughtGard™ by Monsanto (now Bayer) in 2010 (Castiglioni et al., 2008; Nemali et al., 2015). (C) NF-YB1, NF-YB2: Maize plants overexpressing ZmNF-YB2 have higher stomatal conductance and chlorophyll content, and delayed senescence. These lines were not assessed in the field for performance under well-watered conditions and were never introduced to the market (Nelson et al., 2007). (D) TPS/TPP, TsVP: Carbon allocation, root/shoot ratio. In maize, floral-specific expression of T6P phosphatase (TPP) altered carbon allocation and improved yield in both well-watered and water-limited field trials (Nuccio et al., 2015). Also in maize, the constitutive expression of the TsVP gene from the halophyte Thellungiella halophila under the control of the endogenous ubiquitin promoter increased total soluble sugars and proline under osmotic stress. Improvements in dehydration tolerance were assessed in a small-scale field trial (Li et al., 2008). (E) OsNACs, OsERF71, HVA1, DRO1: In rice, root-specific expression of the transcription factor OsNAC5 under the control of the RCc3 promoter improved drought and high-salinity resistance by enlarging the root diameter. Yield improvements under normal and stress conditions were assessed in a 3-year field trial in three different locations (Jeong et al., 2013). Similar results were obtained with root-specific expression of OsNAC9 and OsNAC10 (Jeong et al., 2010; Redillas et al., 2012). Also in rice, the expression of the barley HVA1 under the control of a synthetic ABA-inducible promoter enhanced root growth, leading to better water use efficiency and abiotic stress tolerance, as confirmed by a small-scale field trial (Chen et al., 2015). The DRO1 allele from deep-rooting rice cultivars increases the gravitropic response and root depth, increasing rice yield in both drought and normal conditions (Uga et al., 2013; Arai-Sanoh et al., 2014). (F) AtOSR1, ARGOS8: Arabidopsis ORGAN SIZE RELATED1 (AtOSR1) and its maize homolog ZmARGOS1 improve dehydration avoidance in both plant species by reducing ethylene sensitivity (Shi et al., 2015); moderate constitutive expression of ARGOS8, obtained by promoter swapping using clustered regularly interspaced short palindromic repeats/CRISPR-associated protein 9 (CRISPR/Cas9) homology-directed recombination, improved drought tolerance in a field trial under stress conditions without affecting yield in well-watered control experiments (Shi et al., 2017). Commercialization of these lines is under evaluation by the developer Corteva Agriscience™ (formerly DuPont Pioneer).

The manipulation of ABA sensitivity to increase stomatal responses to drought could help plants to survive. However, diminished photosynthetic activity due to limited CO2 uptake is usually detrimental to carbon assimilation and negatively impacts crop yield.
In addition, water evaporation through stomatal openings prevents plants from overheating. As drought in a natural environment is likely to be accompanied by warm temperatures, reducing stomatal capacity might not be a sustainable approach to enhancing drought resistance while securing yield and biomass production. For instance, a series of rice mutants of the ABA receptors pyrabactin resistance 1-like 1 (pyl1), pyl4, and pyl6 have improved yield but are more sensitive to drought (Miao et al., 2018), a result that resonates with the improved drought resistance but reduced yield of transgenic plants that overexpress PYL5. In an early attempt to produce drought-resistant plants, it was observed that the constitutive expression of AtNF-YB1 in Arabidopsis improved the survival rate of the transgenic plants (Nelson et al., 2007). NUCLEAR FACTOR Y (NF-Y) proteins are heterotrimeric TFs that regulate multiple developmental pathways, including stomatal responses via modulation of the ABA signaling pathway (Bi et al., 2017), with conserved functions in Arabidopsis and cereals during both flowering (Siriwardana et al., 2016; Goretti et al., 2017) and DE (Hwang et al., 2019). One maize homolog of AtNF-YB1, ZmNF-YB2, was constitutively expressed under the control of the rice actin 1 promoter. Maize transgenic plants showed an improved survival rate in a greenhouse experiment, confirming the functional conservation between Arabidopsis and maize NF-YBs. In field trials, the transgenic plants were also drought resistant due to a combination of higher stomatal conductance, cooler leaf temperatures, higher chlorophyll content, and delayed onset of senescence (Nelson et al., 2007).
Nevertheless, even though these transgenic lines show promising results in field trials, with the best performing line having a 50% increase in yield relative to controls under severe drought conditions, these lines were never introduced to the market, maybe because the yield in well-watered conditions was negatively affected (Figure 2C). The trade-off between stomatal conductance and drought resistance could be avoided by manipulating stomatal kinetics, or more precisely, by improving the speed of stomatal responses (McAusland et al., 2016). Recently, enhanced stomatal kinetics were achieved by expressing a synthetic, blue light-induced K+ channel 1 (BLINK1) under the control of the strong guard cell-specific promoter pMYB60 (Cominelli et al., 2011). This effectively accelerated stomatal responses, producing plants that responded faster to changing light conditions. Arabidopsis WUE (i.e., the biomass produced per unit of transpired water) was improved without reducing carbon fixation rates, resulting in a 2.2-fold increase in total biomass in the transgenic plants grown in water-deficit conditions when compared with the control plants (Papanatsiou et al., 2019). Whether this approach would be efficient in crops in an open field, or whether the increased biomass would correspond to a better yield, is yet to be established. Overall, engineering the physiological behavior of stomata represents a remarkable innovation in Arabidopsis that has yet to be applied to crops.

Cuticular Wax Production

Aerial plant organs have an external cuticle layer of which waxes are a major component. This hydrophobic barrier physically protects the epidermis against a plethora of external factors including UV light, cold temperatures, fungal pathogens, and insects, and also regulates permeability and water loss.
However, despite the fact that a number of studies in Arabidopsis and crops have shown a connection between drought stress and changes in cuticular wax content, composition, and morphology, many of the key genes involved in wax metabolism, regulation, and transport still need to be characterized (Xue et al., 2017; Patwari et al., 2019). Cuticular wax composition has been studied in both Arabidopsis and crop species; wax composition not only varies between plant species, but also between specific tissues or organs within the same plant. In the most well-studied model, the biosynthesis of cuticular waxes occurs in epidermal cells, where de novo synthesized C16-C18 fatty acids produced in plastids are exported as acyl-acyl carrier protein (acyl-ACP) conjugates. These are subsequently hydrolyzed by the fatty acyl-ACP thioesterase B (FATB), and the C16-C18 fatty acids are imported into the endoplasmic reticulum following activation by long-chain acyl-coenzyme A (acyl-CoA) synthetases, which are encoded by the long-chain acyl-CoA synthetase genes LACS1 and LACS2. The carbon chains are then elongated with C2 units from malonyl-CoA by the fatty acid elongase complex. This complex biosynthesizes C20-C34 very-long-chain fatty acids (VLCFA) that are modified via two different pathways, namely the alcohol-forming pathway and the alkane-forming pathway. These pathways produce the aliphatic compounds of cuticular waxes. While the alcohol-forming pathway produces very-long-chain (VLC) primary alcohols and wax esters, the alkane-forming pathway produces VLC aldehydes, VLC alkanes, secondary alcohols, and ketones (Yeats and Rose, 2013).
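The chain-length arithmetic implied above (each elongase cycle adds one C2 unit from malonyl-CoA) is simple enough to sketch. The function below is purely illustrative, not part of any published model:

```python
def elongation_cycles(start_carbons, target_carbons):
    """Number of fatty acid elongase cycles needed to grow an acyl chain,
    given that each cycle adds one C2 unit donated by malonyl-CoA."""
    if target_carbons < start_carbons or (target_carbons - start_carbons) % 2:
        raise ValueError("acyl chains grow only by whole C2 increments")
    return (target_carbons - start_carbons) // 2

# Extending a plastid-exported C18 acyl chain to a C30 VLCFA
# requires (30 - 18) / 2 = 6 elongase cycles.
```

For example, producing the C20-C34 VLCFA pool from C16-C18 precursors corresponds to anywhere from 1 to 9 such cycles.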
Besides these ubiquitous wax compounds that are common to almost all plant species, there is a plethora of specialty wax compounds that vary in carbon number, terminal carbon oxidation state, and the presence and oxidation state of secondary functional groups, with about 125 different compounds identified in over 100 plant species whose biosynthetic pathways are not yet fully described (Busta and Jetter, 2018). All wax components are synthesized in the endoplasmic reticulum and need to be exported to the plasma membrane and then moved across the cell wall of the epidermal cells to the surface, where they constitute the cuticle (Fernández et al., 2016). The secretion of wax molecules from the plasma membrane to the extracellular matrix in Arabidopsis is known to be mediated by the ATP-binding cassette (ABC) transporters CER5 (from eceriferum, waxless mutants) and WBC11 (Pighin et al., 2004; Bird et al., 2007). On the other hand, the intracellular trafficking that governs the transport of wax constituents is not fully understood, but involves more than a single mechanism. Gnom-like 1-1 and echidna mutants (gnl1-1 and ech), which are defective in vesicle trafficking, show a decrease in surface waxes, thereby indicating that endomembrane vesicle trafficking is required for wax transport (McFarlane et al., 2014). In addition, membrane-localized lipid transfer proteins (LTPs) may be involved in wax delivery to the cuticle through the hydrophilic cell wall; indeed, the Arabidopsis LTPG1 and LTPG2 genes have been characterized in this role (DeBono et al., 2009; Kim et al., 2012). Novel proteins with yet unknown molecular functions involved in extracellular wax transport are also being discovered in monocots through the characterization of mutants; one such example is maize GL6. A comprehensive coverage of cuticular wax biosynthesis and deposition can be found in the review articles by Bernard and Joubès (2013), and Lee and Suh (2015).
Cuticular waxes can be regulated post-translationally, post-transcriptionally, and transcriptionally. In terms of post-translational regulation, the CER9 gene, which encodes a putative E3 ubiquitin ligase, plays a role in the homeostasis of cuticular wax biosynthetic enzymes through ubiquitination and degradation of proteins in the endoplasmic reticulum. Arabidopsis cer9 mutants showed an increase in lipid deposition and drought tolerance, suggesting that CER9 negatively regulates cuticular wax biosynthesis (Lü et al., 2012). CER7, on the other hand, encodes an exosomal exoribonuclease proposed to degrade small RNA species that negatively regulate the transcript of CER3 (a wax biosynthetic enzyme), and is thus part of the post-transcriptional mechanisms of regulation. However, transcriptional mechanisms are considered to be the main regulators of wax biosynthesis (Yeats and Rose, 2013). Accordingly, most of the biotechnological approaches that have attempted to improve drought performance by manipulating cuticular wax levels focus on TFs that control the overall process rather than on overexpressing multiple components of the biosynthetic pathways. In Arabidopsis, overexpression of the TF WAX INDUCER1/SHINE1 (WIN1/SHN1) was found to activate wax biosynthesis, increase wax deposition, and confer drought resistance in a survival rate experiment (Aharoni et al., 2004; Broun et al., 2004). In studies performed in apple and mulberry, WIN1/SHN1 homologs have been shown to have conserved functions, and therefore might similarly increase drought tolerance (Sajeevan et al., 2017; Zhang et al., 2019). In rice, overexpression of the WIN1/SHN1 homolog OsWR1 improves drought tolerance at the seedling stage (Wang et al., 2012).
While constitutive expression of OsWR2 dramatically increased cuticular wax deposition (48.6% in leaves) and improved dehydration avoidance, yield was negatively affected, with a 30% reduction in seed number per panicle. On the other hand, in wheat, the overexpression of the OsWR1 ortholog TaSHN1, cloned from the drought-tolerant genotype RAC875, was able to improve drought tolerance in a survival experiment without an evident loss in yield under controlled conditions. These transgenic plants have an altered wax composition and a lower stomatal density (Bi et al., 2018, Figure 1D). EsWAX1, a novel TF isolated from the halophyte Eutrema salsugineum, improves cuticular wax deposition and drought tolerance when ectopically expressed in Arabidopsis, but also leads to detrimental effects on plant growth and development. However, when expressed under the control of the stress-inducible Arabidopsis RD29 promoter, EsWAX1 is able to improve the rate of drought survival without causing any major negative pleiotropic effect (Zhu et al., 2014). Even though seed number and yield were not assessed, the use of drought-responsive promoters helps overcome the undesirable effects of ectopic overexpression. Another well-known TF controlling wax biosynthesis is the Arabidopsis ABA-responsive R2R3-type MYB TF MYB96. MYB96 is highly expressed in stem epidermal cells, is activated by drought, and binds directly to the promoters of multiple wax biosynthetic genes to upregulate their transcripts and increase wax production (Seo et al., 2011). Overexpressing Arabidopsis MYB96 in the close relative and Brassicaceae biofuel crop Camelina sativa led to an increase in wax biosynthesis and deposition, and also improved the drought survival of the transgenic camelina plants. Taken together, these results show relevant advances in the quest to obtain drought-resistant plants by manipulating cuticular wax biosynthesis.
Important differences between Arabidopsis and crop species in terms of wax composition, localization, and quantity need to be considered when attempting to transfer drought resistance traits. Furthermore, excessive wax production might have negative effects on plants because of the high amount of carbon resources that need to be redirected from seeds to leaves, and because of the reduced CO2 permeability of the wax-covered leaves. As such, it is essential that any biotechnologically induced increase in wax production occurs in specific cell types and in response to dehydration rather than constitutively.

Carbon Allocation

Plants are photosynthetic organisms able to fix atmospheric carbon into macromolecules essential for growth and survival. Thus, it is evident that carbon metabolism and allocation are highly regulated, and this regulation has a vital role in plant resilience to stresses and crop yield. In cereals, carbon is the main determinant of crop yield, and carbohydrates from cereals are the primary source of calories in the human diet (Lafiandra et al., 2014). One of the main pathways that regulates carbon allocation in plants is the trehalose 6-phosphate (T6P)/SNF1-related/AMPK protein kinases (SnRK1) pathway. T6P, the phosphorylated intermediate of trehalose biosynthesis, is present in trace quantities in plants and acts as a signal for sucrose levels. The T6P/SnRK1 pathway has been unraveled through studies in Arabidopsis that led to the identification and characterization of the TREHALOSE PHOSPHATE SYNTHASE (TPS) and TREHALOSE PHOSPHATE PHOSPHATASE (TPP) genes. This pathway has also been linked to auxin and ABA signaling. T6P is known to act as a signaling molecule during flowering, and Arabidopsis tps1 mutants are extremely late to flower (Wahl et al., 2013). Increasing the intracellular content of T6P is a well-known strategy for improving drought tolerance in plants (Romero et al., 1997).
While T6P is present in trace amounts in most temperate plants, it accumulates in resurrection plants (Wingler, 2002). However, manipulating T6P levels through the expression of T6P regulatory or biosynthetic genes under the control of strong constitutive promoters significantly alters plant growth and development, and might negatively affect crop yield (Guan and Koch, 2015). In rice, the overexpression of Escherichia coli T6P biosynthetic genes under the control of an artificial ABA-inducible promoter derived from the leaf-specific rbcS (RuBisCO) promoter avoided the negative effects of ectopic T6P biosynthetic gene overexpression. In a laboratory-scale experiment, the transgenic rice plants were drought tolerant, with improved photosynthetic activity and reduced photo-oxidative damage under drought conditions (Garg et al., 2002). In maize, the catabolic enzyme T6P phosphatase (TPP) has been specifically expressed in female floral components using the promoter of the rice floral gene Mads6 (MADS: MCM1, AGAMOUS, DEFICIENS, and serum response factor). This reduced the concentration of T6P in female reproductive tissues, increased the sucrose content in the whole developing spikelet, and affected the T6P/SnRK1 regulatory pathway. Effects on drought resistance were assessed in extensive field trials, and the yield was consistently improved in well-watered, mild, and severe drought conditions, with no obvious impact on plant or ear morphology (Nuccio et al., 2015). Crop yield and stress resilience have also been increased using chemical treatments that stimulate T6P production in Arabidopsis and wheat. As plants are impermeable to exogenous T6P, synthetic precursors were produced and used as treatments that triggered a light-inducible endogenous production of T6P (Griffiths et al., 2016). Trehalose was also shown to accumulate in the roots of plants with augmented brassinosteroid (BR) signaling, together with other osmoprotective sugars like sucrose and raffinose.
Furthermore, T6P-related gene expression was specifically upregulated in the root phloem cells (Fàbregas et al., 2018, Figure 1E). In turn, by mediating BR signaling, sugars act as signaling molecules in Arabidopsis to control different aspects of root system architecture, such as primary root elongation, lateral root development, and root directional responses (Zhang and He, 2015). Notably, manipulation of BR signaling results in an increase in osmoprotectant metabolites including proline, an amino acid long known for conferring drought and salinity tolerance (Kishor et al., 1995). Thus, accumulation of sugars and proline might also be a valid strategy to achieve dehydration tolerance in cereals, as shown by transgenic maize plants that constitutively express the vacuolar H+-pyrophosphatase (V-H+-PPase) gene TsVP from the halophyte Thellungiella halophila. In a small-scale field experiment, these transgenic plants showed a higher yield under drought conditions than the control plants (Li et al., 2008, Figure 2D). Altering sugar distribution via the T6P pathway is a promising biotechnological approach for producing drought-tolerant plants, with the best results being obtained when manipulation is directed to specific tissues like developing reproductive structures (Nuccio et al., 2015; Oszvald et al., 2018) and seeds (Kretzschmar et al., 2015; Griffiths et al., 2016). Notably, seed-specific manipulation of T6P might increase drought tolerance as well as resistance to flooding (Kretzschmar et al., 2015). As most plant sugar trafficking happens through the phloem, shoot and root vascular tissues are also candidate targets for T6P manipulation (Griffiths et al., 2016; Fàbregas et al., 2018).

Root Traits

Roots are the main plant organ dedicated to the uptake of water, and are the first place where a lack of water is perceived. As such, an abundance of studies have examined root responses to dehydration.
The most relevant root traits capable of improving drought tolerance and their biotechnological applications have recently been reviewed by Koevoets et al. (2016) and by Rogers and Benfey (2015), respectively. Here, we will focus on the solutions offered by manipulation of the BR pathway, and will provide a brief overview of the most promising biotechnological strategies aimed at improving drought resistance through manipulating root-related traits. BRs are a class of plant hormones that are widely involved in plant growth and development, as well as in stress responses. Along with other plant hormones, BRs play a key role in root growth. As BR levels are finely regulated to permit proper root development, BR metabolism and signaling are clear targets for the manipulation of root responses (Singh and Savaldi-Goldstein, 2015; Planas-Riverola et al., 2019). Indeed, exogenous application of BRs has been extensively tested on a variety of crops with variable outcomes (Khripach et al., 2000). However, from a genetic perspective, the only BR-related mutant widely used in agriculture is the barley uzu mutant, which carries a single amino acid substitution in the BR receptor HvBRI1 (the homolog of the Arabidopsis BR receptor BRASSINOSTEROID INSENSITIVE-1, BRI1) and displays a semi-dwarf phenotype (Chono et al., 2003). Recently, the triple mutant of wrky46, wrky54, and wrky70 (Arabidopsis group III WRKY TFs that act as positive regulators of BR signaling) was shown to be drought resistant. Because dehydration-induced genes are significantly upregulated and dehydration-repressed genes significantly downregulated in this mutant, these TFs operate as negative regulators of drought tolerance. BR biosynthetic dwarf and semi-dwarf mutants were also shown to be drought tolerant (Beste et al., 2011).
Somewhat in contrast with these results, it has recently been demonstrated that the overexpression of the vascular-specific BR receptor BRI1-LIKE 3 (BRL3) increases the survival rate of Arabidopsis plants exposed to severe drought stress. Interestingly, these transgenic plants do not show the reduced growth typically associated with drought-resistant BR mutants, and retain the same RWC as wild-type plants. As previously mentioned, these transgenic plants displayed an osmoprotectant signature (proline, trehalose, sucrose, and raffinose) in response to drought, with the corresponding biosynthetic and metabolic genes upregulated in the root phloem. This might suggest that BRs are involved in dehydration tolerance as well as in dehydration avoidance (Fàbregas et al., 2018, Figure 1F). BRs are also involved in hydrotropism, with the receptors BRI1-LIKE 1 (BRL1) and BRL3 having a prominent role that is independent of the canonical BRI1 pathway. Interestingly, BRL3 is structurally and functionally very similar to BRI1, but its expression is confined to the root stem cell niche while that of BRI1 is found ubiquitously in the root. This suggests that the BR-related drought responses in roots could be led by BR receptors in specific cells, such as the root meristematic region and vascular tissues (Fàbregas et al., 2018). In crops, enhancing the BR biosynthetic pathway was shown to improve both stress tolerance (including dehydration and heat stress) and seed yield in the oil crop Brassica napus (Sahni et al., 2016). Furthermore, in wheat, the overexpression of the BES/BZR family TF gene TaBZR2, a positive regulator of BR signaling, enhanced the expression of wheat glutathione S-transferase 1, TaGST1. These transgenic plants showed an increase in ROS scavenging and a drought-resistant phenotype without being dwarf (Cui et al., 2019). The seemingly opposed behavior of BR-engineered plants could be partly explained by the drought stress experimental setup.
Most BR biosynthetic and signaling mutants exhibit an evident dwarf phenotype. However, in drought survival experiments, dwarf plants often show a passive drought resistance phenotype, and it is challenging to dissect whether a phenotype is due to a direct genetic effect on drought-related gene expression, or whether it is a dehydration avoidance mechanism due to limited water consumption. Still, the manipulation of BR pathways retains its full potential with respect to the development of stress-tolerant varieties, particularly if directed to specific cell types to avoid unnecessary ectopic expression, and because of the involvement of these pathways in many agriculturally relevant traits such as grain shape and size, cell elongation and plant height, leaf angle, and root development (Espinosa-Ruiz et al., 2017; Martins et al., 2017; Tong and Chu, 2018). Unfortunately, translating root responses from Arabidopsis to cereals is particularly challenging as the root system and architecture differ greatly among plant species. Nevertheless, interesting results were obtained in cereals by engineering root responses. In rice, expressing the TF OsNAC5 under the control of the root-specific promoter RCc3 (Xu et al., 1995) improved drought resistance by increasing root diameter. Specifically, enlarged metaxylem vessels permitted a better water flux in the transgenic plants. The use of a tissue-specific promoter was paramount to the success of this experimental approach. Indeed, when expressed under a strong constitutive promoter, the same OsNAC5 was not able to increase yield under drought because of a reduction in the grain filling rate (Jeong et al., 2013, Figure 2E). Another example is the rice NAC family, which is well known for its effect on root architecture and stress responses.
Several genes of this family have been overexpressed (OsNAC9, Redillas et al., 2012) or expressed under the control of the root-specific promoter RCc3 (OsNAC10, Jeong et al., 2010), with similar effects on drought and stress tolerance. Another superfamily of stress-related TFs, the APETALA2/ethylene responsive element binding factors (AP2/ERF), has been extensively studied in an attempt to enhance root traits and achieve improved drought tolerance. AP2/ERF TFs participate in drought and cold stress responses (Shinozaki et al., 2003), and the overexpression of AP2/ERF genes increases stress tolerance in Arabidopsis (Haake et al., 2002). The Arabidopsis HARDY (HRD) gene, an AP2/ERF TF, was identified through a dominant mutant with increased root density, and its ectopic expression was able to improve the survival rate of both Arabidopsis and rice (Karaba et al., 2007). Ectopic HRD expression also alters leaf morphology, with thicker deep-green leaves in Arabidopsis and increased shoot biomass in rice contributing to an improved WUE of the transgenic plants. However, the increased WUE was measured as an increase in biomass, and no data regarding seed production and yield were reported. Similarly promising results were obtained in the fodder dicot Trifolium alexandrinum by constitutively expressing Arabidopsis HRD: these transgenic plants had a larger biomass under drought and salt stress conditions, as tested in a controlled environment and in field trials (Abogadallah et al., 2011). In bread wheat, the expression of the Arabidopsis AP2/ERF TF gene DREB1A under the control of the stress-inducible RD29A promoter delayed leaf wilting in a water-withholding experiment in a controlled environment (Pellegrineschi et al., 2004). In rice, the root-specific, drought-responsive AP2/ERF TF OsERF71 was cloned and expressed either in the whole plant using the rice GOS2 promoter (de Pater et al., 1992), or specifically in the roots using the RCc3 promoter.
Both transgenic lines proved to be drought resistant. In addition, the root-specific expression was able to improve grain yield in drought conditions. OsERF71 can bind to the promoter of the key lignin biosynthesis gene OsCINNAMOYL-COENZYME A REDUCTASE1, and it was proposed that changes in cell wall and root structure were the basis of the drought-resistant phenotype (Lee et al., 2016). However, OsERF71 overexpression has a much wider impact on plant transcriptional regulation; it induces the oxidative response and DNA replication, and reduces photosynthesis, thereby diverting more resources toward survival-related mechanisms (Ahn et al., 2017). Native OsERF71 expression is induced by ABA and, in turn, regulates the expression of ABA-related and proline biosynthesis genes in drought stress conditions (Li et al., 2018b). Another set of studies aimed at improving abiotic stress tolerance in rice found that the expression of the barley late embryogenesis abundant (LEA) protein HVA1 under the control of a synthetic ABA-inducible promoter enhanced root system expansion. LEA proteins are encoded by stress-responsive genes, and barley HVA1 and its rice homolog LEA3 are well known for being regulated in roots in response to ABA, salt, and abiotic stresses. LEA proteins could work as osmoprotectants by maintaining cell functionality and conferring dehydration tolerance. In this rice study, the synthetic promoter 3xABRC321, which carries a series of ABA-responsive elements, drove the expression of HVA1 in response to abiotic stress specifically in the root apical meristem and lateral root primordia. In turn, both primary and secondary root growth was significantly promoted through an auxin-dependent process. These transgenic rice plants showed a better WUE and abiotic stress tolerance in a small-scale field trial (Chen et al., 2015).
In rice, the QTL DEEPER ROOTING 1 (DRO1), which controls root angle, was studied using shallow- and deep-rooting cultivars, and was identified by developing a near-isogenic line homozygous for the allele conferring the deep-root trait. DRO1 is expressed in the root meristematic region, is controlled by auxins, and regulates the gravitropic response. The DRO1 deep-root allele from the cultivar Kinandang Patong (DRO1-kp) contains a 1-bp deletion that results in a premature stop codon, shortening the C-terminal domain of the protein that it encodes. DRO1-kp lines have an enhanced gravitropic response that leads to deeper roots and drought avoidance, and ultimately improves rice yield under drought conditions (Uga et al., 2013). Furthermore, as shown in paddy field trials, the yield of DRO1-kp lines is also improved in normal growth conditions (Arai-Sanoh et al., 2014). Although DRO1 does not have a clear homolog in Arabidopsis, it has homologs in other monocots like maize. The C-terminal position of the stop codon in the DRO1-kp allele makes DRO1 an ideal target for CRISPR-based gene targeting.

Genome Editing for Drought-Resistant Crops

During the past 10 years, genome editing technologies like zinc-finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), and homing meganucleases (also known simply as meganucleases) have enabled scientists to produce targeted genetic modifications in organisms of choice (Arnould et al., 2011; Bogdanove and Voytas, 2011; Carroll, 2011). Innovative cloning approaches like the Golden Gate system made the assembly of these tools more straightforward (Cermak et al., 2011); however, genome editing protocols were still relatively time consuming and labor intensive.
With the advent of the engineered clustered regularly interspaced short palindromic repeats/CRISPR-associated protein 9 (CRISPR/Cas9) system for targeted mutagenesis, genome editing became accessible to most research laboratories (Gasiunas et al., 2012; Jinek et al., 2012). In CRISPR-based genome editing, specificity for the target sequence is conferred by a programmable short fragment of RNA called the guide RNA (gRNA); the Cas9 protein itself does not require any structural modification to change target recognition, unlike ZFNs and TALENs, which must be re-engineered at the protein level for each new target (Cong et al., 2013). CRISPR/Cas9 is derived from a bacterial immune system against viral infections. It was first observed by sequencing the DNA of E. coli, where a short series of DNA repeats are separated by spacer sequences (Ishino et al., 1987). The spacer sequences are DNA from viral invaders that the bacteria store as a sort of immune memory (Mojica et al., 2000; Mojica et al., 2005). These CRISPR sequences are transcribed and processed into short CRISPR RNAs (crRNAs), which are composed of a variable spacer portion and a conserved repeat, and subsequently associate with a trans-activating CRISPR RNA (tracrRNA). The ribonucleoprotein complex composed of Cas9, crRNA, and tracrRNA is finally directed toward the invading DNA complementary to the spacer (Jansen et al., 2002; Bolotin et al., 2005; Pourcel et al., 2005). This system was engineered in such a way that the crRNA and tracrRNA were fused together into a single fragment, the gRNA. Simply modifying the 20 nucleotides corresponding to the spacer is sufficient to direct Cas9 to a sequence of choice (Cong et al., 2013). The implementation of CRISPR-based genome editing technologies in plant science opened up a wealth of opportunities to plant scientists and plant breeders alike (Shan et al., 2013). The most straightforward application of CRISPR/Cas9 is the production of out-of-frame loss-of-function mutants.
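Two of the mechanics just described, a 20-nt spacer programming Cas9 to a site immediately 5' of an NGG PAM, and a 1-bp deletion shifting the reading frame into a premature stop codon, can be illustrated with a short Python sketch. All sequences and helper names (`find_spacers`, `translate`) are invented for illustration and do not come from any real gene or published design tool:

```python
# Toy illustration of SpCas9 spacer selection and of how a 1-bp deletion
# produces an out-of-frame loss-of-function allele. Sequences are invented.

CODONS = {  # only the codons used in the demo; "*" marks a stop codon
    "ATG": "M", "AAA": "K", "CTG": "L", "GAT": "D", "TAA": "*", "TGA": "*",
}

def find_spacers(seq, spacer_len=20):
    """Candidate spacers on the forward strand: the 20 nt immediately
    5' of an NGG PAM. A real design tool would also scan the reverse
    complement and score off-targets."""
    seq = seq.upper()
    hits = []
    for i in range(spacer_len, len(seq) - 2):  # i = position of the PAM's 'N'
        if seq[i + 1:i + 3] == "GG":
            hits.append((i - spacer_len, seq[i - spacer_len:i]))
    return hits

def translate(cds):
    """Translate a coding sequence until the first stop codon;
    codons absent from the toy table become 'X'."""
    protein = []
    for i in range(0, len(cds) - 2, 3):
        aa = CODONS.get(cds[i:i + 3], "X")
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

wild_type = "ATGAAACTGAAAGATTAA"           # toy CDS: M K L K D stop
frameshift = wild_type[:6] + wild_type[7:]  # 1-bp deletion after codon 2
```

Here `translate(wild_type)` yields the full toy peptide "MKLKD", while the 1-bp deletion shifts the frame so that translation terminates early at an out-of-frame TGA, the molecular logic behind CRISPR-generated knockouts.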
Interestingly, loss-of-function mutations are the most frequent kind of genomic modification that occurred during the domestication of crops. In fact, from a genetic perspective, crop domestication was achieved by stacking loss-of-function mutations in key genes controlling traits like seed shattering, flowering time, seed color, or size (Meyer and Purugganan, 2013). By targeting these genes, scientists have been able to swiftly retrace thousands of years of crop improvement in a process known as de novo domestication (Zsögön et al., 2017; Li et al., 2018c). Ideally, this approach could assist with the rapid improvement of highly resilient, locally adapted species to obtain new commercially relevant crops that still retain the unaltered stress resistance characteristics of their wild relatives. For some crops, de novo domestication could be more efficient than breeding into modern commercial varieties, since stress resistance traits, often controlled by multiple, sometimes unknown, genes, have been lost during crop domestication. Genome editing technologies might also help speed up molecular breeding and crop improvement for so-called orphan crops, plants that are critical to local food security but less relevant on a global scale (e.g., sweet potato, chickpea, or sorghum) (Lemmon et al., 2018; Li et al., 2018a). The rationale behind these approaches is similar to that of the de novo domestication strategy: improving a resilient, locally adapted, and highly specialized crop might provide better results than attempting to restore stress tolerance in currently used elite varieties in which complex multigenic traits were lost over the domestication process (Khan et al., 2019). One of the main limits to genome editing in crops is plant transformation efficiency, which hampers the delivery of genome editing material into the target cells.
FIGURE 3 | A general frame for translating research in Arabidopsis thaliana to crops to improve drought performance in cereals. (A) Translation of promising genes/traits in crops: In recent years, the improvement of genome editing technologies has enabled targeted genetic modifications of organisms of choice, and has opened up a wealth of opportunities to plant scientists and plant breeders alike. Genome editing technologies might also help drive the development of more efficient crop transformation methods. (B) Development of cell-specific stress response promoters for monocots: It has been shown that the use of moderate constitutive, tissue-specific, and drought-responsive promoters could limit unintended pleiotropic effects in terms of growth or yield penalty while maintaining the improved trait (Waltz, 2015; Fàbregas et al., 2018; Papanatsiou et al., 2019). (C) Analysis of cell-specific drought responses: As of today, cell-specific promoters available for cereals are limited. By performing transcriptomic studies coupled to FACS (fluorescence-activated cell sorting) in the crops of interest, it could be possible to identify novel cell-specific promoters. Subsequently, these promoters could be exploited and introduced into crops. Drought stress research will greatly benefit from tissue-specific -omics. (D) Correct experimental design and phenotyping in Arabidopsis: Many Arabidopsis drought-stress experiments are performed without recording traits such as yield, plant biomass, relative water content (RWC), etc. Moreover, near-lethal conditions do not translate into crop performance in open fields. Future studies should consider all these aspects. Recording more detailed data, plus validating these promising results in crops, should be a priority for any research group working in Arabidopsis. (E) High-throughput plant phenotyping (HTPP) to corroborate Arabidopsis results and test novel crop genes: HTPP for drought has been implemented in Arabidopsis, and it is in development for many crops. HTPP will help increase and improve the reproducibility and quality of data from drought adaptation studies. The widespread adoption of HTPP platforms could represent a valid intermediate step between laboratory conditions and open, large-scale field trials.

For most wild or orphan species, genetic transformation has never been attempted, and in the remaining cases, current protocols are efficient only for a small subset of laboratory-amenable varieties. Nonetheless, the high potential of genome editing in plant sciences is driving the development of more efficient crop transformation methods (Figure 3A). DuPont Pioneer scientists have successfully used CRISPR/Cas9 to engineer drought tolerance by swapping the native promoter of the ARGOS8 gene for the promoter of maize GOS2. The maize GOS2 promoter was identified from the rice homolog GOS2 (de Pater et al., 1992), and in this case conferred moderate ubiquitous expression to ARGOS8. In field trials, these cis-genic lines showed increased yield under drought conditions (Shi et al., 2017). ARGOS genes were previously studied as negative regulators of ethylene signaling in both Arabidopsis and maize. The constitutive overexpression of ZmARGOS1, ZmARGOS8, and Arabidopsis ARGOS was shown to decrease ethylene sensitivity in transgenic Arabidopsis plants (Shi et al., 2015). ARGOS genes and their molecular mechanisms are conserved in both model and crop species. However, constitutively high expression had a negative effect in cool and high-humidity conditions. The use of the maize GOS2 promoter enabled a moderate constitutive expression that delivered drought resistance without affecting yield in normal or humid conditions (Shi et al., 2017).
This work shows that combining genome editing with promoters of tailored activity levels can provide the basis for successfully producing drought-resistant crops (Figure 2D).

Tissue-Specific Promoters to Drive Drought Tolerance

Basic plant science research, as well as most traditional breeding and biotechnological approaches, is based on loss-of-function or gain-of-function mutants, or on the constitutive expression of a gene conferring a certain trait. As an example, mutations in the MILDEW RESISTANCE LOCUS O (Mlo) genes confer broad-spectrum resistance against fungal pathogens to a large number of plant species, including major cereals like wheat and barley (Kusch and Panstruga, 2017). Similarly, Bt crops constitutively expressing bacterial toxins from Bacillus thuringiensis are used worldwide to protect crops from pathogens (ISAAA Brief, 2017). However, improving resistance to abiotic stresses does not seem to follow the same pattern (Todaka et al., 2015). Dehydration avoidance based on a reduction in the size or density of stomata comes with a growth or yield penalty (Bertolino et al., 2019). Manipulating hormone signaling pathways by ectopic overexpression of their components, or alternatively by knocking them out, often has undesired pleiotropic negative effects on overall plant growth and development. In contrast, it has already been shown in maize that the use of moderate constitutive promoters instead of strong promoters can limit these undesired effects while maintaining the improved trait (Shi et al., 2017) (Figure 2B). The use of tissue-specific promoters to drive gene expression in particular cells upon drought stress stands as a promising solution to break the deadlock between drought resistance and yield penalties. Several emerging studies show that when a tissue-specific promoter is used, it is possible to reap the benefits of the expressed genes while avoiding any major alteration to the overall plant phenotype.
This is the case for the guard cell-specific promoter pMYB60, which was used to express the synthetic protein BLINK1 in stomata (Papanatsiou et al., 2019), and the rbcS leaf-specific promoter that was used to express T6P biosynthetic genes in leaves (Garg et al., 2002). Similarly, the use of stress-inducible promoters has proven to be an effective strategy to improve drought performance without penalizing yield in soybean and wheat (Waltz, 2015; Gonzalez et al., 2019). Importantly, these transgenic plants were tested extensively in field trials, and the transgenic HB4 soybean is one of the very few biotechnologically improved drought-resistant plants ever to be introduced on the market (ISAAA, 2019). Our recent findings revealed that the overexpression of the vascular AtBRL3 receptor confers drought tolerance without any evidence of growth penalty (Fàbregas et al., 2018), and thereby opens up new and exciting possibilities to address the societal demand for producing "more crop per drop" and to ensure global food security goals in the upcoming years. It would be interesting to assess the effect of expressing the drought resistance genes that have been isolated over the years under the control of a promoter that is both stress-inducible and cell-specific, similarly to what was demonstrated in rice by expressing the barley LEA protein HVA1 under the control of a synthetic promoter (Chen et al., 2015). The main drawback of this approach is the limited availability of crop promoters that allow such specific gene expression. This hurdle could be overcome by performing transcriptomics in crops under normal and stress conditions in a way that accurately differentiates between tissues. For example, in a study performed in rice, metabolomic and transcriptomic profiling was performed using samples representing developed leaves and the SAM region exposed to progressively harsher drought conditions, and different responses from the plant were recorded.
Mild stress induced stomatal responses and decreased auxin and CK levels, and thus plant growth, while more severe stress resulted in the production of ABA and the remobilization of sugars (Todaka et al., 2017). Differentially regulated genes identified by this and similar studies will hopefully lead to the isolation of tissue-specific, drought-responsive promoters. Species-specific responses were also observed in this study; in contrast to what was previously reported in Arabidopsis, moderate drought stress did not activate ethylene-responsive genes in rice (Skirycz et al., 2011). Thus, as drought response pathways might differ significantly from those of Arabidopsis, it will be important to perform transcriptomic studies directly on the crops of interest. This might also help identify novel, species-specific components of the drought response. Once a sufficient number of promoters are identified and tested in crops, a virtuous circle might be triggered in which transgenic cereals expressing tissue-specific markers would enable tissue-specific transcriptomics. This in turn could lead to the discovery of novel, cell type- and response-specific promoters that might provide innovative solutions to plant biotechnologists. Using fluorescence-activated cell sorting (FACS), a large number of plant seedlings expressing cell type-specific fluorescent markers could be grown under the desired experimental conditions, and protoplasts then prepared and sorted by flow cytometry to collect cells for -omics studies (Birnbaum et al., 2005). In Arabidopsis, methods to perform RNA-seq transcriptomics with as few as 40 cells isolated using FACS have been developed (Clark et al., 2018). In fact, efficient protoplast preparation protocols that enable quick preparation and sorting of protoplasts, avoiding major transcriptional changes, are already available for rice, and FACS has already been used to study stress responses in this crop (Evrard et al., 2012).
Similarly, a protocol to prepare protoplasts for FACS has already been developed for maize (Ortiz-Ramirez et al., 2018). An alternative approach for single-cell isolation is INTACT (isolation of nuclei tagged in specific cell types), where cell type-specific nuclei are isolated by affinity purifying a transgenic label targeted to the cell nucleus (Deal and Henikoff, 2011). INTACT does not require specialized instruments for cell sorting and might be preferred for chromatin studies (Deal and Henikoff, 2010). However, so far it has been tested exclusively on the model plant Arabidopsis. Both methods need transgenic plants expressing a fluorescent marker or a nuclear protein label. In cereals, depending on the species, plant transformation might still be challenging, and a protoplast preparation protocol might be tedious, time-consuming, and may significantly alter the transcriptional responses. In fact, so far most of the tissue-specific studies performed in monocots have used laser capture microdissection to isolate tissues. However, it is expected that cereal-adapted protocols will soon be developed to enable advanced transcriptomics. Regardless of the method of choice, drought stress research in cereals will greatly benefit from tissue-specific -omics, especially considering how just a few relevant cell types appear to control major responses (Efroni and Birnbaum, 2016) (Figure 3C). In Arabidopsis, this field of research is quickly developing and has led to even more exciting approaches that allow high-throughput single-cell RNA sequencing (scRNA-seq). In scRNA-seq, protoplasted cells are encapsulated in individual droplets and each cell transcriptome is individually analyzed. Cells are then bioinformatically organized into tissues and cell types based on the presence of marker genes, which in the case of Arabidopsis, are well established for each cell type. In turn, novel, highly specific marker genes can be identified.
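The final bioinformatic step described above, assigning each cell to a tissue or cell type from marker-gene expression, can be caricatured with a simple score-and-argmax rule. The marker sets and counts below are invented for illustration; real pipelines use clustering plus statistical enrichment rather than a direct lookup:

```python
def assign_cell_type(expression, markers):
    """Assign each cell to the type whose marker genes have the highest
    mean expression in that cell (ties broken alphabetically).
    expression: {cell_id: {gene: count}}; markers: {cell_type: [genes]}."""
    calls = {}
    for cell, profile in expression.items():
        scores = {
            ctype: sum(profile.get(g, 0) for g in genes) / len(genes)
            for ctype, genes in markers.items()
        }
        # Pick the best-scoring type; alphabetical order breaks ties.
        calls[cell] = min(scores, key=lambda t: (-scores[t], t))
    return calls
```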
This scRNA-seq approach is called Drop-seq; while it was initially developed for animal cell studies (Macosko et al., 2015), it has been adapted for Arabidopsis root cells to study development (Denyer et al., 2019; Ryu et al., 2019) and responses to treatments (Shulse et al., 2019).

Cereal Transformation

With the notable exceptions of rice and maize, for which transformation efficiencies can reach up to 100% and 70%, respectively, plant transformation is notoriously challenging in cereal crops and involves time-consuming protocols that often need to be performed by highly skilled technicians. Even when efficient protocols have been developed for the species of interest, high transformation efficiency is usually confined to just a few laboratory varieties (Harwood, 2012). In addition, as these varieties are often obsolete, an introgression program into current elite varieties must follow, further hampering the applicability of plant research in plant breeding. The advent of genome editing is rapidly altering this scenario. The wealth of opportunities opening up as a result of rapidly advancing CRISPR-based technologies is driving a new wave of technological development in plant transformation (Kausch et al., 2019). A recent breakthrough by DuPont Pioneer (now Corteva) scientists was achieved by applying morphogenic regulator genes like BABYBOOM (BBM) and WUSCHEL (WUS) as transformation adjuvants (Lowe et al., 2016). This ultimately optimized the idea of using growth-stimulating genes in plant transformation (Ebinuma et al., 1997). Co-delivery of BBM and WUS, either as proteins or coding sequences, together with the target sequences seems to considerably improve transformation efficiency in a number of notoriously recalcitrant species like sorghum and sugarcane, as well as in elite varieties of maize (Lowe et al., 2016; Mookkan et al., 2017).
Scientists at the University of California, Berkeley (USA), developed an interesting approach to plant transformation that is distinct from both Agrobacterium- and biolistic-based systems. In this novel approach, a DNA delivery system makes use of carbon nanotubes (Demirer et al., 2019). While transformation was achieved only transiently in leaves or protoplasts of the target plants, this method has the notable advantage of being genotype independent. The lack of transgene integration could actually represent a critical advantage when CRISPR/Cas9 genome editing is involved. Indeed, it has been proven that transient expression of Cas9 and gRNAs in the target cells is sufficient to produce stable and heritable edits (Zhang et al., 2016). The advantage of this approach is that the first generation of mutants after transformation can carry the desired modifications, with no need to segregate out the CRISPR material. This makes it attractive, especially for use in crops that are difficult or impossible to cross. Even though this approach is still in its proof-of-concept stage, it represents an innovative and potentially groundbreaking technology.

High-Throughput Plant Phenotyping for Drought Traits

Despite the vast amount of information that has been reported to date regarding drought in Arabidopsis, Bayer's (then Monsanto's) DroughtGard® maize, Verdeca's HB4 soybean and wheat, and Indonesian Perkebunan Nusantara's NXI-4T sugarcane are the only biotechnologically improved drought-resistant crops ever introduced onto the market. This gap can only partially be explained by societal and market opposition to genetically modified (GM) crops. The main hurdle in translating Arabidopsis-developed drought-resistant traits into crops is the fact that most of the laboratory-scale, Arabidopsis-based drought studies have limited data collection and phenotyping (Blum, 2014).
As a general example, most Arabidopsis drought-stress experiments are performed by suspending irrigation for an extended time (12-21 days) followed by re-watering. The survival rate (live/dead plants) is then measured a few days (2-7) afterward. In this set of experiments, data regarding soil moisture, plant biomass, RWC, and seed yield are often not recorded. These dehydration survival experiments in near-lethal conditions often do not translate into crop performance in open fields, and do not relate to improved yield under drought or normal conditions (Figure 3D). Furthermore, plants have evolved survival traits to maximize fitness when growth conditions are not ideal, often by decreasing total seed number to ensure the full viability of a limited number of seeds. However, the ultimate goal of plant breeding is to increase or secure plant yield and production, not plant survival (Skirycz et al., 2011; Blum and Tuberosa, 2018). Future experimental planning with more thorough data collection beyond mere survival rate, including yield evaluations, might overcome this limitation (Zhou et al., 2016). Validating the most promising results in a crop model should become the priority for any research group that works in Arabidopsis. Alternatively, collaborations between Arabidopsis and crop scientists and/or plant breeders should be established to streamline the translation of innovative biotechnological approaches for use in agricultural science. Furthermore, industrial partnerships with plant breeding companies or seed companies could provide both the means and the expertise to test engineered plants in extensive field trials, tests which would otherwise prove impractical or financially unsustainable in most research laboratories.
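Quantities such as relative water content (RWC) and per-plant seed yield, whose frequent omission is criticized above, are straightforward to compute and report alongside survival. A minimal sketch (field names are illustrative; RWC uses the standard formula RWC = (FW - DW)/(TW - DW) x 100):

```python
def relative_water_content(fresh_w, turgid_w, dry_w):
    """Relative water content (%): RWC = (FW - DW) / (TW - DW) * 100."""
    return 100.0 * (fresh_w - dry_w) / (turgid_w - dry_w)

def summarize_trial(plants):
    """plants: list of dicts with 'alive' (bool) and 'seed_yield_g' (float).
    Reports survival rate together with mean yield, not survival alone."""
    n = len(plants)
    survival = sum(p["alive"] for p in plants) / n
    mean_yield = sum(p["seed_yield_g"] for p in plants) / n
    return {"survival_rate": survival, "mean_yield_g": mean_yield}
```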
In parallel, the advent of high-throughput plant phenotyping (HTPP) platforms and the establishment of research infrastructure networks like the EPPN2020 (https://eppn2020.plant-phenotyping.eu/) will definitively help to increase and improve the reproducibility, quality, and quantity of data from drought adaptation studies. HTPP for drought responses has been implemented for Arabidopsis (Granier et al., 2006; Jansen et al., 2009; Skirycz et al., 2011; Fujita et al., 2018) and applied to drought research (Rosa et al., 2019). Outdoor or greenhouse HTPP facilities to study drought performance are being developed for crops (Fahlgren et al., 2015). These facilities are usually capable of capturing multispectral images of the plants, weighing the pots as a gravimetric, indirect measure of soil moisture, and differentially irrigating the plants to allow drought-exposed and control plants to be placed in the same environment. Generally, these systems are based either on a robotized apparatus that moves around the plants performing measurements and irrigation (Gosseau et al., 2019), or on a mobile device that scans plants for images while the pots stand on scales (Vadez et al., 2015). Alternatively, the pots are arranged on a conveyor belt system that transports the plants to watering or imaging stations, as in the APPP systems at IPK Gatersleben, Germany (Junker et al., 2014). As concerns over the reproducibility of results (Editorial, 2016) might deter plant breeders and investors, widespread adoption of HTPP could not only support the validity of experimental results, but also represent a valid intermediate step before large-scale field trials, especially when a series of different genotypes with comparable drought performances are involved (Figure 3E).

SUMMARY

In this review, we highlight that many physiological mechanisms underlying drought-resistance traits are conserved between Arabidopsis and crops.
DE, control of flowering time, stomatal responses, T6P pathways, and some root traits are highly conserved among plants. Therefore, Arabidopsis is an excellent model to test drought-responsive strategies. Still, when studies performed in Arabidopsis reveal interesting agronomic potential, these results should promptly be translated into laboratory-amenable cereal crops like rice. On the other hand, traits like cuticular waxes, senescence, and stay-green might show significant differences that would need to be carefully assessed using a species-by-species approach. Nonetheless, Arabidopsis could still provide a useful heterologous system to test novel genes discovered in cereal species and their relative molecular responses. As a general frame to help translate research in Arabidopsis into crops, and with the ultimate goal of improving drought performance in cereals, we suggest the following measures be adopted: a) use an accurate experimental design in Arabidopsis; b) promptly translate promising genes/traits into model crops (i.e., rice); c) include HTPP to corroborate Arabidopsis results and to test novel crop genotypes; d) investigate tissue- and cell type-specific drought responses; and e) clone tissue- and cell type-specific, stress-responsive promoters for monocots and make them available to the entire scientific community. It is crucial to strengthen the bridges between Arabidopsis and crop scientists. Moreover, the coordination of research groups and institutes working with Arabidopsis and crop species at the same time will be important in facilitating this process. In addition, academia-industry partnerships could prove instrumental not only for rapidly scaling up promising results, but also for designing potential drought-resistance strategies that might have a high impact on global agriculture.
Degradation of Perfluorododecyl-Iodide Self-Assembled Monolayers upon Exposure to Ambient Light

Perfluorododecyl iodide (I-PFC12) is of interest for area-selective deposition (ASD) applications as it exhibits intriguing properties such as ultralow surface energy, the ability to modify silicon's band gap, low surface friction, and suitability for micro-contact patterning. Traditional photolithography is struggling to reach the required critical dimensions. This study investigates the potential of using I-PFC12 as a way to produce contrast between the growth area and non-growth areas of a surface subsequent to extreme ultraviolet (EUV) exposure. Once exposed to EUV, the I-PFC12 molecule should degrade with the help of the photocatalytic substrate, allowing for the subsequent selective deposition of the hard mask. The stability of a vapor-deposited I-PFC12 self-assembled monolayer (SAM) was examined when exposed to ambient light for extended periods of time by using X-ray photoelectron spectroscopy (XPS). Two substrates, SiO2 and TiO2, are investigated to ascertain the suitability of using TiO2 as a photocatalytic active substrate. Following one month of exposure to light, the atomic concentrations showed a more substantial fluorine loss of 10.2% on the TiO2 in comparison to a 6.2% loss on the SiO2 substrate. This more pronounced defluorination seen on the TiO2 is attributed to its photocatalytic nature. Interestingly, different routes to degradation were observed for each substrate. Reference samples preserved in dark conditions with no light exposure for up to three months show little degradation on the SiO2 substrate, while no change is observed on the TiO2 substrate. The results reveal that the I-PFC12 SAM is an ideal candidate for resistless EUV lithography.
Introduction

Modern nanoelectronics relies on top-down patterning methods involving a repetitive sequence of deposition, photolithography, and etching steps [1][2][3]. However, the industry has recently been facing significant challenges in keeping up with Moore's law [4,5] when using these conventional lithography techniques for patterning at critical dimensions. Traditional photolithography uses UV light to transfer patterns from a hard mask onto a photoresist-covered substrate, inducing a chemical change between the exposed and unexposed areas [6]. Hard masks are built from materials with high etch contrast relative to the underlying stack. Subsequently, the exposed substrate can undergo additional processes, such as etching or ion implantation, to create the final structure. With each advancement in technology, the intricacy and number of these procedures grow, leading to significant challenges in terms of patterning techniques. Patterning at scales smaller than 10 nm using these top-down techniques faces a number of difficulties, including edge placement errors, decreasing throughput, complexity, pattern collapse, and photoresist non-uniformity [7][8][9][10][11][12][13].
Due to these issues, there is a push towards using extreme UV photolithography. EUV refers to radiation at 13.5 nm (92 eV) [14]. These high-energy photons possess significantly more energy than those used in standard UV photolithography, allowing for finer resolution to reach the smaller critical dimensions [15]. The move to highly energetic EUV processes opens up the possibility for different chemical reactions to happen during exposure. As EUV sources have a lower power than previous sources, their flux is also lower. Traditional photoresists are not fully compatible with this process; therefore, alternative materials with a higher EUV absorption cross-section must be studied [16]. Iodine readily absorbs EUV photons, so incorporating halogens such as this into photoresist materials can increase EUV absorption [17]. Kosto et al. found that a substitution of one hydrogen atom on a 2-methylphenol (MPh) molecule for an iodine atom led to a 4.6-fold increase in the EUV photoabsorption cross-section [18].
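The 92 eV figure quoted for 13.5 nm EUV radiation follows directly from E = hc/λ. A quick check, using the standard value hc ≈ 1239.84 eV·nm:

```python
PLANCK_EV_NM = 1239.842  # h*c in eV·nm (CODATA-derived constant)

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV from wavelength in nm: E = hc / lambda."""
    return PLANCK_EV_NM / wavelength_nm

# EUV at 13.5 nm carries ~92 eV per photon, versus ~6.4 eV for a
# 193 nm ArF deep-UV photon.
```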
At the same time, there is interest in developing and moving towards bottom-up deposition methods like area-selective deposition (ASD), removing the need for multiple photolithographic steps [19]. ASD enables material deposition on predefined patterns after altering the local surface chemistry [7,20,21]. Atomic layer deposition (ALD), generally utilized to deposit the material of interest, is a cyclic process that grows films through successive pulses of a metal precursor with a co-reactant, such as water, in a layer-by-layer manner [22]. Selectivity can be achieved via the passivation of certain areas of the surface through the use of self-assembled monolayers (SAMs) [23]. The SAMs preferentially adhere to one area or material of a patterned surface, called the non-growth area. They act as both a physical and chemical barrier to block any subsequent deposition on this area while still allowing growth on other areas or materials on the surface [7]. SAMs are compact organic monomolecular layers that spontaneously adsorb on a surface, showing large-scale ordering via van der Waals forces once deposited [24]. SAMs comprise a head group with a strong affinity for the substrate, a backbone chain, and a terminal functional group. SAMs bond to the surface via their head group, with common head group/substrate pairs including alkane-thiols on gold and other noble metals [25][26][27], silanes on silicon dioxide (and some metal oxides), and phosphonates on metal oxides [28]. SAMs also offer a diverse array of functionalities, for example, modifying surface wettability, corrosion resistance [29], adhesion, friction [30], conduction [31], and biocompatibility. One of the many attributes of SAMs for ASD is that they can be easily patterned using soft lithography [32,33], and they have been used in many applications, from arrays of single cells to open microfluidics [34][35][36].
Recently, a novel method was introduced for the selective deposition of a hard mask layer within the growth region using ASD. This approach is employed to differentiate between the growth and non-growth regions following exposure of silane-based SAMs to EUV photolithography. To overcome the issues associated with photoresist, the study deposited these SAMs onto a TiO2 substrate, which is photocatalytic in nature. The photoactive surface aided the decomposition of the SAMs when exposed to EUV, thus producing a contrast in the exposed region [37]. Perfluorododecyl iodide (I-PFC12), as shown in Figure 1, is a SAM composed of an iodine head group and a fluorocarbon backbone chain, making it a promising new candidate for this method. I-PFC12 exhibits intriguing properties, including ultralow surface energy due to a high fluorine content, the ability to modify silicon's band gap, low surface friction, and suitability for micro-contact patterning [38]. As a result of the iodine atom, halogen bonding is expected to be the primary driver of the initial adsorption of I-PFC12, while dispersion forces play a key role in ensuring the long-term stability of the monolayers [38,39]. Halogen bonding is a non-covalent interaction between an electron-deficient halogen atom (often iodine) and a nucleophile or electron-rich species (often oxygen or nitrogen) [40][41][42][43].
This work investigates the stability of I-PFC12 SAMs on two different substrates, SiO2 and TiO2, when exposed to ambient light for different exposure times: from twenty-four hours up to one month. Even though the C-F bond is one of the strongest bonds in organic chemistry, environmental science studies show the degradation and defluorination of per- and polyfluorinated chemicals in the presence of TiO2 and TiO2-based photocatalysts [44,45]. Testing the stability of I-PFC12 SAMs could help determine their suitability for ASD hard mask applications, removing the need for photoresist materials. Making use of the photocatalytic nature of TiO2, it is observed that the SAMs degrade via defluorination. Initially, the optimum vapor deposition parameters of the I-PFC12 were investigated using water contact angle (WCA) and spectroscopic ellipsometry (SE) to determine the hydrophobicity and film thickness of the SAM layer. Changes in the elemental composition of the I-PFC12 exposed to ambient light over time were investigated using X-ray photoelectron spectroscopy (XPS), along with any changes in the bonding environment. It was observed that over time and with exposure to ambient light conditions, fluorine decreases on both substrates, with a more pronounced decrease in the case of TiO2. Owing to the photocatalytic nature of TiO2, the SAMs degraded more quickly on this substrate, making it an ideal choice for hard mask applications. Halogen bonding between the iodine head group and the OH-terminated substrate was investigated; however, no iodine was observed for either substrate. Interestingly, when stored in a dark container with no ambient light, the I-PFC12 SAMs showed little sign of degradation. Although iodine was not observed on either substrate in this study, the SAMs still degraded under ambient light, demonstrating a promising result for the use of I-PFC12 SAMs in EUV hard mask applications.
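XPS atomic concentrations, such as the fluorine losses reported in this work, are conventionally derived from measured peak areas divided by relative sensitivity factors (RSFs) and normalized over all detected elements: C_i = (A_i/S_i) / Σ_j (A_j/S_j) × 100. A sketch with purely illustrative peak areas and RSFs:

```python
def atomic_concentrations(peak_areas, sensitivity):
    """XPS atomic concentration (%) per element:
    C_i = (A_i / S_i) / sum_j (A_j / S_j) * 100,
    where A is the peak area and S the relative sensitivity factor."""
    normalized = {el: peak_areas[el] / sensitivity[el] for el in peak_areas}
    total = sum(normalized.values())
    return {el: 100.0 * v / total for el, v in normalized.items()}
```

Comparing such compositions before and after light exposure yields the kind of per-element loss figures quoted in the abstract.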
Substrate Preparation

In this study, 300 mm silicon wafers provided by SunEdison Semiconductors were used as the starting substrate for all samples. The Si substrates contained ~1.5 nm of native oxide. A crystalline TiO2 layer measuring 7.5 nm in thickness was deposited via ALD using titanium isopropoxide (Ti(OMe)4) as the precursor and water (H2O) as the co-reactant. This deposition process was conducted at a temperature of 300 °C. Further details of the substrates and the TiO2 deposition process can be found in a previous study [37].

Both the TiO2 and the SiO2 were UV ozone-cleaned in a Jelight UV ozone cleaner for 15 min prior to SAM deposition to remove any surface contaminants and to leave the surface rich in hydroxyl groups for the SAMs to adhere to. I-PFC12 SAMs were deposited on both TiO2 and SiO2 in the vapor phase in a dedicated Heratherm OM180 oven procured from Thermo Scientific, at a vacuum pressure of 9-13 mbar.
Multiple depositions were performed to assess the optimum deposition time and temperature yielding the I-PFC12 layer with the best quality. WCA and SE were used to characterize the surface hydrophobicity and thickness, respectively. To evaluate the deposition kinetics of the SAMs, WCA and thickness were measured for deposition temperatures in the range 100-150 °C and deposition times of 1-2 h. The best-quality films were obtained at 120 °C for SiO2 and 100 °C for TiO2, with a deposition time of two hours on both.

SAM Characterization

A DataPhysics static water contact angle system was used to assess the hydrophobicity of the deposited SAM layer. The measurements were performed ex situ using de-ionized water, with a drop size of 2 µL and a dispensing speed of 1 µL/s. The WCA value was extracted by fitting in the SCA 20 software. UV ozone-cleaned TiO2 and SiO2 reference substrates were measured directly after pre-treatment; they showed a characteristic hydrophilic WCA of <10°. The WCAs of the SAM-covered samples were compared to these reference samples: an increase in hydrophobicity was taken as evidence of SAM deposition.

The thickness of the deposited SAM layer was measured ex situ using a J. A. Woollam RC2 spectroscopic ellipsometer. The data were recorded at three incident angles, 65°, 70°, and 75°, with respect to the sample normal, within a wavelength range of 200-2500 nm and with an acquisition time of 5 s/angle. The beam divergence was 0.4°, with a beam diameter of 3-4 mm. A model was fitted to the reference SiO2 and TiO2 substrates to find the thickness of the oxide layers. Once this was carried out, a Cauchy model was fitted to determine the thickness of the SAM layers.
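The Cauchy model mentioned above describes the film's refractive index as a slowly varying function of wavelength, n(λ) = A + B/λ² + C/λ⁴. A minimal sketch of fitting the first two Cauchy terms to a dispersion curve; all numerical values here are illustrative, not this study's data:

```python
import numpy as np

def cauchy_n(lam_um, A, B):
    # Two-term Cauchy dispersion: n(lambda) = A + B / lambda^2 (lambda in micrometres)
    return A + B / lam_um**2

# Illustrative "measured" dispersion for a thin organic film
lam = np.linspace(0.3, 1.0, 50)          # wavelengths in micrometres
n_meas = cauchy_n(lam, 1.45, 0.004)      # synthetic data with known A and B

# n is linear in (1/lambda^2), so a straight-line fit recovers B (slope) and A (intercept)
u = 1.0 / lam**2
B_fit, A_fit = np.polyfit(u, n_meas, 1)
```

In practice, the ellipsometer software fits the Cauchy parameters and the film thickness simultaneously against the measured Ψ and Δ spectra; this sketch only illustrates the dispersion model itself.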
Following the deposition of I-PFC12 SAMs on the two different substrates, their stability in ambient light was investigated using XPS. The samples were cleaved from larger 3 cm² coupons into 1 cm² coupons for ultrahigh-vacuum XPS analysis. Once cleaved and before XPS characterization, the samples were either stored in a dark container or left for a set length of time under ambient conditions. The base pressure of the XPS system was typically ~3 × 10⁻⁹ mbar. Measurements were recorded using the Al Kα (hν = 1486.6 eV) anode of a non-monochromatic PSP CTX400 flood gun X-ray source with a PSP HA50 energy analyzer, at a pass energy of 20 eV for core-level scans and 90 eV for survey spectra. The angle of the X-ray source radiation and the analyzer were both 54° with respect to the sample normal. Peak fitting was performed using the AAnalyzer peak-fitting software, version 2.25. A Voigt peak, which is a combination of Gaussian and Lorentzian line shapes, with a Shirley-Sherwood-type background was used to fit the spectra [46]. The C 1s, F 1s, and O 1s peaks were fit with Voigt singlet peaks, and the Si 2p and Ti 2p with Voigt doublet peaks. The spectra of the SiO2 substrate were referenced to the Si-Si peak at 99.1 eV, and those of the TiO2 substrate to the TiO2 peak at 458.8 eV, essentially using each underlying substrate for internal calibration.
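The fitting procedure described above, mixed Gaussian/Lorentzian line shapes on a Shirley-type background, can be sketched in outline. The pseudo-Voigt below (a weighted Gaussian/Lorentzian sum) is a common stand-in for the true Voigt convolution, and the spectrum is synthetic, not fitted to this study's data:

```python
import numpy as np

def pseudo_voigt(x, center, fwhm, eta, amplitude=1.0):
    """Weighted sum of a Lorentzian (eta) and a Gaussian (1 - eta) of equal FWHM."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gauss = np.exp(-0.5 * ((x - center) / sigma) ** 2)
    lorentz = 1.0 / (1.0 + ((x - center) / (fwhm / 2.0)) ** 2)
    return amplitude * (eta * lorentz + (1.0 - eta) * gauss)

def shirley_background(y, n_iter=100):
    """Iterative Shirley background for a spectrum ordered low -> high binding energy.

    The background at each point is proportional to the integrated peak area on the
    lower-binding-energy side, so it steps up across each peak and meets the
    spectrum at both endpoints.
    """
    y = np.asarray(y, dtype=float)
    b = np.full_like(y, y[0])
    for _ in range(n_iter):
        peak = np.clip(y - b, 0.0, None)
        cum = np.cumsum(peak)
        if cum[-1] == 0.0:
            break
        b = y[0] + (y[-1] - y[0]) * cum / cum[-1]
    return b

# Synthetic C 1s-like region: a single peak sitting on a Shirley-type step
x = np.linspace(280.0, 296.0, 401)                       # binding energy (eV)
y = 1.0 + 0.8 / (1.0 + np.exp(-(x - 288.0))) \
    + pseudo_voigt(x, 288.0, 1.2, 0.3, amplitude=5.0)
bg = shirley_background(y)
area = float(np.sum(y - bg) * (x[1] - x[0]))             # background-subtracted peak area
```

Dedicated packages such as AAnalyzer or lmfit handle the simultaneous optimization of peak positions, widths, and background; this sketch only shows the two ingredients the text names.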
XPS measurements were also scrutinized for any beam damage from the X-ray source. This was carried out by first recording a set of survey scans, followed by the core-level scans, and then a final set of survey scans. This allowed for approximately 2-2.5 h of X-ray exposure between the initial and final survey scans, enough time for any damage to become observable. The atomic concentrations from the final scans were then compared to the initial survey scans. After SAM deposition, the samples were placed into a dark container to ensure there was no exposure to ambient light, and were scanned as quickly as possible after deposition. Three different samples were then left in ambient light for different durations: 24 h, one week, and one month. Samples of I-PFC12 on both SiO2 and TiO2 were set aside, left sealed in the dark container, and scanned after one and three months. Since the samples were cleaved from larger coupons, small variations in atomic concentration were observed due to regions of increased SAM density. Despite these variations, there is little impact on the overall trends of atomic concentrations across the surfaces.
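Atomic concentrations like those compared between survey scans are obtained by normalizing each core-level peak area by its relative sensitivity factor (RSF). A minimal sketch; the peak areas and RSF values below are illustrative placeholders, not this study's calibration:

```python
def atomic_percent(areas, rsf):
    """areas, rsf: dicts keyed by core level. Returns atomic % normalized to 100."""
    corrected = {k: areas[k] / rsf[k] for k in areas}
    total = sum(corrected.values())
    return {k: 100.0 * v / total for k, v in corrected.items()}

# Illustrative survey-scan peak areas (arbitrary units) and sensitivity factors
# (RSFs of this Scofield-like form are instrument-dependent; values are examples only)
areas = {"C 1s": 1200.0, "F 1s": 9000.0, "O 1s": 6100.0, "Si 2p": 1900.0}
rsf = {"C 1s": 1.0, "F 1s": 4.43, "O 1s": 2.93, "Si 2p": 0.82}

at_pct = atomic_percent(areas, rsf)
```

Because the percentages are normalized to the detected elements only, trends between repeated scans (as in the beam-damage check) are meaningful even when absolute calibration is uncertain.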
Optimal I-PFC12 SAM Deposition

I-PFC12-derived SAMs were deposited on both TiO2 and SiO2 surfaces via the vapor-phase technique to be more compatible with current integration schemes used in the industry. For this, 100 mg of SAM precursor was deposited at temperatures varying between 100 °C and 150 °C with deposition times of one and two hours (deposition for one hour at 100 °C is not included, as an hour was not a sufficient length of time for the molecules to adsorb onto the surface at such a low temperature). Figure 2a,b illustrates the WCA of I-PFC12 deposited on SiO2 and TiO2 surfaces, respectively. A static contact angle test was performed to assess the hydrophobicity of the surface. It is notable that at a deposition time of one hour, the WCA varies considerably depending on the deposition temperature; at two hours, however, a consistent WCA is observed across all deposition temperatures. When examining the SAM thicknesses depicted in Figure 2c, it becomes apparent that the thickness is greater after the two-hour deposition period. The optimized conditions for SAM deposition on SiO2 were a deposition temperature of 120 °C for two hours, resulting in a WCA of 64.9° ± 0.3° and a thickness of 0.65 nm. For TiO2 surfaces, the optimized conditions were a deposition temperature of 100 °C for two hours, yielding a WCA of 93.9° ± 2° and a thickness of 0.69 nm.
I-PFC12 Degradation under X-rays

To assess what effect, if any, the X-rays induced on the I-PFC12 SAMs, samples from both substrates were exposed to X-rays for a period greater than two hours. Table S1 in the Supplemental Information (SI) displays the atomic concentrations for the I-PFC12 SAMs on the SiO2 and the TiO2 substrates. Five individual survey scans were recorded, followed by core-level scans; five more survey scans were then repeated, which amounted to approximately 2-2.5 h of X-ray exposure. The atomic concentrations of I-PFC12 on SiO2 showed a small decrease in the F 1s signal between survey one and survey five, reducing from 18.9% to 16.5%. Subsequent X-ray exposure did not appreciably decrease the F 1s signal: after survey ten, the F 1s content was still 16.2%, showing that any decrease happens during the initial exposure. This points to the desorption of unreacted I-PFC12 molecules from the deposition process that may be present on the surface [47]. Once these molecules have been removed, the I-PFC12 SAMs are stable, even after more than two hours of exposure to X-rays. Similarly, the SAM on TiO2 follows the same trend and shows a 2% decrease in F 1s, from 18.5% in the initial survey scan to 16.5% in the final survey scan. Again, this is attributed to the desorption of unreacted SAM molecules. As all coupon-sized samples were cleaved from larger wafers, some variation was observed on a small number of samples; this could be due to areas where the SAM is less dense. However, the overall trends observed are the same, even if the atomic concentrations vary slightly.
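The beam-damage check described above, an initial drop followed by a stable signal, can be expressed as a simple screening rule over the repeated survey scans. A sketch using the F 1s concentrations quoted for SiO2 (18.9% → 16.5% → 16.2%); the 0.5% threshold is an illustrative choice, not the study's criterion:

```python
def classify_beam_response(f_series, threshold=0.5):
    """Classify successive F 1s atomic % readings from repeated survey scans.

    Returns 'stable', 'initial desorption' (an early drop only, then stable),
    or 'continuous damage' (the signal keeps falling beyond the first interval).
    """
    drops = [a - b for a, b in zip(f_series, f_series[1:])]
    if all(abs(d) < threshold for d in drops):
        return "stable"
    if abs(drops[0]) >= threshold and all(abs(d) < threshold for d in drops[1:]):
        return "initial desorption"
    return "continuous damage"

# F 1s atomic % at survey 1, survey 5, and survey 10, as quoted in the text
verdict = classify_beam_response([18.9, 16.5, 16.2])
```

The "initial desorption" outcome matches the interpretation above: loosely bound, unreacted I-PFC12 molecules desorb early, after which the chemisorbed SAM is stable under the X-ray beam.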
Interestingly, an I 3d peak was not observed on any sample or substrate at any time during the experiment, as shown in Figure S1 in the Supplementary Materials. To maximize the probability of observing the I 3d peak, it was scanned for first on all samples; however, it became clear that it was not present. Next, a large number of scans (50) were recorded, given the low signal that would be expected from the I 3d, but still no iodine was observed. Single scans and a low number of scans (five) were also performed in case the iodine was degrading quickly under X-ray exposure; again, no iodine was observed. It is unclear whether the iodine degrades under X-rays, diffuses upon exposure to air, or a combination of both. Although there was no evidence of iodine, the monolayer remained on the surface. A previous study by Shou et al. suggests that the monolayer forms due to halogen bonding between the iodine atoms in I-PFC12 and the oxygen atoms on the SiO2 surface, with dispersion forces helping to stabilize the monolayer. Although they did not have conclusive evidence of halogen bonding, they gave a detailed explanation as to why halogen bonding is believed to be responsible [38].
I-PFC12 Deposited on SiO2

Table 1 shows the atomic concentrations taken from the survey spectra of the I-PFC12 deposited on SiO2 when exposed to ambient light over time. A steady decrease in the F 1s from 19.1% to 12.9% is observed after one month of exposure to ambient light. This decrease in F 1s corresponds to an increase in C 1s intensity, with an initial concentration of 10.6% increasing to 14.2% after one month. A 3% increase in the O 1s is also observed during this time. This suggests that as the fluorine is removed, there is an increase in the carbon signal from the fluorocarbon chain upon exposure to ambient light. Furthermore, the expected C:F ratio is 12:25; in this work, it was found experimentally to be 10:19 on the SiO2 substrate, close to the expected value. The initially higher concentration of carbon indicates the presence of adventitious carbon upon atmospheric exposure. This ratio changes over time with exposure to ambient conditions, increasing to 14:13 after a month of ambient light exposure. The change in ratio again points to the decomposition of the perfluorocarbon chain when exposed to ambient light.

Figure 3a shows the unnormalized data for the C 1s, and Figure 3b the F 1s core scans, after different durations of ambient light exposure. The C 1s contains four component peaks in the as-received, twenty-four-hour, and one-week samples: C-C/C-H at 284.8 eV, C-O/C-CFx at 286.2 eV, CF2 at 291.6 eV, and CF3 at 293.4 eV. All these peaks are consistent with the previous literature [48-51]. The peak at 286.2 eV was assigned to both C-O and C-CFx bonds because the sample was exposed to the atmosphere, meaning that the presence of C-O bonds cannot be discounted. After only twenty-four hours of ambient exposure, a decrease in the CF3 component peak was visible, corresponding to a very slight increase in the C-O/C-CFx component peak. Very little change is observed between the twenty-four-hour and one-week exposures, with the C 1s spectra looking almost identical for the two samples. However, after one month of ambient exposure, the CF3 component peak disappeared from the spectrum, indicating the complete removal of the CF3 bonds. Interestingly, there is also the emergence of a CHF-CHF component peak at 287.9 eV. It is postulated that this peak emerges as the CFx bonds in the fluorocarbon chain begin to break down. There is no evidence to suggest that I-PFC12 oxidizes over this time in either the C 1s or the O 1s (Figure S2a in the Supplementary Materials). The C 1s spectra show no evidence of the C-I bond between the head group of the SAM and the carbon chain.

The F 1s core-level spectra shown in Figure 3b contain two component peaks: CF2/CF3 at 688.7 eV and C-CFx/CHF at 687.3 eV [52]. The different CF bonds are not as distinguishable in the F 1s as in the C 1s, as can be seen from the very broad F 1s envelope. As a result, the higher-binding-energy peak has been assigned to both CF2 and CF3 bonds, while the lower-binding-energy peak is thought to be a convolution of C-CFx and CHF bonds. After only twenty-four hours of ambient exposure, a slight decrease in the intensity of the CF2/CF3 component peak was observed, mirroring the decrease in the CF3 component peak seen in the C 1s. After one week, a change in the ratio between the two component peaks starts to become visible as the CF2/CF3 component peak decreases; it is suggested that the CF3 bonds degrade under ambient conditions, leaving C-CFx bonds behind. Following one month of ambient exposure, there is an obvious difference in the ratio between the two peaks: the higher-binding-energy peak has decreased in intensity while the lower-binding-energy peak has increased. This is further evidence of the degradation of the I-PFC12 SAMs, with the CF3/CF2 breaking down and forming C-CFx and CHF bonds. An overlay of all the F 1s peaks is displayed in Figure S3 of the Supplementary Materials, where the decrease in F 1s intensity is clearly shown with increasing exposure time to ambient conditions.
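The C:F ratios quoted above follow directly from the survey-scan atomic percentages. A quick check using the Table 1 values as quoted in the text (19.1% F and 10.6% C as received; 12.9% F and 14.2% C after one month):

```python
def f_per_c(f_pct, c_pct):
    # Fluorine atoms detected per carbon atom, from XPS atomic percentages
    return f_pct / c_pct

expected = 25 / 12                      # stoichiometric F:C for the C12F25 chain, ~2.08

ratio_initial = f_per_c(19.1, 10.6)     # as received, ~1.8 F per C
ratio_month = f_per_c(12.9, 14.2)       # after one month of ambient light, ~0.9 F per C
```

The drop from roughly 1.8 to 0.9 fluorines per carbon mirrors the defluorination seen in the core-level spectra, while the slight shortfall from the stoichiometric 25/12 even as received is consistent with adventitious carbon on the surface.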
The O 1s and Si 2p core-level scans are displayed in Figure S2 of the Supplementary Materials. The O 1s in Figure S2a predominantly consists of SiO2 at 532.4 eV, with evidence of some C-O bonds at 531.5 eV due to the presence of adventitious carbon. There does not appear to be any oxidation of the SAMs, as no evidence of additional C-O or F-O bond formation was seen in the spectra over the course of the experiment. The Si 2p core-level scans (Figure S2b) have two component peaks visible: bulk Si at 99.1 eV and SiO2 at 103.1 eV. The Si 2p does not show any interaction with the SAMs, the iodine head group, or any carbon or fluorine bonds.

Several samples were kept in dark containers in a cabinet, with no exposure to light, for up to three months. The atomic concentrations (Table 1) reveal little change in the F 1s after one month in dark conditions, with only a slight decrease from 19.1% to 18.6%, well within experimental error. This is consistent with the findings of Shou et al. [38]. Following three months in dark conditions, the loss of fluorine is just under 2%, with the F 1s concentration recorded at 17.3%. Figure 4a,b overlays the unnormalized C 1s and F 1s, as received and after three months in the dark. The C 1s shows a slight decrease in the C-F contribution to the higher-binding-energy peak, consistent with the loss of CF2/CF3 bonds, and a corresponding decrease in F 1s intensity was observed. Overall, no change in peak shape was observed, demonstrating that although CFx bonds were being removed, no new chemical states were created.
I-PFC12 Deposited on TiO2

Table 2 shows the atomic concentrations taken from the survey spectra of the I-PFC12 deposited on TiO2 when exposed to ambient light over time. A 10.2% decrease in the F 1s, from 17.6% to 7.4%, is observed after one month of exposure to ambient light. The C 1s increases over time from 11.56% to 14.4%. A 5% increase in the O 1s was also observed during this time, while the Ti 2p remained fairly constant. This suggests that the SAMs degrade with ambient exposure due to the removal of the fluorine atoms, which reveals more of the TiO2 substrate. As previously stated, the expected C:F ratio is 12:25; on the TiO2 substrate, it was found experimentally to be 12:17. This indicates that more carbon is present on the sample, pointing to the presence of adventitious carbon upon atmospheric exposure, similar to the case of the SiO2 substrate. This ratio changes over time with exposure to ambient conditions, increasing to 10:7 after a month of
ambient light exposure. The change in ratio again points to the decomposition of the perfluorocarbon chain when exposed to ambient light. The change in F content is more substantial on the TiO2 substrate than on SiO2, with a loss of 10.2% of the F 1s on TiO2 compared to 6.2% on SiO2. It is proposed that this sharper decrease in F 1s is due to the photocatalytic nature of TiO2. Figure 5a displays the C 1s, and Figure 5b the F 1s, for the I-PFC12 SAMs on TiO2. Four component peaks were identified in the C 1s and attributed to C-C/C-H at 284.8 eV, C-O/C-CFx at 286.2 eV, CF at 288.9 eV, and CF2 at 291.6 eV binding energies [48-50]. The two lower-binding-energy peaks and the CF2 component peak are consistent with the SAMs on SiO2; however, no CF3 component peak was detected for any I-PFC12 SAM on TiO2 that was characterized. Interestingly, a CF peak at a binding energy of 288.9 eV was observed for all samples [44,45].

Little difference was observed in the C 1s spectra between the as-received sample and the sample exposed to twenty-four hours of ambient conditions. Only a slight increase was observed in the CF and C-O/C-CFx peaks after one week. After one month, a decrease in the CF2 peak was observed, and the CF peak underwent a subtle increase; a change in intensity was also observed for the C-O/C-CFx peak. There was possible evidence of a C=C peak emerging on the lower-binding-energy side after one month of exposure. Due to the low signal-to-noise ratio, this peak has not been included in the peak fitting, but it could give valuable insights into how the I-PFC12 SAMs degrade. It is thought that the emergence of this peak is evidence of an intermediate stage in the SAM defluorination process; this is explained in more detail in the discussion section. The C 1s showed no evidence of a C-I bond between the head group of the SAM and the carbon chain.
The as-received F 1s core scan contains two component peaks: CF2 at 688.7 eV and CF/C-CFx at 687.9 eV. Mirroring the C 1s, there was little difference detected between the as-received and twenty-four-hour exposed samples. However, after one week, a small but noticeable change occurred in the ratio of the two component peaks: the intensity of the CF/C-CFx appeared to increase with respect to the CF2, reflecting what was observed in the C 1s spectra. After one month, the F 1s peak depreciated considerably, with the degradation of the CF2 component peak. Once again, this remains consistent with the trends observed in the C 1s. Figure S4a shows the O 1s, and Figure S4b the Ti 2p; these spectra remained constant for the entirety of the experiment, and no changes were observed.

From Table 2, a very small increase in both the C 1s and F 1s can be seen for I-PFC12 on TiO2 after three months in dark conditions. Due to the nature of XPS, this 0.2% increase in carbon and 0.3% in fluorine is well within experimental error and was interpreted as no observable change or degradation of the SAM. The overlays of the C 1s and F 1s are displayed in Figure 6a,b, respectively; they compare the as-received samples and those kept for three months in dark conditions. An increase in intensity for both peaks was observed after three months, again demonstrating no signs of defluorination of the SAMs. Therefore, the SAMs are stable on TiO2 for up to three months in the dark, confirming that the changes observed are due to the ambient conditions and not simply the SAMs degrading over time.
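Treating the month-long fluorine loss as an effective first-order decay gives a simple way to compare the two substrates quantitatively. A back-of-envelope sketch using the endpoint atomic percentages quoted above (19.1% → 12.9% on SiO2 and 17.6% → 7.4% on TiO2 over roughly 30 days); the exponential-decay assumption is ours, since two data points cannot establish the kinetics:

```python
import math

def effective_rate(f_initial, f_final, days):
    """Effective first-order defluorination rate constant (per day),
    assuming exponential decay between the two measured endpoints."""
    return math.log(f_initial / f_final) / days

k_sio2 = effective_rate(19.1, 12.9, 30)   # roughly 0.013 per day
k_tio2 = effective_rate(17.6, 7.4, 30)    # roughly 0.029 per day
```

Under this assumption, the effective rate on TiO2 comes out more than twice that on SiO2, which simply quantifies the observation that the SAM degrades markedly faster on the photocatalytic substrate.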
Discussion

The more pronounced defluorination of the I-PFC12 SAM on TiO2 compared to SiO2 can be attributed to the photocatalytic nature of the TiO2 substrate. This is a well-known phenomenon and property of TiO2 that enables its many applications, from water splitting to use in solar cells; it is utilized here to actively degrade SAMs. A review by Schneider et al. provides a comprehensive overview of the mechanisms behind the photocatalytic nature of TiO2 and some of its practical uses [52]. Environmental science studies can provide valuable insights into the degradation of per- and polyfluorinated molecules. These molecules are heavy pollutants found in water and can bioaccumulate in both animals and humans, and considerable research has been carried out in recent years into how best to degrade and decompose them. As they are similar in structure and composition to the I-PFC12 SAMs, they can serve as an analogy for how the SAMs degrade. Bentel et al.
looked at the defluorination of 34 differently terminated per- and polyfluorinated molecules and found differences in the decay of these molecules based on termination and chain length [53]. In particular, several studies have looked at using photocatalytic solutions to dissociate the C-F bonds found in these molecules; of these, TiO2-based photocatalysts have been shown to be extremely effective [44,45].

Yamijala et al. [54] investigated the degradation of per- and polyfluoroalkyl substances using molecular dynamics simulations. Their findings demonstrate that excess electrons are the key to the defluorination process. These excess electrons can originate from oxygen vacancies in the TiO2 and from photoexcitation. Due to the wide band gap of TiO2, the charge-carrier lifetime increases as the electron-hole recombination rate decreases. This gives electrons a greater chance of reaching the surface compared to other materials, including SiO2, which is known to be a poor photocatalyst. The results observed here demonstrate the superior photocatalytic behavior of the TiO2 substrate over the SiO2 substrate: after one month of ambient light exposure, the fluorine content of the SAMs decreased by 10.2% on TiO2 compared to 6.2% on SiO2.
Figures 3 and 5 reveal different degradation pathways for the I-PFC12 SAMs depending on the substrate they are deposited on. On the SiO2 substrate, the growth of an intermediate CHF-CHF bond is observed as the CF3 and CF2 peaks decrease. On the TiO2 substrate, however, the emergence of a CF peak was observed, along with an increase in the C-CFx component peak. Although the SiO2 substrate is an inefficient photocatalyst, it can absorb shortwave UV light. Following the UV ozone pre-treatment step, the surface is OH-terminated. The SAM cannot bond to every available OH site due to the steric hindrance of the SAM molecules, so some OH groups remain on the surface. The creation of electron-hole pairs due to light exposure allows for the creation of OH radicals, as OH groups and absorbed water are already present [55]. The OH radicals created are then free to interact with and degrade the SAM chain. It is postulated that these radicals can dissociate a C-F bond within the CF2 chain, leaving CHF in its place.

The TiO2 substrate, on the other hand, has excess electrons, which can dissociate a C-F bond, as shown by Yamijala et al. [54]. Their simulations show that the dissociation of the C-F bond forms a C=C bond in the chain. Liu et al. [56] also established that C-F bonds in the presence of a C=C bond can degrade much more rapidly than C-F bonds in the presence of a C-C bond. The more C=C bonds formed, the quicker the defluorination of I-PFC12 will happen. Although the signal-to-noise ratio for the sample exposed to one month of ambient conditions is poor, the formation of a C=C bond cannot be ruled out.
Figure S5 displays an alternative peak fitting for the C 1s spectrum after one month of ambient exposure in the case of TiO2, in which a component peak corresponding to C=C is added on the lower-binding-energy side of the C-C/C-H peak. While a more conservative peak-fitting approach has been taken to keep it consistent with the other experimental steps, the emergence of a C=C peak cannot be fully ruled out. The inclusion of the C=C peak would experimentally confirm the earlier studies of Yamijala et al. [54].

Conclusions

In conclusion, the stability of I-PFC12 SAMs deposited on both SiO2 and TiO2 was investigated under exposure to ambient light for durations of up to one month. No obvious X-ray damage to the SAM molecules was detected following 2-2.5 h of X-ray exposure. The iodine head group was not observed on any sample, whether deposited on SiO2 or TiO2, and no evidence of the C-I bond between the head group of the SAM and the carbon chain was observed in the spectra. This indicates that any degradation of the SAMs is not due to iodine's readiness to absorb EUV. Although iodine was not observed, the degradation of the SAMs on the two substrates was successfully compared and, as expected, the SAM degrades more on the TiO2 substrate due to its superior photocatalytic nature compared to SiO2.
For the SiO2 substrate, the degradation of the SAM through defluorination is postulated to occur via OH radical formation. The C-F bonds are cleaved to form CHF-CHF, as reflected in the C 1s spectra. The complete removal of the CF3 bonds following one month of ambient exposure is evident, while the F 1s spectrum exhibits an obvious difference in the ratio of the two main component peaks after one month. A different degradation mechanism is observed for the SAM on TiO2: the excess electrons generated during ambient light exposure dissociate C-F bonds and potentially form C=C bonds, which in turn speed up the degradation of the molecule. After one month of ambient exposure, a C=C component peak is visible in the C 1s spectra, consistent with the predicted defluorination process. The samples of I-PFC12 kept in dark conditions, with no exposure to ambient light, displayed little to no degradation, confirming that the defluorination is driven by exposure to ambient light. Finally, it has been shown that I-PFC12 SAMs are potential candidates for resistless EUV lithography processes, as they are easily degraded even under ambient exposure.

Figure 1. Chemical structure of the I-PFC12 molecule.
Figure 2. WCA of I-PFC12-derived SAMs on (a) SiO2 and (b) TiO2 substrates. (c) Thickness of SAMs deposited on SiO2 and TiO2 at temperatures ranging between 100 °C and 150 °C with deposition times of one and two hours.
Figure 3a shows the unnormalized C 1s data, and Figure 3b shows the F 1s core scans after different durations of ambient light exposure. The C 1s spectrum contains four component peaks in the as-received, twenty-four-hour, and one-week samples: C-C/C-H at 284.8 eV, C-O/C-CFx at 286.2 eV, CF2 at 291.6 eV, and CF3 at 293.4 eV, all consistent with the previous literature [48-51]. The peak at 286.2 eV was assigned to both C-O and C-CFx bonds because the sample was exposed to the atmosphere, meaning that the presence of C-O bonds cannot be discounted. After only twenty-four hours of ambient exposure, a decrease in the CF3 component peak was visible, corresponding to a very slight increase in the C-O/C-CFx component peak. Very little change is observed between the twenty-four-hour and one-week exposures, with the C 1s spectra looking almost identical between the two samples. However, after one month of ambient exposure, the CF3 component peak disappeared from the spectrum, indicating the complete removal of the CF3 bonds. Interestingly, a CHF-CHF component peak also emerges at 287.9 eV; it is postulated that this peak appears as the CFx bonds in the fluorocarbon chain begin to break down. There is no evidence to suggest that I-PFC12 oxidizes over this time in either the C 1s or the O 1s spectra (Figure S2a in Supplementary Materials). The C 1s spectra show no evidence of the C-I bond between the head group of the SAM and the carbon chain.

The F 1s core-level spectra shown in Figure 3b contain two component peaks: CF2/CF3 at 688.7 eV and C-CFx/CHF at 687.3 eV [52]. The different C-F bonds are not as distinguishable in the F 1s as in the C 1s, as can be seen from the very broad F 1s envelope; as a result, the higher-binding-energy peak has been assigned to both CF2 and CF3.

Figure 3. (a) C 1s and (b) F 1s of I-PFC12 on SiO2, displaying several different C-F bonds.
Figure 4. (a) C 1s and (b) F 1s of I-PFC12 on SiO2 as received and after 3 months of being kept in a dark container.
Figure 5. (a) C 1s and (b) F 1s of the I-PFC12 SAM deposited on a TiO2 substrate.
Figure 6. Overlays of (a) C 1s and (b) F 1s following one month and three months in dark conditions. No degradation of the SAMs was observed at this time.
Table 1. Atomic concentrations of the I-PFC12 deposited on SiO2 when exposed to different durations of ambient and dark conditions.
Table 2. Atomic concentrations of the I-PFC12 deposited on TiO2 when exposed to different durations of ambient and dark conditions.
Deformation of the Cubic Open String Field Theory

We study a consistent deformation of the cubic open bosonic string field theory in such a way that the non-planar world sheet diagrams of the perturbative string theory are mapped onto their equivalent planar diagrams of the light-cone string field theory with some length parameters fixed. An explicit evaluation of the cubic string vertex in the zero-slope limit yields the correct relationship between the string coupling constant and the Yang-Mills coupling constant. The deformed cubic open string field theory is shown to produce the non-Abelian Yang-Mills action in the zero-slope limit if it is defined on multiple D-branes. Applying the consistent deformation systematically to multi-string world sheet diagrams, we may be able to calculate scattering amplitudes with an arbitrary number of external open strings.

I. INTRODUCTION

If its perturbation theory is correctly defined, covariant string field theory is expected eventually to replace quantum field theory, which has not been successful in describing quantum particles with spin two and higher spins. In practice, however, it is rather difficult to use the covariant cubic string field theory [1,2] to calculate particle scattering amplitudes. The main reason is that the world sheet diagrams of cubic open string field theory are non-planar, unlike those of the light-cone string field theory [3-9]. Witten [1] introduced an associative product between the open string field operators which represents the mid-point overlapping interaction. With the associative star product, the string field action takes the form of the Chern-Simons three-form, which is invariant under the BRST gauge transformation. The cubic open string field theory thus has the merit of BRST gauge invariance due to the associative algebra of the string field operators.
At the same time, however, the mid-point overlapping interaction renders the world-sheet diagrams non-planar, so that obtaining Fock space representations of the multi-string vertices becomes a difficult task. The Fock space representation of the three-string vertex of the cubic open string field theory was obtained by Gross and Jevicki in Refs. [10] and [11] by mapping the world-sheet diagram of six strings onto a circular disk and imposing an orbifold condition. The conformal mapping of the four-string world sheet to the upper half complex plane with branch cuts was constructed by Giddings [12]. The Neumann functions of the three-string vertex were calculated in Refs. [10,13,14], and the Neumann functions of the four-string vertex were computed by Samuel in Ref. [15]. However, there seems to be little similarity between the conformal mapping for the three-string vertex and that for the four-string vertex, and it appears difficult to apply those constructions to more complex world sheet diagrams of multi-string vertices. Thus, it is desirable to develop a more systematic technique which could be applied to string scattering diagrams with an arbitrary number of external strings. In the present work, we propose a consistent deformation of the world sheet diagrams which transforms the non-planar diagrams of multi-string scattering into planar diagrams. Once the planar diagrams of the multi-string vertices have been obtained, we can make use of the light-cone string field theory technique by mapping the world sheet diagrams onto the upper half complex plane. For the three-string vertex and the four-string vertex, it is enough to choose the external string states such that the physical string states are encoded only on the halves of the external strings. By an explicit calculation, we shall show that the deformed cubic string vertex yields the three-gauge-field vertex with the correct Yang-Mills coupling constant in the zero-slope limit.
The four-gauge-field vertex of the Yang-Mills action shall also be evaluated, using the deformed world sheet diagram of the four-string vertex, which is effectively generated by two cubic string vertices and an intermediate string propagator.

II. DEFORMATION OF THE WITTEN'S OPEN STRING FIELD THEORY DIAGRAMS

We begin with Witten's cubic open string field theory action [1] on multiple D-branes, where Q is the BRST operator and the string field Ψ is U(N) matrix valued. The star product between the string field operators is defined as in Eq. (2), and in terms of the normal modes, the string coordinates X^(r)(σ), r = 1, 2, 3, are expanded as in Eq. (4). It is the associativity of the star product algebra that ensures the invariance of the cubic string field theory action under the gauge transformation of the string field, δΨ = QΛ + Ψ ⋆ Λ − Λ ⋆ Ψ, with gauge parameter Λ. In order to discuss the deformation of the cubic string field theory, we first extend the range of the world sheet coordinate σ, so that the mid-point is now located at σ = π. Accordingly, the star product Eq. (2) and the normal mode expansion Eq. (4) should be appropriately redefined, as shown in Fig. 1. Fig. 2 depicts the world sheet diagram of three-string scattering. We observe that, during the scattering process, the physical information encoded on the left half of the first string and on the right half of the second string is not carried over to the third string. From the viewpoint of the scattering process, the roles of the left half of the first string and the right half of the second string are auxiliary. Note that the strings satisfy the Neumann boundary condition on the boundary ABC in Fig. 1. We may separate the patch corresponding to the world sheet trajectory of the left half of the first string and the right half of the second string from the rest of the world sheet of three-string scattering.
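The normal-mode expansion referenced above did not survive extraction; for orientation, in standard open-string conventions (a hedged reconstruction, not the paper's equation; the paper's normalization and α′ conventions may differ) the expansion of each string coordinate with Neumann boundary conditions reads:

```latex
X^{(r)\mu}(\tau,\sigma)
  \;=\; x^{(r)\mu} \;+\; 2\alpha' p^{(r)\mu}\,\tau
  \;+\; i\sqrt{2\alpha'}\,\sum_{n \neq 0}\frac{\alpha^{(r)\mu}_{n}}{n}\,
        e^{-in\tau}\cos n\sigma ,
\qquad r = 1,2,3 ,
```

where the α_n are the usual oscillator modes; in the deformed theory the range of σ is extended as described in the text, so the cosine basis is redefined accordingly.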
On the patch, as we redefine the world sheet local coordinates by interchanging τ ↔ σ, the boundary condition on ABC becomes a condition at fixed τ (see Fig. 3). On the patch we also define new string coordinates. The Neumann condition on the boundary ABC may be written as a Neumann state, with c_N a normalization constant for the Neumann state. The open string state on DEF then turns out to be the Neumann state again. The extra phase factor (−1) may be absorbed into the normalization constant of the Neumann state. (We may also extend the range of σ as 0 ≤ σ ≤ 2π; this results in removing the factor (−1), because the open string state on DEF becomes exp[−i2πL_0]|N⟩ = |N⟩.) Hence, if we choose the Neumann condition for the left half of the first string and for the right half of the second string at the initial time, we may remove the patch which consists of the world sheets of the left half of the first string and the right half of the second string. The contribution of the string path integral over the patch is then a trivial overall factor, so the string path integral over the patch does not contribute to the scattering amplitude. To be consistent with this scheme, we may encode the initial states of the first and second strings onto the right half of the first string and the left half of the second string, respectively, as depicted in Fig. 4 and Fig. 5. Fig. 6 depicts the deformed world sheet diagram of the three-string scattering after the auxiliary patch is completely removed. Because the world sheet diagram is not deformed uniformly, the associativity of the star product is not preserved; consequently, the BRST gauge invariance is not manifest in the string field action with the deformed cubic interaction. If, however, we formally keep the auxiliary patch, the associativity of the star product, and hence the gauge invariance, can be kept intact.
As we remove the auxiliary patch, the world sheet diagram of the three-string scattering becomes planar and can then be mapped onto the upper half complex plane without any additional condition. The planar diagram of the deformed three-string scattering is equivalent to that of the covariantized light-cone string field theory of HIKKO [16] with the length parameters fixed. Unlike in the HIKKO open covariant string field theory, we do not need to integrate over the unphysical length parameters to make the string field action invariant under the BRST gauge transformation; simply reattaching the auxiliary patch would restore the BRST gauge invariant form. On the planar world sheet we may introduce a global coordinate ρ whose real part is the proper time, Re ρ = τ. The planar world sheet may be mapped onto the upper half complex plane by the Schwarz-Christoffel transformation ρ = ln(z − 1) + ln z. The temporal boundaries of the world sheet (labeled a, b, c in Fig. 7) are mapped onto the real line. On the individual string world sheet patches we may define local coordinates ζ_r, r = 1, 2, 3, which are related to z with τ_0 = −2 ln 2. The Fock space representation |E[3]⟩ of the three-string vertex in terms of the Neumann functions N̄^rs_nm follows from the light-cone string theory with the length parameters fixed. The interaction term of the three-string field may then be written down, and the three-gauge interaction term is obtained by choosing the external state to be the massless gauge boson state in the zero-slope limit, where the Yang-Mills coupling constant g_YM is related to the string interaction coupling g; in this way we find the three-gauge interaction term.

IV. FOUR-GAUGE FIELD VERTEX FROM THE DEFORMED FOUR-STRING VERTEX

The four-gauge-field interaction term of the Yang-Mills gauge field theory is obtained from the four-string scattering diagram, which is perturbatively generated by the cubic interaction. Fig. 8 depicts the effective four-string vertex of the cubic open string field theory.
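As a quick numerical sanity check (an illustration in Python, not code from the paper), one can verify that the interaction point of the map ρ = ln(z − 1) + ln z, which solves dρ/dz = 1/(z − 1) + 1/z = 0 at z = 1/2, sits at the stated proper time τ_0 = −2 ln 2:

```python
import cmath
import math

# Schwarz-Christoffel map from the upper half z-plane to the planar
# three-string world sheet: rho = ln(z - 1) + ln(z), with Re(rho) the proper time.
def rho(z: complex) -> complex:
    return cmath.log(z - 1) + cmath.log(z)

# The interaction (turning) point solves d(rho)/dz = 1/(z - 1) + 1/z = 0,
# giving z = 1/2.  Its proper time Re(rho) should equal tau_0 = -2 ln 2.
z_int = 0.5
tau_0 = rho(z_int).real
print(tau_0)              # -1.3862943611198906
print(-2 * math.log(2))   # -1.3862943611198906
```

The imaginary part of ρ(1/2) is π, as expected for a point on the interaction line between two string strips of width π each.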
Choosing the external string states such that the physical information is encoded only on the halves of the external strings, we may effectively remove the auxiliary patches, as in the case of the three-string scattering diagram. This deformation process amounts to choosing fixed length parameters for the four strings. The resultant planar world sheet diagram of the deformed four-string scattering is described by Fig. 9. Now we shall discuss the reduction of the four-string vertex to the four-gauge-field vertex in the zero-slope limit. Witten's cubic open string field theory action does not contain a four-string interaction term, in contrast to the light-cone string field theory and the covariantized light-cone string field theory of HIKKO [16]. Thus, the four-gauge-field interaction term of the Yang-Mills gauge field theory should be derived solely from the effective four-string interaction, perturbatively generated by the three-string interaction. Having deformed the four-string world sheet diagram into a planar diagram, we may map it onto the upper half complex plane, as shown in Fig. 10, by a Schwarz-Christoffel transformation, and evaluate the effective interaction by using the Cremmer-Gervais identity [8]. If we choose the external four-string state to consist of massless gauge bosons, we find that the four-string scattering amplitude yields, in the zero-slope limit, an effective four-gauge-field action. Using ∏_{r<s} |Z_r − Z_s|^{p_r · p_s} = x^{−s/2} (1 − x)^{−t/2} in the zero-slope limit, where s and t are Mandelstam variables, we obtain the effective four-gauge-field action. The resultant effective four-gauge interaction term S[4] does not only contain the contact four-gauge-field interaction but also the contribution of the effective four-gauge interaction generated perturbatively by the three-gauge-field interaction of the Yang-Mills field theory.
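For orientation, the Mandelstam variables for massless four-point scattering satisfy s + t + u = 0, which is the kinematic constraint underlying the zero-slope limit above. A small Python check (an illustration, not code from the paper; sign conventions for s, t, u vary between references) confirms this for explicit center-of-mass kinematics:

```python
import math

def mink_sq(p):
    """Minkowski square with mostly-minus metric (+, -, -, -)."""
    E, px, py, pz = p
    return E * E - px * px - py * py - pz * pz

def comb(p, q, sign):
    """Componentwise p + sign*q for four-vectors stored as tuples."""
    return tuple(a + sign * b for a, b in zip(p, q))

E, theta = 1.0, 0.7  # beam energy and scattering angle (arbitrary)
# Massless 2 -> 2 kinematics in the center-of-mass frame.
p1 = (E, 0.0, 0.0,  E)
p2 = (E, 0.0, 0.0, -E)
p3 = (E,  E * math.sin(theta), 0.0,  E * math.cos(theta))
p4 = (E, -E * math.sin(theta), 0.0, -E * math.cos(theta))

s = mink_sq(comb(p1, p2, +1))   # 4 E^2
t = mink_sq(comb(p1, p3, -1))   # -2 E^2 (1 - cos theta)
u = mink_sq(comb(p1, p4, -1))   # -2 E^2 (1 + cos theta)
print(abs(s + t + u) < 1e-12)   # True: s + t + u = 0 for massless states
```

With all external masses zero, the sum s + t + u vanishes identically, so only two of the three invariants (here s and t, as in the factor x^{−s/2}(1 − x)^{−t/2}) are independent.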
Subtracting the effective four-gauge-field interaction of the Yang-Mills theory, S[4]massless, from S[4], we get the four-gauge-field contact interaction of the Yang-Mills theory. Putting together the gauge field interaction terms S_Gauge[3], Eq. (26), and S_Gauge[4], Eq. (36), as well as the free field action S_Gauge[2], which may be derived easily from the kinetic term of the string field action tr Ψ ⋆ QΨ in the zero-slope limit, yields the covariant Yang-Mills field action.

V. CONCLUSIONS

Witten's cubic open string field theory possesses a number of advantages over the light-cone string field theory [3-9] and the covariantized light-cone string field theory [16]:
1. The theory is covariant and invariant under the BRST gauge transformation.
2. The theory does not contain any unphysical parameter, such as the length parameters, other than the string coupling g.
3. In contrast to the two other string field theories, Witten's open string field theory does not have a quartic interaction term besides the cubic interaction term.
However, despite these advantages, it has not been fully utilized to calculate particle scattering amplitudes except in a few cases. The main reason is that the world sheet diagrams generated by the cubic string field theory are non-planar: it is difficult to find a conformal mapping by which the world sheet is mapped onto a simple complex domain, such as the upper half plane or a circular disk, without any additional conditions or structures. One needs to impose an orbifold condition to map the world sheet diagram of the three-string vertex onto a circular disk [10], and one has to introduce branch cuts to map the four-string vertex to the upper half plane [12]. Moreover, even though maps of the world sheets to the complex plane have been found in the cases of three-string and four-string scattering, it is difficult to extend those mappings systematically to evaluate general multi-string amplitudes.
It is also difficult to fix the relative strengths of the cubic and quartic gauge field interaction terms, because there is no analog of the Cremmer-Gervais identity [8], which relates the three-string scattering amplitude to the four-string scattering amplitude. In this work, we proposed a consistent deformation of the cubic string field theory by which the world sheet diagrams of multi-string scattering are effectively transformed into planar diagrams. Having obtained planar diagrams representing the string scattering amplitudes, we can adopt the light-cone field theory technique to construct the Fock space representations of multi-string vertices systematically. By explicit calculations, we showed that the three-string amplitude and the four-string amplitude in the zero-slope limit yield the cubic and quartic gauge interaction terms of the Yang-Mills theory if the external string states are chosen to be massless gauge particles. The deformation process is applicable to multi-string scattering with an arbitrary number of strings. This work may also be regarded as a proof that the string field theory in the proper time gauge [17,18] is invariant under the BRST gauge transformation. Applications of the deformed cubic string field theory to various scattering processes [19-22] will be given elsewhere.
Authors' Response to Reviewers' Comments

Important role of stratospheric injection height for the distribution and radiative forcing of smoke aerosol from the 2019/2020 Australian wildfires

General comment

The manuscript "Important role of stratospheric injection height for the distribution and radiative forcing of smoke aerosol from the 2019/2020 Australian wildfires" by Heinold et al. presents and discusses the aerosol spatiotemporal distribution and radiative forcing (RF) in ECHAM6.3-HAM2.3 model simulations of the pyroconvective paroxysmal phase of the record-breaking Australian bushfires in the 2019/2020 fire season. The importance of representing pyroconvection and UTLS injection of smoke in models is discussed based on sensitivity analyses. The manuscript provides a new estimation of the RF and radiative heating of this Australian fire event, which is quite a hot topic. It certainly falls within the scope of ACP, is very interesting for the aerosol/climate community, and is potentially suitable for publication. However, I have found several aspects, some of them very important, that must be clarified before I can recommend publication of the manuscript. Thus, I kindly ask the Authors to reply to the following major and specific comments and to provide an amended version of the manuscript, which I would be glad to re-read when ready.

The suggestion that the instantaneous radiative forcing estimates in the ECHAM-HAM simulations are therefore misrepresented is incorrect. In the caption of their Figure 4, Khaykin et al. clearly state about the time lag between gases and aerosol particles that "The lagging increase of the aerosol mass is due to the fact that the OMPS-LP extinction retrieval saturates at extinction values above 0.01 km−1. Profiles are, therefore, truncated below any altitude exceeding this value, which can lead to an underestimation of the early aerosol plume when it is at its thickest.
This artifact, which explains the slower increase of aerosol mass than gases, persists until mid-February when the plume is sufficiently dispersed so that OMPS-LP extinction measurements no longer saturate." This clearly shows that the extinction was too high to be measured in January, so those data cannot be used for model comparison. Peterson et al. (2021), which we now also include as a reference, provide emission strengths for the stratospheric smoke injection of 0.2-0.8 Tg for the pyroCb events on 29-31 December 2019 and 0.1-0.3 Tg for 4 January 2020. These values agree well with our simulation results. Secondary aerosol formation appears unlikely to be the explanation, considering the required amount of smoke. However, it can be a source of model uncertainty, which is now also discussed in connection with the underestimation of modelled extinction profiles compared to lidar observations. The study by Brown et al. (2020), mentioned by the reviewer, shows that aerosol-climate models (including ECHAM-HAM) may generally overestimate absorption by biomass burning aerosol due to an insufficient representation of the mixing state of fire aerosol. The revised version of the manuscript points out this uncertainty in more detail. For the extreme 2019/2020 wildfires, however, a comparison with single-scattering albedos (SSAs) derived from lidar measurements for this particular event shows that the assumptions about the optical properties of the smoke particles in the model are reasonable, with SSA values of 0.79-0.8 (550 nm) in the lidar inversions and slightly more reflective values of 0.82-0.85 in the model.

MC2) The Authors obtain a relatively large and positive RF at the top of the atmosphere (TOA), which is basically in contradiction with all estimations available at the moment (Khaykin et al., Yu et al., Hirsch and Koren; papers that are cited in the manuscript). The observational (Hirsch and Koren) and hybrid observational/modelling (Khaykin et al.)
estimations agree on a relatively large negative TOA RF. The Authors interpret this disagreement with respect to these previous estimations as the result of all-sky calculations and the high surface reflectivity in the present manuscript, which, I agree, can partially explain it. However, the RF is very sensitive to the optical properties of the aerosol layer (in particular, the absorption properties of the layer and its angular distribution of scattering; see the discussion in SC56 and other SCs). In addition, Yu et al. also obtain a negative TOA RF, and at all-sky conditions. This should be discussed more thoroughly in the text (see suggestions in several of the following specific comments), and the different statements supporting a positive RF must be smoothed a bit.

Our review of the above-mentioned papers shows that there is no reliable all-sky estimate of the aerosol radiative forcing for this event to which our model results could be compared. The published values mostly refer to clear-sky conditions and predominantly to the open ocean: (1) Khaykin et al. (2020) simply derive an all-sky forcing (RF) by assuming that "all-sky RF [are] reduced to about 50% of the clear-sky RF", which a priori excludes the possibility of a change in sign. (2) Hirsch and Koren present a clear-sky value from the CERES data that is significant at least at 20-60°S, but is otherwise not significantly different from the CERES mean. However, they also find a possibly significant positive forcing above clouds and Antarctica, although the low accuracy of the CERES data in these regions does not allow a more precise statement. (3) Yu et al. (2021) consider only clear-sky conditions, but provide an effective forcing for shortwave and longwave radiation, in contrast to the instantaneous forcing in this study. The atmospheric adjustments included in the effective forcing obviously reduce the instantaneous forcing considerably.
Nevertheless, the instantaneous forcing we present is important as a measure of the energy added instantaneously to the atmosphere (here, the stratosphere). Thermodynamic and dynamical adjustments are currently under investigation and will be treated more extensively in a future study. Regarding the optical properties of the fire aerosol, the comparison with the lidar-based inversions of single-scattering albedo was strengthened, and the asymmetry parameter was included in the discussion (see the reply to the specific comments). This analysis further supports that the optical properties of the fire aerosol are reasonably realistic for this case, and thus the positive instantaneous solar radiative forcing at TOA.

Specific comments (#SC)

SC1) P1 L24-25: "Global…wildfires": I would not call this "uncertainties" but rather "incomplete representation", as said in the following line, or something similar.
"Show significant uncertainties" was replaced by "lack adequate descriptions of".
SC2) P1 L27: It's more "observation-based input to the simulations" than "observation-based approach".
Agreed. Changed accordingly.
SC3) P1 L28: please add "Based on our simulations,…" before "The 2019-2020 Australian fires caused…"
Done.
SC4) P1 L32: "While at surface,…" is an awkward way to start a sentence; please rephrase.
Done.
SC5) P1 L34: "deep wildfire plumes…": "deep" is a bit too generic here: do you mean "with high altitude injection" or just "extreme"?
Exactly, high-altitude plumes were meant here. The previous wording was obviously inspired by deep pyroconvection.
SC6) P2 L5: "life": do you mean "wildlife"?
No, we had all life in mind here, not just wildlife but also people. Therefore, we would prefer to keep just "life".
SC7) P2 L6-7: "In addition…whether": this is an awkward sentence, please rephrase.
Agreed. The sentence was rephrased.
SC8) P2 L14-17: please add SAGE-III and TROPOMI observations (as shown by Khaykin et al., 2020) to the discussion of satellite evidence of the fire.
SAGE-III and TROPOMI were added to the list of satellite detections.
SC9) P2 L18-19: The first estimation of the radiative forcing of the Australian fires was provided by Khaykin et al., 2020, with hybrid observation/modelling approaches. Please mention this manuscript and the method.
The method and the estimates of the radiative forcing of the Australian fire aerosol by Khaykin et al. (2020) were added as requested.
SC10) P3 L1-3: please break this very long sentence.
Done.
SC11) P3 L5-6: Radiative-heating-induced self-rising can occur in fires, but this is not the norm, so please soften this sentence.
Agreed, the statement has been softened.
SC12) P3 L9: "effects": please specify which specific effects.
In order to be more specific, the sentence was adapted and now reads: "Such extreme wildfires and associated deep pyroconvection, for which injection of biomass burning smoke into the stratosphere has been observed, can have similar effects as volcanic eruptions in terms of stratospheric aerosol injection and radiative impact."
SC13) P3 L10-11: "which is considered to be the strongest warming short-lived radiative forcing agent.": please add a reference for this statement.
In an earlier version of the manuscript, this referred to particulate climate forcers. As the current formulation is more general, we have corrected the statement and added references.
SC14) P3 L11-12: "In addition…emitted": precursors of secondary organic aerosols can also be emitted; please mention this.
Precursors of secondary organic aerosol are now also mentioned.
SC15) P3 L12: "…radiative properties…": you might mean "optical properties".
"Radiative properties" was replaced by "optical properties".
SC16) P3 L13-14: "…as well…altitude.": not clear, what do you mean?
As explained by Ban-Weiss et al.
(2012), absorbing aerosol like black carbon (BC) in lower atmospheric layers heats the surface through diabatic heating. If located at higher altitudes, it instead has a cooling effect, as the increased solar absorption is compensated by stronger outgoing long-wave radiation. Furthermore, Ban-Weiss et al. show that BC at high altitudes reduces high-altitude cloud cover, which also results in surface cooling.
SC17) P3 L16: "…the recent accumulation of extreme wildfires...": do you mean "aggregated effect"?
What was meant is "the recent series of extreme wildfires". The wording was changed accordingly.
SC18) P3 L21: please remove "more".
We believe "most" was meant, which we removed.
SC19) P3 L27: not sure that "to capture" is the right verb here.
The verb "capture" was replaced by "address".
SC20) P4 L18: "height profiles": do you mean "vertical profiles"?
Replaced.
SC21) P5 L10-12: which altitudes for these vertical layers in the model?
Note that these are not fixed injection heights. The PBL height is largely driven by the surface fluxes and the associated vertical mixing, and therefore varies strongly in time and space. For this reason, no explicit altitude values can be given here.
SC22) P5 L14: "47 levels": which approximate vertical resolution?
The vertical resolution varies with altitude, but approximate values are now given for the relevant altitudes: "In the vertical, the model is set up with 47 levels with increasing layer thickness from the ground to 0.01 hPa (~80 km). The vertical resolution ranges from approximately 70 m at the surface to 500 m at 2.5 km and 1100 m at 15 km height and coarsens further above."
SC23) P5 L18-19: "AOT and vertical profiles of extinction": This looks redundant, as the AOT is just the vertical integral of the aerosol extinction.
This may have looked redundant in the way it was written.
However, the aerosol extinction profiles are calculated by an online lidar simulator that was implemented especially for such comparisons with CALIOP and ground-based lidar measurements. We have expanded the description.

SC24) P5 L28: "Since no direct information was available on the actual pyroconvective injection heights…": This is not completely true, as Khaykin et al. give an upper bound for the injection altitude at circa 17 km using CALIOP, and it is also shown at approximately this altitude by Hirsch and Koren, 2021 (as you mention later in the text). Please correct the sentence.
This statement is meant literally. There was in fact no direct information at or above the wildfire sites, which can typically be used in models. There was no reliable radiative power information from satellites (otherwise it would have been included in the GFAS data) or other, possibly in-situ, observations due to cloud cover (and heavy smoke). All approximate values for the injection heights in the studies mentioned are of course reasonable but based on satellite observations at some distance from the Australian continent or model assumptions from which the emission height was inferred. This is what we have also tested within the sensitivity study.

SC25) P5 L37-38: The results of Hirsch and Koren (2021) seem rather to show that smoke is injected at altitudes >16 km. Why do you say "14 km"?
Hirsch and Koren (2021, suppl.) found "smoke fragments […] located on the lower stratosphere below 17 km […] during the fire emissions." We find that the vertical spread in the CALIOP imagery justifies the "14 km" assumption, in particular since, due to the vertical resolution of the model in the tropopause region, a larger altitude range is directly affected. We point this out more strongly in the text now.

SC26) P5 L38-39: "In addition…the original biomass burning injection…": The original injection is what is described above (P5 L10-12)? Please clarify in the text.
Yes.
The details of the original fire injection are given here again to make this clear.

SC27) Table 1: in the NoEmiss lines, when you say "1 April" do you mean "4 January"?
It is in fact 4 January. Many thanks for spotting this obvious mistake.

SC28) More in general on the NoEmiss scenario: In the NoEmiss scenario, how are the previous emissions from Australian fires (i.e. the Australian fire season prior to 29/12/19) considered?
All other days except the mentioned pyroCb days are treated as in the original configuration. This was added to the text for clarification.

SC29) P6 L11: please suppress "significantly".
Deleted.

SC30) P6 L12: The long-term mean ("which implies…") is not shown in Fig. 1b, so please explain how it is calculated and rephrase the sentence.
Please also note our response to reviewer #1. Now, Fig. 1b

Figure 2: wrt MC1, the trend (maximum AOT in January and then decreasing) is not consistent with observations, e.g. the SAOD from SAGE-III in Khaykin et al., 2020 (Fig. 3). Please explain why.
Again, note that the peak AOT in Khaykin et al. (2020) is actually likely delayed. The authors point to saturation effects in the satellite retrievals in the caption of their Fig. 4 as an explanation.

SC35) P7 L8: "The emissions…are reproduced…": The emissions are not "reproduced" by ECHAM but are "an input" to ECHAM: please rephrase.
We agree. The sentence was rephrased and now reads: "The dispersal of this smoke plume is reproduced using the global aerosol-climate model ECHAM6.3-HAM2.3 with the pyroconvective injection heights prescribed."

SC36) P7 L10: "…provide an insight…due to wildfire smoke…": If NoEmiss has only the smoke emissions of 29-31 December and 4 January switched off, then this comparison does not provide "the AOT distribution due to wildfire smoke" but rather "the AOT distribution due to the pyroconvective events of 29-31/12 and 04/01". Please verify and possibly correct.
Correct, this was well spotted and was corrected in the manuscript.
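Since the AOT is the vertical integral of the aerosol extinction coefficient (cf. the discussion of SC23), differencing a simulation against the NoEmiss run directly yields the pyroCb-attributable AOT discussed in SC36. A minimal sketch, using invented extinction profiles rather than actual ECHAM-HAM output:

```python
import numpy as np

# Height grid [km] and illustrative 550 nm extinction profiles [km^-1];
# the numbers are invented for this sketch, not model output.
z = np.linspace(0.0, 30.0, 61)
ext_noemiss = 0.02 * np.exp(-z / 2.0)                          # background aerosol only
smoke_layer = 0.015 * np.exp(-0.5 * ((z - 15.0) / 1.5) ** 2)   # stratospheric smoke near 15 km
ext_fire = ext_noemiss + smoke_layer

def aot(extinction, height):
    """AOT as the trapezoidal vertical integral of the extinction profile."""
    return float(np.sum(0.5 * (extinction[1:] + extinction[:-1]) * np.diff(height)))

# Analogous to differencing a simulation with smoke against NoEmiss.
aot_attributable = aot(ext_fire, z) - aot(ext_noemiss, z)
print(f"pyroCb-attributable AOT: {aot_attributable:.4f}")
```

Applied to the gridded model output, the same differencing gives the AOT maps attributed to the pyroconvective events of 29-31 December and 4 January.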
SC37) P7 L16: "AOT differences": Differences with respect to what?
This again refers to the difference between the TP+1 and NoEmiss simulation results, which is now included in the text.

SC38) P8 L15: "…is clearly better": Is it "clearly" better? Not to my eyes: the comparisons with different injection altitudes look quite similar to me. This is not so surprising, because the effect of the fires on the column AOT is not as strong as the one at selected UTLS altitudes (as visible in Fig. 4). Please, based on that, smooth these statements and reconsider this discussion.
The comparison of modeled and observed AOT in Fig. 3 may give this impression at first glance. However, considering the actual change in modeled AOT against the background of the in general very low levels of AOT at the Southern Hemisphere sites, the UTLS injection heights in fact lead to a substantial improvement. Now, we explicitly point out this fact.

SC39) P8 L16-17: please suppress "using…above" (this is already clear from the scenario descriptions above, so is redundant).
Deleted.

SC40) P8 L18-19: "…indicating that the modeled effect…stratospheric smoke": This statement is not true, because the solar absorption depends not only on the aerosol load but also on the optical properties of the aerosols, and then, for your estimations, on the assumptions made in the model on composition and atmospheric evolution of the smoke plume. Please smooth the statement.
This statement was modified in the manuscript.

SC41) P8 L19: "is larger" --> "is slightly larger".
We disagree. The bias of the BASE case is on average about 30% larger than for the other cases using UTLS smoke injection. This quantitative information was added to the discussion.

SC42) P8 L20: "…correlation is also lower": From Tab. 2 it looks like the BASE R is quite comparable wrt TP(+-1) and the other setups.
Replaced by "slightly lower".
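The bias and correlation statistics discussed above (Table 2) are simple to compute; a sketch with placeholder AOT series, not the actual station observations or model output:

```python
import numpy as np

# Placeholder daily 550 nm AOT series at a Southern Hemisphere site
# (invented values, for illustration only).
obs   = np.array([0.030, 0.032, 0.060, 0.095, 0.080, 0.055, 0.040])
model = np.array([0.028, 0.035, 0.055, 0.110, 0.085, 0.060, 0.045])

bias = float(np.mean(model - obs))        # mean model bias against observations
r = float(np.corrcoef(model, obs)[0, 1])  # Pearson correlation coefficient R

print(f"mean bias = {bias:+.4f}, R = {r:.3f}")
```

A setup with an about 30% larger mean bias but comparable R would mirror the distinction drawn here between the BASE case and the UTLS-injection cases.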
SC43) P8 L25-26: "which is also consistent with the CALIPSO satellite lidar observations": Which CALIPSO observations? (They are neither in Fig. 4 nor 5.)
The CALIOP comparison was deleted at this point.

SC44) P8 L28: "reflect" is not a good choice of term here, as it also has an optical meaning: please change the term.
Replaced by "evident".

SC47) P9 L12-13: "The model results…smoke layer": This is not peculiar to the simulations but only empirically visible in Fig. 4 and 5 (for the model as well as in the lidar observations): please rephrase.
Agreed. We included a reference to the discussion of Figure 6 here.

Figure 4: Please spell "coeff." and not "cf."; please use "km-1" as aerosol extinction units.
Corrected.

Figure 7: Would it be possible to have an altitude vertical axis as well?
Figure 7 was completely revised. It now has a height axis. In addition, the geographical area was limited to the longitudes between Australia and Argentina (145°E-70°W) and to 30°S-60°S, i.e. the tropics/subtropics were excluded, as suggested by Reviewer #1. Furthermore, an averaging error in the previous version was corrected.

SC54) P12 L6: "greenhouse forcing": You mean "greenhouse gas forcing"? Black carbon can also produce a greenhouse effect (but it is a particle, so better to be more specific).
Correct, what is meant is the greenhouse effect caused by anthropogenic aerosols and gases, as it is now written.

SC55) Your RF estimations: same question as for the heating rates: are these estimations for SW-only or LW+SW?
All estimates of smoke aerosol forcing in this study are for shortwave radiation. This was made clear in several places in the text.

SC56) With reference to MC2: the RF of aerosols depends very strongly on the optical properties of the aerosol layer, which in turn, and this is very important for the complex smoke emissions by fires, depend on the atmospheric evolution of the plumes.
In particular, the aerosol RF depends quite strongly on both the absorption properties (summarised by the SSA) and the angular distribution of scattering (the phase function, summarised by the asymmetry parameter). Examples of such variability of the RF with these two integral optical parameters (for volcanic aerosols, but it applies more in general) can be found here:
- Sellitto et al., 2020 (https://www.nature.com/articles/s41598-020-71635-1), see their Fig. 5
- Kloss et al., 2021 (https://acp.copernicus.org/articles/21/535/2021/), see their Fig. 9
The situation can be even more complex for fire emissions, where the optical properties of the emitted and secondarily formed aerosols evolve in a complex environment of high humidity, many gaseous emissions and locally high temperatures. Thus, your estimation depends strongly on the somewhat arbitrary assumptions of your simulations. This must be critically discussed in the text.
The assumptions in this study are not arbitrary. The ECHAM-HAM model is a widely used community model that has been thoroughly evaluated for aerosol processes and aerosol-climate interactions. The only assumption that differs from the default configuration is the adjustment of the smoke injection heights for the 4 pyroCb days. This is also not arbitrary but based on ground-based and satellite observations and, moreover, is shown to be reasonably realistic by the evaluation performed. In addition, the comparison of single scattering values in the model with the lidar-based inversions, as well as the newly included mention of the asymmetry parameter, shows that the particle optical properties are adequate for this fire event, and thus supports the positive instantaneous solar radiative forcing at TOA. Yu et al. (2021) only look at clear-sky conditions. However, in contrast to the instantaneous forcing estimates presented in this study, they provide an effective forcing for shortwave and longwave radiation.
The atmospheric adjustments included in the effective forcing reduce the instantaneous forcing, especially due to the high altitude of the smoke aerosol layer, resulting in an overall negative TOA forcing. Note that our clear-sky instantaneous TOA forcing actually agrees well with the clear-sky effective TOA forcing from Yu et al. (2021) for solar radiation, for which the rapid adjustments are of minor importance.

SC58) P12 L20-21: This is another reference that can be helpful in comparing your results with volcanic eruptions: https://www.nature.com/articles/ncomms8692?proof=t
Many thanks for this additional reference.

SC59) P12 L22-24: No mention of the stratospheric vortex driven by rapid vertical transport + plume heating seen for this fire event? (Khaykin et al., 2020; Kablick et al., 2020).
This was implicitly meant by the "responses in atmospheric dynamics". However, this is now explicitly mentioned: "Khaykin et al. (2020) actually showed that a self-sustained 1000 km anticyclonic vortex formed as a result, which traveled through the stratosphere for weeks, accompanied by a local ozone reduction".

SC60) P13 L7-8: "uncertainties in AOT; in particular…single scattering albedo…": This should be rephrased: the AOT representation and single scattering albedo are only in part inter-dependent. In addition, the angular scattering properties of the aerosol layer are also (or, under some conditions, even more) important for RF estimations (see Sellitto et al., 2020, and Kloss et al., 2021, mentioned at SC56), and this should be mentioned.
This sentence now includes the asymmetry parameter as a source of uncertainty, which is additionally addressed later in this section. See also the response below.

SC61) P13 L9: "SSA lies between 0.82-0.85…". The single scattering albedo (and the asymmetry parameter of soot aerosols, see SC60) may be significantly affected by the mixing state and coating of BC.
The SSA can be significantly larger (up to ~0.95 at 550 nm) if BC is coated by aqueous secondary aerosols (organic or sulphate), e.g. https://www.nature.com/articles/s41467-020-20482-9. Also, and importantly, it looks like smoke aerosols are too absorbing in models due to a generally incomplete representation of the aerosol mixing state for biomass burning aerosols: https://www.nature.com/articles/s41467-020-20482-9. This must be mentioned and discussed in the text, as this might be a large source of uncertainties for your RF estimations (as well as the heating rate estimations and, even more at the basis, the AOT fields).
We are aware of this study on the apparently widespread overestimation of absorption of biomass burning aerosol in aerosol-climate models. The revised version of the manuscript now discusses this point in more detail, including a reference to the study of Brown et al. (2020). In addition, the comparison with current lidar-based derivations of SSA values for other intense fire events, and for this one in particular, was made more concrete. These values, however, support our previous conclusion that smoke absorption is even slightly underestimated in the model for this extreme wildfire case. Regarding the asymmetry parameter, it is difficult to make an evaluation because the exact morphology of the smoke particles is not known. For the smoke particles, only the asymmetry parameter of the fine mode fraction is relevant here. In our model, the asymmetry parameter for the Australian smoke is about 0.6 at 550 nm, which is in good agreement with typical values for wildfire aerosol in the literature (see e.g., Reid et al., ACP, 2005). In the case of the volcanic ash mentioned by the Reviewer, the asymmetry factor is even more uncertain due to the more irregular particle shapes (depending on the eruption type), and the uncertainties in radiative forcing may not be directly comparable.

Table 3: As already said for the AOT trend, in Khaykin et al.
the strongest RF is in February, whereas here it is in January. As already mentioned in previous comments, this might be linked to an insufficient representation of secondary aerosol formation and the mixing state in your model. Please mention and comment in the text.
As already stated in reply to comments MC2 and SC56, this comment originates from overlooking the saturation effect in the satellite measurements mentioned by Khaykin in the caption of their Fig. 4, which explains the delayed aerosol maximum in the observations. Nevertheless, it is possible that the model underestimated the secondary aerosol formation in this study; however, this would not have such a large effect. We mention this as a further potential source of underestimated extinction in the revised manuscript.

SC65) P14 L15: "While this is appropriate…pyroCb clouds": Thus, why not use the satellite observations of the plumes themselves, as proposed by Kloss et al. (2021, Fig. A1-2), see also SC56?
It is certainly a good idea to further explore the potential of satellite observations for initializing the injection heights of vegetation fires. However, bush or forest fires are more complex than the volcanic eruptions described in the mentioned paper. They are spatially variable and can extend over larger areas, which is also difficult to predict precisely. For modelling applications, satellite-based radiative power has proven to be a good measure of the height of smoke injection into the atmosphere, which can be well attributed to individual fire sites. However, it has been shown that, just as in this case, heavy smoke and cloud development strongly hamper detection.
SC66) P14 L20-21: "Consequently, aerosol-climate models underestimate the wildfire aerosol impacts on the energy balance, as the vertical location of the smoke relative to clouds is fundamental to its radiative impact.": This might be true for pyroconvective fires, while it has been demonstrated that the biomass burning RF is overestimated in models in general, again available at the following link: https://www.nature.com/articles/s41467-020-20482-9
Agreed. The statement was made more precise.

SC67) P14 L29: please put references in chronological order.
Done.
The expression of scavenger receptor B2 in enterovirus 71-infected mice

Objectives: Scavenger receptor class B, member 2 (SCARB2) participates in early innate immune responses to infection, so our aim was to explore the expression and role of mouse SCARB2 (mSCARB2) in different tissues of EV71-infected mice.

Methods: ICR mice were inoculated intraperitoneally (i.p.) with 0.1 ml of EV71 at 10^7.5 TCID50/ml. Control mice were injected i.p. with the same volume of RD cell lysate. Mice were sacrificed under ether anesthesia at days 4, 8 and 12 post infection (p.i.), and their brain, brainstem, spinal cord, cerebellum, lung and heart were dissected out for determination of viral RNA copy numbers by quantitative real-time PCR (qRT-PCR) and detection of mSCARB2 expression by immunohistochemistry, qRT-PCR and Western blotting. Cytokines were quantified by ELISA.

Results: Viral loads in the central nervous system (CNS) were higher than in lung and heart. The expression of mSCARB2 increased in the tissues of EV71-infected mice; however, within a certain period of time the increase was greater in the CNS than in lung and heart, particularly in the brainstem and brain. In addition, local TNF-α, IL-6 and IL-1β production was consistent with mSCARB2 expression in the tissues of EV71-infected mice, and relative mSCARB2 mRNA levels correlated positively with local TNF-α, IL-6 and IL-1β levels at days 4 and 8 p.i.

Conclusions: Our data revealed that elevated local mSCARB2 may modulate pro-inflammatory cytokine induction in local tissues, particularly in the CNS of EV71-infected mice.
Introduction

Enterovirus 71 (EV71), a neurotropic virus with undefined pathogenesis, has caused significant morbidity and mortality throughout the world since it was first detected in 1969 in the United States [1,2], especially in the Asia-Pacific region, including Singapore [3,4], South Korea [5], Malaysia [6], Japan [7], Vietnam [8], and China [9,10]. EV71 and coxsackievirus A16 (CVA16) infections are generally associated with hand, foot and mouth disease (HFMD), but EV71 infection occasionally progresses to severe neurological disease, including aseptic meningitis, poliomyelitis-like paralysis, and possibly fatal encephalitis in neonates; brainstem encephalitis associated with pulmonary edema and cardiac insufficiency was the primary manifestation in patients with neurologic involvement [11,12]. Numerous animal models have been developed to study the pathogenesis of EV71 infection using mouse-adapted strains of EV71 [13,14] and innate immunodeficient mice [15]. The EV71 BrCr strain was demonstrated to induce neurological manifestations of tremor, ataxia, and brain edema in cynomolgus monkeys [16]. Moreover, EV71 BrCr-infected mice also developed limb paralysis and encephalitis [17]. SCARB2 (also known as Lysosomal Integral Membrane Protein II, LIMP II, LGP85 or CD36b like-2) is composed of 478 amino acids and belongs to the CD36 family, which includes CD36 and scavenger receptor B, member 1 (SR-BI), and its splicing variant SR-BII [18,19]. SCARB2 is one of the most abundant proteins in the lysosomal membrane and participates in membrane transportation and the reorganization of the endosomal/lysosomal compartment [19-21]. SCARB2 shuttles between these compartments and the plasma membrane [19]. SCARB2 is a type III double-transmembrane protein with a large extracellular domain (when it is present at the cell surface) and short cytoplasmic domains at the amino- and carboxy-terminus [18].
SCARB2 is expressed in a variety of tissues, including neurons in the CNS. SCARB2 deficiency in mice causes ureteric pelvic junction obstruction, deafness, and peripheral neuropathy, and SCARB2 deficiency in humans causes action myoclonus-renal failure syndrome (AMRF) [22,23]. The role of SCARB2 appears to be connected to the TNF-α-dependent and early activation of Listeria-infected macrophages through internal signals linking the regulation of late trafficking events with the onset of the innate immune response to Listeria [24]. Animal models have been developed to detail the pathogenesis of EV71 infection. However, the majority of the research has been devoted to understanding the neurotropism and neuropathogenesis of EV71, whereas the immunopathogenic aspect of the viral infection has remained largely unknown. It was proposed that overwhelming virus replication combined with the induction of massive pro-inflammatory cytokines is responsible for the pathogenicity of EV71 [25-27]. Indeed, high levels of interleukin-1β (IL-1β), IL-6, IL-10, IL-13, gamma interferon (IFN-γ), and tumor necrosis factor alpha (TNF-α) in the serum and cerebrospinal fluid (CSF) of EV71-infected patients have been consistently reported [25,27,28]. In particular, CSF levels of IL-1β, IL-6, and TNF-α were found to be significantly elevated in patients with pulmonary edema (PE) and encephalitis, demonstrating a strong correlation between pro-inflammatory cytokine production and clinical severity in EV71 infections [26,29], and an EV71-infected neonate mouse model sustained high levels of IL-6 [30]. SCARB2-deficient mice display a macrophage-related defect in Listeria innate immunity. They produce less of the acute-phase pro-inflammatory cytokines/chemokines MCP-1, TNF-α, and IL-6, but normal levels of IL-12, IL-10, and IFN-γ, and show a 25-fold increase in susceptibility to Listeria infection [24].
In this study, we assessed the expression of SCARB2 and the production of pro-inflammatory cytokines during EV71 infection in neonatal mice. Our results indicate that EV71 infection increases the expression of SCARB2 in different tissues, which correlated with locally elevated pro-inflammatory cytokine induction, especially in the CNS.

Cells and viruses

Human rhabdomyosarcoma (RD) cells (purchased from the Chinese Academy of Sciences Cell Bank, Shanghai, China) were maintained in Dulbecco's Modified Eagle's Medium (DMEM, Gibco) containing 10-15% fetal bovine serum (FBS, Gibco), 2 mM L-glutamine, 100 IU/ml penicillin, and 100 μg/ml streptomycin at 37 °C and 5% CO2. The non-mouse-adapted EV71 strain BrCr (a kind gift from the Institute of Medical Biology, Chinese Academy of Medical Sciences & Peking Union Medical College, Kunming, China) was propagated in RD cells. Once the cells displayed a cytopathic effect (CPE), they were harvested, and cellular debris was removed by centrifugation at 10,000×g for 30 min. To prepare virus stocks, the virus was propagated for one more passage in RD cells and purified with an Amicon® Ultra 100 K device (Millipore) at 4,000×g for 40 min. The 50% tissue culture infective dose (TCID50) was determined in RD cells using the Reed and Muench formula [31], and working virus stocks were adjusted to 10^7.5 TCID50 per ml.

Animals and treatments

ICR mice (purchased from the Laboratory Animal & Animal Experiment Center, Qingdao, China) were housed under specific-pathogen-free conditions at 23 °C. All institutional guidelines for animal care and use were strictly followed throughout the experiments. One-day-old ICR mice were inoculated i.p. with 0.1 ml of EV71 at 10^7.5 TCID50/ml. Control mice were injected i.p. with the same volume of RD cell lysate and kept in separate cages.
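The Reed and Muench endpoint calculation used above to titrate the virus stocks can be sketched as follows; the well counts are hypothetical, not data from this study:

```python
import numpy as np

def reed_muench_log10_tcid50(log10_dilutions, infected, total):
    """Reed & Muench 50% endpoint from an endpoint-dilution (CPE) assay.

    log10_dilutions: log10 of each dilution (e.g. -1, -2, ...), ordered from
        most to least concentrated; infected/total: positive wells and wells
        inoculated per dilution. Returns log10 TCID50 per inoculum volume.
    """
    infected = np.asarray(infected, dtype=float)
    uninfected = np.asarray(total, dtype=float) - infected
    # Reed-Muench convention: accumulate infected wells toward higher
    # dilutions and uninfected wells toward lower dilutions.
    cum_inf = np.cumsum(infected[::-1])[::-1]
    cum_uninf = np.cumsum(uninfected)
    pct = 100.0 * cum_inf / (cum_inf + cum_uninf)
    i = int(np.where(pct >= 50.0)[0][-1])            # last dilution with >= 50% infected
    pd = (pct[i] - 50.0) / (pct[i] - pct[i + 1])     # proportionate distance to 50%
    step = log10_dilutions[i] - log10_dilutions[i + 1]
    return -(log10_dilutions[i] - pd * step)

# Hypothetical readout: six ten-fold dilutions, 8 wells each.
titer = reed_muench_log10_tcid50([-1, -2, -3, -4, -5, -6],
                                 [8, 8, 6, 3, 1, 0], [8] * 6)
print(f"log10 TCID50 = {titer:.2f} per inoculum volume")
```

Adjusting a working stock to a target titer such as 10^7.5 TCID50/ml then amounts to diluting from the titer estimated this way.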
Their weight gain or loss and clinical signs, including ruffled fur, hunchback, wasting, limb weakness, limb paralysis, twitch, moribund state and death, were monitored daily for up to 14 days after inoculation. The clinical score was graded as follows: 0, healthy; 1, weakness in the hind limbs; 2, paralysis in a single limb; 3, paralysis in more than two limbs; 4, death [32]. In addition, mice from each group were sacrificed under ether anesthesia at days 4, 8, and 12 post infection. After perfusion with PBS containing EDTA, their brain, cerebellum, brainstem, spinal cord, heart and lung were immediately dissected out for RNA extraction, protein extraction or immunohistochemical examination. The experimental protocol was approved by the Animal Care and Use Committee of the Institute of Laboratory Animal Science of the Chinese Academy of Medical Sciences.

Virus detection in mice

Quantitative real-time PCR (qRT-PCR) was used to determine the number of copies of viral RNA present in the examined tissues. Total RNA was extracted from individual brain, cerebellum, brainstem, spinal cord, heart and lung samples using an RNAiso Plus Kit (Takara, Dalian, China) according to the manufacturer's instructions. Next, total RNA was reverse-transcribed with random hexamers using a Reverse Transcription kit (Thermo Scientific). The cDNA was subjected to quantitative PCR in a 50-μl reaction mixture (Thermo Scientific DyNAmo SYBR Green qRT-PCR Kit) with primers EV71-S (5'-GCAGCCCAAAAGAACTTCAC-3') and EV71-A (5'-ATTTCAGCAGCTTGGAGTGC-3') targeting nucleotides 2372-2598 of EV71/BrCr [14,33]; the cycling conditions consisted of a denaturation step at 95 °C for 15 min and 40 cycles of 95 °C for 10 s and 60 °C for 60 s.
The EV71 virus fragment of nucleotides 2372-2598 was used as the real-time PCR standard, adjusted to a concentration gradient of 1×10^7, 1×10^6, 1×10^5, and 1×10^4 copies/μl, and this DNA fragment of known copy number was used as the standard to calculate the copy number of viral RNA in the infected tissues. Quantitative real-time RT-PCR was performed on the Mxpro-Mx3000P system.

Immunohistochemical staining

Tissues from the sacrificed mice were rinsed in 10% buffered formalin and then embedded in paraffin. Four-micrometer sections were cut (Leica RM 2235) and placed on poly-L-lysine-coated glass slides before fixing with 3.7% paraformaldehyde. Endogenous peroxidase was blocked for 10 min, and nonspecific protein binding sites were also blocked for 10 min. The sections were incubated with an mSCARB2 antibody (Abcam) at 1:100 for 1 h and then incubated with biotinylated secondary IgG and streptavidin-HRP (Streptavidin-HRP Kit, CWbio Co. Ltd, Beijing, China) for 10 min each at room temperature. A red-to-brown peroxidase stain was developed using the
DAB Chromogenic Reagent kit (CWbio Co. Ltd, Beijing, China), and the sections were examined with a light microscope after counterstaining with hematoxylin.

Detection of mSCARB2 gene expression

To examine mSCARB2 expression, total RNA was isolated from different tissues of EV71-infected mice and controls using an RNAiso Plus Kit (Takara, Dalian, China) following the manufacturer's instructions. Total RNA was converted into cDNA by reverse transcription (RT) using a Reverse Transcription kit (Thermo Scientific). The cDNA was subjected to quantitative PCR (Thermo Scientific DyNAmo SYBR Green qRT-PCR Kit) on a Rotor-Gene RG-3000 system. The primers were mSCARB2-L1 (5'-TCTGCTGTCACCAATAAGGC-3') and mSCARB2-R1 (5'-CCAGATCCACGACAGTCAAC-3'). The conditions consisted of a denaturation step at 95 °C for 15 min and 40 cycles of 95 °C for 10 s and 60 °C for 60 s. GAPDH was used as an internal control. Relative gene expression was calculated using the 2^-ΔΔCt method as described previously [34]. Each sample was run in triplicate.

Western blot analysis

For Western blot analysis of mSCARB2 in the various tissues, each sample was homogenized in ice-cold tissue extraction buffer (Invitrogen, Carlsbad, CA) containing 1% protease inhibitor cocktail. The homogenates were centrifuged at 11,000×g for 30 min at 4 °C. A BCA protein assay kit (Pierce, UK) was used to assay the total protein of each sample. Samples with equal protein amounts were loaded onto an 8% SDS-PAGE gel. After electrophoresis, the proteins were transferred electrophoretically onto polyvinylidene fluoride membranes. Excess sites on the membranes were blocked by incubation for 2 h at room temperature with 3% (wt/vol) nonfat dried skimmed milk in 20 mM Tris-HCl, pH 7.5, and 150 mM NaCl (Tris-buffered saline [TBS]). After a single wash with TBS, the membranes were incubated overnight at 4 °C with anti-mSCARB2 antibodies (Abcam) at 1:100 in TBS/0.05% Tween 20. The membranes were then washed three times with TBS/0.05% Tween 20 and incubated with 0.3 µg/ml horseradish-peroxidase-labeled anti-rabbit IgG (Biosource) in 3% (wt/vol) nonfat dried skimmed milk in TBS for 1 h at room temperature. Immunoreactive proteins were visualized with an enhanced chemiluminescence detection system (ZSJQ Corp., Beijing, China) according to the manufacturer's instructions. The amounts of mSCARB2 protein were expressed relative to the amount of GAPDH.

Cytokine quantification

Various tissues were harvested from the sacrificed animals at the indicated time points, weighed, and immediately homogenized in 500 µl of 1× phosphate-buffered saline (PBS).
The homogenates were centrifuged at 13,000×g for 10 min at 4 °C, and the supernatants were collected and stored at -80 °C until further analysis. Cytokine levels were measured using solid-phase sandwich ELISA kits (Mouse TNF-α, IL-6 and IL-1β Quantikine, R&D Systems) following the manufacturer's instructions. The sensitivities of the TNF-α, IL-6 and IL-1β assays according to the manufacturer's protocols were 7.21 pg/ml, 1.8 pg/ml and 4.8 pg/ml, respectively. Intra-assay and inter-assay coefficients of variation were: TNF-α, 3.9% and 6.2%; IL-6, 3.9% and 8.9%; IL-1β, 4.6% and 6.6%.

Statistical analysis

All statistical analyses were performed with GraphPad Prism, version 5.0 (GraphPad Software, San Diego, CA), for Mac. Kaplan-Meier survival curves were analyzed by a log-rank test. Clinical score curves were analyzed by the Kruskal-Wallis test. Other experiments were analyzed by Student's t test or by one-way analysis of variance (ANOVA) followed by Tukey's multiple comparison tests. Pearson's correlation was used to analyze the relation between pro-inflammatory cytokines and mSCARB2. A P-value of <0.05 was considered statistically significant.

EV71 infection in mice

The mice infected with virus were monitored daily for 14 days after inoculation. In this study, infected mice developed severe symptoms. Fatigue in the hind limbs occurred at days 1-2 p.i., followed by paralysis in a single limb and/or paralysis in more than two limbs at days 3-7 p.i., or by other signs of encephalitis such as hunched posture, lethargy, or ataxia, and death occurred at days 2-7 p.i. Not a single mouse died in the cell lysate control group. Among the three observed groups (A, B, C), the survival curves were not significantly different (Figure 1A). From days 7-8 onward, however, the survivors' symptoms gradually resolved. The clinical scores of the three groups (A, B, C) were not significantly different (Figure 1B).
However, their body weights grew slowly.

EV71 strain BrCr displays neurotropism in ICR mice

At day 1 p.i., viral RNA was detected only in the spinal cord, and not in the brainstem, cerebellum, brain, heart or lung. The numbers of copies of EV71 RNA detected at day 4 p.i. were: lung, 3.99±0.13 log10 copies/mg tissue; heart, 3.11±0.12; brain, 5.31±0.30; brainstem, 6.17±0.18; spinal cord, 5.59±0.12; and cerebellum, 4.51±0.26 log10 copies/mg tissue. Thereafter, the virus was gradually eliminated (Figures 2A-2F). A histopathological examination of the infected mice at different times was carried out. Marked lesions and/or obvious signs of inflammation were observed in the brain, brainstem, spinal cord and cerebellum, whereas the heart and lung showed fewer lesions and/or signs of inflammation (data not shown).

Different tissues of EV71-infected mice express mSCARB2

As expected, mSCARB2 immunoreactivity was observed in lung, heart, brain, brainstem, spinal cord and cerebellum cells, and markedly stronger immunoreactivity was observed at day 4 p.i. compared to controls, gradually decreasing in later days (Figure 3). These results suggested that the expression of mSCARB2 increased in these tissues after EV71 infection. We found that mSCARB2 mRNA levels were elevated in all selected tissues at day 4 p.i. and were higher in the brainstem (P<0.001), brain (P<0.01), spinal cord (P<0.01) and cerebellum (P<0.05) than in lung and heart. At day 8 p.i., mSCARB2 mRNA levels had decreased markedly but were still higher in the brainstem (P<0.001) and brain (P<0.05) than in lung and heart. At day 12 p.i., only the mSCARB2 mRNA level in the brainstem was higher than in lung and heart (P<0.05) (Figure 4A).
Figure 4A also shows that the mSCARB2 mRNA levels in brainstem and brain were higher than in spinal cord and cerebellum. These results suggest that the expression of mSCARB2 increased markedly in the CNS of EV71-infected mice, especially in the brainstem and brain (Figure 4A). The expression of mSCARB2 protein showed moderate signals (a ~70-85 kDa band) at day 4 p.i. in brainstem, brain, spinal cord, cerebellum, lung and heart, with weaker bands at days 8 and 12 p.i. and in controls. These results further confirmed the mSCARB2 gene expression measured by qRT-PCR; the protein expression of mSCARB2 followed a trend similar to that of gene expression (Figures 4B-4E).

Local Levels of Pro-inflammatory cytokines were elevated in EV71-infected mice

Enhanced cytokine production has been proposed to contribute to EV71 pathogenesis in both humans and mice [26,28,30]. Local TNF-α, IL-6 and IL-1β levels were significantly higher in the various tissue homogenates prepared from EV71-infected animals at day 4 p.i. than in those from age-matched noninfected controls. Moreover, TNF-α, IL-6 and IL-1β levels were significantly higher in the CNS (brain, brainstem, spinal cord and cerebellum) than in lung and/or heart (Figures 5A-5C). At day 8 p.i., these pro-inflammatory cytokine levels had decreased in all tested tissues but remained higher in the CNS than in lung and/or heart (Figures 5A-5C); by day 12 p.i. they had declined further, although IL-6 and IL-1β levels in brainstem and brain remained higher than in lung and/or heart (Figures 5A-5C). In this study, we found that local mSCARB2 expression was consistent with TNF-α, IL-6 and IL-1β production in the brain, brainstem, spinal cord, cerebellum, lung and heart of EV71-infected mice. Strikingly, relative mSCARB2 mRNA levels showed a positive correlation with TNF-α, IL-6 and IL-1β levels in local tissues at days 4 and 8 p.i., whereas at day 12 p.i. no correlation was found (Table 1).
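The correlations reported in Table 1 are Pearson coefficients between relative mSCARB2 mRNA levels and cytokine concentrations across tissues at a given time point. A minimal sketch of the computation, using made-up illustrative values rather than the study's data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-tissue values at one time point (six tissues):
mscarb2_mrna = [1.2, 1.5, 2.8, 3.4, 2.1, 1.9]           # relative mRNA level
tnf_alpha = [40.0, 55.0, 120.0, 150.0, 90.0, 80.0]      # pg/ml

print(round(pearson_r(mscarb2_mrna, tnf_alpha), 3))
```

In practice the significance of r is then tested against the null of no correlation, as was done here with a threshold of P < 0.05.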
These results suggested that, within a certain range, the elevated pro-inflammatory cytokines in local tissues induced higher expression of mSCARB2.

Discussion

We have demonstrated that one-day-old ICR mice can be infected by the EV71 BrCr strain in vivo, and we used this model to assess the expression of mSCARB2 in the CNS, lung and heart. The survival rates and clinical scores of infected mice were used to measure clinical symptoms and activity. After infection with EV71, virus was detected within various tissues by qRT-PCR. Our results indicated that one-day-old ICR mice are susceptible to EV71 infection and develop CNS infection, as observed in humans. Upon infection via the peritoneal route, ICR mice consistently displayed hunchback, limb weakness, and limb paralysis prior to death. Similar to human manifestations of EV71 encephalomyelitis [11], the virus exhibited a strong tropism for the CNS of ICR mice: the numbers of viral RNA copies in the CNS (brainstem, brain, spinal cord and cerebellum) were higher than in lung and/or heart, and were higher in brainstem and brain than in the other tissues, coinciding with the severity of disease or even death of the animals. In addition, all sick mice exhibited massive neuronal damage and increased levels of cytokines, as reported previously for severe cases of human EV71 disease [35]. In this study, we found that the expression of mSCARB2 increased moderately in the CNS, lung and heart of EV71 (BrCr)-infected mice, and at day 4 p.i. was higher in the CNS than in lung and/or heart, especially in brainstem and brain; at days 8 and 12 p.i., the expression of mSCARB2 decreased. TNF-α, IL-6 and IL-1β production was significantly higher in the CNS of EV71-infected mice than in lung and heart at day 4 p.i. At days 8 and 12 p.i., the levels of TNF-α, IL-6 and IL-1β production decreased.
Interestingly, the expression of mSCARB2 in various tissues of EV71-infected mice followed a trend similar to the production of TNF-α, IL-6 and IL-1β. Surprisingly, our data revealed a positive correlation between relative mSCARB2 mRNA levels and TNF-α, IL-6 and IL-1β levels in local tissues at days 4 and 8 p.i., but no correlation at day 12 p.i. Carrasco-Marín et al. presented evidence for a specific role of LIMP-2/SCARB2 in the innate immune response to Listeria monocytogenes (LM) and in phagocytosis: LIMP-2 tightly controls the number of cytosolic LM and the induction of acute-phase pro-inflammatory cytokines such as MCP-1, TNF-α, and IL-6, whereas the production of late pro-inflammatory cytokines, such as IFN-γ and IL-10, is not regulated by LIMP-2/SCARB2 [24]. Two cytokines are involved in macrophage (MØ) activation during infection: TNF-α and IFN-γ. TNF-α acts as an early signal in innate immunity, while IFN-γ is a late signal. It has been claimed that exogenous TNF-α promotes an early activating state in MØs that triggers the cytosolic microbicidal mechanisms [36][37][38]. In EV71 infection, SCARB2 may likewise participate in exogenous MØ activation through the early signals modulated by TNF-α. Taken together, we propose that in EV71-infected mice the elevated local mSCARB2 may regulate the early innate immune response to EV71, or even modulate pro-inflammatory cytokine induction. mSCARB2 may also act as the invasive receptor for enterovirus 71, a plausible hypothesis given that human SCARB2 (hSCARB2) has been identified as a cellular receptor for EV71 and that mSCARB2 exhibits 85.8% amino acid identity and 99.9% similarity to hSCARB2 [39,40], although no direct experimental evidence for this has yet been provided. Finally, the elevated expression of mSCARB2 in EV71-infected mice may play other roles, which are not yet clear.
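Figures such as the quoted 85.8% amino acid identity between mSCARB2 and hSCARB2 come from pairwise sequence alignment. As an illustration of the underlying arithmetic only (the toy sequences below are invented, not SCARB2 fragments, and real comparisons use full alignments):

```python
def percent_identity(aln_a, aln_b):
    """Percent identity over a pre-aligned pair of equal-length sequences.

    Gaps ('-') count toward the alignment length but never as matches.
    """
    if len(aln_a) != len(aln_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(1 for x, y in zip(aln_a, aln_b) if x == y and x != '-')
    return 100.0 * matches / len(aln_a)

print(percent_identity("MKTAYIAKQR", "MKTGYIA-QR"))  # 80.0
```

"Similarity" is computed the same way except that conservative substitutions (scored via a matrix such as BLOSUM62) also count, which is why it is always at least as high as identity.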
v3-fos-license
2023-06-15T15:05:39.560Z
2023-06-05T00:00:00.000
259159923
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://storage.googleapis.com/jnl-sljo-j-sljch-files/journals/1/articles/10346/646733279fe8f.pdf", "pdf_hash": "bc220d2db7ff8ddbe04f5a445abc257756856e20", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1615", "s2fieldsofstudy": [ "Medicine" ], "sha1": "4cb2fdf4c76eacd5f89f4891000ef821c58a4be6", "year": 2023 }
pes2o/s2orc
Two Sri Lankan siblings with diazoxide responsive congenital hyperinsulinaemic hypoglycaemia due to a rare mutation in the ABCC8 gene

No abstract available

Sri Lanka Journal of Child Health, 2023: 52(2): 231-235

Introduction

Congenital hyperinsulinaemic hypoglycaemia (CHH) encompasses a spectrum of rare genetic disorders characterized by dysregulated insulin secretion by pancreatic β cells and is the most frequent cause of severe, persistent hypoglycaemia in newborn babies and infants [1][2][3]. CHH can occur due to mutations in several genes, including KCNJ11 and ABCC8. Inactivating mutations in the ABCC8 gene generally lead to unregulated insulin secretion, causing a severe form of CHH that is unresponsive to diazoxide. We present two Sri Lankan siblings, born to non-consanguineous parents, with a rare diazoxide-responsive form of CHH associated with a missense variant, c.2146G>A, p.(Gly716Ser), in the ABCC8 gene.

Case report

A baby girl was born to non-consanguineous parents, after two 1st trimester miscarriages, at 32 weeks of gestation with a birth weight of 2.5 kg (>+3SD on the preterm growth chart) via emergency lower segment caesarean section due to impending eclampsia and maternal diabetes. She was admitted to the special care baby unit (SCBU) due to moderate prematurity despite being born in good condition. She had an unrecordable capillary blood glucose (CBG) level on admission to the SCBU at 2 hours of life. She was immediately commenced on a dextrose bolus followed by an infusion of glucose at a rate of 4.1mg/kg/min in addition to expressed breastmilk (EBM).
Subsequently, the glucose infusion rate (GIR) was gradually increased from _________________________________________ A critical sample sent on day 14 (CBG = 41mg/dl), while on a GIR of 19mg/kg/min to maintain the CBG just above 50mg/dl, recorded a serum cortisol level of 303nmol/L (55-304nmol/L), a growth hormone level of 3.3µg/L (2-5µg/L), and a serum insulin level of 37.2pmol/L (5.18µU/mL; normal <2µU/mL), with no ketone bodies in the urine, confirming hyperinsulinaemic hypoglycaemia. She was started on oral diazoxide therapy at a dose of 5mg/kg/day in 3 divided doses, which was increased to 7mg/kg/day 24 hours later to maintain the CBG consistently above 70mg/dL. The GIR was gradually reduced to 4mg/kg/min, and the dextrose infusion was omitted on day 17. A decrease of the CBG to 56mg/dL resulted in a further increase in diazoxide to 8mg/kg/day on day 27, after which the CBG remained above 75mg/dL until discharge while on exclusive breastfeeding. The diazoxide dose was tapered off and completely omitted at 2 months and 20 days of age during clinic follow-up, due to normal CBG levels. Her blood glucose remained normal until 7 months of age, when she presented again with an afebrile hypoglycaemic convulsion, at which time she was re-started on diazoxide, to which she showed a good response. She was confirmed to be heterozygous for an ABCC8 missense variant, c.2146G>A, p.(Gly716Ser), at 11 months of age with the help of the Exeter Genetic Laboratory in the United Kingdom. Her mother carried the same ABCC8 missense variant. Genetic testing could not be performed on the father, as he was working abroad, and further genetic testing and counselling were deferred till his return. Our patient is now 3 years of age and maintains normoglycaemia while on regular diazoxide therapy. Her developmental milestones are age appropriate with normal neurological examination.
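The GIR values cited throughout these cases follow the standard neonatal formula GIR (mg/kg/min) = dextrose concentration (%) × infusion rate (mL/hr) / (6 × weight (kg)). A small sketch; the infusion rate and weight below are illustrative, not taken from the case notes:

```python
def gir_mg_per_kg_min(dextrose_pct, rate_ml_per_hr, weight_kg):
    """Glucose infusion rate in mg/kg/min.

    dextrose_pct is g/100 mL, so *10 gives mg/mL; dividing by 60 converts
    mL/hr to mL/min, hence the combined constant of 6 in the denominator.
    """
    return dextrose_pct * rate_ml_per_hr / (6.0 * weight_kg)

# Example: 10% dextrose running at 7.5 mL/hr in a 2.5 kg neonate
print(gir_mg_per_kg_min(10, 7.5, 2.5))  # 5.0 mg/kg/min
```

When a baby receives several glucose sources (IV dextrose plus milk feeds), the contributions are summed to give the total GIR.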
They had not sought genetic counselling when the mother conceived her second child during the COVID-19 pandemic, when the father returned to Sri Lanka. A baby boy was born on 5th April 2022 at 32 weeks of gestation with a birth weight of 2.9kg (>+3SD on the preterm growth chart) via emergency lower segment caesarean section due to impending eclampsia and maternal diabetes, just as in the index case. He too was born in good condition and was admitted to the SCBU since he was moderately preterm. Blood glucose on admission to the SCBU (at 2 hours of age) was 20mg/dL. This was followed by a 10% dextrose bolus and an infusion at a GIR of 4.2mg/kg/min. The GIR was increased to 7mg/kg/min to maintain the CBG levels above 45mg/dl on day 2. The GIR was maintained at 7-8mg/kg/min while the dextrose infusion was tailed off from day 3 and omitted on day 9, when the GIR was 5.4mg/kg/min, while maintaining a CBG >60mg/dL with increasing amounts of expressed breast milk. However, the 10% dextrose infusion was restarted on day 10 as the CBG decreased to 50mg/dL; the GIR of 9.6mg/kg/min increased to 15.2mg/kg/min as the baby achieved full feeds (150mL/kg/day) 48 hours later. The baby remained asymptomatic throughout this period, with no symptoms or signs of hypoglycaemia. The diagnosis of CHH in the older child was only revealed at this stage. The endocrine team managing the first child was then consulted, and he too was commenced on diazoxide 10mg/kg/day in 3 divided doses after a critical sample was sent when the CBG was 21mg/dL. Urine ketone bodies were negative, while the serum cortisol level was 383nmol/L (55-304nmol/L), the growth hormone level was 2.5µg/L (2-5µg/L), and the serum insulin level was 22.9pmol/L (3.19µU/mL; normal <2µU/mL), confirming hyperinsulinaemia. On this dose of diazoxide, the dextrose infusion was gradually tapered off and completely omitted over the next 48 hours.
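The paired insulin values in the two case reports (37.2 pmol/L alongside 5.18, and 22.9 pmol/L alongside 3.19) are consistent with a conversion factor of 1 µU/mL ≈ 7.175 pmol/L, one of several factors used in the literature; the factor is inferred from the reported pairs, not stated in the text. A sketch of the conversion under that assumption:

```python
# Assumed conversion factor, inferred from the reported value pairs:
PMOL_PER_MICRO_U = 7.175

def insulin_pmol_to_micro_u(pmol_per_l):
    """Convert serum insulin from pmol/L to µU/mL."""
    return pmol_per_l / PMOL_PER_MICRO_U

print(round(insulin_pmol_to_micro_u(37.2), 2))  # 5.18
print(round(insulin_pmol_to_micro_u(22.9), 2))  # 3.19
```

Other published factors (6.0 and 6.945 pmol/L per µU/mL) would give noticeably different numbers, so the unit convention should always be checked against the assay used.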
He was further observed for another 4 days for any hypoglycaemic episodes and discharged on day 17 with a plan for clinic follow-up. At 36 days of age, developmental milestones were age appropriate and neurological examination did not reveal any abnormality. Both parents were counselled together regarding the condition, and the implications for future pregnancies were explained. Genetic testing is planned for the second child, as well as the father, and long term follow up has been arranged for both children at the paediatric endocrine centre. Both children are on standard doses of diazoxide, together with a thiazide to minimise fluid retention.

Discussion

Congenital hyperinsulinaemic hypoglycaemia (CHH) is a cause of severe, persistent hypoglycaemia in newborn babies and infants 1 . CHH can occur due to one of several rare genetic disorders associated with dysregulated insulin secretion by the pancreatic β cells 1-3 . The incidence of CHH is estimated to be 1:50,000 live births but could be as high as 1:2500 in regions with high consanguinity rates 4 . Mutations in many genes have been described in relation to CHH 5,6 . Of these, mutations in the ABCC8 and KCNJ11 genes affect the KATP channel in pancreatic β cells, while other genetic mutations mainly alter the concentration of intracellular signalling molecules (ATP) 5 . Mutations in the ABCC8 and KCNJ11 genes cause the most severe form of CHH, which is typically unresponsive to diazoxide treatment 5 . Pancreatic β cells have KATP channels, whose key regulators are intracellular ATP and ADP. An increase in the intracellular concentration of ATP results in closure of the KATP channel 1,5 . This in turn results in depolarization and activation of the voltage gated calcium channels, causing calcium influx and exocytosis of insulin 5 .
The ABCC8 gene (ATP-Binding Cassette, Sub-Family C, Member 8) codes for the SUR1 protein, which makes the KATP channel in the pancreatic β cells sensitive and responsive to sulfonylureas and to channel activators such as diazoxide 5 . Inactivating mutations in any region of the ABCC8 gene lead to persistent depolarization of the pancreatic β-cell membrane, which leads to unregulated insulin secretion and can cause a severe form of CHH that is unresponsive to medical therapy with diazoxide 5,7 . Diazoxide binds to the SUR1 subunit in the KATP channel, causing it to remain open, thereby increasing its permeability to potassium ions, resulting in hyperpolarization of the pancreatic β cells and inhibiting calcium dependent insulin secretion 6,8 . The ABCC8 missense variant, c.2146G>A, p.(Gly716Ser), identified in our index patient has been identified in three additional unrelated infants referred for congenital hyperinsulinism testing to the Exeter Genetic Laboratory. In two of these patients the variant was maternally inherited, and in the third case the variant arose de novo 9 . This variant has not been reported in the genome aggregation database. In our patient, the mutation could have been inherited recessively with a non-coding paternal ABCC8 variant, could be an acquired ABCC8 variant on the paternal allele, or could be a dominant variant with variable penetrance from the unaffected mother 9 . This variant is recognised as being sensitive to diazoxide, which explains the good response noted in these two Sri Lankan siblings despite having CHH due to an ABCC8 mutation 10 . Additional genetic testing on the father and younger sibling is awaited. Most newborns with CHH have macrosomia, as insulin acts as a growth stimulator in utero; the average birthweight is 3.7 kg at term 2 . The degree of hypoglycaemia in CHH can range from asymptomatic hypoglycaemia detected by routine blood glucose monitoring to life-threatening hypoglycaemic coma or status epilepticus 2 .
CHH should be suspected when there is persistent hypoglycaemia beyond 48 hours of life, with an increasing dextrose requirement to maintain normoglycaemia 2 . Further, hypoglycaemia can occur in fasting as well as post-prandial states 2 . In CHH, excessive insulin also blunts the normal counter-regulatory hormone response to hypoglycaemia and inhibits the normal protective mechanisms which occur during hypoglycaemia in the fasting state, such as glycogenolysis, gluconeogenesis, lipolysis and ketogenesis, thus depriving the brain of both glucose and alternative energy substrates 4,11,12 . Therefore, early diagnosis and treatment of patients with CHH are essential to avoid brain damage and long-term neurological sequelae 1 . The acute management of hyperinsulinaemic hypoglycaemia requires parenteral glucose infusion to maintain blood glucose above 3.5mmol/L 13,14 . The parenteral glucose requirements exceed 8mg/kg/min and can often be as high as 15-25mg/kg/min 9 . In an emergency (e.g., symptomatic hypoglycaemia or seizures without venous access), intramuscular administration of glucagon may be used 1,4,13,15,16 . Early initiation of frequent feeding is also a very important supportive measure, although it can be difficult due to the feeding disturbances, food aversion, gastro-oesophageal reflux disease and foregut dysmotility which have been observed in patients with CHH 13,17 . Long term treatment differs between children who are responsive to diazoxide and those who are diazoxide unresponsive. The management of diazoxide responsive children is straightforward, while the management of diazoxide unresponsive children is challenging. In such cases, it is essential to find a suitable medical therapy or, if necessary in medically unresponsive cases, to resort to surgical intervention 1,13,15,17,18 .
The drugs that can be used in the long-term management of diazoxide unresponsive CHH include octreotide, lanreotide, nifedipine, glucagon and sirolimus, while surgical interventions include partial pancreatectomy for focal disease or near total pancreatectomy for diffuse disease 1,4,13 . In the case history described above, both siblings were large for gestational age, although macrosomia was not apparent owing to the prematurity. Although blood glucose levels reach a physiologic nadir within 4 hours of birth, both were identified as hypoglycaemic on routine investigations done on admission to the SCBU within 2 hours of birth and were actively managed. Both had a glucose requirement greater than 15mg/kg/min to maintain normoglycaemia. Hyperinsulinaemia was confirmed biochemically in both, and both responded well to diazoxide, an unusual feature for CHH arising from ABCC8 gene mutations 3,6 . Hypertrichosis was noted in the older sibling, but neither developed any other adverse effects. There is no evidence of neurodevelopmental delay in either child to date.

Conclusion

Persistent and/or recurrent hypoglycaemia beyond 48 hours with an increasing requirement for dextrose should always raise the suspicion of CHH. With early diagnosis and treatment, satisfactory long term neurodevelopmental outcomes can be achieved in medically responsive CHH.
v3-fos-license
2022-07-20T15:10:53.867Z
2022-07-01T00:00:00.000
250660227
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2218-1989/12/7/654/pdf?version=1657892557", "pdf_hash": "43ed0a575ed1dbaf432ca4be5deb66c7e7a97ff2", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1616", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "sha1": "9210a8347fb53300a6ee21a3cea9e5c39b251e5b", "year": 2022 }
pes2o/s2orc
Coffee Drinking and Adverse Physical Outcomes in the Aging Adult Population: A Systematic Review

Declining physical functioning covers a prominent span of later life and, as a modifiable driver to be leveraged, lifestyle plays a critical role. This research aimed to undertake a systematic review investigating the association between levels of coffee consumption and declining conditions of physical functioning during aging, such as sarcopenia, frailty, weakness, falls, and disability, while trying to explain the underlying mechanisms, both from a metabolic and social angle. The literature was reviewed from inception to May 2022 using different electronic databases, not excluding the grey literature. Two independent researchers assessed the eligibility of 28 retrieved articles based on inclusion criteria; only 10 met the eligibility requirements. Different levels of coffee consumption were considered as exposure(s) and comparator(s) according to PECO concepts, while middle age was an inclusion criterion (40+ years). No limitations were set on the tool(s) assessing physical functioning, type of dietary assessment(s), study setting, general health status, country, and observational study design (cohort, cross-sectional). The cross-sectional design outnumbered the longitudinal (90%, n = 9/10). The overall quality rating was judged poor (70%) to good (30%). It was found that higher exposure to coffee drinking is strongly associated with better physical functioning outcomes, and the findings showed consistency in the direction of association across selected reports. Countering physical decline is a considerable challenge in easing the burden of population aging. For preventive models that aim to allow a better lifestyle, it has to be kept in mind that increased coffee consumption does not lead to poor physical functioning.

Introduction

Population aging is a major challenge and top priority in the 21st century.
As the number of older people in industrialized countries grows, the World Health Organization (WHO) underlines how important it is for aging adults to keep their physical mobility so that they can continue to live active, independent lives [1]. Indeed, this population subset is far more likely to suffer functional impairment [2,3], frailty [4], and disability [5,6]. A systematic review of studies across several countries by Collar and colleagues [7] found a 10% and 25% prevalence of frailty in community-dwelling people aged over 60 and 80 years, respectively. We recently found comparable prevalence data in our "Salus" study of an elderly population in southern Italy [3]. In the United States, 50% of people aged 80 and older reported mobility limitations, 35% some disability in instrumental activities of daily living (IADL), and 27% some disability in basic activities of daily living (ADL) [8]. Preventing the adverse physical outcomes induced by the progressive age-related deterioration of the musculoskeletal system is critical to increasing the number of healthy life years, avoiding institutionalization, and reducing the healthcare system burden imposed by the geriatric population subset. To this end, a better understanding of modifiable lifestyle risk drivers of sarcopenia, frailty, loss of mobility and autonomy, falls, weakness, disability, and loss of muscle vigor appears critical to improving monitoring and prevention in older people. Epidemiological research into the association between dietary factors and adverse physical outcomes in the aging population is scarce, still lacking in evidence, and mainly focuses on antioxidants, B vitamins, fruits and vegetables, and dietary patterns [9][10][11][12][13]. Beverages fall into the class of dietary consumption essentials, which is a hot topic, especially in aging populations that undergo physiological alterations in thirst and taste.
With an estimated 2.25 billion cups drunk daily worldwide, coffee is one of the most widely consumed beverages in the world [14,15]. Population studies suggest that coffee consumption is highly prevalent among the elderly [16]. Coffee drinking provides exposure to a huge number of biologically active compounds and nutrients [17], such as polyphenols, lipids, minerals, and particularly caffeine, which is the most widely consumed psychoactive substance in the world (85% of the US population) [18]. Improvements in a wide variety of health outcomes due to exposure to coffee consumption have already been described in the literature, including lower mortality, body weight, cancer and diabetes risk, and more favourable patterns of markers of inflammation and insulin resistance [14,[19][20][21][22], often showing a dose-dependent relationship [23]. For all these reasons, coffee consumption has attracted and continues to attract an enormous amount of research. Against this background, assessing the association between exposure to coffee consumption and the physical decline outcomes of aging seems to be an important issue. Unfortunately, to the best of our knowledge, the current literature lacks an overview of this research question. The present research aimed to assess the magnitude and direction of the association between different coffee exposure levels and risks of adverse physical outcomes, including physical frailty, sarcopenia, impaired walking and mobility, and disability. Findings would help summarize the available window of evidence to improve public health advice on coffee consumption in the aging adult population while trying to unfold possible underlying mechanisms, both from a metabolic and social perspective.

Results

The first systematic search of the literature yielded 284 entries. After excluding duplicates, 231 were classified as potentially relevant and selected for the title and abstract analysis.
Then, 203 were excluded for failing to meet the characteristics of the approach or the review goal. After reviewing the full text of the remaining 28 records, only 10 met the inclusion criterion of age and were included in the final qualitative analysis [24][25][26][27][28][29][30][31][32][33]. The flow chart of Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA), illustrating the number of studies in each review stage, is shown in Figure 1. The final study base included ten articles reporting nineteen different outcomes. Figure 2 shows a graph overview of the results.
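The screening arithmetic reported above (284 records identified, 231 after deduplication, 203 excluded at title/abstract, 28 full texts reviewed, 10 included) can be sanity-checked with a few lines:

```python
# PRISMA flow counts as reported in the text
identified = 284
after_dedup = 231
excluded_title_abstract = 203
included = 10

duplicates_removed = identified - after_dedup               # 53
full_text_reviewed = after_dedup - excluded_title_abstract  # 28
excluded_full_text = full_text_reviewed - included          # 18

assert full_text_reviewed == 28  # matches the "remaining 28 records" in the text
print(duplicates_removed, full_text_reviewed, excluded_full_text)  # 53 28 18
```

Each stage of a PRISMA diagram should balance in exactly this way, which makes such a check a cheap guard against transcription errors.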
Details of the design (cohort, cross-sectional), sample size (N) and sex ratio (%), minimum age or age range, study population, and the country of each study are provided in Table 1. The cross-sectional design (90%, N = 9 out of 10) predominated over the longitudinal design [30]. The recruitment contexts were community-based, except for one study that collected data from universities, colleges, and technical schools [27] and one in a hospital context [33]. The geographical setting of the selected studies was evenly distributed between Asia (N = 5/10, 50%) and Europe (N = 4/10, 40%), with an American minority [24]. Following the inclusion criteria, subjects were over 40 years of age, and predominantly 60 or older. Of the 34,921 individuals in the selected studies, the female sex was more prevalent, although many studies failed to report the sex ratio. Regarding the outcomes fitting the inclusion criteria, the selected studies (N = 10) reported a set of adverse physical conditions, i.e., poor daily living skills (i.e., lower extremity mobility, general physical activity, leisure and social activities, impaired agility, impaired mobility, and impaired general physical function), sarcopenia (according to operational construct(s) or individual dimensions), frailty (according to Fried's phenotype or the FRAIL scale), falls, slow gait (assessed by global gait or gait speed), and exhaustion (assessed by SPPB or the chair rise test). The main finding was the consistent direction of the association, in the sense that the greater the coffee consumption, the better the physical functioning (Table 2). Falls were assessed by asking the participants "In the last year have you had any falls?"; the possible answers were "no falls", "only one fall", and "more than one fall".
Adjusted means and their 95% confidence intervals of skeletal muscle mass index according to increasing daily coffee consumption (<1, 1-2, 3 or more cups/day) compared to no consumption: adj mean 7.07 (95% CI 7.08-7.14), adj mean 7.12 (95% CI 7.09-7.14), and adj mean 7.14 (95% CI 7.11-7.17). Concerning outcomes of poor daily living skills, we found articles by Wang [24] and by Machado-Fragua and colleagues [30] reporting on the association between coffee consumption and lower extremity mobility, general physical activity, leisure and social activities, and declines in activities of daily living or instrumental activities of daily living. The findings were roughly consistent in that direction: on the one hand, the authors found that higher coffee consumption reduced the odds of functional disability in older U.S. adults; on the other hand, no association suggested an increased risk of functional disability. Indeed, higher coffee consumption could even benefit women and patients with hypertension, obesity, or diabetes. Regarding sarcopenia, three studies reported on this outcome, but only one focused on individual dimensions of sarcopenia. Chung and Kim's reports [25,26] collected data on the KNHANES Korean population, exploring ASMI as a measure of sarcopenia. They found that the consumption of at least 3 cups of coffee daily was associated with a lower prevalence of sarcopenia in older Korean men. On individual dimensions of sarcopenia, i.e., SMI and handgrip strength, Iwasaka and colleagues [28] found a significant positive correlation between coffee intake and SMI levels. In contrast, handgrip strength did not reach statistical significance, although a positive trend was reported. Similarly, Jyvakorpi and colleagues [33] found a linear, non-significant association between coffee consumption and handgrip strength.
A single report evaluated the risk of falls as an outcome [32], reporting that habitual coffee consumption was associated with a lower risk of falls in two European cohorts, i.e., older people in the Seniors-ENRICA cohort (Spain) and the UK Biobank study. Last, Verlinden [31] and Jyvakorpi [33] reported on gait speed and exhaustion. They documented associations between coffee consumption levels and overall gait, gait speed, SPPB, and chair sitting and standing test, respectively. Verlinden concluded that in a communitydwelling population, consuming more than one daily cup of coffee is related to a better gait; consistently, Jyvakorpi found the same positive trend with gait speed and handgrip strength, SPPB score, and sitting and standing test points. We found a poor (N = 7), fair (N = 2) to good (N = 1) methodological quality overall. An overview of quality ratings within and across studies is provided in Supplementary Table S2 and Figure 3, respectively, highlighting areas with higher or lower ratings. Biases were found mainly in the domains of sample size justification (selection bias) and blinded assessors (detection bias) (91% and 82% of studies, respectively), and to a lesser extent in the domains of different levels of exposure (46% of studies) and multiple exposure assessments over time (73% percent of studies) in light of the prevalent cross-sectional setting. Since 73% of the studies had a cross-sectional design, the same percentage reflected an unclear risk for the following qualitative assessment items: prior exposure to the outcome, sufficient time frame, and loss to follow-up. and declines in activities of daily living or instrumental activities of daily living. The ings were roughly consistent in that direction. On the one hand, authors found that hi coffee consumption reduced odds of functional disability in older U.S. adults; on the o hand, no association suggested an increased risk of functional disability. 
Discussion

The present systematic review addressed the conceptual hypothesis of a link between coffee consumption and better outcomes in terms of declining physical functioning in the aging adult population. To this end, the body of evidence on different exposure levels to coffee consumption was examined against a cluster of impaired physical functioning outcomes, as assessed by operationalized constructs and other validated tools related to sarcopenia, frailty, exhaustion, gait, falls, and disability. The most important finding was the consistent direction of association across all studies selected to fill the knowledge gap about the research question. Although most reports had a cross-sectional design, thus leaving little room for causal inference, we found that the higher the coffee consumption, the greater the drop in adverse outcomes of physical functioning. The above negative link between coffee consumption and adverse outcomes of physical functioning may be explained from a social perspective as well as, conceivably, from a causal, biological, and metabolic standpoint in the context of aging. Physiologically speaking, aging occurs with a pattern of sensory decline [3,34], translating into reduced appetite and sensory perception, which are well-known dimensions underlying frailty, sarcopenia, and physical decline [35,36].
This sensory deficiency carries serious implications for safety, nutrition, quality of life, and social relationships [37]. On this latter point, impaired physical functioning easily leads to a cluster of social deprivations with a concurrent steady loss of conviviality and social drinking opportunities. In other words, the less physically fit you are as you age, the less likely you are to drink "social" coffee or other drinks in company with other people. From a biological perspective, previous evidence consistent with our findings points to the protective bromatological properties of coffee that promote physical well-being. Both animal and human reports discuss putative mechanistic explanations of a causal protective effect of coffee on physical and musculoskeletal health in aging. Guo and colleagues found, in aged mice, a preventive in vivo effect of coffee treatment on sarcopenia progression, along with an increase in muscle mass, grip strength, and the regenerating capacity of injured skeletal muscles. This was likely explained by a decrease in low-grade systemic inflammation, one of the causative drivers of sarcopenia, thanks to the antioxidant and anti-inflammatory properties of coffee drinking [38]. The same report found increased proliferation rates, DNA synthesis, and activation of the Akt signaling pathway in satellite muscle cells of coffee drinkers [38]. Jang and colleagues found the same muscle hypertrophy in mice, possibly explained by a decrease in the transforming growth factor-β (TGF-β) family member myostatin while increasing insulin-like growth factor (IGF) expression [39]. Furthermore, coffee bioactives have been shown to improve insulin sensitivity and muscle glucose uptake [40].
On the other hand, the loss of muscle mass and functionality in the aging population can be partially attributed to a loss of digestive functions, enzyme production [41], and appetite, thus causing malnutrition [42], especially regarding the availability of amino acids for protein synthesis. Coffee is known to stimulate digestive activity [43]. Recent randomized controlled trials (RCTs) indicated significantly higher salivary alpha-amylase production after coffee drinking; coffee also stimulates gastric, gallbladder, and pancreatic secretions. These findings are largely attributable to the effects of caffeine. The best mechanistic explanation is an indirect effect of coffee on physical health. Physical activity is one of the most effective ways to maintain health and prevent physical decline, so feeling tired and lacking energy may be a hindrance. Caffeine is a well-established ergogenic aid whose performance-enhancing effects on strength and endurance have been documented in a wide range of physical tasks. Torquati and colleagues found that consuming 1-2 cups of coffee per day is associated with a 17% increase in the likelihood of meeting physical activity guidelines in middle-aged women, plausibly because caffeine increases energy levels and reduces fatigue. Although further trials are needed to corroborate the causal path, caffeine and some of its metabolites, including the main one, paraxanthine, show notable pharmacological potential, sometimes distinct from that of caffeine, for example on nitric oxide (NO) neurotransmission [44,45]. Jäger and colleagues showed interesting results of paraxanthine supplementation on grip strength, muscle mass, treadmill performance, and NO in mice, all with seemingly less toxicity compared with caffeine [45,46].
On the other hand, theophylline has already been shown to reduce susceptibility to fatigue by improving ventilatory function through its bronchodilator effect and its action on diaphragm contractility [47], and it is commonly used in the elderly in the context of asthma or chronic airway diseases [48]. Like caffeine, its effect on physical performance has also been demonstrated. Despite a heterogeneous literature [49,50] and debated toxicity [46], extending research on theophylline, and also on paraxanthine, into the prevention and treatment of adverse physical outcomes is of interest because of their anti-exhaustion and ergogenic effects. Lastly, from a public health perspective, and based on coffee's beneficial effects reported so far for non-communicable degenerative illnesses of aging such as cancer, cardiovascular disorders, diabetes, Parkinson's disease, and cognitive impairment [41,42], consuming 2 to 3 cups of coffee may be protective against chronic disease occurrence, and therefore against the functional deterioration closely linked to multimorbidity and polypharmacy in aging population settings. We acknowledge some limitations of this systematic review that could create critical bias. Firstly, coffee and its preparation methods are not the same worldwide. It has already been noted that preparation methods can affect cup content and coffee effects. For instance, filtered coffee is free of the diterpenes cafestol and kahweol, which are present in non-filtered coffee and may influence carcinogenic and cholesterolemic effects [43]. Unfiltered coffee is the coffee most commonly consumed in Spain, whereas, in the United States, coffee is consumed mainly after filtering. Only one of the ten selected studies analyzed caffeinated and decaffeinated coffee separately, thus discriminating the effect of caffeine, the primary bioactive component.
Furthermore, the selected studies did not stratify their analyses to include coffee consumption above 3 cups or 330 g per day, and many stopped at 2 cups per day, although a dose-response relationship is well acknowledged [23]; thus, opposite results at higher consumption levels cannot be ruled out. Strengths include the cluster of physical functioning outcomes, embodying extensive evidence on the topic. Moreover, accounting for different levels of exposure to coffee consumption as comparators adds value to these research findings.

Search Strategy and Data Extraction

This systematic review adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist of 27 items [51]. An a priori protocol for the search method and inclusion criteria was conceived and registered on PROSPERO, a prospective worldwide register of systematic reviews, with no modifications to the information supplied at registration (CRD42023338863). Ethical review and approval were waived for this study because it is a review and not an original article. We conducted separate searches in the US National Library of Medicine (PubMed), Medical Literature Analysis and Retrieval System Online (MEDLINE), EMBASE, Scopus, Ovid, and Google Scholar databases to identify original articles investigating any possible association between coffee exposure and adverse physical outcome(s). The primary goal was to determine whether different exposures (comparators) to coffee consumption, as measured by dietary intake (cups per day, rising quintiles of daily consumption, or grams per day), were associated with unfavorable physical outcomes in the aging adult population. We also reviewed the gray literature at the study selection stage.
To pinpoint abstracts of significant conferences and other information that specialists had not evaluated, we turned to the largest preprint repository, https://arxiv.org/ (accessed on 12 May 2022), as well as the database http://www.opengrey.eu/ (accessed on 12 May 2022). We also searched https://www.base-search.net/ (accessed on 12 May 2022), particularly in the grey-literature step, to limit publication bias against contradictory and unfavorable results. Since we chose to include only observational studies, the search strategy followed the PECO (Populations, Exposure, Comparator, and Outcomes) concepts [52]. Thus, we took into account populations (adults aged 40 years or older), exposure (coffee intake or consumption), comparators (different exposure levels), and adverse physical outcomes of aging, i.e., sarcopenia, physical frailty, limited mobility, exhaustion, falls, IADL (Instrumental Activities of Daily Living), ADL (Activities of Daily Living), disability, and slow gait. The exposure was limited to different levels of coffee consumption. The outcome factors were selected to include the primary adverse physical outcomes of aging, namely frailty syndrome, sarcopenia, mobility loss, muscle loss, ADL loss, IADL loss, disability, and gait impairment, regardless of the assessment tool used (disability and functional impairment, sarcopenia, frailty phenotype, gait and global speed, falls, physical exhaustion) or the proxy tool applied (e.g., dynamometry, bioimpedance, the Short Physical Performance Battery or SPPB, the sitting down and standing up test, questionnaires, and others). The search strategy used in PubMed and MEDLINE and adapted to the other electronic sources is detailed in Supplementary Table S1. No time limit was set in the literature search, and articles were retrieved until May 2022. No language limitation was introduced.
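As an illustration, the way PECO concept blocks are typically assembled into a boolean search string can be sketched as below. The synonym lists and the `build_query` helper are invented for this sketch; the authors' actual search string is the one reported in Supplementary Table S1.

```python
def build_query(blocks):
    """Join PECO concept blocks into one boolean query: synonyms are
    OR-ed within a block, and the blocks are AND-ed together."""
    grouped = ["(" + " OR ".join(f'"{t}"' for t in terms) + ")"
               for terms in blocks.values()]
    return " AND ".join(grouped)

# Hypothetical synonym lists for each PECO concept (not the published ones).
peco = {
    "population": ["aged", "older adults", "elderly"],
    "exposure":   ["coffee", "caffeine intake"],
    "outcomes":   ["sarcopenia", "frailty", "falls", "gait", "disability"],
}

query = build_query(peco)
# ("aged" OR "older adults" OR "elderly") AND ("coffee" OR ...) AND (...)
```

The same structure carries over to the other databases, with only the field tags and truncation syntax adapted per source.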
Two researchers (RZ, SM) searched the papers, reviewed titles and abstracts of retrieved articles separately and in duplicate, checked the full text, and selected the articles for inclusion in the study. Technical reports, letters to the editor, and systematic and narrative review articles were excluded. Inter-rater reliability (IRR) was used to estimate inter-coder agreement, and the κ statistic was used as a measure of accuracy and precision. A κ coefficient of at least 0.9 was obtained in all data extraction steps based on PRISMA concepts and quality assessment steps [53].

Inclusion Criteria, Data Extraction, and Registration

Exposure and outcomes needed to refer to an aging adult population (at least 40 years of age). No criterion was applied to the recruitment settings (hospital, community, or others) or the health status of the study population (general population or groups with specific features). Potentially eligible articles were identified by reading the abstract and, if suitable, reading the full-text version of the articles. For each selected article, the statistical approach best accounting for confounding was applied to assess the magnitude of the associations. The data were cross-checked, any discrepancies were discussed, and disparities were resolved by a third researcher (RS). The following information was extracted by the two investigators (RZ, SM) separately and in duplicate in a piloted form: (1) general information about single studies (author, year of publication, country, settings, design, sample size, age); (2) level of coffee exposure (cups per day, increasing quintiles of daily consumption, or grams per day); (3) outcome(s) regarding all adverse physical outcome(s) included, regardless of the constructs or surrogate type of assessment; (4) main findings; (5) effect size of the association between exposure and outcome(s).
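The κ agreement check used in the screening and extraction steps above amounts to Cohen's kappa on the two reviewers' parallel decisions. A minimal sketch follows; the include/exclude labels are made-up examples, not the authors' data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    # Observed agreement: fraction of items coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
              for c in set(rater_a) | set(rater_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical include/exclude screening decisions by two reviewers.
a = ["inc", "inc", "exc", "exc", "inc"]
b = ["inc", "exc", "exc", "exc", "inc"]
kappa = cohens_kappa(a, b)  # ~0.615
```

A κ of at least 0.9, as reported here, indicates near-perfect agreement on this scale.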
All references selected for retrieval from the databases were managed with the MS Excel software platform for data collection by a biostatistician (FC). Lastly, data from the selected studies stored in the database were structured as evidence tables.

Quality Assessment within and across Studies and Overall Quality Assessment

The methodological quality of the included studies was independently appraised by paired investigators (RZ, SM) using the National Institutes of Health Quality Assessment Toolkits for Observational Cohort and Cross-Sectional Studies [54,55]. This tool contains 14 questions that evaluate a number of factors related to the risk of bias, type I and type II errors, transparency, and confounding. These aspects include the study question, population, participation rate, inclusion criteria, sample size justification, time of exposure/outcome measurement, time frame, levels of exposure, defined exposure, blinded assessors, repeated exposure, defined outcomes, loss to follow-up, and confounding factors. Items 6, 7, and 13 do not apply to cross-sectional studies. The maximum possible scores were 8 for cross-sectional and 14 for prospective investigations. Disagreements between the two investigators regarding the methodological quality of the included studies were resolved through discussion until a consensus was reached with a third investigator (RS).

Conclusions

The decline in physical functioning and the disability load during aging pose a significant challenge to easing the burden of population aging on public healthcare and quality of life. In the preventive models proposed to promote a better lifestyle, there is evidence that increased coffee consumption does not imply poorer physical functioning and may indeed be protective, potentially owing to its bioactive load.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/metabo12070654/s1.
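A checklist score of this kind can be tallied as sketched below. The yes/no answers and the numeric cut-offs mapping a score fraction to poor/fair/good are illustrative assumptions only; the NIH tool itself leaves the overall quality judgement to the raters.

```python
def quality_rating(item_answers, applicable_items):
    """Tally 'yes' answers over the applicable checklist items and map
    the resulting fraction to a verbal rating (cut-offs illustrative)."""
    applicable_items = list(applicable_items)
    score = sum(1 for i in applicable_items if item_answers.get(i) == "yes")
    fraction = score / len(applicable_items)
    if fraction >= 0.75:
        return score, "good"
    if fraction >= 0.5:
        return score, "fair"
    return score, "poor"

# Hypothetical prospective study: all 14 items answered 'yes'.
full = quality_rating({i: "yes" for i in range(1, 15)}, range(1, 15))
# Hypothetical cross-sectional study: items 6, 7, and 13 not applicable,
# and only items 1 and 2 answered 'yes'.
cs_items = [i for i in range(1, 15) if i not in (6, 7, 13)]
partial = quality_rating({1: "yes", 2: "yes", 3: "no"}, cs_items)
```

Passing only the applicable item numbers keeps the denominator correct for each study design, mirroring how the tool skips non-applicable items for cross-sectional studies.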
Table S1: Overview of quality ratings within the selected studies (N = 10). Table S2: Search strategy used in the US National Library of Medicine (PubMed) and Medical Literature Analysis and Retrieval System Online (MEDLINE) and adapted to the other sources, according to selected descriptors. Author Contributions: S.M., R.S. and R.Z. designed the study, performed searches, extracted data, assessed data quality, performed statistical analyses, and wrote the manuscript; S.M., F.C. and R.Z. performed searches, extracted data, assessed data quality, and reviewed the manuscript; F.C. assisted with data collection and analysis; M.R., F.P., F.F. and H.J.C.-J. reviewed the manuscript; G.D.P. and R.S. provided input into study design and analysis and reviewed the manuscript. All authors have read and agreed to the published version of the manuscript.
Effects of Tea Plant Varieties with High- and Low-Nutrient Efficiency on Nutrients in Degraded Soil

Tea plants are widely planted in tropical and subtropical regions globally, especially in southern China. The high leaching and strong soil acidity in these areas, in addition to human factors (e.g., tea picking and inappropriate fertilization methods) aggravate the lack of nutrients in tea garden soil. Therefore, improving degraded tea-growing soil is urgently required. Although the influence of biological factors (e.g., tea plant variety) on soil nutrients has been explored in the existing literature, there are few studies on the inhibition of soil nutrient degradation using different tea plant varieties. In this study, two tea plant varieties with different nutrient efficiencies (high-nutrient-efficiency variety: Longjing43 (LJ43); low-nutrient-efficiency variety: Liyou002 (LY002)) were studied. Under a one-side fertilization mode of two rows and two plants, the tea plant growth status, soil pH, and available nutrients in the soil profiles were analyzed, aiming to reveal the improvement of degraded soil using different tea varieties. The results showed that (1) differences in the phenotypic features of growth (such as dry tea yield, chlorophyll, leaf nitrogen (N), phosphorus (P), and potassium (K) content) between the fertilization belts in LJ43 (LJ43-near and LJ43-far) were lower than those in LY002. (2) RDA results showed that the crucial soil nutrient factors which determine the features of tea plants included available P, slowly available K, and available K. Moreover, acidification was more serious near the fertilization belt. The pH of the soil near LJ43 was higher than that near LY002, indicating an improvement in soil acidification. (3) Soil nutrient heterogeneity between fertilization belts in LJ43 (LJ43-near and LJ43-far) was lower than in LY002.
In conclusion, the long-term one-side fertilization mode of two rows and two plants usually causes spatial heterogeneities in soil nutrients and aggravates soil acidification. However, LJ43 can reduce the nutrient heterogeneities and soil acidification, which is probably due to the preferential development of secondary roots. These results are helpful in understanding the influence of tea plant variety on improving soil nutrients and provide a relevant scientific reference for breeding high-quality tea varieties, improving the state of degraded soil and maintaining soil health.

Introduction

The tea plant (Camellia sinensis (L.)) is a perennial evergreen economic forest crop that is widely planted in tropical and subtropical regions of the world, especially in the south of China [1][2][3]. The soil in these areas is generally highly leachable and strongly acidic, resulting in a lack of nutrients in tea garden soil [4]. The lack of soil nutrients in tea gardens has been further exacerbated by human factors, such as the large amount of nutrients taken away by tea picking and inappropriate fertilization methods; these human factors have become universally limiting for tea production [5]. In recent years, the influence of biological factors (e.g., plant variety) on soil nutrients has been explored in the existing literature.

The Phenotypic Features of Growth for the Two Tea Plants

The phenotypic features of growth for the two tea plants are shown in Supplementary Figure S1B. For LJ43 tea plants, the far (LJ43-far) and near (LJ43-near) fertilization belts had similar phenotypic features of growth, including leaf color and germination density, whereas for LY002, LY002-far had lighter leaf color and lower germination density compared with LY002-near. In terms of the yield of dry tea, LJ43-far was 80% of LJ43-near, whereas LY002-far was merely 50% of LY002-near (Figure 1A).
In terms of chlorophyll A, chlorophyll B, and total chlorophyll content, LJ43-far was above 65% of LJ43-near, whereas LY002-far was merely 50% of LY002-near (Figure 1B). In terms of the contents of N, P, and K in leaves, LJ43-far was above 70% of LJ43-near, whereas LY002-far was merely 50% of LY002-near (Figure 1C-E). The proportion of secondary roots was significantly higher for LJ43 than for LY002 (p < 0.05) (Figure 1F). Data in Figure 1 show the mean ± standard deviation (n = 3); significant differences (p < 0.05) are indicated by different letters.

Correlation between the Features of Soil Nutrient and Tea Plants

The results of RDA analysis between the soil nutrient features and tea plants (LJ43 and LY002) are presented in Figure 2. Regarding the four data groups, LJ43-far, LY002-far, LJ43-near, and LY002-near were relatively separated between different groups but clustered within their respective groups.
These results indicate that there was good data repeatability and obvious differences between the data in the four groups. There was a small difference between LJ43-far and LJ43-near but a large difference between LY002-far and LY002-near (Figure 2), further indicating that the difference in soil nutrients between the far and near fertilization belts was less for LJ43 than for LY002. From the vertical distance from each sampling point to the arrow of a soil nutrient factor, it can be seen that the soil pH values were negatively correlated with the distance to the fertilization belts, whereas other soil nutrient features (e.g., available P, slowly available K, available K, and total P) were positively correlated with the distance to the fertilization belts. The angles between the arrows of the tea plant features and the arrows of the soil nutrient features showed that soil nutrient factors (e.g., total P and available K) had a positive relationship with the features of tea plants (e.g., tea yield; Figure 2). There was a negative correlation between the features of tea plants and pH value. The angles between the soil nutrient arrows showed a positive relationship among the other nutrient features, except for pH value. This indicates that the crucial soil nutrient factors determining the features of tea plants included available P, slowly available K, and available K (p < 0.05) under the long-term single-side fertilization mode. Accordingly, the features of variation in pH value, available P, available K, and slowly available K should be highlighted in soil profiles.
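The direction of the relationships read off the RDA biplot (nutrients positive with tea plant features, pH negative) can be illustrated with a plain Pearson correlation on paired plot-level values. The numbers below are invented for the sketch and are not data from this study.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented plot-level values, mirroring the signs described above.
available_k = [120, 150, 175, 200, 230]   # mg/kg
tea_yield = [1.1, 1.3, 1.4, 1.6, 1.8]     # relative units
soil_ph = [4.9, 4.7, 4.6, 4.4, 4.2]
```

With these values, `pearson_r(available_k, tea_yield)` is strongly positive and `pearson_r(soil_ph, tea_yield)` strongly negative, matching the biplot geometry (small angle between arrows for positive association, opposite directions for negative).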
Distribution Features of Crucial Nutrients in Soil Profile

Before and after the test, the distribution of pH values in the soil profile for the growth of the two tea plants showed a moderate decrease in soil pH, and the decrease in soil pH near the fertilization belts (-near) was more severe (Figure 3).
The soil pH in each layer decreased by 0.39-0.59 and 0.57-0.77, compared with pH levels before the test, for LJ43-near and LY002-near, respectively. Comparing positions near and far from the fertilization belts, the soil pH of the two tea plants was lower in the former than in the latter. As for LJ43, the soil pH far from and near to the fertilization belts was relatively close, and pH values in each layer of LJ43-near were decreased by 0.07-0.16 compared with those of LJ43-far. In the case of LY002, the difference in soil pH between the far and near fertilization belts was relatively larger, and soil pH values in each layer of LY002-near were reduced by 0.27-0.45. The pH values in each layer of LJ43-near were higher than those of LY002-near (Figure 3). The distribution features of available P in the soil profile for the two tea plants are presented in Figure 4.
The relative enrichment area of available P (>400 mg kg−1) was at 30-40 and 50 cm distance from the main roots of the tea plants for LJ43-near and LY002-near, respectively (Figure 4A,B). The relative enrichment area of available P (>400 mg kg−1) was at 45-50 and over 50 cm distance from the main roots of the tea plants for LJ43-far and LY002-far, respectively. The distribution features of slowly available K in the soil profile for the two tea plants are presented in Figure 5. The slowly available K content for the two tea plants near the fertilization belt (LJ43-near or LY002-near) gradually decreased from the 50 cm distance from the fertilization belt towards the roots. Moreover, the relative enrichment areas of slowly available K (>850 mg kg−1) were all concentrated within 40-50 cm of the main roots of the tea plants. The slowly available K in the LJ43-near soil, at 0-10 cm distance to the main roots, had a relatively larger variation in the horizontal direction than in the vertical section. The slowly available K in the LJ43-near soil had a larger variation in the tillage layer (0-20 cm) than in the lower layer, whereas the slowly available K in the LY002-near soil had a small variation over the entire vertical section, smaller than that of LJ43-near. The relative enrichment area of slowly available K (>620 mg kg−1) in the LJ43-far soil, far away from the fertilization belt, was relatively expansive, whereas the enrichment area of slowly available K in the LY002-far soil was relatively narrower. The slowly available K in most LJ43-far-area soils, at 0-30 cm distance to the main roots in the horizontal direction, was above 600 mg kg−1, whereas the slowly available K in most LY002-far-area soils was above 600 mg kg−1.
In addition, the relative enrichment areas of available K (>200 mg kg−1) were concentrated at 30-50 and 35-50 cm away from the main roots of the tea plants for LJ43-near and LY002-near, respectively (Figure 6A), and their content gradually changed along the distance from the fertilization belt for LJ43-near. The largest enrichment area of available K was 15-30 cm away from the ground (Figure 6B). The relative enrichment areas of available K were 45-50 cm away from the main roots for LJ43-far, where the content of available K was above 175 mg kg−1, whereas the available K was above 150 mg kg−1 for LY002-far.
Figure note: the fertilization zone is located at the right-most side of each profile (i.e., 50 cm from the LJ43-near or LY002-near main roots). For the tea plants near the fertilization belts (LJ43-near, LY002-near), the fertilization belt was 50 cm from the main roots of the tea plants; for the tea plants far from the fertilization belts (LJ43-far, LY002-far), the fertilization belt was 100 cm from the main roots of the tea plants.
Variation Coefficient of Soil Nutrient Indicators

The variation coefficients of the soil nutrients of LJ43 were lower than those of LY002 in both the far and near fertilization belts (Table 1). There was an obvious difference in the variation coefficients of soil pH, available P, available K, and total K between the far and near fertilization belts (p < 0.01). The variation coefficients of soil pH, available P, available K, and total K between the far and near fertilization belts of LY002 were 1.5, 1.4, 1.7 and 1.6 times those of LJ43, respectively. This indicates that the LJ43 plantation was able to reduce the differences in soil nutrients between the far and near fertilization belts. Note: SOM, AP, AK, AN, SAK, TN, TP, and TK stand for soil organic matter, available phosphorus, available potassium, alkali hydrolyzable nitrogen, slowly available potassium, total nitrogen, total phosphorus and total potassium, respectively. ** and * indicate significant differences at the p < 0.01 and p < 0.05 levels, respectively.

Discussion

The long-term one-sided fertilization mode of two rows and two plants led to differentiation in the physical and chemical properties of the soil in the far and near fertilization belts. Notably, the effect of distance to the fertilization zone differed between tea plant varieties. For the high-nutrient-efficiency genotype (LJ43), the distance to the fertilization belt had little influence on field growth status, dry tea yield, leaf chlorophyll, or leaf N, P, and K contents, whereas distance had a greater influence on the low-nutrient-efficiency genotype (LY002).
The ratio of secondary to primary roots of LJ43-far and LJ43-near was obviously higher than that of LY002-far and LY002-near. In addition, the development of secondary roots can help determine the nutrient-capture ability of tea plants [27]. Secondary rooting promotes nutrient absorption because secondary roots have a large surface area, which increases the contact area between roots and soil [28,29].
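The variation coefficients in Table 1 are the ratio of the standard deviation to the mean. A minimal sketch of that calculation, with invented nutrient values for illustration only (not the paper's data):

```python
from statistics import mean, stdev

def coefficient_of_variation(values):
    """CV = sample standard deviation / mean."""
    return stdev(values) / mean(values)

# Hypothetical available-P measurements (mg kg-1) across the far/near
# fertilization-belt profiles; the numbers are illustrative only.
lj43 = [410.0, 395.0, 420.0, 405.0]
ly002 = [480.0, 310.0, 520.0, 290.0]

cv_lj43 = coefficient_of_variation(lj43)
cv_ly002 = coefficient_of_variation(ly002)

# The paper reports LY002's CVs were roughly 1.4-1.7x those of LJ43.
print(f"CV LJ43:  {cv_lj43:.3f}")
print(f"CV LY002: {cv_ly002:.3f}")
print(f"ratio LY002/LJ43: {cv_ly002 / cv_lj43:.1f}")
```

A lower CV for a variety indicates that its soil nutrient levels were more uniform between the far and near fertilization belts.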
In this study, LJ43 developed secondary roots earlier than LY002 both near and far from the fertilization belt. Furthermore, the contents of N, P, and K in the leaves of LJ43 were obviously higher than in the leaves of LY002. Ruan et al. [25] indicated that LJ43 has a higher nutrient absorption efficiency than LY002, possibly related to its strength in developing secondary roots. The prioritized development of secondary roots in LJ43 was evidently conducive to its absorption of soil nutrients. Moreover, by developing secondary roots, LJ43 also improved the distribution of its roots in the soil, allowing it to absorb more soil nutrients and speeding up the redistribution of nutrients within the soil profile [28,29]. The RDA results showed that the difference in soil nutrients between the far and near fertilization belts was greater for LY002 than for LJ43 (Figure 2). Meanwhile, the variation coefficients showed that the variation of soil nutrients between the far and near fertilization belts was lower for LJ43 than for LY002 (Table 1). Apparently, LJ43 was able to decrease the differences in soil nutrient features between the far and near fertilization belts, an ability possibly originating from its prioritized development of secondary roots. In addition, the pH and bulk density indicated soil degradation in the tea garden. Compared with CK, the pH decreased by 0.45-0.79 units, while the bulk density increased by 0.03-0.26 g cm −3. Compared with CK, the total P, total K, and CEC decreased by 0.2-0.56 g kg −1, 0.2-0.62 g kg −1, and 0.7-1.1 cmol(+) kg −1, respectively (Table 2). The degree of soil acidification caused by tea cultivation and fertilization varies with tea plant variety.
RDA analysis showed that the soil nutrient features were positively related to one another, with the exception of soil pH, which is a crucial, basic soil property and one of the factors affecting soil fertility (Figure 2). Long-term topdressing with urea promoted the development of soil acidification [30], and ammonium uptake and H+ release by tea plants intensified it further. The mean soil pH under the tea plants decreased to some extent over the course of the experiment relative to the value measured before the study began. The variety of tea plant, rather than the distance to the fertilization zone, had the greater effect on soil pH (Figure 3). Table 2 note: Total P and K represent the concentrations of total P and K, respectively; CEC represents cation exchange capacity; CK represents the original soil without tea planting and fertilization. The data show the characteristics of the soil profile and are means ± SEs (n = 3); different letters represent significant differences at the p < 0.05 level. In addition, different tea varieties had an important impact on the distribution of crucial soil nutrient factors.
The crucial soil nutrient factors that determined the characteristics of the tea plants in this study were available P, slowly available K, and available K, rather than alkali hydrolyzable N and total N, for the following reasons: (1) tea plants were picked and pruned every year, removing a large amount of nitrogen [31]; (2) the base and top fertilizers used in this study were rapeseed cake (N ≥ 5.25%, P2O5 ≥ 3.91%, K2O ≥ 2.7%) and urea (N 46%), so the annual total nitrogen input was much higher than that of P and K [32]; and (3) soil nutrient mobility is ranked N > K > P, so N moves rapidly and is readily absorbed by tea plant roots, whereas P and K are easily fixed and absorbed only with difficulty [33]. Thus, the crucial soil nutrient factors that determined the differences in tea plant features in this study were P and K, as opposed to N. As K and P move slowly in the soil, their availability and distribution range are greatly influenced by root system morphology [34]. In the case of P, the excretion of P-mobilizing agents may be relevant, considering the cultivar-dependent P effect on plant yield [14]. Varieties with high nutrient efficiency can absorb soil nutrients effectively by maintaining good root system morphology [35]. In this study, LJ43 performed better in developing secondary roots, which have relatively greater advantages than primary roots in terms of nutrient absorption; for instance, secondary roots have stronger root vitality and larger absorption areas [36], so they can effectively enhance the nutrient absorption efficiency of plants. Both this study and previous studies indicate that LJ43 has a higher nutrient absorption efficiency than LY002.
The contents of available P and slowly available K in the LJ43 soils near to and far from the fertilization belt, within 0-10 cm of the root system in the horizontal direction (LJ43-near and LJ43-far), were higher than those for LY002 (Figures 4 and 5). Furthermore, the available P and slowly available K in the LJ43 soil were relatively concentrated within the topsoil layer (0-15 cm). In terms of spatial distribution, the available K in the LY002 soil near the fertilization belt was concentrated below 15 cm (Figure 6). It appears that LJ43 is better able to draw P and K from the fertilization belt towards the root system, so P and K were relatively concentrated near the root system of LJ43. This may be related to the relatively denser secondary roots of LJ43 being more evenly distributed in the soil, enabling LJ43 to absorb more P and K from the soil. A concentration gradient of P and K formed between the soil far from and near to the root systems, which promoted the diffusion and mobility of P and K in the soil.

Plant and Soil Materials

Field trials were carried out close to Yunqi Zhujing in Hangzhou, Zhejiang Province, Southeast China, a major production area for class-I Longjing tea [37]. The area has a subtropical monsoon climate with an annual average precipitation of 1139 mm and an annual average temperature of 17.5 °C [38]. The soil parent material is river alluvium, and the area consists of flat ground and valley terraces at an altitude of about 30 m with a slope of about 1.5°. The soils are classified as Luvisols according to Chinese Soil Taxonomy [39]. The tested soil backgrounds are comparable according to the soil profile morphology, which provides a prerequisite for the subsequent comparative study of different treatments.
Before the test, the basic physical and chemical properties of the 0-30 cm soil layer were determined as follows: pH 4.4-4.6, bulk density 1.18-1.29 g cm −3. According to the United Nations soil texture classification, the particle size distribution was: clay (<2 µm) 18.7-20.8%, silt (2-63 µm) 73.5-75.7%, sand (>63 µm) 4.6-5.2%. Two tea varieties, Longjing43 (LJ43) and Liyou002 (LY002), were provided by the Tea Research Institute, Chinese Academy of Agricultural Sciences. According to previous studies [25], these two genotypes have different nutrient efficiencies (high-nutrient-efficiency variety: LJ43; low-nutrient-efficiency variety: LY002). A previous study [25] described the tea planting mode and fertilization treatment in detail (Figure 7): 14-month-old tea cuttings were planted in unilaterally fertilized double rows in the experimental tea garden (i.e., double plants cultivated at equal distances; the minimum row distance was 40 cm, the maximum row distance was 150 cm, the hole distance was 33 cm, and the fertilizer was side-dressed at the unilateral root base) (Supplementary Figure S1A). In October of each study year, 4500 kg ha −1 of rapeseed cake (N ≥ 5.25%, P2O5 ≥ 3.91%, K2O ≥ 2.7%) was applied as base fertilizer. In March of each study year, 45 kg ha −1 of urea (N 46%) was applied as additional fertilizer. These fertilizer treatments started on 10 October 2015. Tea plants close to the fertilization belt were labeled "-near" according to their distance from it; tea plants far from the fertilization belt were labeled "-far". The soil field investigation and cross-section sampling were carried out on 15 September 2021.
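A quick arithmetic check of the annual nutrient inputs implied by this fertilization regime, using the stated nutrient guarantees (the rapeseed-cake values are minimums, so these figures are lower bounds):

```python
# Annual inputs per hectare from the regime described above.
cake_rate = 4500.0   # kg/ha rapeseed cake (base fertilizer, October)
urea_rate = 45.0     # kg/ha urea (topdressing, March)

# Guaranteed minimum nutrient fractions from the text.
cake_n, cake_p2o5, cake_k2o = 0.0525, 0.0391, 0.027
urea_n = 0.46

n_input = cake_rate * cake_n + urea_rate * urea_n   # kg N/ha, approx. 257
p2o5_input = cake_rate * cake_p2o5                  # kg P2O5/ha, approx. 176
k2o_input = cake_rate * cake_k2o                    # kg K2O/ha, approx. 121.5

print(f"N:    {n_input:.1f} kg/ha")
print(f"P2O5: {p2o5_input:.1f} kg/ha")
print(f"K2O:  {k2o_input:.1f} kg/ha")
```

This is consistent with the Discussion's point that the annual N input substantially exceeds the P and K inputs.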
According to the field investigation, four typical soil profiles were selected: LJ43-near, LJ43-far, LY002-near, and LY002-far, representing the near and far fertilization belts for LJ43 and LY002, respectively. In the horizontal direction of each soil profile, soil samples were collected at equal intervals of 10 cm from the main root. In the vertical direction of each soil profile, soil samples were collected from bottom to top (including topsoil, core soil, and subsoil samples). The soil samples were collected from three plots in the same way, as replicates. Meanwhile, the plant samples (leaves and roots) were collected and washed carefully. The soil profiles and layer divisions are shown in Figure 3A. The root collections were performed as follows. Firstly, representative tea plants were selected, the whole root systems were carefully excavated from the soil and separated from the aboveground parts, the attached soil was carefully shaken off the roots, and the rhizosphere soils were brushed down. Secondly, the root systems were taken back to the laboratory for cleaning, and the primary and secondary roots were carefully separated. Thirdly, the cleaned primary and secondary roots were heated at 80 °C for 30 min and then dried at 65 °C for 24 h. Finally, the primary and secondary roots were pulverized and weighed. The description and characterization of the soil profiles are shown in Table 3 and Figure 3A. The specific method of soil profile sampling has been described in previous studies [40]. Figure 7.
The flowchart of the whole experiment (including the timing and method of treatments, irrigation, fertilization, and soil and plant sampling) [25,41].

Treatments of Samples and Analysis in Lab

To determine the tea yield, young shoots with one bud and two leaves were collected from each tea row during the spring tea period; they were spread and exposed overnight, then underwent green removal at 220-280 °C using a hand-rolling process in a Longjing pot. They were baked for 5 min at 120 °C with a 6CHM-901 electric heating dryer, then dried at 80 °C to constant weight. The resulting dried green tea sample was weighed to obtain the dry tea output, and the dry tea yield per plant was calculated from the total number of tea plants in the row. Chlorophyll was measured on thirty mature leaves using the acetone method [42]: 0.100 g of fresh leaf sample was added to 10 mL of extraction solution (V acetone:V 95% ethanol = 1:1), shaken evenly, and soaked in the dark at room temperature for 24 h. The extract was then centrifuged at 3000 r min −1 for 10 min, and a SHIMADZU UV-2550 spectrophotometer was used to measure the absorbance at 663 nm and 645 nm. Measurements were repeated three times for each sample. The chlorophyll content was calculated using the modified Arnon formulas [35]: Ca = (12.7 D663 − 2.69 D645) × V/(1000 m), Cb = (22.9 D645 − 4.68 D663) × V/(1000 m), and Ct = Ca + Cb = (20.2 D645 + 8.02 D663) × V/(1000 m), where Ca is the content of chlorophyll a (mg g −1), Cb is the content of chlorophyll b (mg g −1), and Ct is the total amount of chlorophyll (mg g −1); D645 and D663 represent the optical density at the wavelengths of 645 nm and 663 nm, respectively, V is the constant volume (mL), and m is the sample weight (g). To determine the nutrient content of the leaves, the cleaned leaves were first heated at 80 °C for 30 min and subsequently dried at 65 °C for 24 h, then pulverized. The crushed leaves were digested using the H2SO4-H2O2 digestion method.
After that, the Kjeldahl method, vanadium molybdenum yellow colorimetry, and flame photometry were used to measure the N, P, and K of the leaves, respectively [43,44]. Plant roots and litter were picked out of the collected soil samples after the samples had dried naturally. The soil samples were then subsampled by the quartering method and, after grinding, passed through 10-mesh, 60-mesh, and 100-mesh nylon screens, in that order, for further use. The physical and chemical properties of the soil (e.g., pH, soil organic matter, alkali hydrolyzable N, available P, available K, slowly available K, and total N, P, and K) were measured according to the methods outlined by Zhang and Gong [27].

Statistical Analysis

SPSS 18 and Microsoft Excel 2019 were employed for statistical analysis, and Origin 9.0 was used for drawing. One-way ANOVA was used to analyze the significance of differences in the data. Redundancy analysis (RDA) between soil nutrients and tea plant features was performed using Canoco 5.0 (Microcomputer Power, Clover Lane, Ithaca, NY, USA).

Conclusions

Strongly leached, acidic soils in tropical and subtropical regions, together with tea picking and inappropriate fertilization methods, aggravate the lack of nutrients in tea garden soil. Planting a high-nutrient-efficiency tea variety (LJ43) can reduce both nutrient heterogeneity and soil acidification, greatly alleviating soil degradation in tea gardens. The results show that the preferential development of secondary roots allows LJ43 to absorb soil nutrients efficiently, promoting nutrient diffusion and migration in the soil, helping the plant adapt to spatial nutrient heterogeneity, and supporting rapid and efficient nutrient absorption. Furthermore, a long-term emphasis on N fertilizer aggravates soil acidification, so P and K become the crucial soil nutrient factors that determine tea plant features.
LJ43 performs excellently in alleviating such soil acidification. This study lays a theoretical foundation for understanding the efficient utilization of soil nutrients by tea plant varieties. It also provides a relevant reference for improving soil nutrients using tea plant varieties, for breeding high-quality tea varieties, and for improving the state of degraded soil.
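The one-way ANOVA used in the Statistical Analysis section can be sketched in pure Python. The yield values below are illustrative placeholders, not the paper's data, and in practice the p-value would be obtained from an F distribution (e.g., scipy.stats.f.sf):

```python
from statistics import mean

def one_way_anova_f(groups):
    """Return the one-way ANOVA F statistic for a list of sample groups."""
    all_values = [v for g in groups for v in g]
    grand_mean = mean(all_values)
    k = len(groups)               # number of groups
    n = len(all_values)           # total number of observations
    # Between-group sum of squares (each group mean vs. the grand mean).
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (each value vs. its own group mean).
    ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical dry-tea yields (g per plant) for the four profile positions;
# the numbers are made up for illustration only.
yields = {
    "LJ43-near": [12.1, 12.4, 11.9],
    "LJ43-far": [11.8, 12.0, 12.2],
    "LY002-near": [10.5, 10.9, 10.2],
    "LY002-far": [8.1, 8.4, 7.9],
}
f_stat = one_way_anova_f(list(yields.values()))
print(f"F = {f_stat:.2f}")
```

A large F indicates that the between-group variance dominates the within-group variance, i.e., at least one group mean differs.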
Evaluation of a Triple Buffered Peptone Broth for Detection of Salmonella in Broiler Feed

The pH of pre-enrichment media containing feed/ingredients can become acidic during incubation due to bacterial utilization of feed carbohydrates. This decrease in pH can result in cell injury or death, negatively impacting the detection of Salmonella. Our objective was to evaluate a new triple buffered peptone (TBP) against buffered peptone water (BPW) and lactose broth (LB) for the recovery of Salmonella from feed. Liquid cultures of nalidixic acid-resistant strains of Salmonella (Enteritidis, Heidelberg, Kentucky or Typhimurium) were added to the pre-enrichment media alone, to pre-enrichment media containing feed, or to artificially inoculated feed stored 1 or 7 d to evaluate the effect of the medium on the recovery of Salmonella. Three replicates per treatment were conducted. After incubation at 37 °C for 24 h, the pH of the medium was measured prior to plating onto brilliant green sulfa agar plates supplemented with 200 ppm nalidixic acid (BGSNA). Plates were incubated and evaluated for the presence of typical Salmonella colonies. The experiment was replicated. TBP exhibited significantly better buffering capacity than BPW or LB. Additionally, TBP recovered Salmonella 100% of the time, compared to BPW (97.9%) and LB (61.5%). TBP shows promise to maintain a neutral pH during pre-enrichment, which may allow for more accurate detection of Salmonella in feed.

Introduction

Salmonella enterica is a zoonotic pathogen readily passed from animal to man through the consumption of contaminated food. Salmonella species are commonly associated with the alimentary tract of animals and are considered a common commensal member of the gut microflora of poultry species [1].
Non-typhoidal Salmonella accounted for approximately 1.0 million cases of foodborne illnesses in the United States [2] with poultry meat products being associated with a higher percentage of outbreaks and infections than other food sources [1,2]. Salmonella contamination of broiler chickens can occur during grow-out, which can lead to contaminated birds arriving at the slaughter/processing plant. Despite elaborate post-harvest intervention strategies, contaminated poultry products on occasion reach the supermarket shelf and thus pose a health risk to the consumer. The poultry industry has long understood that pre-harvest intervention is necessary to control human enteropathogens, such as Salmonella and Campylobacter associated with poultry products. The grow-out farm is a horizontal transmission site, and these bacterial human pathogens can be recovered from multiple sources. Feed is one of the possible sources for the introduction of Salmonella into the farm. Numerous published studies have reported poultry feed as a potential source of Salmonella colonization of poultry [3][4][5][6][7][8]. However, only a low percentage of feed samples tested are typically reported as Salmonella positive [5,7]. Recovering Salmonella from feed poses many challenges. It is well known that Salmonella in feed is not uniformly distributed and the level of Salmonella in feed is <20 cfu/100 g. Mitchell and McChesney in 1991 suggested that at least 30 individual test samples would be required to adequately determine that a particular lot of feed was Salmonella negative [9]. A second challenge is that Salmonella in feed may exist in a stressed or injured state and therefore require a pre-enrichment step for resuscitation. Recent research has indicated some pre-enrichment media become acidic during the incubation periods due to fermentation of carbohydrates by background microflora. Cox et al. 
(2013) reported that the pH of various pre-enrichment media could decrease from an initial pH of 6.1-7.2 to a final pH of 3.9-4.1, depending on the type of pre-enrichment media and feed/ingredient type [10]. The inability of the pre-enrichment media to maintain a near-neutral pH impacts the recovery and detection of Salmonella [11][12][13]. Berrang et al. in 2015 developed a triple buffered peptone (TBP) medium which they found was able to maintain a pH closer to neutral than lactose broth (LB) or buffered peptone water (BPW) when used to incubate poultry feed [14]. The hypothesis is that by maintaining a pH closer to neutral, TBP will have a better recovery rate of poultry-related Salmonella serovars. The objective of the current study is to compare LB, BPW and TBP pre-enrichment broths for their ability to maintain a near-neutral pH and to determine their impact on the recovery of poultry-related Salmonella strains (unstressed and stressed) from feed. The authors approached this objective by incubating each broth with feed that had been inoculated with one of four poultry-relevant Salmonella serovars, or with uninoculated feed plus a cell suspension of one of the four serovars. Broth pH was monitored, and the recovery of Salmonella was compared for all test broths at 1 and 7 days post inoculation.

Preparation of Broths

Two commonly used pre-enrichment buffers, buffered peptone water (BPW; Neogen Culture Media, Lansing, MI, USA) and lactose broth (LB; Becton-Dickinson, Sparks, MD, USA), were prepared according to the manufacturer's directions and autoclaved for 15 min at 121 °C. Triple buffered peptone (TBP) was prepared according to the method of Berrang et al. [14] and filter-sterilized using 0.22 µm polyethersulfone, low-protein-binding membrane filters (PES Membrane Filters, Corning Costar, Corning, NY, USA).
The pH of each broth was measured using a pH meter (SevenCompact, Mettler Toledo, Columbus, OH, USA) and found to be within the appropriate specifications.

Salmonella Cultures and Liquid Inoculum

Four nalidixic acid-resistant serovars of Salmonella (Enteritidis, Heidelberg, Kentucky and Typhimurium) were grown on brilliant green sulfa agar plates supplemented with 200 ppm nalidixic acid (BGSNA; agar from Becton-Dickinson, Sparks, MD, USA; NA from Sigma-Aldrich Chemicals, St. Louis, MO, USA) at 35 °C for 24 h. Cells on the plate were harvested, and a liquid inoculum of each Salmonella serotype was prepared by suspending the cells in phosphate buffered saline. Salmonella was enumerated by serial dilution and plating on BGSNA agar plates. Plates were incubated for 24 ± 2 h at 35 °C prior to enumeration. Cell suspensions were stored at −80 °C in tryptic soy broth (TSB; Becton-Dickinson, Sparks, MD, USA) with 15% glycerol (Sigma-Aldrich Chemicals, St. Louis, MO, USA) until use.

Inoculation of Feed

Non-medicated grower feed obtained from a local research farm was used in the experiments (Table 1). One hundred g of feed (n = 4 per replication) was placed into sterile plastic freezer bags (Ziploc, S.C. Johnson and Johnson, Sturtevant, WI, USA) and inoculated with 10 mL of the 10^3 cfu/mL cell suspension while mixing. Inoculum was prepared for each serotype, yielding ~10^2 cfu Salmonella/g of feed. Inoculated feed was held at 22 ± 2 °C for 1 and 7 days.

Evaluation of Broths

Pre-enrichment broths were evaluated for their ability to maintain a near-neutral pH during incubation and the subsequent impact on the recovery of each strain of Salmonella.
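The feed-inoculation arithmetic above (10 mL of a 10^3 cfu/mL suspension mixed into 100 g of feed) can be checked directly:

```python
feed_mass = 100.0        # g of feed per bag
inoculum_volume = 10.0   # mL of cell suspension added
suspension_conc = 1e3    # cfu/mL

total_cfu = inoculum_volume * suspension_conc  # total cfu added to the bag
cfu_per_g = total_cfu / feed_mass              # resulting cfu per g of feed
print(f"{cfu_per_g:.0f} cfu/g")                # 100 cfu/g, i.e. ~10^2 cfu/g
```

This works out to ~10^2 cfu/g, the inoculated-feed level cited in the broth-evaluation treatments.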
Broths (45 mL) were dispensed into individual sterile specimen cups containing the following treatments: (1) pre-enrichment media + 5 mL of the 10^1 cfu/mL cell suspension, (2) pre-enrichment media + 5 mL of the 10^1 cfu/mL cell suspension + 5 g uninoculated feed, (3) pre-enrichment media + 5 g of inoculated feed (10^2 cfu/g) stored for 1 day, or (4) pre-enrichment media + 5 g of inoculated feed (10^2 cfu/g) stored for 7 days. Salmonella-inoculated feed samples were prepared with a higher cfu/g to ensure desiccation did not eliminate all the viable Salmonella in the feed samples. Broths were incubated for 24 ± 2 h at 37 °C. Three replicates per treatment in two replicate studies were conducted (N = 288).

Analysis

Because laboratory standard operating procedures prohibit the use of pH meter probes in samples inoculated with known pathogens, the pH of the broths after incubation was measured using disposable pH test strips (colorpHast, EM Science, Gibbstown, NJ, USA). Two replicate test strips were analyzed per sample. Salmonella recovery was determined by streaking a 10 µL aliquot of the broth onto individual BGSNA agar plates and incubating at 37 °C for 22 ± 2 h. Presumptive positive colonies on the BGSNA agar plates were transferred to triple sugar iron (TSI; Becton-Dickinson, Sparks, MD, USA) and lysine iron agar (LIA; Becton-Dickinson, Sparks, MD, USA) slants for biochemical characterization. Slants with typical reactions were verified by O-antigen serogrouping (Becton-Dickinson, Sparks, MD, USA) to confirm that the isolate belonged to the same serogroup as the original nalidixic acid-resistant Salmonella serovar (data not included in tables).

Statistics

Data (pH and Salmonella recovery) from the two experiments were combined for statistical analysis (n = 6). The pH measurements of the broths were analyzed by least significant difference t-test to determine differences among broths and treatments.
Salmonella recovery (% recovery) data was compared using Fisher's Exact test. Significance was assigned at a p-value of <0.05. Results The initial pH of pre-enrichment broths (uninoculated and non-incubated) and the pH of the pre-enrichment broths containing the 10 1 cfu/mL of cell suspension of Salmonella that was incubated for 24 h were identical. However, differences were observed between broth types when feed was included in the treatment group (Table 2). No difference in pH was observed among Salmonella strains incubated in the same broth type and an average pH value for all strains was used for comparison purposes. It was observed that the pH of LB and BPW incubated with feed had become acidic at a pH of 4.3 and 5.1, respectively, during incubation, while the pH of TBP incubated with feed was near neutral at a pH of 6.8. The overall mean pH for each pre-enrichment broth was significantly (p < 0.05) different. No differences in broth pH were observed between broth containing feed and a cell suspension of Salmonella and broths containing Salmonella inoculated feed. Based on these data it appears that the decrease in pH of the broth is related to the production of acidic byproducts from the growth of other background microorganisms in the feed and not necessarily the resuscitation or growth of the Salmonella serovars [10]. Salmonella recovery data is presented in Table 3. In broths which were inoculated with only a cell suspension of Salmonella, broth type did not impact Salmonella recovery with Salmonella recovery from 100% of the inoculated buffer samples as expected. However, when uninoculated feed plus the cell suspension or inoculated feed were added to the broths, differences in recovery were observed among the three broths and among the four Salmonella strains. Of the strains evaluated, the recovery of S. enteritidis and Kentucky were most adversely affected. No S. enteritidis was recovered from lactose broth in either the broth with feed and S. 
Enteritidis or the broth with feed stored for 7 days after inoculation with S. Enteritidis. Additionally, no S. Kentucky was recovered from lactose broth with feed and S. Kentucky inoculation, and there was only 50% recovery from broth with feed stored for 7 days after inoculation with S. Kentucky. Overall, the recovery rates for lactose broth were 100%, 8.3%, 75%, and 62.5% for the broth plus cell suspension, the broth plus Salmonella culture plus feed, the broth plus feed stored for 1 day after inoculation with the Salmonella culture, and the broth plus feed stored for 7 days after inoculation with the Salmonella culture, respectively. Recovery rates for lactose broth plus feed and Salmonella suspension, and for lactose broth plus inoculated feed stored for 7 days after inoculation, differed significantly from the same two treatments in both buffered peptone water and triple buffered phosphate broth.

Discussion

Research on Salmonella and acid exposure is often focused on adaptation due to continuous exposure. In some analytical methods, a short incubation period can preclude any acid adaptation, and sensitivity to pH must be considered. Our data agree with those of Cox et al., who observed that the decrease in broth pH is related to the production of acidic byproducts by the growth of background microorganisms in the feed, and not to the resuscitation or growth of the Salmonella serovars present in the feed or feed ingredients [10]. Blankenship reported that at a pH of 3-3.5, cell injury and cell death occurred in Salmonella Bareilly [15]. The percentages of cell injury and/or death were dependent on temperature and time. Cox et al. examined the sensitivity of cell suspensions of various Salmonella serotypes to acidic conditions (pH 4.0-5.5) at commonly used pre-enrichment times (18-48 h) [11]. They observed that even at a pH of 5.5, the recovery of Salmonella could be negatively affected. Recent research by Richardson et al.
has shown that the recovery of Salmonella depends on pH, the strain of Salmonella, and stress status (i.e., cell suspension vs. naturally contaminated) [12]. Their data indicated that if the pH of a pre-enrichment medium was not maintained near neutral, >50% of the population of some strains of Salmonella would die or be injured at a pH of <5.8. Our data support the findings of previous researchers that lactose broth, a non-buffered broth, is not satisfactory for recovering Salmonella from feed and other dry environmental samples. In the experiments of Richardson et al., the ability to recover Salmonella from the pre-enrichment broth was directly related to the buffering capacity of the medium [12]. Lactose broth is an unbuffered medium, and it was not particularly effective in recovering Salmonella from feed. The difference in the recovery of Salmonella from lactose broth amended with a cell suspension and uninoculated feed versus inoculated feed in the current study may be due to the stress status (presumably dry stress, or desiccation) of the organism. In the current work, the lack of recovery of S. Enteritidis from lactose broth and inoculated feed stored either 1 or 7 days after inoculation, the lack of recovery of S. Kentucky from inoculated feed stored 1 day after inoculation, and the 50% recovery of S. Kentucky from lactose broth and feed stored 7 days after inoculation were expected given the lack of buffering capacity of lactose broth. Richardson et al. demonstrated that the acid sensitivity of the same isolate of Salmonella grown in cell suspension (unstressed) and in contaminated feed (presumably dry-stressed) differed and depended on the pH of the enrichment broth and the serovar of the Salmonella isolate [16]. Longer storage of inoculated feed has been reported to result in more desiccation of the cell suspensions added to the feed, reducing the number of viable cells available for recovery [17].
This may explain the lower recovery of S. Enteritidis in BPW from inoculated feed stored for 7 days after inoculation. Reduced recovery may reflect a combination of dry stress in feed stored for 7 days after inoculation and the acid stress associated with the low pH of the pre-enrichment broth during incubation. The use of a pre-enrichment medium that maintains a near-neutral pH, such as TBP, is essential for evaluating the incidence and types of Salmonella in feed. Acidic conditions can bias both the recovery of Salmonella and the serotype recovered. In the present study, TBP was the only broth with 100% recovery of all four Salmonella serovars across all broth, Salmonella, and feed treatments. BPW had a nearly identical recovery rate of 97.9%. This supports the findings of Cox et al. [10] and Richardson et al. [12]. In addition, detection of recovered Salmonella can be affected by the pH buffering capability of the broth used to inoculate plates for incubation and identification of colonies. Blankenship [15] showed that S. Bareilly lost the ability to decarboxylate lysine and produce H2S when acid-injured at a pH of 4.0. Richardson et al. reported that the ability to produce H2S on selective agar was lost at a less acidic pH and depended on the Salmonella isolate and its stress status (dry vs. liquid culture) [18]. Laboratory technicians and personnel are trained to select colonies based on phenotypic characteristics, and the inability of injured Salmonella to produce typical reactions on selective media could limit detection. Low pH has been shown to be detrimental to the recovery of Salmonella [11]. Therefore, a properly pH-buffered pre-enrichment broth may preserve the typical colonial morphology and characteristics of Salmonella species, making the recognition and identification of Salmonella more likely in feed samples and other dry environmental samples abundant in background microflora.
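The recovery comparisons described above rest on Fisher's exact test applied to counts of positive versus negative enrichments. A minimal, standard-library sketch of a two-sided Fisher's exact test is shown below; the counts in the example are illustrative stand-ins chosen to mirror the reported ~8.3% (LB + feed) versus 100% (TBP + feed) recovery rates, not the study's raw data.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins whose probability does not exceed that of the observed table.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def p_table(x):
        # Probability of a table whose upper-left cell equals x
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Illustrative counts (recovered, not recovered) out of 24 enrichments
# per treatment -- NOT the study's raw data.
p = fisher_exact_two_sided(2, 22, 24, 0)  # LB + feed vs. TBP + feed
print(f"p = {p:.3g}")  # far below the 0.05 significance threshold
```

In practice a library routine (e.g., `scipy.stats.fisher_exact`) would typically be used; the hand-rolled version above only illustrates the computation behind the comparison.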
Selection of pre-enrichment media and methods may be critical to prevent false-negative results in Salmonella testing and the coinciding underestimation of the potential for Salmonella species to be transmitted to growing poultry from feed and feed ingredients.

Conclusions

(1) Significant decreases in the pH of broiler feed pre-enriched in lactose broth or buffered peptone water during incubation at 37 °C may injure or kill Salmonella, thereby preventing recovery of the pathogen and possibly producing false-negative results.

(2) Triple buffered peptone used as a pre-enrichment medium maintained a neutral pH and allowed the greatest recovery of Salmonella after incubation. Use of this newly developed medium may allow recovery of more Salmonella from feed samples. Improved detection of Salmonella in feed could allow the poultry industry to find new and/or better intervention strategies for feed and may eventually help lead to a reduction of Salmonella on the finished carcass.

(3) Pre-enrichment of feed in LB produced a more drastic change in pH than pre-enrichment in BPW or TBP. The low pH of feed in LB has been shown to kill and injure Salmonella. The continued use of lactose broth for the evaluation of feed may need to be re-evaluated by regulatory agencies to ensure that Salmonella is recovered from feed samples contaminated by this pathogen.

Funding: This research received no external funding. Institutional Review Board Statement: No animals or humans were used in this experiment; therefore, no institutional review board approval was necessary. Informed Consent Statement: No humans were used as test subjects; therefore, no informed consent was required. Data Availability Statement: Data are available upon request from the corresponding author.
No Effect of Thyroid Dysfunction and Autoimmunity on Health-Related Quality of Life and Mental Health in Children and Adolescents: Results From a Nationwide Cross-Sectional Study

Background: In adults, a significant impact of thyroid dysfunction and autoimmunity on health-related quality of life (HRQoL) and mental health is described. However, studies in children and adolescents are sparse and underpowered, and findings are ambiguous.

Methods: Data from 759 German children and adolescents affected by thyroid disease [subclinical hypothyroidism: 331; subclinical hyperthyroidism: 276; overt hypothyroidism: 20; overt hyperthyroidism: 28; Hashimoto's thyroiditis (HT): 68; thyroid peroxidase antibody (TPO-AB) positivity without apparent thyroid dysfunction: 61] and 7,293 healthy controls from a nationwide cross-sectional study ("The German Health Interview and Examination Survey for Children and Adolescents") were available. Self-assessed HRQoL (KINDL-R) and mental health (SDQ) were compared for each subgroup with healthy controls by analysis of covariance, considering questionnaire-specific confounding factors. Thyroid parameters (TSH, fT4, fT3, TPO-AB levels, thyroid volume, and urinary iodine excretion) were correlated with KINDL-R and SDQ scores employing multiple regression, likewise accounting for confounding factors.

Results: The subsample of participants affected by overt hypothyroidism evidenced impaired mental health in comparison to healthy controls, but SDQ scores were within the normal range of normative data. In no other subgroup were HRQoL or mental health affected by thyroid disorders. Also, there was neither a significant relationship between any single biochemical parameter of thyroid function and HRQoL or mental health, nor did the combined thyroid parameters account for a significant proportion of variance in either outcome measure.
Importantly, the present study was sufficiently powered to identify even small effects in children and adolescents affected by HT, subclinical hypothyroidism, and subclinical hyperthyroidism.

Conclusions: In contrast to findings in adults, and especially in HT, there was no significant impairment of HRQoL or mental health in children and adolescents from the general pediatric population affected by thyroid disease. Moreover, mechanisms proposed to explain impaired mental health in thyroid dysfunction in adults do not pertain to children and adolescents in the present study.

INTRODUCTION

Up to 30 percent of adult patients affected by thyroid dysfunction have been reported to suffer from psychological impairments despite proper medical treatment (1), and up to 20 percent are affected by depression (2). Whether children and adolescents with thyroid disease are likewise affected by impaired health-related quality of life (HRQoL) or mental health has not yet been rigorously investigated, despite thyroid disorders being a common endocrine condition in this age group. Aberrant laboratory findings of the hypothalamus-pituitary-thyroid axis are detected in about 3.5% of children and adolescents on occasional testing (3), and thyroid disorders are among the top 5 reasons for referral to pediatric endocrinology (4). Subclinical hypothyroidism (HYPO-SC), defined by an elevated thyrotropin (TSH) level above the age-specific reference range despite a normal free T4 (fT4), is a common finding in children and adolescents, with an estimated prevalence of 1.7 to 2.9% (5,6). Studies investigating HRQoL and mental health in pediatric HYPO-SC are rare, and findings are ambiguous. While Holtmann et al. (7) found significantly higher TSH levels in children and adolescents affected by impaired mental health, neither Cerbone et al. (8) nor Zepf et al. (9) could relate HYPO-SC to affective and behavioral dysregulation. However, pediatric sample sizes so far have been small.
Considering the only subtle effects of HYPO-SC on clinical and metabolic outcomes (5), a larger sample size might be necessary to disclose effects of HYPO-SC on HRQoL and mental health in children and adolescents. In overt hypothyroidism (HYPO-OVERT), a significantly elevated TSH is accompanied by an fT4 level below the age-specific reference range. The most common cause of HYPO-OVERT is Hashimoto's thyroiditis (HT) (5), which is the most common autoimmune disease in children and adolescents (10,11). In adults, several studies found an impaired HRQoL in HT [e.g., (12-16)], even in patients treated with levothyroxine (1,17). Notably, most of these studies reported a significant linear and inverse relationship between TPO-AB levels and HRQoL (14,17-19). Data regarding the relationship between HT and HRQoL or mental health in children and adolescents are rare. A single study focusing on adolescent patients with type 1 diabetes showed an impaired HRQoL in those patients with levothyroxine-treated HT (20). However, no research investigating the relationship between mental health and HT has been conducted, even though HT and hypothyroidism have been related to depression and anxiety disorders in adults by a recent large-scale meta-analysis (21). In recent years, improved TPO-AB assays (22) and the resampling of reference populations for assay calibration (23) have resulted in lower cut-off levels for TPO-AB positivity. Also, there is a growing awareness of an increase in TPO-AB positivity with the onset of puberty, especially in girls (24-26). However, the clinical phenotype, including HRQoL and mental health, of children and adolescents who only evidence increased TPO-AB levels but no signs of thyroid dysfunction (i.e., normal thyroid hormone levels and unremarkable ultrasonography of the thyroid gland; subsequently TPO-only) is not well-characterized, and thus information on the relevance of this condition is lacking.
Subclinical hyperthyroidism (HYPER-SC), defined in the literature by a TSH level below the age-specific reference range and a normal fT4 [e.g., (27,28)], is observed in 2.3% of adolescents aged 13-16 years (29). In adults, there is ambiguous evidence regarding the relationship between HYPER-SC and HRQoL as well as mental health. While Biondi et al. (30) found an impaired HRQoL in HYPER-SC and Kvetny et al. (31) showed a slightly increased risk of subclinical depression, mood and cognition were not affected by HYPER-SC in 3 population-based studies (27,32,33). No study has been conducted in children and adolescents so far. HYPER-OVERT, characterized by a TSH level below and an fT4 level above the age-specific reference range, is a rare condition observed in 0.1 to 3 cases per 100,000 children and most commonly caused by Graves' disease (34,35). In adults, HRQoL is significantly reduced before (36) but also after treatment of Graves' disease (32,37,38), independent of the treatment modality. Abraham-Nordling et al. (32) and Riguetto et al. (38) also investigated the relationship between thyroid hormone status and HRQoL, but there was no consistent relationship across studies. Zader et al. (39) recently published findings on the prevalence of mental health disorders in children and adolescents with physician-diagnosed HYPER-OVERT and reported a significantly increased risk of a diagnosis of ADHD, adjustment disorder, anxiety, and bipolar disorder as well as depression. Given the prevalence, and therefore the importance, of thyroid dysfunction and thyroid autoimmunity, the present study was intended to investigate the relationship between thyroid disorders and HRQoL as well as mental health in children and adolescents, relying on data from a large nationwide cross-sectional study.
Considering the current literature, we tested the following hypotheses:

Study Population

'The German Health Interview and Examination Survey for Children and Adolescents' (KiGGS) was conducted by the Robert Koch Institute (RKI) between 2003 and 2006 to provide information on the health status of German children and adolescents. Details on the study design, the sampling strategy, and the study protocol have been described in detail elsewhere (40). Briefly, a two-stage random, clustered, and representative sample of German children and adolescents was selected (40). The final sample included 17,641 children and adolescents (8,656 girls, 8,985 boys; 0-6 years: 6,680, 7-13 years: 7,224, 14-17 years: 3,734). Parents and children aged 11 years and older completed self-administered, standardized questionnaires, and parents additionally participated in a computer-assisted personal interview conducted by a specially trained study physician, who also performed a physical examination, thyroid ultrasonography, and phlebotomy for laboratory assessment (40). The study was approved by the Charité Berlin ethics committee as well as the Federal Office for the Protection of Data (40). Written informed consent was obtained from parents as well as from children aged 14 years and older. Only children and adolescents with information on thyroid function, thyroid autoimmunity, the intake of thyroid medication, and pre-existing thyroid disease as well as HRQoL and mental health (SDQ) were included in the analyses. Thus, in total, the sample comprised 8,052 children and adolescents (3,902 girls, 4,150 boys; 3-5 years: 92, 6-12 years: 4,492, 13-18 years: 3,466).
Questionnaires

The SDQ is a well-established, multi-informant questionnaire with verified reliability and validity, conceptualized to screen for mental health symptoms as well as positive attributes in children and adolescents. It assesses 5 dimensions (emotional symptoms, conduct problems, hyperactivity/inattention, peer relationship problems, and prosocial behavior) by 25 items on a 3-point Likert scale (0-2). By summing the subscores from each dimension, a total difficulties score (subsequently SDQ-TD score) can be calculated, with higher scores indicating more problems (for information on the normal range of SDQ-TD and KINDL-R scores, see Table 1B) (41). HRQoL was measured using the KINDL-R, a widely used self- or parent-reported questionnaire with good psychometric properties consisting of 24 items assessing 6 dimensions of HRQoL (physical well-being, emotional well-being, self-esteem, family, friends, everyday functioning) scored on a five-point Likert scale (1-5). A total score can be calculated and transformed to values between 0 and 100, with higher scores indicating a better quality of life (44). For children younger than 11 years, we used the scores of the parent-proxy form of the self-administered questionnaires, whereas for children older than 11 years, we referred to the scores of the self-report form (45). The computer-assisted interview covered the medical history of the participating children and adolescents and asked parents about selected, previously physician-diagnosed chronic conditions as well as other chronic conditions, including thyroid disease, in an open-ended question format. Moreover, the computer-assisted interview comprised a detailed section on the use of medication within the last 7 days, either prescribed or sold over the counter. To verify the reported medication, parents were asked to bring the original containers or package inserts on the day of the interview.
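The KINDL-R total-score transformation described above (24 items, each scored 1-5, rescaled to a 0-100 range) can be sketched as a simple linear rescaling. This is a plausible illustration only; the official KINDL-R scoring manual may handle missing items and subscale weighting differently.

```python
def kindl_total_score(item_scores):
    """Rescale 24 KINDL-R items (each 1-5) linearly to 0-100.

    A plausible linear transform for illustration; the official scoring
    manual may treat missing items and subscales differently.
    """
    assert len(item_scores) == 24 and all(1 <= s <= 5 for s in item_scores)
    raw = sum(item_scores)            # raw sum ranges from 24 to 120
    return (raw - 24) / (120 - 24) * 100

print(kindl_total_score([5] * 24))   # 100.0 (best possible HRQoL)
print(kindl_total_score([3] * 24))   # 50.0 (scale midpoint)
```

Higher transformed scores indicate better quality of life, matching the interpretation given in the text.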
Specific ATC (Anatomical Therapeutic Chemical) codes were recorded for all reported medication (40). Additional questionnaires covered information on the educational and professional status and the total household income of the participants' families, as well as comprehensive information on the migration background of participants, including nationality (40).

Laboratory Studies

Blood samples were obtained by venous puncture after a median fasting period of 2 h using a vacutainer system. Whole blood was stored at 4 °C and serum at −40 °C before transfer of the samples to the central laboratory of the RKI within 3 days after sample collection (46). Analyses of serum TSH and fT4 levels were performed with the Elecsys 2010 immunoassay analyzer (Roche Professional Diagnostics, Rotkreuz, Germany). The TSH and fT4 assays employed are sandwich electrochemiluminescence immunoassays (ECLIA) with inter-assay variations of <3.9% and <5.3% and detection ranges of 0.005 to 100 IU/mL and 0.023 to 7.77 ng/mL, respectively (46). TPO-antibody levels were measured on a Phadia 250 immunoassay system (Thermo Fisher Scientific, Uppsala, Sweden) in conjunction with the ImmunoCap TPO assay, a fluoro-enzyme immunoassay (FEIA) with an inter-assay variation of <4.9%. According to the package insert, the measurement range of the assay lies between 33.4 and 3,600 IU/mL, and the cut-off level for TPO positivity has been established at >100 IU/mL. TSH, fT4, and fT3 levels were z-transformed according to age- and gender-specific reference ranges (for details, please refer to the Supplementary Material). Urinary iodine excretion was determined from spot urine samples according to the Sandell-Kolthoff reaction. [Table footnotes omitted: values are means (SD) or percentages; * significant difference vs. healthy controls and § vs. HT, FDR-corrected at q < 0.05; compare Table S3.]
[Table footnotes omitted: # by definition; + not statistically evaluated; irregularities on ultrasonography include altered echogenicity; US = ultrasonography; values are means (SD) with the number of cases in brackets; * significant vs. controls and § vs. HT, FDR-corrected at q < 0.05; compare Table 3.]

Physical Examination

In children 2 years and older, height was determined in an upright posture using a calibrated stadiometer, without shoes, and recorded with a precision of 0.1 cm. Weight was measured in underwear by an electronic scale displaying weight with a precision of 0.1 kg. BMI was determined as the ratio of weight in kilograms to the square of height in meters (kg/m²) (48).

Ultrasonography

In children 6 years and older, thyroid volume was measured by ultrasonography. The length (l), width (w), and depth (d) of each lobe were measured in transverse and longitudinal views by a 7.5 MHz linear-array transducer with the child in supine position. The volume of each lobe was calculated according to Brunn et al. (49) by multiplying the obtained thyroid dimensions with a correction factor (l × w × d × 0.479). Total thyroid volume was determined by summing the volumes of both lobes. Irregular echogenicity, cysts, nodules, and uncommonly located thyroid tissue were recorded. Before the study, all investigators were trained in performing a standardized volumetry of the thyroid gland. Throughout the study, average measurements of thyroid volume were compared between investigators to identify and correct systematic errors of measurement [(47); for details on the construction of percentile curves for thyroid volume, please see the Supplementary Material].
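The two measurement formulas above (the Brunn ellipsoid approximation for thyroid volume and BMI as kg/m²) can be sketched directly. The function names and the example dimensions are mine, for illustration only.

```python
def lobe_volume_ml(length_cm, width_cm, depth_cm, correction=0.479):
    """Ellipsoid approximation of one thyroid lobe (Brunn et al.):
    volume = l * w * d * 0.479, with dimensions in cm and result in mL."""
    return length_cm * width_cm * depth_cm * correction

def total_thyroid_volume_ml(right, left):
    """Sum of both lobes; each argument is a (length, width, depth) tuple in cm."""
    return lobe_volume_ml(*right) + lobe_volume_ml(*left)

def bmi(weight_kg, height_cm):
    """Body mass index in kg/m^2."""
    height_m = height_cm / 100.0
    return weight_kg / (height_m * height_m)

# Example: two lobes of 4.0 x 1.5 x 1.5 cm each (illustrative values)
vol = total_thyroid_volume_ml((4.0, 1.5, 1.5), (4.0, 1.5, 1.5))
print(round(vol, 2))        # 8.62 (mL)
print(round(bmi(30, 130), 1))  # 17.8 (kg/m^2)
```

In the study, the resulting volumes were additionally z-standardized against age-specific percentile curves (see the Supplementary Material reference above), which is not reproduced here.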
Core Definitions

HYPO-SC and HYPER-SC were defined by z-standardized TSH levels above +1.96 SDS (97.5th percentile) and below −1.96 SDS (2.5th percentile), respectively, with z-standardized fT4 and fT3 levels within the normal range (±1.96 SDS). In HYPO-OVERT and HYPER-OVERT, z-standardized fT4 or fT3 levels were additionally required to be below −1.96 SDS and above +1.96 SDS, respectively. A diagnosis of HT was assumed in participants of the KiGGS study with a TPO-AB level above 100 IU/mL, no diagnosis of Graves' disease, and at least one of the following conditions: HYPO-SC or HYPO-OVERT, irregularities of echogenicity on thyroid ultrasonography, an increased or decreased thyroid volume, prescription of thyroid medication (levothyroxine), or a previously physician-diagnosed HT. TPO-only participants were characterized by elevated TPO-AB levels but no evidence of thyroid dysfunction (normal z-standardized TSH, fT4, and fT3 levels, normal thyroid volume and echogenicity, no previously physician-diagnosed thyroid disease, and no prescription of thyroid medication). Note that these definitions of HT and TPO-only excluded the possibility that a euthyroid participant of the KiGGS study with a previously physician-diagnosed HT was misclassified as TPO-only. Participants of the KiGGS study without a previously diagnosed thyroid disease, without thyroid medication (levothyroxine, iodine), without elevated TPO-AB levels above the assay cut-off, and with normal thyroid hormone levels (TSH and fT4) and normal thyroid volume on ultrasonography were classified as healthy controls. According to this definition, patients with congenital hypothyroidism or thyroidectomy were either excluded from the group of healthy controls or only considered when on inadequate levothyroxine replacement therapy.
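The z-score-based core definitions above can be summarized as a small classification function. This is a sketch of the decision rules as stated (it omits the additional HT, TPO-only, and healthy-control criteria, which also involve TPO-AB levels, ultrasonography, and medication history); the function name and the "unclassified" fallback for atypical hormone combinations are my additions.

```python
def classify_thyroid_status(tsh_sds, ft4_sds, ft3_sds, cutoff=1.96):
    """Classify thyroid function from age-/gender-standardized (SDS) levels,
    following the +/-1.96 SDS (97.5th/2.5th percentile) cutoffs above."""
    ft_normal = abs(ft4_sds) <= cutoff and abs(ft3_sds) <= cutoff
    ft_low = ft4_sds < -cutoff or ft3_sds < -cutoff
    ft_high = ft4_sds > cutoff or ft3_sds > cutoff

    if tsh_sds > cutoff:          # elevated TSH
        if ft_normal:
            return "HYPO-SC"
        if ft_low:
            return "HYPO-OVERT"
    elif tsh_sds < -cutoff:       # suppressed TSH
        if ft_normal:
            return "HYPER-SC"
        if ft_high:
            return "HYPER-OVERT"
    elif ft_normal:
        return "euthyroid"
    return "unclassified"         # atypical combination, not covered above

print(classify_thyroid_status(2.5, 0.3, -0.1))   # HYPO-SC
print(classify_thyroid_status(2.5, -2.4, 0.0))   # HYPO-OVERT
```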
Statistics: Overview

All analyses without directed hypotheses, including post-hoc contrasts where applicable, were tested two-tailed and corrected for multiple comparisons by controlling the false-discovery rate (FDR) at q ≤ 0.05 (50), if not otherwise specified. Because the large sample size renders even minor effects significant, results were also assessed considering effect size (sr², R², d, partial η²) and only deemed meaningful if there was at least a small effect (sr² ≥ 0.02, R² ≥ 0.01, d ≥ 0.2, partial η² ≥ 0.01) according to Cohen (51). Data handling and statistical analyses were performed with SPSS 25.0 (Armonk, NY: IBM Corp.). Effect-size calculations and power analyses were performed with SPSS, G*Power [3.1, HHU Düsseldorf (52)], or an online calculator (53).

Analysis of (Co-)Variance

Comparisons between participants affected by thyroid autoimmunity (HT and TPO-only) or thyroid dysfunction (subclinical or overt hypothyroidism or hyperthyroidism) and controls, as well as between TPO-only and HT participants, were performed by analyses of (co-)variance (AN(C)OVA) for the dependent variables KINDL-R and SDQ-TD scores, z-standardized thyroid hormone levels (TSH, fT4, and fT3), TPO-AB titers, urinary iodine excretion, and z-standardized thyroid volume. Covariates known to affect KINDL-R and SDQ-TD scores (age, gender, social status, migration background, and residency in East or West Germany) had previously been identified (44,54). Despite partially ambiguous evidence, we also considered covariates affecting thyroid hormone levels [use of levothyroxine and iodine, BMI (55,56), smoking (57), vitamin D status (58), combined oral contraceptives (59)], TPO-AB titers [iodine exposure (60)], urinary iodine excretion (iodine supplementation), and thyroid volume [BMI (47)] in the analyses. Covariates were included in ANCOVA testing if there was a significant correlation with at least a small effect (r > ±0.1) (61).
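The FDR control at q ≤ 0.05 cited above (reference 50 corresponds to the Benjamini-Hochberg procedure) can be sketched in a few lines; the p-values in the example are illustrative, not study values.

```python
def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns a list of booleans indicating which hypotheses are rejected
    while controlling the false-discovery rate at level q.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])

    # Find the largest rank k with p_(k) <= (k / m) * q
    k = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * q:
            k = rank

    # Reject every hypothesis ranked at or below k
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k:
            reject[i] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.60]  # illustrative only
print(benjamini_hochberg(pvals, q=0.05))
# -> [True, True, False, False, False, False]
```

Note the step-up character of the procedure: a p-value may be rejected even if it exceeds its own threshold, as long as some larger rank passes, which is why the whole sorted sequence is scanned before deciding.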
Correlation analyses were performed considering the scale of measurement (two interval-scaled variables: Pearson correlation r; interval-scaled and dichotomous variables: point-biserial correlation r_pb; interval-scaled and ordinal-scaled variables: Kendall's τ) as indicated (for detailed information on the statistics performed and the evaluation of the assumptions made by the statistical procedures employed, please refer to the Supplementary Material).

Multiple Regression

The relationship between KINDL-R and SDQ-TD scores and z-standardized thyroid hormone levels (TSH, fT4, fT3), z-standardized thyroid volume, urinary iodine excretion, and TPO-AB levels was assessed for each thyroid disease group by multiple regression, combining a standard hierarchical approach with stepwise regression. Covariates identified earlier as affecting either the dependent variable (KINDL-R or SDQ-TD scores) or the independent variables (z-standardized TSH, fT4, fT3, and thyroid volume, urinary iodine excretion, and TPO-AB level) were entered as the first block of regressors and assessed via stepwise regression. Thereafter, the independent variables outlined above were entered as a second block of regressors. The combined variance in KINDL-R and SDQ-TD scores accounted for by the independent variables (thyroid parameters) was assessed by testing the change in R² against zero.

Demographics and Sensitivity Analysis

Differences in the frequencies of gender, the use of levothyroxine or iodine, previously physician-diagnosed thyroid disease, and irregular echogenicity on ultrasonography were assessed in comparison to healthy participants, as well as between HT and TPO-only participants, by χ² tests of independence, or by Fisher's exact test in case of cell counts < 5. Age and BMI were compared between groups by ANOVAs and post-hoc tests. Sensitivity analyses were conducted to verify the results. Regarding thyroid autoimmunity, we repeated the analyses with a TPO-AB cut-off level of 200 IU/mL.
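The hierarchical test of the change in R² mentioned above corresponds to the standard partial F-test for adding a block of regressors. A minimal sketch is shown below; the numbers in the example are illustrative, not study values, and converting the F statistic to a p-value would additionally require the F distribution (e.g., via `scipy.stats.f`).

```python
def delta_r2_f(r2_reduced, r2_full, n, k_added, k_full):
    """Partial F statistic for testing whether adding k_added regressors
    (yielding k_full regressors in total, fitted on n observations)
    significantly increases R^2 over the reduced model.

    F = ((R2_full - R2_reduced) / k_added) / ((1 - R2_full) / (n - k_full - 1)),
    with (k_added, n - k_full - 1) degrees of freedom.
    """
    numerator = (r2_full - r2_reduced) / k_added
    denominator = (1 - r2_full) / (n - k_full - 1)
    return numerator / denominator

# Illustrative: a covariate block explains 10% of variance; adding 5 thyroid
# parameters (8 regressors total, n = 500) raises R^2 to 12%.
print(round(delta_r2_f(0.10, 0.12, 500, 5, 8), 2))  # 2.23
```

If the change in R² is not significantly different from zero, as reported for the thyroid parameters in this study, the added block is considered uninformative for the outcome.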
Analyses regarding HYPO-SC and HYPER-SC were also conducted considering only participants with a TSH level above +3 SDS and below −3 SDS, respectively.

KINDL-R and SDQ: ANCOVA

Neither transformed KINDL-R scores (p = 0.114; see Table 1B and Table S2 for detailed statistics regarding the analysis of KINDL-R and SDQ-TD scores) nor transformed SDQ-TD scores (p = 0.308) differed significantly between participants with thyroid autoimmunity and healthy controls when considering the covariates identified via correlation analysis, age [KINDL: r(7,295) = −0.354, p < 0.001; SDQ: r(7,396) = −0.170, p < 0.001] and social status [SDQ: r(7,229) = −0.132, p < 0.001; see Table 3 for detailed results]. Thus, H1 (impaired HRQoL and increased risk of mental health problems in HT) was rejected. Power for detecting a small effect (d = 0.3) regarding a mean difference in either KINDL-R or SDQ-TD scores between HT participants and healthy controls was adequate (0.79), but insufficient for TPO-only participants (Table 2). In participants affected by thyroid dysfunction, age [KINDL: r(7,809) = −0.350, p < 0.001; SDQ: r(7,922) = 0.167, p < 0.001] and social status [SDQ: r(7,738) = −0.136, p < 0.001] were identified as significant covariates. There was a significant difference in transformed SDQ-TD scores (p = 0.029) but not in transformed KINDL-R scores (p = 0.253) in comparison to healthy controls. Post-hoc contrasts revealed a significant difference between the small subsample of participants with HYPO-OVERT and healthy controls (see Table 3 for a summary of the contrasts performed), even though SDQ-TD scores in HYPO-OVERT were still within the normal range (M: 11.43, normal range < 15). A likewise significant post-hoc contrast between participants with HYPER-SC and healthy controls, however with negligible effect size, did not survive the correction for multiple comparisons (p = 0.028, d = 0.14).
In consequence, H5 (impaired HRQoL and increased risk of mental health problems in HYPO-OVERT) and H6 (impaired HRQoL and increased risk of mental health problems in HYPER-OVERT) could not be confirmed, since there was no significant effect of overt thyroid dysfunction on HRQoL or mental health. H3 (unaffected HRQoL and mental health in HYPO-SC), however, was supported, as we did not observe any effect of HYPO-SC on HRQoL. While power for detecting a small effect (d = 0.3) regarding a difference in either KINDL-R or SDQ-TD scores between HYPO-SC or HYPER-SC and healthy controls was sufficient (0.99), power in participants affected by HYPO-OVERT and HYPER-OVERT was not (Table 2; for the results of the sensitivity analysis, please see the Supplementary Material).

KINDL-R and SDQ: Multiple Regression

Only in healthy controls, but in no group affected by thyroid dysfunction or autoimmunity, did the thyroid parameters (z-standardized TSH, fT4, and fT3 levels, z-standardized thyroid volume, and urinary iodine excretion) account for a significant, however negligible, proportion of variance in KINDL-R scores (R² = 0.002) in addition to age, social status, and BMI. Moreover, in no group did thyroid parameters account for a significant proportion of variance in SDQ-TD scores when the analyses were corrected for multiple comparisons (Table 4). Despite a significant but again negligible linear relationship (sr² = 0.001) between z-standardized TSH levels and KINDL-R scores in healthy controls, no such association was detected between any single thyroid parameter and KINDL-R or SDQ-TD scores in any other group affected by thyroid dysfunction or autoimmunity when considering the confounding factors age, social status, and BMI and a correction for multiple comparisons. These findings are in line with H4 (no correlation between HRQoL and thyroid hormone status in HYPO-SC) but result in rejection of H2 (negative correlation between TPO-AB levels and HRQoL in HT).
Also in healthy controls, z-standardized fT4, fT3, thyroid volume, and urinary iodine excretion did not account for a significant proportion of variance in KINDL-R or SDQ-TD scores when accounting for multiple testing and considering the covariates age, social status, and BMI. Note that TPO-AB levels were only regressed on KINDL-R and SDQ-TD scores in participants with thyroid autoimmunity but not in participants with thyroid dysfunction, as the number of participants with detectable TPO-AB levels in the latter case was insufficient for meaningful analyses. This also applied to urinary iodine excretion, which was excluded from regression analyses in participants with HYPO OVERT and HYPER OVERT.

[Table 3 notes: Significant contrasts are marked in bold; d refers to Cohen's effect size measure comparing means. Differences in means may not correspond to the results presented in Table 1B in the case of ANCOVA testing and correction of means for covariates. *Results refer to differences in ranks but not z-standardized thyroid volume. Mean differences and the 95% confidence intervals in brackets between the respective group as outlined in each column and healthy controls are FDR-corrected for multiple testing at q < 0.05.]

TPO only

There was no significant difference between the TPO only and the HT group regarding transformed KINDL-R (p = 0.062; see Table S3 for detailed statistics) or transformed SDQ-TD scores (p = 0.142). Regarding thyroid parameters, both groups differed significantly with respect to z-standardized TSH (p < 0.001, d = 0.82; lower in TPO only), TPO-AB titers (p < 0.001, d = 0.27; lower in TPO only), the frequency of a previously physician-diagnosed thyroid disease (p Fisher's exact test < 0.001, d = 2.14), the frequency of levothyroxine prescription (p Fisher's exact test < 0.001, d = 1.86), and structural irregularities on thyroid ultrasonography (p Fisher's exact test < 0.001, d = 1.11).
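The FDR correction at q < 0.05 applied throughout these contrasts is the Benjamini-Hochberg procedure. A minimal sketch; the raw p-values are hypothetical (loosely echoing values reported in this section) and statsmodels is an assumption about tooling, not the authors' actual software:

```python
from statsmodels.stats.multitest import multipletests

# hypothetical raw p-values from a family of post-hoc contrasts
raw_p = [0.003, 0.028, 0.062, 0.142, 0.308]

# Benjamini-Hochberg FDR at q < 0.05
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")

for p, pa, r in zip(raw_p, p_adj, reject):
    print(f"raw p = {p:.3f}  adjusted p = {pa:.3f}  significant: {r}")
```

This illustrates the behavior noted in the text: a contrast with raw p = 0.028 can fail to survive correction once the adjusted p-value exceeds the q threshold, while a much smaller raw p-value remains significant.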
DISCUSSION

Studies on HRQoL and mental health in children and adolescents affected by thyroid disease are rare, and findings are ambiguous. Moreover, children and adolescents with subclinical thyroid dysfunction and subclinical thyroid autoimmunity have not yet been sufficiently characterized with regard to thyroid function parameters as well as demographic aspects. Thus, based on a nationwide cross-sectional study, we investigated the relationship between HRQoL as well as mental health on the one hand and thyroid dysfunction and autoimmunity on the other hand. Contrary to our hypotheses, which were based on findings in adults, neither HRQoL nor mental health was significantly affected in children and adolescents with subclinical or overt thyroid dysfunction or with thyroid autoimmunity. The only exception was higher scores on the SDQ in HYPO OVERT in comparison to healthy controls. However, these scores were nonetheless within the normal range of normative data (Tables 1B, 3). Moreover, in neither group affected by thyroid dysfunction or autoimmunity did thyroid hormone levels (TSH, fT4, fT3), TPO-AB levels, urinary iodine excretion, or thyroid volume account for a significant proportion of variance in KINDL-R or SDQ-TD scores, nor did these variables show a significant (linear) relationship with HRQoL or mental health (Table 4).

Hashimoto's Thyroiditis

Despite sufficient statistical power, HT did not have an effect on HRQoL or mental health in children and adolescents, which was confirmed by sensitivity analysis. Therefore, H 1 and H 2 were rejected. Remarkably, in adults such impairments seem to persist in a relevant proportion of patients despite treatment with levothyroxine (1,17). Reasons discussed to explain this finding have been summarized by Jonklaas (62) and include an elevated BMI in HT, disease awareness, as well as comorbidities in patients with HT.
[Table 4 notes: Significant findings FDR-corrected for multiple testing at q < 0.05 are marked in bold; sr 2 is the squared semipartial correlation. # Indicates regressors not tested in the respective case for reasons outlined in the results section. *Indicates the proportion of variance in KINDL and SDQ scores accounted for by all the above thyroid parameters together. The table reports standardized regression coefficients (β) from multiple regression, separately for participants with thyroid autoimmunity and thyroid dysfunction, with the dependent variables KINDL-R score and SDQ total difficulties score.]

Children and adolescents with HT in the present study, however, did not significantly differ from healthy controls regarding their BMI, had not been pre-diagnosed with an autoimmune affection of the thyroid gland in most of the cases, and were in either good or even very good general health (Table 1A). Thus, the psychological effects of a diagnosis of chronic disease and misattribution of symptoms caused by comorbidities are unlikely. In addition to these considerations, Nexø et al. (63) studied experiences of patients with thyroid dysfunction by qualitative interviews and identified social support and acceptance as an important aspect of coping in HT. Indeed, while various aspects of HRQoL are unaffected, impairment of social functioning is a consistent finding in adults with HT [e.g., (1,12,15,16)] and, therefore, could be one of the main drivers of compromised HRQoL. In children and adolescents, there was no indication of adversely affected social functioning, as evidenced by exploratory subscale analyses of the KINDL-R and SDQ (data not shown). In summary, the reasons discussed to explain an impaired HRQoL in adults with HT do not apply to children and adolescents, and this may explain why HRQoL is unaffected by HT in the latter.
Apart from the reasons concerning disease-related cognition and self-perception discussed above, autoimmunity itself may be the cause of impaired HRQoL and behavioral problems in adults with HT. Autoimmunity in HT may not only affect the thyroid gland but also act systemically (64) and thereby target the brain, as summarized by Leyhe and Müssig (65). It has been shown that TPO-ABs bind to cerebellar astrocytes (66), a mechanism by which a direct effect of TPO-ABs on the brain is feasible. Moreover, TPO-ABs may only indicate an epiphenomenon of systemic autoimmunity, as ABs directed against the central nervous system (CNS) have been found in a significant proportion of adult patients with HT. These CNS ABs disturb myelogenesis, induce inflammation, and thereby potentially impair neurotransmission and contribute to the clinical phenotype of HT, including the impaired HRQoL and mental health observed in adults (21). In children and adolescents, no study so far has investigated the prevalence of ABs targeting the CNS. However, children and adolescents with HT are at an increased risk for multiple autoimmune affections (67-69) that may (or may not) become apparent over time on an undetermined scale (68,70). Thus, there could be an increased risk for an autoimmune disease of the CNS in pediatric HT, which may prospectively impair HRQoL. Considering the present results and the hypothesized effect of CNS ABs on mental health, an autoimmune affection of the CNS does not seem to affect a significant proportion of the children and adolescents examined in the present study.

TPO only

In TPO only participants, there was no effect of thyroid autoimmunity on HRQoL or mental health in either the analysis of (co-)variance or the multiple regression. A rise in the prevalence of TPO-AB positivity has been described with the onset of puberty, especially in girls (24)(25)(26).
Even though many children and adolescents affected by TPO-AB positivity do not develop overt thyroid dysfunction (26,71), the same autoimmune mechanisms may be at work as in HT. Thus, we also investigated the relationship between HRQoL, mental health, and TPO-AB positivity in the absence of thyroid dysfunction. As in children affected by HT, there was no significant impairment of HRQoL or an increased risk of mental health problems, most likely for the reasons outlined above. However, regarding thyroid function, TPO only participants evidenced thyroid hormone levels, TPO-AB titers, and thyroid volumes intermediary between those of participants affected by HT and healthy controls. While findings were still within the normal range, this may indicate incipient progression toward overt disease, as in participants with HT. This is in line with conclusions derived by Prummel and Wiersinga (72) that increased TPO-AB titers place affected individuals at risk of developing overt autoimmune disease. Concerning demographic variables, there was no significant difference between TPO only and HT participants, supporting the notion that they originate from the same population but may face autoimmunity with a different pace of progression. However, as already pointed out by Beastall (71), longitudinal studies are needed to gain further insights into this condition and to determine whether TPO-AB positivity in affected individuals is a transient phenomenon associated with puberty, a separate entity of thyroid affection in adolescence and early adulthood, or just an early stage of HT. Moreover, the present study was underpowered to detect small effects on either outcome measure in this group; therefore, additional studies with an adequate sample size are needed.

Subclinical Hypothyroidism

In line with H 3 and H 4, we neither found an effect of HYPO SC on HRQoL and mental health nor observed a significant linear relationship between thyroid function parameters and our outcome measures. Previously, Holtmann et al.
(7) reported significantly higher TSH levels in children and adolescents with severe mood problems, which could not be confirmed by Zepf et al. (9), relying on the same questionnaire and a similar research methodology but a larger sample. However, only a small fraction of children and adolescents in either study was affected by HYPO SC when applying a pediatric reference range for TSH. Drawing on a sample of children and adolescents affected by HYPO SC, Cerbone et al. (8) found no relationship between HYPO SC and mental health, which is supported by the results of the present study as well as by the two most recent meta-analyses of the relationship between HYPO SC and depression in adults (73,74). Tang et al. (74) showed in a subgroup analysis that only adults aged 50 and above were more likely to experience depression, which agrees with considerations by Dayan and Panicker (75), who argue that an underlying chronic condition, more likely to occur in the elderly, could be the cause of increased rates of depression in adults rather than HYPO SC. This suggestion is plausible since there is no known physiological mechanism by which HYPO SC would cause significant effects on HRQoL and mental health in the presence of peripheral euthyroidism and without concurrent thyroid autoimmunity.

Subclinical Hyperthyroidism

In the present study, we did not find a difference between participants affected by HYPER SC and healthy controls with regard to HRQoL and mental health, despite statistical power sufficient to detect even small effects. In adults, there is ambiguous evidence regarding the effect of HYPER SC on HRQoL and mental health, if investigated at all (27,(30)(31)(32)(33). Unfortunately, most studies do not report subscale findings of the HRQoL measures employed. However, as can be concluded from the study conducted by Klaver et al. (33), there may be a slight impairment of HRQoL due to symptoms caused by HYPER SC, even though other areas of HRQoL were unaffected.
In contrast to these findings in adults, most children and adolescents affected by HYPER SC had not been diagnosed with a thyroid disorder, which argues against significant clinical symptoms, either physical or mental. Moreover, 86% of children and adolescents were in either good or very good general health, and, as in HYPO SC, there is no plausible physiological mechanism by which HYPER SC would act on HRQoL and mental health in the presence of peripheral euthyroidism and without concurrent thyroid autoimmunity. Also note that a sensitivity analysis considering only participants of the survey with severely reduced TSH levels verified these results.

Overt Hypothyroidism

As hypothesized (H 5) and previously found in adults (21), children and adolescents affected by HYPO OVERT also scored higher on the SDQ, indicating more mental health problems. However, scores were still well within the normal range of normative data (M: 11.42; normal range: <11 years: <16; >11 years: <17). Using depression as a proxy for (severe) behavioral and emotional problems, different mechanisms have been proposed to explain the relationship between HYPO OVERT and depression in adults. Among the mechanisms discussed are altered serotonin and catecholamine signaling as well as a disturbed hypothalamic-pituitary-adrenal axis (76)(77)(78). For some of these mechanisms, the direction of effect and causality have not yet been conclusively established. The small effect of HYPO OVERT on mental health found in the present study indicates that either these mechanisms are not at work in children and adolescents or that their impact on mental health is mitigated, likely due to moderating factors. As can be seen from Table 1A, a significant proportion of participants with HYPO OVERT was affected by HT.
Therefore, the same reasons as outlined above in the section on findings in HT may not only explain why there was only a small effect of HYPO OVERT on mental health in children and adolescents but also why there was no impact on HRQoL. This especially applies to the observation that most participants of the KiGGS study with HYPO OVERT were in at least good general health, and only a small fraction had been diagnosed with pre-existing thyroid disease. Thus, in most participants with HYPO OVERT, there was likely no disease awareness, which may in itself impair mental health to a significant extent (62). However, the number of participants affected by HYPO OVERT was small and, therefore, we argue for replication of these findings with a sufficient sample size before any further conclusions can be drawn.

Overt Hyperthyroidism

In adults, there is consistent evidence of a significantly impaired HRQoL with HYPER OVERT (32,37,38). Moreover, only recently, Zader et al. (39) published evidence of a profoundly increased risk of a mental health disorder in children and adolescents affected by HYPER OVERT, based on the analysis of a large data repository coding previously physician-diagnosed medical conditions. However, in contrast to these findings and our hypothesis (H 6), there was neither a reduced HRQoL nor an increased risk for mental health problems in the present study. These discrepancies might be related to the fact that Zader et al. (39) focused on children and adolescents seeking health care advice due to significant symptoms, which eventually resulted in physician-diagnosed thyroid dysfunction, while the present study focused on the general pediatric population. Differences between both study populations also become apparent when considering etiologically relevant aspects of the multifactorial relationship between hyperthyroidism and mental health problems suggested by Zader et al. (39). Zader et al.
(39) argued that an overlap of symptoms associated with hyperthyroidism and mental health disorders, the detrimental effect of a diagnosis of a chronic disease, or a biological process, namely autoimmunity, may explain their observation. However, about 90% of parents in the present study indicated good or even very good health in their children with HYPER OVERT, and the majority (about 80%) had not been diagnosed with thyroid disease (Table 1A). With regard to autoimmunity, the same considerations as made above with regard to HT may apply. The children and adolescents studied did not (yet) seem to be affected by systemic autoimmunity to a significant extent. The number of participants affected by HYPER OVERT in the present study was small and, therefore, the power to detect a significant effect with regard to HRQoL and mental health in HYPER OVERT was also insufficient. As in the case of HYPO OVERT, the present findings need to be replicated with a sample size adequate to detect even a small affection of HRQoL and mental health in these individuals.

General Pediatric Population

In the general pediatric population without evidence of thyroid dysfunction, there was no significant relationship between HRQoL or behavioral problems and thyroid function parameters. While it has previously been shown that in the same sample TSH levels within the reference range were associated with lipid levels (79) and blood pressure (80), there is no evidence of an association of any thyroid hormone parameter with HRQoL. Prior studies, however, did not account for multiple testing. Moreover, either no effect size measure was reported, or effect sizes were small or even negligible despite the large sample analyzed, which favors statistical significance even in the presence of clinically irrelevant effects.
Thus, there is no conclusive evidence of a relationship between thyroid hormone levels and either physical or psychological functioning in children and adolescents from the KiGGS study.

Limitations

The present study was cross-sectional by design and, therefore, causal inference is limited. Some authors argue that a diagnosis of HYPO SC and HYPER SC should only be established by two blood samples with evidence of thyroid dysfunction taken on different occasions (5). Especially in epidemiological studies, however, this is not feasible. Moreover, and as already outlined above, further longitudinal studies are needed to classify euthyroid TPO-AB positivity as either a transient phenomenon associated with puberty, a separate entity of thyroid affection in adolescence and early adulthood, or just an early stage of HT. The number of participants with HT may have been underestimated, as only TPO-AB but not Tg-AB were determined. Moreover, (ultrasound) investigators were only trained to perform a standardized volumetric evaluation of the thyroid gland but not to identify structural irregularities indicative of HT. This may also explain why in the present study the frequency of hypoechogenicity in HT was only about 60% of the frequency previously reported, for example, by Kaloumenou et al. (25) in a likewise epidemiological study. However, this does not compromise the validity of the results concerning HRQoL and mental health, as findings in participants with thyroid autoimmunity were confirmed by sensitivity analysis, as discussed above.

CONCLUSIONS

This is the first study to investigate the relationship between HRQoL, mental health, and thyroid (dys-)function and thyroid autoimmunity in children and adolescents in a large, nationwide study. Importantly, the study was sufficiently powered to detect even small effects of thyroid functioning on mental health.
The conclusions we draw rely on stringent control of type I error, and findings are interpreted considering effect size and not mere statistical significance, which would be inadequate given the large sample size. In contrast to adults, children and adolescents affected by thyroid disease and autoimmunity did not show significantly impaired HRQoL or mental health. These findings should, therefore, prompt efforts to better understand the socio-psychological and (patho-)physiological differences between adults and children and adolescents. Thus, we believe our findings hold (prospective) value for basic research but also for clinical care in dealing with children and adolescents as well as adults affected by thyroid disease.

DATA AVAILABILITY STATEMENT

The datasets for this article are not publicly available as the results reported are based on a secondary analysis of data provided by the Robert Koch Institute (RKI), Germany. Requests to access the datasets should, therefore, be directed to the RKI (kiggsinfo@rki.de).

ETHICS STATEMENT

The KiGGS study was reviewed and approved by the Charité Berlin ethics committee as well as the Federal Office for the Protection of Data. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.

AUTHOR CONTRIBUTIONS

RH, AK, and CG conceptualized the study. RH analyzed and interpreted the data and wrote the manuscript. BH, AH, HH, and CG participated in scientific discussions and revised the manuscript. All authors contributed to the article and approved the submitted version.
Health system strengthening and the importance of public investment: a study of the National Rural Health Mission in Bihar

Description of the problem (Max. 100 words): Chronic under-investment in government health systems since the 1990s has led to significant weakening of the systems and crippled their ability to deliver good quality health care. Through the introduction of the National Rural Health Mission, there were efforts to increase public spending in health and strengthen health systems in rural areas. However, a considerable amount of funds remained under-utilised initially. It was argued that further increase in spending is unwarranted as the public health system does not have the capacity to absorb funds. The present study is an attempt to draw linkages between system strengthening and fund absorption capacity to argue that only consistent investment can make health systems stronger, which in turn allows the system to absorb more funds.

Methods (Min. 100 words & Max. 150 words): The attempt here is to track fund flow and fund absorptive capacity under NRHM at various levels, starting from the state to districts and subsequently to blocks, to understand reasons for under-spending, and to study the various aspects of quality of spending in a district in Bihar. Public expenditure data obtained from various levels would be analysed to study the above-mentioned objectives. The data will be analysed in order to get an overall picture of the nature and trend of resource allocation vis-a-vis utilisation at state and district levels. For studying the qualitative issues regarding under-utilisation of resources, such as its causes, systems and processes, institutional practices, and organisational structure, documents like Financial and Management Reports, Audit Reports, Utilisation Certificates, sanction letters, and bank statements would be consulted.
Key informant interviews, based on a semi-structured questionnaire, with some of the key functionaries would help us understand the processes more clearly.

Main findings (Min. 150 & Max. 200 words): Public investment in health is among the lowest in India, when measured in terms of share in Gross Domestic Product. As the results of the study show, since the introduction of NRHM, there has been some growth in the funds disbursed to the states. But the increase in spending at the state, district and block levels was not adequate, leaving huge unspent balances. The quarterly break-up of fund flow shows that a bulk of the funds reaches the districts only during the latter part of the financial year, hampering the ability to spend. A major bottleneck is the issue of capacity building of staff to deal with the boost in the level of spending and to comprehend the advance guidelines of NRHM. Utilisation had also been higher for activities in the form of entitlements like JSY and Family Planning, or activities run in Mission mode like Pulse Polio Immunisation. Activities that require innovation had largely remained under-utilised. However, in the study state, fund utilisation has increased and quality of spending has improved, fund flow processes have become more efficient, funds for system strengthening are being utilised, and hence fund absorptive capacity has increased over the last few years.

Discussion including recommendations (Max. 200 words): The lack of absorptive capacity of the state was an outcome of a chronic lack of investment on fundamental issues of infrastructure, availability of drugs, skilled human resources, planning, data management, etc. Improving absorptive capacity is a long-term process and would require sustained efforts towards strengthening management and institutional capacities, filling up of vacant posts, higher salaries, much greater expenditure on drugs and other consumables, etc.
States that have some system in place have made better use of the current mechanism while others have failed. Ironically, NRHM, which was meant for strengthening health systems in states with greater developmental deficits, has ended up enhancing those very deficits. The task of health systems strengthening has gained some momentum over the last two to three years. A rearrangement of the financial mechanism with a greater share of states and redoubled emphasis on decentralisation may be pointers for a future roadmap for the government's health interventions in the country. NRHM is at best a small step forward in the endeavour to guarantee universal access to health. A gradual increase in health spending with a long-term perspective would be crucial to supplement the measures initiated through this umbrella health intervention.

Introduction

Chronic under-investment in government health systems has led to significant weakening of these systems and crippled their ability to deliver good quality health care. Through the introduction of the National Rural Health Mission (NRHM), there have been efforts to increase public spending in health and strengthen health systems in rural areas. However, evidence shows that the increase in funds did not necessarily result in their utilisation. In the initial period of NRHM, a considerable amount of funds was under-utilised. It was argued that further increase in spending is unwarranted as the public health system does not have the capacity to absorb funds. The present study is an attempt to draw linkages between system strengthening and fund absorption capacity to argue that only consistent investment can make health systems stronger. This in turn would allow the system to absorb more funds. We tracked the fund flow and fund absorptive capacity under the NRHM at various levels: state, district, block and village. We sought to understand reasons for under-spending and to study the various aspects of quality of spending in a district in Bihar.
Methods

Public expenditure data obtained from various levels were analysed. Document analysis of Financial and Management Reports, Audit Reports, and Utilisation Certificates was undertaken. Select key informant interviews of stakeholders were conducted using a semi-structured questionnaire to understand the causes, processes, institutional practices and organisational culture affecting fund utilisation. The data were analysed in order to get an overall picture of the nature and trend of resource allocation vis-a-vis utilisation at state and district levels.

Results

Public investment in health is among the lowest in India, when measured in terms of share in Gross Domestic Product. As the results of the study show, since the introduction of NRHM, there has been some growth in the funds disbursed to the states. But the increase in spending at the state, district and block levels was not adequate, leaving huge unspent balances. The quarterly break-up of fund flow shows that a bulk of the funds reached the districts only during the latter part of the financial year, hampering the ability to spend. A major bottleneck was the issue of capacity building of staff to deal with the increase in the level of spending and to comprehend the advance guidelines of the NRHM. Compared to previous years (before the National Rural Health Mission), fund utilisation has increased and the quality of spending has improved. Fund flow processes have become more efficient. Fund absorptive capacity has increased relative to the last few years. However, utilisation had been higher for activities in the form of entitlements, like cash incentives for promoting family planning and the safe motherhood programme, or activities that run in a vertical mode, like Pulse Polio Immunisation. Funds for activities that required innovation remained under-utilised.

Discussion

Improving the absorptive capacity of funds by the states is a long-term process.
It would require sustained efforts towards strengthening management and institutional capacities, filling up of vacant posts, higher salaries, and greater expenditure on drugs and other consumables. The task of health systems strengthening has gained momentum over the last two to three years through the NRHM. A rearrangement of the financial mechanism with a greater share of states and much greater emphasis on decentralisation may be pointers for a future roadmap for the government's health interventions in the country. NRHM is at best a small step forward in the endeavour to guarantee universal access to health. A gradual increase in health spending with a long-term perspective would be crucial to supplement the measures initiated through this umbrella health intervention programme.

Funding statement

The study was funded by Save the Children.
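The allocation-versus-utilisation comparison at the core of the Methods and Results above reduces to computing utilisation rates and unspent balances over time. A minimal sketch with entirely hypothetical figures (the numbers below are illustrative, not from the study):

```python
import pandas as pd

# hypothetical NRHM allocations and expenditures (Rs. crore) for one district
funds = pd.DataFrame({
    "year": [2006, 2007, 2008, 2009],
    "allocated": [120.0, 150.0, 180.0, 210.0],
    "spent": [60.0, 90.0, 130.0, 175.0],
})

# utilisation rate and unspent balance per financial year
funds["utilisation_pct"] = 100 * funds["spent"] / funds["allocated"]
funds["unspent"] = funds["allocated"] - funds["spent"]
print(funds)
```

A rising utilisation percentage alongside growing allocations is the pattern the study reads as improving fund absorptive capacity.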
Hopfion rings in a cubic chiral magnet

Magnetic skyrmions and hopfions are topological solitons1—well-localized field configurations that have gained considerable attention over the past decade owing to their unique particle-like properties, which make them promising objects for spintronic applications. Skyrmions2,3 are two-dimensional solitons resembling vortex-like string structures that can penetrate an entire sample. Hopfions4–9 are three-dimensional solitons confined within a magnetic sample volume and can be considered as closed twisted skyrmion strings that take the shape of a ring in the simplest case. Despite extensive research on magnetic skyrmions, the direct observation of magnetic hopfions is challenging10 and has only been reported in a synthetic material11. Here we present direct observations of hopfions in crystals. In our experiment, we use transmission electron microscopy to observe hopfions forming coupled states with skyrmion strings in B20-type FeGe plates. We provide a protocol for nucleating such hopfion rings, which we verify using Lorentz imaging and electron holography. Our results are highly reproducible and in full agreement with micromagnetic simulations. We provide a unified skyrmion–hopfion homotopy classification and offer insight into the diversity of topological solitons in three-dimensional chiral magnets.

2) A major concern is the contrast arising in the Lorentz TEM images from the hopfion rings. Since this is a projection method of imaging, the net result is due to integrated magnetic induction throughout the thickness of the sample. Therefore, the contrast observed is very similar to a Bloch domain wall that extends in 3D. The authors should consider clarifying this ambiguity, to conclude that the observed contrast is solely due to hopfion rings.
3) Although the authors do refer to the fact that their observations are different from the skyrmion bags that have been observed before, overall the difference appears to be minimal. They have presented some homotopy analysis, but that is restricted to skyrmion topologies. How does that inform us about the hopfion rings? What the fundamental difference between the two structures (rings and bags) is, is not clear. Since the twisted structure they are referring to only appears in the central plane of the 3D plate, how this is different from the ones referred to as skyrmion bags is also not apparent.

4) To this point, a secondary signature of the hopfion ring should be presented to confirm their twisted spin texture: either 3D mapping of the magnetic field (or at least images at different tilts to show the effect of twist and differentiate from a simple Bloch wall texture) or an electrically driven signature.

3. The authors strongly distinguish their hopfion rings from the hopfions observed in confined geometries, as discussed in issue 1. However, later in the paper they write that the confined geometry is important for the generation of the hopfion rings (but not for the stability). I am not sure if I understand the authors correctly but, in my understanding, this should eliminate all advantages of hopfion strings versus the hopfions in confined geometries, because without a confined geometry you are not able to nucleate the hopfion rings in the first place.

4. If a hopfion ring forms around two or more skyrmion strings, the strings are braided. Is the braiding essential for the stabilizing mechanism or could the hopfion ring form around straight skyrmion strings as well?

5. In Fig.
3 a hopfion ring around a single skyrmion string is labeled an 'exotic state'. From a naïve approach this state should be the simplest version of a hopfion ring. Why is it 'exotic'? Is the probability for the nucleation of this type of hopfion ring smaller than for other types?

6. About the probabilistic character of the nucleation: I think the study would benefit from quantifying the probability of nucleating a hopfion ring. What are the chances that a hopfion ring is nucleated per alternation of the magnetic field direction? Are there ways to increase this percentage? How does it depend on the number of skyrmions inside of the hopfion? How does it depend on temperature?

7. Concerning the last point: The authors write that the protocol becomes more reliable at higher temperatures and that it requires more field-swapping cycles to become stable at lower temperatures. How can we understand this?

8. Please clarify whether or not a hopfion ring always remains stable if the magnetic field orientation continues to be alternated after a hopfion ring has successfully been nucleated. This would help to ensure that a hopfion ring forms with a high probability in potential devices, because one could simply change the field direction many times.

9. The authors write that Fig. 2 shows that the symmetry changes reversibly. What exactly does that mean? The process in the top row is certainly not reversible.

10. Out of interest: Do you think it is possible to better analyze these objects by three-dimensional techniques such as holography? If not, what are the problems?

Referee #3 (Remarks to the Author):

In their paper "Hopfion rings in a cubic chiral magnet" the authors Fengshan Zheng et al.
present evidence for the existence of so-called Hopfions using Lorentz Transmission Electron Microscopy (LTEM) and electron holography compared to micromagnetic simulations. Hopfions are three-dimensional solitons predicted to be present in three-dimensional chiral magnets. In contrast to skyrmion strings, which (ideally) penetrate the whole specimen, Hopfions are truly localized in space, detached from sample boundaries, and thus are discussed as topologically protected information carriers. The authors provide a special magnetic field protocol applied to thin FeGe platelets that allows the controlled creation of stable magnetic structures with different topological charges, comprising Hopfions (essentially a closed string) wrapped like a belt around skyrmion tubes with various topological numbers. While Hopfions in magnetic thin-film materials might have been detected previously in ref. 24 (even though in that paper the evidence is not very strong), this is the first observation of what the authors call Hopfion rings in cubic chiral magnets. This is a beautiful paper and I favour publication in Nature. I would like to ask the authors to comment on the following minor issues:

In the abstract potential applications are mentioned, but the authors do not expand on this. I suggest removing this half sentence or expanding the discussion concerning applications in the main text.

In the caption of Fig. 1 please correct the dimensions of the platelet and please add information concerning the sample temperature.

Please provide more details concerning the sample preparation using FIB and the damage layer, and move this part to the main text; this seems to be important information.

Chiral bobbers are mentioned, e.g. in the caption of Fig. 2.
Could the authors comment on their stability? Do they occur at defects, or why are they stable?

On page 3, right side, bottom: The authors mention that intermediate configurations are observed seldom. What does seldom mean? How many instances have been observed in how many experiments in total?

At the bottom of page three the authors mention that thermal fluctuations have an influence on the nucleation induced by the magnetic field protocol. Can this be corroborated in temperature-dependent simulations (see also Fig. 3 for simulations of images taken at different temperatures)? Did the authors use different platelet sizes (in-plane size)? Does this have an influence on nucleation?

Can the authors move the composite particles with a tilted magnetic field? The authors mention that tilt angles should not exceed 5 degrees for nucleation (this information might also be moved to the main text if space allows), but what happens if the particle has been nucleated and the field is then tilted?

In Fig. S1a a section of a ring is visible with slightly higher contrast at the ends; why is this so? Can rings rip? Or is this section pinned by defects?

Point-by-point response to the reports from the reviewers:

We are very grateful to all three reviewers for their constructive comments. We believe that we have addressed all of their concerns and questions, in particular about the novelty of our work. We have conducted new experiments and carried out additional numerical simulations to strengthen both the experimental and theoretical parts of our manuscript.

For the convenience of the editor and reviewers, the major changes that we have made in the revised manuscript are as follows:

1. A new co-author (Dr. Wen Shi, Forschungszentrum Jülich) has been added, owing to her contribution to new experiments during the revision process. All of the previously listed co-authors have agreed to the change in co-authorship.

2.
Eight supplementary videos have been added to illustrate the process of hopfion ring creation in our experiments (Videos 1-5) and to illustrate the intriguing properties of hopfion rings (Videos 6-8).

3. In-depth analysis and discussion have been added about the topology and homotopy of the solutions discovered in our work. A rigorous homotopy classification, which accounts for the hopfion rings and the corresponding skyrmion-hopfion topological charge, has been introduced.

4. The content of Fig. 4 in the main text and Extended Data Fig. 10 has been modified. Following the suggestion of Reviewer #1, the redundant panel showing isosurfaces of magnetic textures with opacity has been deleted, and results for hopfion rings calculated for bulk systems have been added. Extended Data Fig. 8 has been redesigned to make it more readable.

5. Improvements have been made to the text of the manuscript. Redundancy and typos have been corrected. The new and revised parts of the text are highlighted in red in a separate document.

A point-by-point response to the reviewers' questions and comments is provided below.
Reviewer #1 (Remarks to the Author):

In this paper, the authors present their work on creating and stabilizing complex magnetic spin textures comprising hopfion rings and skyrmion strings in patterned thin plates of FeGe. They identify a specific magnetic field protocol to stabilize these spin textures based on micromagnetic simulations. They then provide experimental evidence based on Lorentz TEM images and electron holography data that maps the projected magnetic induction. Overall there is good agreement between the micromagnetic simulations and the experimental data presented. However, the novelty of this work does not match the standard required by the journal Nature. There are several ambiguities in the results that need to be clarified. Moreover, only the existence of these states has been shown; apart from that, there is a lack of understanding in the presented results about the reasons for their stability and the ability to control them. Therefore, publication of this work is premature at this stage.

Response: We sincerely appreciate the referee's positive comments about our experimental results and about their agreement with our micromagnetic simulations.

In response to the referee's remarks about novelty, we would like to emphasize that our work represents an essential advance in the field of complex magnetic spin textures. We have discovered a magnetic hopfion that has never been observed before in magnetic crystals. Although theoretical investigations of hopfions have been ongoing for several decades, their experimental realization in crystals has remained elusive. The fact that world-leading teams actively continue to search for hopfions is a testament to the significance of the problem.

In order to illustrate the complexity and competitiveness of the field, we would like to refer to recent works by Donnelly et al. (Nature Physics, 2021) and Kent et al.
(Nature Communications, 2021). At the 18th minute of Donnelly's presentation, available at https://www.youtube.com/watch?v=QNcI9AXwl9A, the authors express their disappointment about not finding hopfions. However, in their paper they acknowledge their potential existence. Kent et al. employed reverse engineering to create a synthetic material that artificially favored the stability of a single hopfion. However, it is important to note that such an approach is different from an observation in a natural crystal system. In contrast, our work, which focuses on the discovery and stabilization of hopfion rings in a magnetic crystal, represents a truly unique contribution to the field and a turning point in the field of magnetic hopfions.

In the revised version of the manuscript, we cite the papers of both Donnelly et al. (Nature Physics, 2021) and Kent et al. (Nature Communications, 2021), which provide valuable insight into the exploration of 3D magnetic textures, in the abstract.

With regard to the stability and control of hopfions, we would like to emphasize that our hopfion rings never appear as the ground state of the system. Therefore, additional efforts are required to create and stabilize them. Just as for other magnetic solitons, a competition between different energy terms in the Hamiltonian plays a crucial role in the stabilization of hopfion rings. We have a thorough understanding of these stability mechanisms, and have provided detailed discussions on this topic in the revised manuscript.
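For background, the competing energy terms referred to here are, for a B20 chiral magnet such as FeGe, conventionally written as the standard micromagnetic functional with exchange, Dzyaloshinskii-Moriya (DMI) and Zeeman contributions. This explicit form is added here only for the reader's orientation and is not quoted from the manuscript:

```latex
E[\mathbf{m}] = \int \Big[ A\,(\nabla \mathbf{m})^{2}
  + D\,\mathbf{m}\cdot(\nabla\times\mathbf{m})
  - \mu_{0} M_{\mathrm{s}}\,\mathbf{m}\cdot\mathbf{H} \Big]\,\mathrm{d}V,
\qquad |\mathbf{m}| = 1,
```

where $A$ is the exchange stiffness, $D$ the DMI constant, $M_{\mathrm{s}}$ the saturation magnetization and $\mathbf{H}$ the applied field. The balance between the chiral DMI term and the other two terms is what permits metastable localized textures above the ground state.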
In response to the referee's concerns about ambiguities in our results, we have made significant revisions to the manuscript to provide a more transparent and comprehensive picture of the hopfion rings and their properties. We have included a detailed discussion of the topological aspects of our discovery, and have provided further experimental information and accompanying videos, which illustrate the process of hopfion ring creation, exclude any ambiguities and provide a solid foundation for our findings.

We believe that our responses and revisions to the manuscript address all of the concerns raised by the reviewer. We are confident about the novelty, significance and clarity of our work.

Below are several comments and questions about this work.

1) The authors have presented the simulated data on how the field-swapping protocol nucleates the novel spin textures. However, experimental data that follows this field-swapping protocol should also be shown to clearly identify the switching of the magnetization directions.

Response: In the revised version of the manuscript, we have provided five supplementary videos, which illustrate the process of hopfion ring creation. We have also added the following paragraph to the main text of the manuscript:

"The process of hopfion ring nucleation is demonstrated in Supplementary Videos 1-5. These videos were captured in situ at a temperature of T = 180 K. We performed several field-swapping cycles in the initial stage with a small amplitude of approximately ±50 mT. This step was designed to generate edge modulations that formed closed loops and propagated towards the center of the sample. Once one or a few of these loops had been created, we gradually increased the applied magnetic field up to approximately 150 mT, resulting in the formation of various hopfion rings."

Moreover, we have added detailed descriptions of the videos in the Methods section.
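For illustration only, the two-stage protocol quoted above can be summarized as a sequence of field values. The amplitudes (±50 mT, 150 mT) are taken from the quoted text, but the number of swapping cycles and ramp steps below are hypothetical placeholders, not values from the manuscript:

```python
def hopfion_field_protocol(n_swaps=6, swap_mT=50.0, final_mT=150.0, ramp_steps=5):
    """Return the out-of-plane field values (in mT) of the two-stage protocol:
    (1) alternate the field direction at small amplitude to seed edge
    modulations, then (2) ramp the field up to stabilize hopfion rings."""
    fields = [swap_mT * (-1) ** i for i in range(n_swaps)]            # stage 1
    fields += [swap_mT + (final_mT - swap_mT) * k / ramp_steps        # stage 2
               for k in range(1, ramp_steps + 1)]
    return fields

seq = hopfion_field_protocol()
print(seq[:6], seq[-1])   # alternating ±50 mT swaps, ending at 150 mT
```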
2) A major concern is the contrast arising in the Lorentz TEM images from the hopfion rings. Since this is a projection imaging method, the net result is due to the magnetic induction integrated throughout the thickness of the sample. Therefore, the observed contrast is very similar to that of a Bloch domain wall extending in 3D. The authors should clarify this ambiguity in order to conclude that the observed contrast is solely due to hopfion rings.

Response: We fully understand this concern and thought that this aspect had already been addressed in the original manuscript. In Extended Data Fig. 8, we compared the contrast of a compact hopfion ring and of modulations extended over the thickness to our experimental data. We have updated Extended Data Fig. 8 to further clarify this point. The revised version of this figure clearly explains the difference between hopfion rings and modulations that extend over the thickness. It is important to emphasize that our micromagnetic simulations were performed without any parameter fitting, using micromagnetic constants for FeGe from Zheng et al. (Nature Nanotechnology, 2018). All other parameters, such as field, thickness and defocus distance, matched the experimental conditions.

We have also provided several supplementary videos that illustrate the difference in contrast between skyrmion bags surrounded by helical modulations and hopfion rings localized in a smaller sample volume. The two images shown below are snapshots taken from Supplementary Video 1. The image on the left illustrates a skyrmion bag composed of four skyrmions surrounded by 360° Bloch domain walls that extend across the thickness of the sample. The image on the right shows double hopfion rings, which have much weaker contrast.
3) Although the authors do state that their observations are different from the skyrmion bags that have been observed before, overall the difference appears to be minimal. They have presented some homotopy analysis, but it is restricted to skyrmion topologies. How does that inform us about the hopfion rings? What the fundamental difference between the two structures (rings and bags) is remains unclear. Since the twisted structure they refer to only appears in the central plane of the 3D plate, how this differs from the structures referred to as skyrmion bags is also not apparent.

Response: In the revised version of the manuscript, we have extensively examined the distinctive features of hopfion rings, highlighting their fundamental differences from skyrmions and skyrmion bags. The sketch shown below explains this difference schematically.

First, we have expanded the section on homotopy analysis in the Methods section, providing a more comprehensive discussion of the homotopy group. We have introduced the skyrmion-hopfion topological charge and included it in the revised version of Fig. 4 and Extended Data Fig. 10, and have added corresponding discussions in the main text.
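As background for readers, one of the two invariants entering such a skyrmion-hopfion classification is the 2D skyrmion number Q = (1/4π) ∫ m · (∂ₓm × ∂ᵧm) dx dy (the other being the 3D Hopf index). The sketch below, which is not code from the paper and uses an arbitrary grid and skyrmion profile, evaluates Q numerically for a standard axisymmetric Bloch-skyrmion ansatz:

```python
import numpy as np

# Illustrative only: numerical evaluation of the 2D topological (skyrmion)
# charge Q = (1/4pi) * integral of m . (dm/dx x dm/dy) for a standard
# Bloch-skyrmion ansatz. Grid size and profile are arbitrary choices.
L, n, R = 20.0, 256, 2.0                      # box half-width, grid, core size
x = np.linspace(-L, L, n)
X, Y = np.meshgrid(x, x, indexing="ij")
r, phi = np.hypot(X, Y), np.arctan2(Y, X)
theta = 2.0 * np.arctan2(R, r)                # pi at the center, 0 far away
mx = np.sin(theta) * np.cos(phi + np.pi / 2)  # Bloch-type in-plane twist
my = np.sin(theta) * np.sin(phi + np.pi / 2)
mz = np.cos(theta)

dmx_dx, dmx_dy = np.gradient(mx, x, x)
dmy_dx, dmy_dy = np.gradient(my, x, x)
dmz_dx, dmz_dy = np.gradient(mz, x, x)

density = (mx * (dmy_dx * dmz_dy - dmz_dx * dmy_dy)
           + my * (dmz_dx * dmx_dy - dmx_dx * dmz_dy)
           + mz * (dmx_dx * dmy_dy - dmy_dx * dmx_dy))
dx = x[1] - x[0]
Q = density.sum() * dx * dx / (4.0 * np.pi)
print(abs(Q))   # close to 1 for a single skyrmion (finite-box truncation)
```

Hopfion rings differ precisely in that no single 2D cross-section carries their full topology; the Hopf index is a genuinely three-dimensional linking number of the magnetization's preimage curves.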
Second, it is important to note that skyrmion bags are primarily two-dimensional structures extending through the plate's thickness. In contrast, hopfion rings are fully localized in three dimensions, introducing new and intriguing physics that sets them apart from skyrmion bags. We have illustrated this distinction by presenting a wide range of solutions for bulk systems, and by providing rigorous theoretical results on the zero mode for hopfion rings, which corresponds to the rotational-translational movement of the hopfion along the string. Supplementary Videos 7 and 8 illustrate such motion, and details are provided in the Methods section.

4) To this point, a secondary signature of the hopfion ring should be presented to confirm the twisted spin texture: either 3D mapping of the magnetic field (or at least images at different tilts to show the effect of the twist and to differentiate it from a simple Bloch wall texture) or an electrically driven signature.

Response: The reconstruction of a 3D magnetic texture typically involves conducting experiments in zero external magnetic field. In contrast, in the case of a hopfion ring or skyrmion it is essential to maintain a fixed direction of the external magnetic field relative to the sample.
Numerous efforts have been made to develop experimental techniques that can be used to measure and visualize 3D spin textures, typically by recording and analyzing tilt series of phase contrast images, diffraction patterns or spectra recorded using electrons, X-rays or neutrons. In transmission electron microscopy, since the conventional objective lens is currently used to apply a magnetic field to the specimen, it is not possible to tilt the field together with the specimen. A change in the direction of the field relative to the sample causes a change in the magnetic texture. Although it is in principle possible to use a magnetizing holder to tilt the applied field and the specimen together, this approach has not yet been developed reliably. It is therefore not yet technically possible to meet this requirement experimentally.

Nevertheless, we would like to emphasize that our results provide remarkable agreement between experimental observations and micromagnetic simulations, leaving no room for alternative explanations of the weak contrast of the rings that we observe. In addition to the rings themselves, we have successfully replicated the process of hopfion ring nucleation using micromagnetic modeling. By using micromagnetic modeling, we have reproduced more than twenty distinct textures, demonstrating high quantitative correspondence with our experimental observations.

In the revised version of the manuscript, we have included Supplementary Videos from new experiments, which provide additional evidence to support our findings.

We would like to note that Reviewer #2 raised a similar question, but labeled it "out of interest." Reviewer #3 did not request additional experiments to provide further confirmation of the hopfion ring.
5) In figure 1, the color representation is quite confusing because it does not represent solely a single magnetic vector (mx, my or mz) but changes between the various subpanels. For example, in the 3D renderings it is not clear what the color represents.

Response: In our manuscript, we use a unified color code to represent directions in three-dimensional space based on the HSV (hue, saturation, value) color space. This color coding scheme is utilized widely in various domains and has been adopted by numerous authors, including those whose works we reference. We acknowledge that the previous caption accompanying Fig. 1 did not explain this color code adequately. We have therefore revised the caption to Fig. 1 in order to provide a more comprehensive description of the color coding scheme.

6) For figure 2, although the external field is given as a grayscale bar on top, numerical values would be more useful. Also, there is some missing text in the bottommost row for Q=6.

Response: Since the images shown in Fig. 2 were recorded at different temperatures, the exact values of the field may look confusing to readers. The only option to avoid this confusion would be to indicate the temperature in each image, which would hide the main message of the figure. We therefore excluded this option at an earlier stage of manuscript preparation. Instead, the values of external field and temperature are provided explicitly in the figures in the Extended Data. We have improved the caption to Fig. 2 to reflect this point.

With regard to the lowermost row of images, we have added the following sentences to the main text: "For instance, the hopfion ring shown in the bottom row has a triangular shape at low field. With increasing magnetic field, it adopts a pentagonal and then circular shape."

7) For figure 6, the last column with different opacity is also not terribly useful. What was the purpose of showing this?
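As a concrete illustration of such an HSV-based direction code, one common convention maps the in-plane angle of the magnetization to hue, with brightness and saturation encoding mz (white for up, black for down). The exact mapping used in the paper may differ; this is only a sketch of the general scheme:

```python
import colorsys
import math

def vector_to_rgb(mx, my, mz):
    """Map a unit magnetization vector to an RGB color (HSV-style scheme):
    hue encodes the in-plane angle; m pointing up fades to white and
    m pointing down fades to black. Illustrative convention only."""
    hue = (math.atan2(my, mx) / (2 * math.pi)) % 1.0
    if mz >= 0:
        s, v = 1.0 - mz, 1.0        # desaturate towards white as mz -> +1
    else:
        s, v = 1.0, 1.0 + mz        # darken towards black as mz -> -1
    return colorsys.hsv_to_rgb(hue, s, v)

print(vector_to_rgb(0.0, 0.0, 1.0))   # up: white
print(vector_to_rgb(1.0, 0.0, 0.0))   # in-plane along +x: fully saturated hue
```

Under this scheme every direction on the unit sphere receives a unique color, which is why the same code can be used consistently across 2D maps and 3D renderings.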
Response: We appreciate this constructive criticism. In the revised version of the manuscript, we have replaced this panel with more informative images, which show hopfion rings calculated for bulk systems. We have also explicitly indicated the skyrmion-hopfion charges of the magnetic textures, where applicable. They are discussed in a new paragraph, which is devoted to the topological classification of the novel magnetic textures.

8) The formatting of some references is not correct. For example, reference 32 is missing the title.

Response: We are grateful to the reviewer for their careful reading of the manuscript. The formatting of the references has been revised and updated.

Reviewer #2 (Remarks to the Author):

The authors Zheng et al. have submitted a highly interesting manuscript in which they analyze the stability of magnetic hopfion rings. The topic is relevant and the experimental and theoretical data are of excellent quality. Therefore, the manuscript should be published in a high-impact journal.

Response: We appreciate the highly positive evaluation of our work.

However, currently, I doubt that the study deserves publication in Nature because of a lack of significance. My main concern is the following:

1. This is not the first observation of hopfions. As the authors correctly write in the introduction, magnetic hopfions have already been observed in Ref.
24. The authors distinguish the hopfions in confined geometries, as presented in that reference, from their observation of hopfions that exist as rings around one or multiple skyrmion strings. The authors label them 'hopfion rings'. I understand that the stabilizing mechanism is different, but why is that of crucial importance? Fundamentally, I do not see a higher significance of hopfion rings, especially since they can only exist around skyrmion strings. In terms of applications, I do not see that great of an advantage either. For example, in terms of current-driven motion, hopfions have been predicted not to suffer from the skyrmion Hall effect. However, in combination with the skyrmion string(s) the skyrmion Hall effect should be present. I look forward to reading the authors' reply.

Response: We provide a detailed response about the importance of our discovery, in comparison to Ref. 24, in the next item below.

With regard to applications, we note that our group does fundamental research and applied physics is not our expertise. Following the suggestion of Reviewer #3, we have therefore excluded any mention of possible applications from the abstract.

We agree that some aspects of our findings may require additional emphasis. In particular, we would like to highlight the distinct properties of hopfion rings from the point of view of topology and homotopy groups. We believe that the intriguing aspects of the topology of these solutions, which we have included in the revised version of the manuscript, are more valuable than speculations about possible applications.

With regard to the work of Kent et al., we provide extended comments below.

Please elaborate on the drastic improvement of hopfion strings over conventional hopfions that justifies a publication in Nature. After all, Ref. 24 was published in Nature Communications, so why should this study be published in Nature?
Response: There are two important aspects that distinguish our work from the seminal work of Kent et al.:

- We report the observation of hopfions in a crystal, while Kent et al. observe their hopfion in a specially synthesized system. We did not emphasize this aspect in our initial submission. We have corrected the manuscript by adding corresponding statements in the abstract and the main text.

- Our study focuses on the investigation of 3D topological magnetic solitons - localized magnetic configurations that possess the properties of ordinary particles, enabling them to move and interact with each other and their environment. It is important to note that the texture studied by Kent et al. does not meet the criteria of a soliton in this sense. The hopfion described by Kent et al. can be described as an imprinting of a Hopf fibration in a specially designed and shaped magnetic heterostructure.

Our results also have many intriguing consequences that do not overlap with the work described in Ref. 24, including:

- In the revised version of our manuscript, we provide a rigorous homotopy group analysis, which introduces a new approach for the topological classification of 3D magnetic solitons with skyrmion-hopfion topological charge.

- In addition to the experimental observation of a wide diversity of hopfion configurations in a plate of particular thickness, we describe a broad family of hopfions that can be found in bulk and extended samples.

- We report on the hopfion zero mode, which corresponds to the screw-like motion of a hopfion along a skyrmion string.

In summary, our work not only provides direct experimental evidence that hopfions exist in crystals, but also suggests new directions for further research. We strongly believe that the originality, novelty and significance of our work meet the high publication standards of Nature.

Further minor comments are as follows:

2.
The authors mention a damaged layer in the experiment as well as the simulation. Is it possible to stabilize the hopfions without that damaged layer? How can we understand the importance of the damaged layer? Can you somehow 'quantify' the damaged layer so that other groups will be able to reproduce the results in experiments?

Response: Yes, it is possible to stabilize the hopfions without the damaged layer:

• In the Methods section, we wrote: "It should be noted that the presence or absence of a damaged layer in our simulations has almost no effect on the stability of the solutions shown in Fig. 4. The contrast in theoretical Lorentz TEM images in Fig. 3 also does not change significantly when the presence of a damaged layer is ignored."

• The damaged layer is always present on the surfaces of samples prepared using focused ion beam (FIB) milling. Additional polishing of the sample surface is typically performed at a low FIB current to minimize the damaged layer thickness. Our samples were cleaned thoroughly, resulting in a minimal damaged layer thickness of approximately 5 to 10 nm. A more precise estimate of the damaged layer thickness is challenging. It is important to note that no additional sample preparation, apart from standard FIB preparation, is required to reproduce our results, so long as the sample size is approximately the same as that in our experiments. The five supplementary videos in the revised version of the manuscript can be used to guide researchers through the entire process of hopfion ring creation. We have added corresponding descriptions to the main text of the manuscript, as well as to the captions of the supplementary videos.

• In the revised version of the manuscript, we have also added simulations for the bulk model, which confirm that hopfion rings are stable without any boundary or shape effects, including damaged layers.

3.
The authors strongly distinguish their hopfion rings from the hopfions observed in confined geometries, as discussed in issue 1. However, later in the paper they write that the confined geometry is important for the generation of the hopfion rings (but not for their stability). I am not sure if I understand the authors correctly but, in my understanding, this should eliminate all advantages of hopfion strings versus the hopfions in confined geometries, because without a confined geometry you are not able to nucleate the hopfion rings in the first place.

Response: The hopfions that we describe do not require confinement for their stability or nucleation:

• In the revised version of the manuscript, we emphasize the fact that hopfion rings are stable in bulk systems, without needing to consider the influence of boundary or shape effects, including damaged layers.

• Confinement is always present in a real system. Although the approach used for the nucleation of these solitons in our current work relies on geometrical confinement, we anticipate that efficient techniques for their generation in bulk systems will be developed in the future. We also anticipate that hopfions may appear spontaneously (e.g., by quenching).

4. If a hopfion ring forms around two or more skyrmion strings, the strings are braided. Is the braiding essential for the stabilizing mechanism, or could the hopfion ring form around straight skyrmion strings as well?

Response: The stability of hopfion rings does not rely on braiding of the skyrmion strings, and hopfion rings can form around straight skyrmion strings. This point is evident from the fact that we observe hopfion rings even around single skyrmion strings.
It is worth noting that skyrmion braiding can contribute to the Hopf charge of a magnetic texture. In the revised version of the manuscript, we have included an extended discussion of this effect, along with references to earlier and recent papers that explore this aspect:

"Figures 4e-h and Extended Data Figs 10e-h illustrate a wide diversity of stable solutions for hopfion rings in bulk systems (see Methods). Based on the general principles of classical field theory, it is understood that the Hopf charge of skyrmion strings can be affected by longitudinal twists of skyrmions, as well as by skyrmion braiding 36-38. However, due to the chiral nature of the DMI present in the system studied here, stable states with skyrmion twists of multiples of 2π are not observed. Nevertheless, we speculate that such states may be possible in systems that have frustrated exchange interactions 9,39. On the other hand, the phenomenon of skyrmion braiding has already been demonstrated in chiral magnets 20. Extended Data Fig. 10h shows an example of a skyrmion braid with two hopfion rings and H = 12. This example can be compared with Fig. 4h, which shows straight skyrmion strings surrounded by two hopfion rings and H = 10."

5. In Fig. 3 a hopfion ring around a single skyrmion string is labeled an 'exotic state'. From a naïve approach this state should be the simplest version of a hopfion ring. Why is it 'exotic'? Is the probability for the nucleation of this type of hopfion ring smaller than for other types?

Response: Following the protocol discussed in the manuscript, we usually obtain hopfion rings around several skyrmion strings, as illustrated in Fig.
3. The hopfion ring around a single skyrmion string appears less often in our experiment. For this reason, we described this particular configuration as an "exotic state". We have added the following note to the text: "Figure 3 shows exotic states with negative and positive topological charges obtained using the above protocol in a 180-nm-thick sample. Magnetic textures with such contrast are observed less often in our experiment than those depicted in Fig. 3."

In the revised version of the manuscript, we provide five supplementary videos, which illustrate the in situ process of hopfion nucleation in our experimental setup.

6. About the probabilistic character of the nucleation: I think the study would benefit from quantifying the probability of nucleating a hopfion ring. What are the chances that a hopfion ring is nucleated per alternation of the magnetic field direction? Are there ways to increase this percentage? How does it depend on the number of skyrmions inside of the hopfion? How does it depend on temperature?

Response: The quantification of a hopfion ring nucleation protocol is only informative when the efficiencies of several methods are compared. We do not suggest that the protocol presented in our work is optimal or unique. The efficiency and reliability of the protocol are not the main topics of our study, which focuses on the discovery of a novel type of 3D topological magnetic soliton. The details of the protocol are provided to ensure the transparency and reproducibility of our results. As we mention above, in the revised version of the manuscript we have provided a series of videos that illustrates the process of hopfion ring nucleation in detail. We believe that the publication of our work will stimulate research in this field, and that alternative approaches for hopfion ring nucleation will be found.
As we wrote above, we observe a hopfion ring on a single skyrmion string less often than states with more skyrmions. This fact may reflect the peculiarities of our protocol, rather than the stability of these states. In general, the number of skyrmions does not contribute significantly to the probability of hopfion ring nucleation. This point is also illustrated in the videos provided in the revised version of the manuscript.

The role of temperature is discussed in detail in the original version of the manuscript. Please see the paragraph starting with the following sentence: "The above protocol becomes more reliable at higher sample temperature, indicating the significant role of thermal fluctuations for hopfion ring nucleation."

7. Concerning the last point: The authors write that the protocol becomes more reliable at higher temperatures and that it requires more field-swapping cycles to become stable at lower temperatures. How can we understand this?

Response: In the paragraph in which we discussed the influence of temperature on hopfion ring nucleation, we wrote: "Below 180 K, nucleation of hopfion rings is still possible, but typically requires more field-swapping cycles. At lower temperature, the edge modulations can move toward the edges of the sample and disappear. In contrast, at higher temperatures the edge modulations can contract towards the center of the sample."

We believe that this description is clear. However, we also accept that it is better to show the effect visually. In the revised version of the manuscript, Supplementary Videos 1-5 illustrate the entire process of hopfion ring nucleation in detail. They show that the field-swapping cycles induce additional edge modulations around the perimeter of the sample. At higher temperature (180 K), such modulations often form closed loops and propagate towards the center of the sample. At lower temperature, they seldom form closed loops around the perimeter of the sample.

8.
Please clarify whether or not a hopfion ring always remains stable if the magnetic field orientation continues to be alternated after a hopfion ring has successfully been nucleated. This would help to assure that a hopfion ring forms with a high probability in potential devices, because one could simply change the field direction many times.

Response: A hopfion ring is stable in a specific range of external magnetic fields. When the magnetic field is reduced, skyrmions inside the hopfion ring experience an elliptic instability. The probability of obtaining the same hopfion ring when the field is increased remains finite, but it depends on how much the magnetic field was reduced. Supplementary Video 4 illustrates this effect. After hopfion ring nucleation, the field is reduced below the threshold value for the elliptic instability. When the field is then increased, the same hopfion ring appears in a slightly different place in the domain (in the corner). Supplementary Video 4 ends with an increase of the field in the positive direction, ultimately leading to hopfion ring collapse.

In the revised version of the manuscript, a detailed discussion of the supplementary videos has been included in the "Magnetic imaging in the TEM" part of the Methods section.

9. The authors write that Fig. 2 shows that the symmetry changes reversibly. What exactly does that mean? The process in the top row is certainly not reversible.
Response: For clarity, we have corrected the corresponding paragraph as follows: "Figure 2 shows that the symmetry of the magnetic texture of skyrmions and hopfion rings also changes with increasing applied field. For instance, the hopfion ring shown in the bottom row has a triangular shape at low field. With increasing magnetic field, it adopts a pentagonal and then circular shape. Such symmetry transitions are found to be reversible with respect to increasing and decreasing fields."

10. Out of interest: Do you think it is possible to better analyze these objects by three-dimensional techniques such as holography? If not, what are the problems?

Response: By default, electron holography provides only a two-dimensional projection of a magnetization field and not a three-dimensional reconstruction. The phase shift image of a hopfion ring shown in Fig. 1g was recorded using electron holography.

In combination with tomography, i.e., by recording holograms at different sample tilt angles, three-dimensional mapping of magnetic fields is in principle possible. Such a three-dimensional technique can be referred to as electron holographic tomography.

However, in its present form this technique has several limitations. The major issue is that the magnetic solitons that are studied in our work are stable only in the presence of an external magnetic field, which is applied perpendicular to the plane of the thin TEM sample. In our TEM, the conventional objective lens is used to apply an external magnetic field to the specimen. The direction of this external magnetic field is therefore fixed in the direction of the electron beam, and it is not possible to tilt the magnetic field together with the specimen.
When the sample is tilted with respect to the electron beam, the change in the relative direction between the specimen and the external magnetic field affects the magnetic texture of the soliton. Two different states are then imaged at two different sample tilt angles. Since the stability of a hopfion ring requires a fixed direction of the external magnetic field relative to the sample, the hopfion ring moves and becomes unstable when the sample tilt is changed. In our experiments, the hopfion ring would be attracted to the edge of the sample and collapse.

Although a magnetizing holder or stage could in principle be used to tilt the field together with the specimen, such a solution is not yet readily available, especially in combination with cooling of the sample.

Reviewer #3 (Remarks to the Author):

In their paper "Hopfion rings in a cubic chiral magnet" the authors Fengshan Zheng et al. present evidence for the existence of so-called hopfions using Lorentz transmission electron microscopy (LTEM) and electron holography compared with micromagnetic simulations. Hopfions are three-dimensional solitons predicted to be present in three-dimensional chiral magnets. In contrast to skyrmion strings, which (ideally) penetrate the whole specimen, hopfions are truly localized in space, detached from sample boundaries, and are thus discussed as topologically protected information carriers. The authors provide a special magnetic field protocol applied to thin FeGe platelets that allows the controlled creation of stable magnetic structures with different topological charges, comprising hopfions (essentially a closed string) wrapped like a belt around skyrmion tubes with various topological numbers. While hopfions in magnetic thin-film materials might have been detected previously in ref. 24 (even though in that paper the evidence is not very strong), this is the first observation of what the authors call hopfion rings in cubic chiral magnets. This is a beautiful paper and I favour publication in
Nature.

Response: We appreciate the highly positive evaluation of our work.

I would like to ask the authors to comment on the following minor issues: In the abstract potential applications are mentioned, but the authors do not expand on this. I suggest removing this half sentence or expanding the discussion concerning applications in the main text.

Response: We agree with the reviewer. Although we are able to suggest several concepts for the potential application of hopfion rings, our paper belongs to fundamental research rather than applied physics. We have therefore removed any mention of possible practical applications from the abstract.

In the caption of Fig. 1 please correct the dimensions of the platelet and please add information concerning the sample temperature.

Response: We thank the reviewer for their thorough review of the manuscript. We acknowledge that there was a typo in the figure caption, which was initially written as "size of 1 μm × a1 μm". We have corrected it to read "size of 1 μm × 1 μm." We have also added information about the sample temperature in the figure caption.

Please provide more details concerning the sample preparation using FIB and the damage layer and move this part to the main text; this seems to be important information.

Response: Although the sample was prepared using a standard procedure, we acknowledge the significance of the damaged layer. Due to length limitations, we cannot incorporate the entire paragraph that discusses the damaged layer in the main text. Instead, we have added an extra note to draw the readers' attention to the relevant part of the Methods section: "We also took into account the presence of a thin damaged layer on the sample surface (Fig. 1a), which typically results from sample preparation by focused ion beam milling. (See the Methods section for more details about micromagnetic calculations, the properties of the damaged layer, and Lorentz image simulations)."

Chiral bobbers are mentioned, e.g.,
in the caption of Fig. 2. Could the authors comment on their stability? Do they occur at defects, or why are they stable?

Response: Chiral bobbers are statically stable magnetic solitons. They are stable in a certain range of fields, and can either appear as isolated entities or interact with other solitons, including skyrmion strings. Such coupled states, in which chiral bobbers are coupled to one or a few skyrmion strings, were observed experimentally in Ref. 31 (Zheng et al., Nature Nanotechnology 13, 451-455, 2018).

On page 3, right side, bottom: The authors mention that intermediate configurations are observed seldomly. What does seldom mean? How many instances have been observed in how many experiments in total?

Response: "Seldom" means that we did not observe these intermediate configurations frequently in our experiments and cannot provide many images of them. These intermediate states are not statically stable configurations and typically only appear dynamically. In the revised version of the manuscript, we have included five movies that illustrate the in situ nucleation of hopfions in our experimental setup. We believe that these videos illustrate all of the details of the hopfion nucleation process.

At the bottom of page three the authors mention that thermal fluctuations have an influence on the nucleation induced by the magnetic field protocol. Can this be corroborated in temperature-dependent simulations (see also Fig. 3 for simulations of images taken at different temperatures)?

Response: As we wrote in our reply to Reviewer #2 above, the nucleation process presented in our work is probably not unique, and we expect that more efficient methods will be proposed in the future.
With regard to temperature-dependent simulations:

- Our micromagnetic model includes thermal effects to an extent via the material parameters. In particular, our parameters are adopted for T = 95 K, which is a typical temperature for a TEM experiment performed using liquid nitrogen. The adoption of these parameters was previously established in the work of Zheng et al. (Nature Nanotechnology 13, 451-455, 2018). The results presented in our study illustrate the predictive power of our model and adopted parameters.

- For higher temperatures, the role of thermal fluctuations increases significantly. Our sample comprises approximately 10⁹ magnetic atoms, whose dipole-dipole interactions are important for the energy balance. Unfortunately, accurate calculations that include temperature (in an atomistic model) for this number of atoms are not possible using current computers.

Did the authors use different platelet sizes (in-plane size)? Does this have an influence on nucleation?

Response: We attempted to reproduce the hopfion structure in a larger sample, which had a lateral dimension of 3 µm and a thickness of 180 nm. However, we encountered difficulties in forming a closed loop of edge modulations for such a large sample. The primary challenge resulted from limitations of the FIB technique, as it is extremely challenging to achieve a uniform thickness across a larger sample. Conversely, when working with smaller sample sizes, we observed that there was insufficient space for a hopfion to stabilize due to edge effects. Taking these practical considerations into account, we chose a lateral dimension of approximately 1 µm for our experiments.

Can the authors move the composite particles with a tilted magnetic field? The authors mention that tilt angles should not exceed 5 degrees for nucleation (this information might also be moved to the main text if space allows), but what happens if the particle has been nucleated and the field is then tilted?
Response: An additional tilt of the specimen will introduce an in-plane component of the applied field, as the direction of the external magnetic field is fixed to the direction of the electron beam (the z direction). The hopfion will then be guided by the in-plane field and attracted to the sample edges.

In the figure shown below, we provide a series of images that shows this effect when the magnetic field is reduced after the nucleation of a hopfion ring (left: 256 mT; middle and right: 144 mT). It can be seen that the hopfion ring is then attracted to the edge and moves along it towards the corner.

In terms of the nucleation process illustrated in Supplementary Videos 1-5, edge modulations would occur and propagate from only one edge of the sample in a tilted external magnetic field, and closed loops would not be formed.

Following the reviewer's suggestion, we have added the following note in the main text: "The tilt angle of the external magnetic field to the plate normal is an essential parameter for hopfion ring nucleation. In our experiment, we found that the tilt angle of the field should not exceed 5 degrees. Otherwise, the edge modulations primarily form on one side of the sample, resulting in a strongly asymmetric configuration."

In Fig. S1a a section of a ring is visible with slightly higher contrast at the ends. Why is this so? Can rings rip? Or is this section pinned by defects?

Response: In low magnetic fields, clusters of skyrmions exhibit skyrmion braiding (to a small extent). In the figure mentioned by Reviewer #3 and reproduced below, in a field of 165 mT the skyrmion exhibits braiding with the hopfion ring next to it. The figure shows that the presence of this skyrmion becomes more evident with increasing magnetic field, as the skyrmion becomes stable and moves to the edge and then to the corner.
Hue-Heat Hypothesis: A Step forward for a Holistic Approach to IEQ

For many years, the different human factors contributing to the IEQ have been studied separately. Concerning thermal perception, although it is widely accepted that thermal comfort can be influenced by the concomitant stimulation of non-tactile modalities, relatively few investigations have succeeded in delineating non-tactile stimulations such as visual ones. The hue-heat hypothesis is based on the idea that, when the spectral irradiance pattern at the observer's eye shows a great amount of short wavelengths, the space is perceived as cooler. Conversely, when long wavelengths are predominant, the space is perceived as warmer. This means that operating on light characteristics could help to improve thermal comfort for the occupants, with possible energy savings obtained by acting on the set-point temperature of HVAC systems. To verify this hypothesis, this paper deals with a subjective investigation carried out in a special mechanically conditioned test room equipped with white-tuning LED sources. The investigated subjects were exposed to two different light scenes consisting of warm (3000 K) and cool (6000 K) light sources at a fixed task illuminance value. Preliminary results seem to demonstrate that cool light is effective in shifting the perceived thermal sensation toward cool, with a general increase in the number of people under neutral conditions.

Introduction

Only in recent years has the application of human factors principles established the need to rethink the design of the whole indoor built environment [1], which should combine low energy costs with high IEQ [2] (as a result of thermal, acoustic and visual comfort, and indoor air quality) and sustainability requirements [3,4]. IEQ strictly affects the overall building energy performance, as expressed again in the 2018/844 European Directive [5], and it exhibits an antagonistic relationship with respect to energy saving requirements.
This means that the optimization of a single IEQ component should also consider possible antagonistic or synergic effects with the other IEQ components. This is also necessary to reduce the onset of related illnesses and to optimize people's performance [6,7]. Concerning thermal perception, although it seems to be a common belief that thermal comfort can be influenced by the concomitant stimulation of non-tactile modalities, relatively few investigations have succeeded in delineating non-tactile stimulations such as visual ones. The belief that colour can impart such effects as apparent warmth is widespread and ancient [8]. A popular hypothesis is that lights or surfaces whose dominant frequencies are toward the red end of the visual spectrum are "warm", and those toward the blue end are "cool". This is the concept of the hue-heat hypothesis [8,9]. As stressed by Candas and Dufour [9] in a review devoted to multisensory interactions in thermal perception, several efforts have been made in the past to characterize the effect of light colour on thermal comfort [10,11], with experiments giving conflicting results. From this perspective, the preliminary studies on the interaction between light colour and thermal sensation carried out by Fanger in the 70's sound almost astonishing. Although he found in 16 subjects that a slightly lower ambient temperature (about 0.4 °C) was preferred in extreme red light compared to extreme blue light, he concluded that this "did not seem to have any practical significance on man's thermal comfort" [10]. In addition, it is still unclear whether the colour of light affects the thermal sensation only under microclimatic conditions close to thermal neutrality, as recently investigated by Toftum et al. [12].
In particular, on a sample of 44 subjects (16 females) exposed to different lighting scenarios by operating on the Correlated Colour Temperature (CCT) of LED sources at three levels of operative temperature (19, 22 and 27 °C), they concluded that CCT was associated with thermal sensation at the thermally neutral condition, but not when subjects felt slightly cool or slightly warm. As also remarked by Huebner et al. [13] and by Wang et al. [14] in two more recent papers, existing research is ambiguous regarding the association between coloured light/indoor surfaces and thermal perception. In addition, the different methodologies and protocols adopted in different studies complicate the comparability between studies and the ability to draw final conclusions [13]. This is especially true because previous studies also suffered from methodological issues, such as insufficient control of illuminance levels, of the microclimatic parameters according to the Standards in the field, and of other factors affecting thermal comfort [14]. To verify the hue-heat hypothesis, this investigation focuses on the analysis of preliminary results obtained by a combined microclimatic and subjective investigation carried out in a special mechanically conditioned test room provided with white-tuning LED sources under winter conditions and with two different light scenes.

Test room lighting characterization

Tests were performed in the Photometry and Lighting Laboratory of the Department of Industrial Engineering of the University of Naples Federico II (Italy). It is an L-shaped room composed of two different rectangular parts. The wider space (see figure 1, top) is equipped with a false ceiling, where different light sources are installed. Among these sources are the two recessed LED luminaires used for the experiment. This area is a neutral environment: three white curtains cover the perimeter walls and another one, once closed, divides the two parts of the room.
A desk and a chair are placed here. In the smaller space (see figure 1, bottom) there is the DALI (Digital Addressable Lighting Interface) control unit necessary to manage the luminaires. It consists of a touch panel that allows different light scenes to be easily set by varying the luminous flux emission of the luminaires and their Correlated Colour Temperature (CCT). The LEDs used in this investigation were white-tuning ones, which allow the CCT to be changed in the range 3000 K–6000 K. According to the technical sheet provided by the manufacturer, the luminaire characteristics are the following: luminous flux = 4280 lm, power = 51 W, Colour Rendering Index (CRI) > 80. For the experiment two light scenes were set: Scene 1, characterized by a CCT of 3000 K, and Scene 2, characterized by a CCT of 6000 K. Figure 2 reports the normalized spectral power distributions of the luminaires when the CCT is set at 3000 K and 6000 K, respectively. They were measured by means of a Konica Minolta CS-2000 spectroradiometer. For both CCTs, the luminous flux of the luminaires was regulated so that the corresponding illuminance at the work plane was equal to about 300 lx (consistent with a reading task [15]). To verify this, the illuminance was measured at point P1 (see Figure 1) by means of a Konica Minolta T-10A luxmeter.

Test-room microclimatic characterization

The microclimatic characterization of the test room was carried out by measuring all the physical parameters affecting the thermal sensation (global and local) by means of a special Comfort Data Logger INNOVA 1221 provided with sensors for the air temperature, the plane radiant temperatures, the air velocity, the dew point and the floor temperature. In addition, three HOBO U12 Temp/RH/2 sensors for the measurement of air temperature and relative humidity were used.
All sensors were compliant with ISO 7726 accuracy requirements [16]. The measurements were carried out following a special and robust protocol designed in accordance with the International Standards in the field [17] and involved the main physical variables affecting the thermal sensation [18]. The sensors were installed on a tripod (see figure 3) placed close to the occupant, as prescribed by ISO Standard 7726 [16]. The calibration of the test room was carried out by setting the HVAC system (a split unit by AERMEC) in the range from 18 °C to 25 °C and then measuring all the physical quantities every minute for 15 minutes. The procedure was repeated three times to verify the attainment of steady-state conditions, and the homogeneity of the environmental conditions was preliminarily verified by measuring the physical quantities in different positions. Data summarized in table 3 show that the microclimatic conditions of the whole experimental campaign are typical of quite uniform conditions, as confirmed by the very close values of the air temperature and mean radiant temperature. In addition, the low SD values of the four physical quantities demonstrate the careful control of the microclimatic conditions, with negligible effects on the subjective survey.

Experimental procedure

81 subjects aged between 18 and 35 (41 females and 40 males) took part in the experiments. None of them had a background in the lighting field or revealed health or psychological (e.g. anxiety or stress) problems. Subjects were led one by one into the test room (set at 20 °C in this investigation) and were invited to fill in a short questionnaire with general information (e.g. age, height, weight, nationality, possible health problems). Then they were asked to stay inside the test room for 10 minutes to adapt to the environmental conditions. During this short period, they were also invited to play a word puzzle.
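The steady-state verification described above, in which repeated measurement runs must agree before the conditions are accepted as stable, can be sketched as a simple tolerance test. The 0.3 °C tolerance and the readings below are illustrative assumptions, not values from the study:

```python
def is_steady_state(runs, tolerance=0.3):
    """Return True if the pooled readings from all repeated runs vary by
    no more than `tolerance` (illustrative threshold, in deg C).

    `runs` is a list of lists: one series of readings (e.g. air
    temperature sampled each minute) per repetition of the procedure.
    """
    readings = [r for run in runs for r in run]  # pool all repetitions
    return max(readings) - min(readings) <= tolerance

# Three hypothetical runs of air-temperature readings at a 20 deg C set point:
runs = [
    [20.0, 20.1, 20.0, 19.9],
    [20.1, 20.0, 20.1, 20.0],
    [19.9, 20.0, 20.0, 20.1],
]
print(is_steady_state(runs))  # True: total spread is about 0.2 deg C
```

A per-run check (each repetition internally stable) could be added in the same style; the pooled version above is the stricter of the two.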
This served as a distraction from the surrounding environment, so that the subjects would not retain a memory of it in the subsequent test. After the subjects had experienced the test room conditions, a sound signal was given to prompt them to fill in a questionnaire focused on thermal perception. After having completed the questionnaire, the subjects were invited to leave the test room for 10-15 minutes in order to change the light scene. At this point the subjects were accompanied again into the test room, and the procedures of adaptation and administration of the questionnaire were repeated.

Questionnaire description

A special questionnaire, designed with the assistance of a team of psychologists and doctors [19], was administered to each interviewee and specifically adapted to make it fast to complete. The questionnaire consists of two sections:

• personal information: in this section subjects have to describe their clothing at the moment of the survey;
• thermal comfort: the questions of this section have been formulated in compliance with the recommendations of the ISO 10551 Standard [20] and deal with the thermal status in terms of perception, evaluation and preference scales, as shown in figure 4.

The questions also deal with humidity and thermal preference. A final question is focused on the possible effect of the environmental conditions on the playing ability (not discussed here). In the present investigation only the answer on thermal perception (overall thermal state) has been considered. It is expressed in terms of a Thermal Sensation Vote (TSV) on the typical 7-point scale [18,21], from -3 (cold) to +3 (hot).

Fig. 4. The section of the questionnaire devoted to the thermal comfort assessment.

Statistical analysis

To verify the significance of the hue-heat hypothesis from the subjective investigations, the well-known two-tailed Student's t-test has been used [22].
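As an illustration of how votes on the 7-point TSV scale can be tallied into the percentage distributions reported later, here is a minimal sketch; the example votes are hypothetical, not data from the study:

```python
from collections import Counter

# The 7-point thermal sensation scale of ISO 7730 / ISO 10551
TSV_LABELS = {
    -3: "cold", -2: "cool", -1: "slightly cool", 0: "neutral",
    1: "slightly warm", 2: "warm", 3: "hot",
}

def vote_percentages(votes):
    """Percentage of respondents per thermal sensation category."""
    counts = Counter(votes)
    n = len(votes)
    return {TSV_LABELS[v]: round(100 * counts[v] / n) for v in sorted(TSV_LABELS)}

# Hypothetical votes from ten respondents under one light scene:
votes = [0, 1, 0, 2, 1, 0, -1, 1, 0, 2]
print(vote_percentages(votes))
```

With these votes the function reports 40% neutral, 30% slightly warm and 20% warm, i.e. the same kind of breakdown shown in the paper's tables.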
In particular, by means of special Matlab functions, the following parameters were calculated:

- the 2-tailed Student's t-distribution P(T≤t);
- the 2-tailed inverse of the Student's t-distribution P(μ), assuming a statistical significance μ = 0.10;
- the confidence 1-P(T≤t).

To verify that the differences among the samples were not due to randomness, the occurrence of the condition P(T≤t) < P(μ) was finally checked.

Thermal comfort indices calculation

To allow the calculation of the thermal comfort indices [18], personal parameters such as the metabolic rate and the clothing insulation were evaluated according to the ISO 8996 [23] and ISO 9920 [24] Standards, respectively. The metabolic rate was fixed at 1.3 met, consistent with sedentary activities [19,23], whereas the basic clothing insulation values were evaluated from the questionnaires (see table 2) and corrected for the effect of air velocity and body movement [24,25]. Finally, the calculation of the PMV and PPD indices, required for the objective assessment of overall thermal comfort conditions according to ISO Standard 7730 [18], was carried out from the measured values by means of the TEE package [26,27]. This is a special software package devoted to the assessment of the thermal environment in agreement with the International Standards on the Ergonomics of the Thermal Environment.

Results and discussion

The results of the subjective investigation in terms of thermal perception are reported in tables 3 and 4 and in figure 5. In table 3 the PMV values calculated from the instrumental surveys and the real clothes worn by each interviewee have also been reported. The data summarized in table 3 seem to confirm the hue-heat hypothesis for the sample of investigated subjects, the thermal sensation votes recorded under cool light conditions (6000 K) being systematically lower than those observed in the case of warm light (3000 K).
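The significance check described above can be reproduced outside Matlab. The sketch below computes the pooled two-sample t statistic that underlies the test; the TSV samples are hypothetical, and obtaining the exact probability P(T≤t) additionally requires the t-distribution CDF (available, e.g., in scipy.stats):

```python
from statistics import mean, variance

def two_sample_t(a, b):
    """Pooled (equal-variance) two-sample Student's t statistic."""
    na, nb = len(a), len(b)
    # Pooled variance with na + nb - 2 degrees of freedom
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# Hypothetical TSV votes under the warm (3000 K) and cool (6000 K) scenes:
tsv_warm = [1, 1, 0, 1, 2, 0]
tsv_cool = [0, 0, 1, 0, -1, 0]
t = two_sample_t(tsv_warm, tsv_cool)
print(round(t, 2))  # 2.08
```

The resulting |t| would then be compared with the two-tailed critical value of the t-distribution at μ = 0.10, which is what the condition P(T≤t) < P(μ) expresses.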
In addition, these differences are not to be attributed to aleatory phenomena, but are statistically significant, as confirmed by the verification of the t-test condition P(T≤t) < P(μ=0.10) for the overall sample and for each gender.

Table 3. Summary of TSV from the subjective investigation, comparison with PMV values obtained by the objective survey, and main statistical parameters. (W) light scene at 3000 K, (C) light scene at 6000 K. SD = Standard Deviation.

In contrast to Toftum et al. [12], and similarly to what was found by Wang et al. [14], our findings seem to demonstrate that CCT is associated with thermal sensation even under incipient warm discomfort conditions. In particular, with reference to the sample as a whole, for the light scene at 3000 K the thermal sensation from the questionnaires is typical of slightly warm conditions (mean TSV = 0.74, on the edge of the class C comfort zone [18]), with a percentage of persons who voted TSV≥1 equal to 59% (see table 4). In the presence of cool light (6000 K) the mean TSV decreases to 0.32 and the percentage of those who felt slightly warm or warm decreases to 40%. In addition, the percentage of respondents under neutral conditions and of those who felt slightly cool increased from 34 to 46% and from 5 to 14%, respectively. A reasonable explanation for the disagreement with Toftum et al.'s findings [12] could be the choice of the illuminance level. Unlike the Danish team, who worked at 1000 lx (a common value for health care or special industrial tasks), and similarly to the Wang et al. investigation [14], the present study was carried out at 300 lx, which is a typical value for some educational tasks [15].
Table 4. Percentage distribution of thermal sensation votes (%). (W) light scene at 3000 K, (C) light scene at 6000 K.

                 Females      Males        Overall
TSV              (W)   (C)    (W)   (C)    (W)   (C)
slightly cool      9    14      3    13      5    14
neutral           34    51     33    41     34    46
slightly warm     34    31     38    38     36    35
warm              20     3     26     8     23     5
hot                0     0      0     0      0     0

In agreement with several literature studies [12,28,29,30], which seem to confirm a certain gender-related perception of thermal conditions, resulting in a high sensitivity of women to low temperatures [28,29], this investigation has revealed a more pronounced effect of colour temperature on women's thermal perception. Based on the data in table 4, in the case of warm light 54% of the women (64% of the men) voted warm or slightly warm, with 34% of the respondents under neutral conditions (33% for men), whereas under cool light conditions the percentage of women who voted TSV≥1 decreased to 34% (46% in the case of men). Consequently, the percentage of women who felt neither cold nor warm increased from 34 to 51% (from 33 to 41% in the case of men). Finally, the data in table 3 highlight a certain inconsistency between the subjective investigation by questionnaires and the objective assessment carried out by means of the PMV index. In particular, the TSV values obtained by the subjective analysis were systematically positive and generally consistent with comfort conditions only for cool light. On the contrary, the PMV values were slightly negative and consistent with comfort levels typical of category C [18]. This is because the PMV index accounts neither for gender-related differences nor for other non-tactile stimulations such as light. In addition, the standard deviation values of the TSV votes were close to one point (ranging from 0.73 to 1.01 depending upon the group), revealing a significant inter-individual difference within the same group of interviewees.

Conclusions

In a general context where the attainment of energy saving goals (e.g.
as in nZEB) has to be consistent with high indoor environmental quality levels, it is impossible to maximize a single IEQ component (e.g. thermal or visual) without considering possible mutual interactions or negative effects on energy costs. From this perspective, studying the mutual interaction of the four aspects of the IEQ is a crucial need in order to obtain high overall comfort levels, protect the health and productivity of the occupants, and avoid unbalanced design solutions resulting in negative effects on IEQ and on the building energy demand. In this paper the hue-heat hypothesis, based on the idea that light and colours can affect thermal perception, has been investigated. Based on this assumption, changing light characteristics could help to improve thermal comfort for the occupants by operating on the set-point temperature of HVAC systems, increasing the warmth (cold) sensation during winter (summer). Based upon preliminary results obtained in a special mechanically conditioned test room provided with white-tuning LED sources, we can confirm that, under winter conditions, cooler light (6000 K) induces a shift of the thermal sensation toward cold. The effect seems to be more pronounced in the case of women, whose percentage under neutral conditions almost doubled when changing the CCT from 3000 K (warm light) to 6000 K (cool light). The results of this preliminary investigation will be integrated with the analysis of the answers given on the evaluation and preference scales. This is especially to verify whether the reduction of the thermal sensation votes induced by cool light is associated with higher preferred temperatures and different experienced comfort conditions. Further investigations will also address possible effects of lighting parameters (e.g. illuminance and/or CCT) under different microclimatic conditions (e.g. near thermal neutrality or cold discomfort, also accounting for local thermal discomfort).
Additional studies will be carried out on a wider and more heterogeneous sample of interviewees, also considering other human factors (e.g. psychological issues, stress, anxiety). Finally, further efforts will be devoted to the assessment of the potential energy savings for heating and cooling obtained by operating on the CCT of the lighting system.
Prognostic discrimination of subgrouping node-positive endometrioid uterine cancer: location vs nodal extent Background: The 2009 International Federation of Gynecologists and Obstetricians elected to substage patients with positive retroperitoneal lymph nodes as IIIC 1 (pelvic lymph node metastasis only) and IIIC 2 (paraaortic node metastasis with or without positive pelvic lymph nodes). We have investigated the discriminatory ability of subgrouping patients with retroperitoneal nodal involvement based on location, number, and ratio of positive nodes. Methods: For 1075 patients with stage IIIC endometrioid corpus cancer abstracted from the Surveillance, Epidemiology, and End Results databases for 2003–2007, Kaplan–Meier analyses, Cox proportional hazard models, and other quantitative measures were used to compare the prognostic discrimination for disease-specific survival (DSS) of nodal subgroupings. Results: In univariate analysis, the 3-year DSS were significantly different for subgroupings by location (IIIC 1 vs IIIC 2; 80.5% vs 67.0%, respectively, P=0.001), lymph node ratio (⩽23.2% vs >23.2%; 80.8% vs 67.6%, P<0.001), and number of positive lymph nodes (1, 2–5, >5; 79.5%, 75.4%, 62.9%, P=0.016). The ratio of positive nodes showed superior discriminatory substaging in Cox models. Conclusion: Subgrouping of stage IIIC patients by the ratio of positive nodes, either as a dichotomized or continuous parameter, shows the strongest ability to discriminate survival, controlling for other confounding factors. Uterine cancer is the most common pelvic gynaecologic malignancy in the United States. Based on a Gynaecologic Oncology Group study of surgical staging of clinical stage I endometrial cancer, 9.3% of patients had positive pelvic lymph node involvement whereas 5.5% had positive paraaortic lymph nodes, with a total of 11.3% having either pelvic and/or paraaortic retroperitoneal metastasis (Creasman et al, 1987).
The most recent modification of the International Federation of Gynecologists and Obstetricians (FIGO) staging system for endometrial cancer has elected to subclassify patients with retroperitoneal lymph node involvement (without other sites of distant metastasis) into two subgroups based on the location of the metastatic lymph nodes. Patients with only pelvic lymph node involvement are staged as IIIC 1 whereas those with positive paraaortic lymph nodes (with or without positive pelvic lymph nodes) are stage IIIC 2 (Pecorelli, 2009). Two recent Surveillance, Epidemiology, and End Results (SEER)-based analyses have demonstrated worse outcome for patients with stage IIIC 2 vs IIIC 1 disease (Lewin et al, 2010;Cooke et al, 2011). However, these studies were limited because there was no accounting for confounders such as the number of positive nodes (Touboul et al, 2001;Takeshima et al, 2006;Fujimoto et al, 2009) or the lymph node ratio, which have also been shown to be prognostically important (Tang et al, 1998;Mariani et al, 2001a;Yasunaga et al, 2003). In this current study, we investigated the prognostic significance of the new subdivision of stage IIIC disease and compared the discriminatory ability of location, number, and ratio of positive lymph nodes, controlling for other confounding factors. The identification of other subgroupings based on characteristics of lymph node involvement may have therapeutic implications. MATERIALS AND METHODS The SEER Program database of the United States National Cancer Institute for endometrioid uterine cancer patients during the period from 1 January 2004 to 31 December 2007 was utilised (SEER, 2010, http://www.seer.cancer.gov). Patients with non-endometrioid histologies were excluded. This time period was selected because in earlier periods patients with involved paraaortic lymph nodes were included with patients with stage IV disease. Of the 22 907 patients, 1235 (5.4%) had IIIC disease.
A total of 160 patients who lacked information on lymph node dissection and/or lymph node distribution were excluded, leaving 1075 patients as the study cohort. All but four patients underwent some type of hysterectomy (three had no hysterectomy and for one the type of uterine surgery was not specified). Data on demographic, clinical-pathological, and treatment parameters were abstracted. Patients were divided into nodal subgroups based on the number of positive nodes (1, 2–5, and >5 nodes), total number of nodes examined (≤10, 11–20, >20), and ratio of positive nodes, expressed as the percentage of positive lymph nodes to the total number of nodes examined (≤10%, 10–50%, >50%), as in our previous study. In addition, the nodal parameters were dichotomized by the median number of positive nodes (1, >1) and ratio of positive nodes (≤ or > the average of 23.2%) to permit comparison with the new FIGO dichotomized stage grouping. The primary endpoint of the study was the endometrial cancer disease-specific survival (DSS). Time to death was censored in patients who died from causes other than uterine cancer. Survival analyses were performed using the Kaplan–Meier method. Pearson's χ²- and Student's t-tests were employed to compare distributions of parameters between subgroups. Two-sided P-values of <0.05 were considered statistically significant. Pearson correlations were used to investigate for multiple collinearities between the subgrouping of lymph nodal involvement based on location, number of positive nodes, and ratio of positive nodes. Because of the potential correlation between the various subgroupings of lymph nodes, separate stepwise Cox regression models were employed entering only one of the three subgroupings of the lymph nodes in each model.
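The three nodal subgroupings described above (count of positive nodes, total nodes examined, and the positive-node ratio with its dichotomization at the 23.2% average) can be expressed as a small classification helper. This is an illustrative Python sketch; the function name and return format are ours, with only the category boundaries taken from the text:

```python
def node_subgroups(positive, examined, ratio_cutoff=23.2):
    """Classify a patient's nodal involvement by the study's three schemes:
    number of positive nodes, total nodes examined, and the ratio of
    positive to examined nodes (as a percentage), plus the dichotomization
    at the cohort-average ratio of 23.2%."""
    if examined <= 0 or positive < 1 or positive > examined:
        raise ValueError("need 1 <= positive <= examined and examined > 0")
    ratio = 100.0 * positive / examined
    return {
        "ratio": ratio,
        "count_group": "1" if positive == 1 else ("2-5" if positive <= 5 else ">5"),
        "examined_group": "<=10" if examined <= 10 else ("11-20" if examined <= 20 else ">20"),
        "ratio_group": "<=10%" if ratio <= 10 else ("10-50%" if ratio <= 50 else ">50%"),
        "ratio_dichotomized": "low" if ratio <= ratio_cutoff else "high",
    }
```

For example, the cohort-average patient (3 positive of 17 examined, ratio about 17.6%) falls in the 2–5 count group and the low half of the ratio dichotomization.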
Because preliminary analysis demonstrated a significantly higher number of positive lymph nodes for patients with IIIC 2 vs IIIC 1 disease, comparisons were also made between the three nodal subgroupings for a subset of 487 patients with only one positive lymph node. Three additional quantitative measures were used to compare the prognostic discrimination for DSS for nodal subgrouping (Gimotty et al, 2005). All statistical analyses were performed using the SPSS Statistics GradPack 17.0, Release 17.0.0 (3 August 2008, IBM, Armonk, NY, USA). RESULTS The demographic and clinical characteristics of the 1075 patients with stage IIIC endometrioid corpus cancers are delineated in Table 1. A total of 725 patients (67.4%) had positive pelvic nodes only (stage IIIC 1) whereas 350 (32.6%) had paraaortic node involvement with or without positive pelvic nodes (stage IIIC 2). The average number of lymph nodes examined was 17.3 (range: 1–90). The average number of positive nodes was three (range: 1–82) and the average lymph node ratio was 23.2% (range 0.01–100%). Adjuvant radiation therapy was given to 638 (59.3%) of the patients. The median follow-up was 18 months (mean 19.4, range 0–47). Significantly poorer DSS was seen in higher grade tumours (P<0.001) and with the lack of adjuvant radiation (P<0.001). In a separate analysis (data not shown) the patients were grouped by number of lymph nodes examined (<10, 10–20, ≥20). Even for the subgroup with ≥20 nodes examined, significantly lower DSS was seen with increasing number of positive nodes, stage IIIC 2 vs stage IIIC 1, and higher ratio of positive nodes (P<0.001). On multivariate analysis, grade of tumour (P<0.001), ratio of positive nodes (P = 0.005), adjuvant radiation (P<0.001), and marital status (P<0.01) were independent factors associated with DSS whereas location of the positive nodes, number of positive nodes, and number of nodes examined were not significant (Table 4).
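The Kaplan–Meier product-limit estimator behind these survival figures can be sketched in a few lines of standard-library Python. This is a minimal illustration of the method, not the SPSS implementation the authors used; deaths from other causes enter as censored observations, matching the DSS endpoint:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.
    times: follow-up time per patient; events: 1 = death from disease,
    0 = censored (e.g. death from another cause, per the study design).
    Returns [(t, S(t))] at each distinct event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = at_t = 0
        # group all observations tied at time t
        while i < len(data) and data[i][0] == t:
            at_t += 1
            deaths += data[i][1]
            i += 1
        if deaths:
            s *= (n_at_risk - deaths) / n_at_risk  # product-limit step
            curve.append((t, s))
        n_at_risk -= at_t  # censored and dead patients leave the risk set
    return curve
```

Comparing the resulting step curves between nodal subgroups (e.g. IIIC 1 vs IIIC 2) is exactly the univariate comparison reported above; a log-rank test would supply the P-values.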
The hazard ratio for the ratio of positive nodes as a continuous variable was 3.10. Pearson correlation coefficients were employed to look for collinearity between the subgroupings of positive lymph nodes (location, number of positive nodes, and node ratios) and showed small correlations between these variables (0.106, 0.263, and 0.321, respectively). Repeat multivariate analyses were performed, entering each of the three nodal subgroupings separately. A subset analysis was performed on those patients with one positive node (n = 487), of which 394 (81.7%) had one positive pelvic node whereas 89 (18.3%) had one positive paraaortic node (Supplementary Table S1). There was no significant difference in 3-year DSS for patients with one positive pelvic node compared with those with one positive paraaortic node (80% vs 77.3%, respectively, P = 0.675, Figure 2). These findings were confirmed on multivariate analysis (Supplementary Table S2). The relative discriminatory ability of subgrouping patients with IIIC disease by location (IIIC 1 vs IIIC 2), number of positive nodes (1 vs >1), and ratio of positive nodes (⩽23.2% vs >23.2%) is shown in Supplementary Table S3. The hazard ratio for the ratio of positive nodes was 2.20, for location (IIIC 1 vs IIIC 2) 1.72, and for 1 vs >1 positive nodes 1.15 (Table 5). DISCUSSION Uterine cancer is the most common gynaecologic malignancy in the United States, with 43 470 new cases and 7950 deaths expected for 2010 (Jemal et al, 2010). There has been an increase in the number of deaths, particularly in those with advanced stage (III/IV) disease (Ueda et al, 2008). Although the majority have an excellent prognosis, with 5-year survival rates between 80% and 91%, the approximately 5–10% of patients presenting with retroperitoneal lymph node involvement have inferior survival (Creasman et al, 2006;Lewin et al, 2010). Those with stage IIIC disease have survivals ranging from 10% to 75%. This, in part, reflects the heterogeneity in nodal and other prognostic parameters in stage IIIC cancers.
The recent revision of FIGO staging of endometrial cancer has subdivided retroperitoneal node-positive patients into two subgroups based solely on the location of the positive nodes (Pecorelli, 2009). Prior studies have shown a wide variation in survival based on lymph node location. In several series, patients with involvement of pelvic lymph nodes only had nonsignificant differences in survival compared with those with positive paraaortic lymph nodes (McMeekin et al, 2001;Mariani et al, 2002;Otsuka et al, 2002;Havrilesky et al, 2005;Hoekstra et al, 2009;Lewin et al, 2010;Todo et al, 2011). In other series, better survival was noted for patients with positive pelvic nodes only (Morrow et al, 1991;Hirahatake et al, 1997;Yokoyama et al, 1997;Watari et al, 2005;Fujimoto et al, 2007;Karube et al, 2010;Lewin et al, 2010). Furthermore, one recent series has shown a better survival for patients with positive paraaortic lymph nodes compared with those with positive pelvic lymph nodes (80% vs 55% at 5 years), although the difference was not statistically significant (Klopp et al, 2009). However, many of these studies did not account for the number of positive and ratio of positive nodes in addition to the location of positive nodes. Our current study analysed 1075 patients with endometrioid uterine cancer to confirm the independent prognostic significance of this new subgrouping based on node location. In addition, we compared the prognostic discrimination of nodal location with the number of positive and ratio of positive nodes. Nodal location was found to be a significant prognostic factor both in univariate (Table 2, Figure 1A) and multivariate analysis when it was entered as the only term relating to the lymph nodes. However, the new subgrouping by nodal location was not shown to be statistically significant in multivariate analysis when the number of positive nodes and ratio of positive nodes were included (Table 4).
As patients in our study with stage IIIC 2 disease had a higher number of positive lymph nodes than those with stage IIIC 1 disease (4.2 vs 2.0, respectively, P<0.001), we performed a separate analysis for those with only one positive retroperitoneal lymph node. This subgroup analysis showed no significant difference in DSS based on nodal location (Figure 2), while the ratio of positive nodes remained significant (Supplementary Table S2). Our findings on the prognostic significance of the number of positive lymph nodes in univariate analysis (Figure 1B) confirm previous reports (Morrow et al, 1991;Takeshima et al, 1994;Touboul et al, 2001). Watari et al (2005) demonstrated a better 5-year survival rate for patients with one positive paraaortic lymph node group compared with those with ≥2 positive paraaortic lymph node groups (60.4% vs 20.0%, respectively, P = 0.0319) whereas Fujimoto et al (2009) reported better 5-year relapse-free survival for patients with one positive pelvic lymph node site compared with those with ≥2 positive sites (81.3% vs 41.2%, respectively, P = 0.04). We also showed the prognostic significance of the ratio of positive nodes to the total number of lymph nodes examined, which confirms our prior report. In the current study, the ratio of positive nodes was significant whether it was entered as a continuous variable in multivariate analysis, dichotomized at the mean of 23.2%, or subgrouped based on ≤10%, 10–50%, or >50% involvement. The results from single-institution retrospective studies have also demonstrated the prognostic significance of the ratio of positive lymph nodes (Tang et al, 1998;Mariani et al, 2001b). Studies in other malignancies have also attempted to define the most prognostically significant subgroupings for lymph node-positive patients.
Various classification schemes for lymph nodes in gastric cancer have been based on the distance, number, and anatomical location of metastatic nodes as well as the site of the primary tumour (Kajitani, 1981;Hermanek and Sobin, 1992;Adachi et al, 1995;Sobin and Wittekind, 1997;de Manzoni et al, 1999). Classification of involved regional lymph nodes in gastric cancer by the ratio of positive nodes was found to represent a simple, reliable, and reproducible staging system (Yu et al, 1997;Liu et al, 2007;Marchet et al, 2007;Persiani et al, 2008;Zhang et al, 2009;Maduekwe et al, 2010;Sianesi et al, 2010). The major shortcoming of any substaging of endometrial cancer based on measurements of nodal involvement is the lack of standardisation of the lymphadenectomy. There is wide variation in the extent of nodal dissection reflecting both surgeon's bias and patient selection. For example, this could include performing a more limited lymph node dissection following a resection of an involved bulky node, or performing a more extensive nodal dissection in patients without bulky nodes or co-morbidities (Smith et al, 2008). The issue of standardisation of lymph node dissection has been thoroughly reviewed previously (Boronow, 1980;Kilgore et al, 1995;Chang et al, 2008;Mariani et al, 2009). As discussed by Mariani et al (2009) in their commentary on the surgical staging of endometrial cancer, a standardisation of lymphadenectomy including the anatomical extent of the paraaortic lymph node dissection is lacking. The minimum requirement of lymphadenectomy, either in terms of nodal stations resected or total number of lymph nodes examined, has not been unambiguously defined in the FIGO staging system. Recommendations as to the minimum number of lymph nodes examined for adequate nodal staging have been in effect for colon cancer (12 nodes) (Nelson et al, 2001) and gastric cancer (15 nodes) (Green et al, 2010). 
We are in agreement with the NCCN guidelines for the treatment of uterine cancer recommending a complete pelvic and paraaortic lymphadenectomy (unless technically unfeasible or medically contraindicated), adhering to the ACOG surgical policy (ACOG, 2005). Two prospective randomized trials have failed to demonstrate a survival advantage from pelvic lymphadenectomy in endometrial cancer (Benedetti Panici et al, 2008;Kitchener et al, 2009). However, the inclusion of low-risk patients, lack of standardisation of systemic postoperative treatments, and minimal or absent paraaortic lymphadenectomy are limitations of these studies (Amant et al, 2009;Uccella et al, 2009;Seamon et al, 2010). A recent retrospective study in patients with stage IIIC endometrial cancer demonstrated the therapeutic significance of systematic lymphadenectomy including both pelvic and paraaortic node dissection (Todo et al, 2011). Additional limitations of our study include the lack of information on other patient and treatment factors that may be of prognostic significance in patients with retroperitoneal node involvement. In particular, there is a lack of information on the extent of the pelvic and/or paraaortic lymphadenopathy, the extent of surgical staging, the surgeon's subspecialty, the extent of lymph node debulking, involvement of other pelvic extrauterine sites including the adnexa and peritoneal cytology, involvement of the uterine cervix, depth of myometrial invasion, lymph vascular space invasion, and size of the lymph nodes. Our study was limited to patients with endometrioid histology and relatively short follow-up, and there was no central pathology review. There is also a lack of information on sites of recurrence and the use of adjuvant systemic chemotherapy and hormonal therapy.
Figure 2 Kaplan–Meier disease-specific survival for stage IIIC endometrioid cancer patients (n = 487) with only one positive node, based on lymph node location (pelvic vs paraaortic); P = 0.675.
However, the recent years of diagnosis of the patients included in this study should make them more likely to have received adjuvant treatment with chemotherapy or volume-directed radiation therapy and chemotherapy than studies including earlier cohorts of SEER patients. Other general limitations of SEER-based research, including variation in data registry, underreporting of radiation therapy, lack of details on adjuvant radiation therapy (fields treated and doses), and selection bias, have recently been reviewed by Yu et al (2009). The strengths of our analysis include the large number of recently diagnosed patients with node-positive endometrioid uterine cancers studied within a wide geographic distribution in the United States. In addition, our univariate and multivariate analysis of the three major subgroupings of stage IIIC patients (based on the new FIGO substaging, number of positive lymph nodes reported, and lymph node ratio) has permitted identification of the subgroupings with better abilities to discriminate DSS in this heterogeneous group of stage IIIC patients. In summary, better classification of retroperitoneal lymph node-positive endometrioid uterine cancer patients may permit the identification of more homogeneous subgroupings for prognostic purposes, stratification in clinical trials, and possibly better selection for individualised adjuvant combined-modality treatments (Mariani et al, 2004). Higher-risk subgroups, for example those with multiple pelvic and paraaortic nodal involvement, may require more intense chemotherapy regimens, whereas those with limited nodal disease may best be managed with volume-directed radiation therapy and less toxic systemic treatment protocols.
Our study has confirmed the value of subgrouping stage IIIC patients based on nodal location, number of positive lymph nodes, and ratio of positive nodes. However, based on multivariate and discrimination analyses, nodal ratio was a stronger discriminator for DSS than nodal location, controlling for other confounding factors including tumour grade and the use of adjuvant radiation therapy. If our results are validated in other patient databases, these findings may permit better modifications of the substaging of retroperitoneal lymph node-positive patients. However, it is stressed that standardisation of lymphadenectomy, including the boundaries of resection, uniform processing of the nodal specimens, and the criteria for adequacy of lymph node resection, is needed. Supplementary Information accompanies the paper on the British Journal of Cancer website (http://www.nature.com/bjc). This work is published under the standard license to publish agreement. After 12 months the work will become freely available and the license terms will switch to a Creative Commons Attribution-NonCommercial-Share Alike 3.0 Unported License.
Genetic Determinants for Pyomelanin Production and Its Protective Effect against Oxidative Stress in Ralstonia solanacearum Ralstonia solanacearum is a soil-borne plant pathogen that infects more than 200 plant species. Its broad host range and long-term survival under different environmental stress conditions suggest that it uses a variety of mechanisms to protect itself against various types of biotic and abiotic stress. R. solanacearum produces a melanin-like brown pigment in the stationary phase when grown in minimal medium containing tyrosine. To gain deeper insight into the genetic determinants involved in melanin production, transposon-inserted mutants of R. solanacearum strain SL341 were screened for strains with defective melanin-producing capability. In addition to one mutant carrying a disruption in a gene already known to be involved in pyomelanin production (viz., strain SL341D, disrupted in the 4-hydroxyphenylpyruvate dioxygenase gene), we identified three other mutants with disruptions in the regulatory genes rpoS, hrpG, and oxyR, respectively. Wild-type SL341 produced pyomelanin in minimal medium containing tyrosine whereas the mutant strains did not. Likewise, homogentisate, a major precursor of pyomelanin, was detected in the culture filtrate of the wild-type strain but not in those of the mutant strains. The gene encoding 4-hydroxyphenylpyruvate dioxygenase exhibited significantly higher expression in wild-type SL341 than in the mutant strains, suggesting that pyomelanin production is regulated by three different regulatory proteins. However, analysis of the gene encoding homogentisate dioxygenase revealed no significant difference in its relative expression over time between wild-type SL341 and the mutant strains, except for SL341D at 72 h incubation. The pigmented SL341 strain also exhibited a higher tolerance to hydrogen peroxide stress compared with the non-pigmented SL341D strain. Our study suggests that pyomelanin production is controlled by several regulatory factors in R.
solanacearum to confer protection under oxidative stress. Introduction Ralstonia solanacearum is a soil-dwelling beta-proteobacterium that causes deadly wilt disease in over 200 plant species across 50 different plant families [1]. It causes diseases of many commercially important plants, such as brown rot of potato; wilt of tomato, tobacco, and eggplant; and Moko disease of banana [2]. Based on its broad host range and wide geographical distribution, the pathogen holds the No. 2 position among the top ten plant pathogenic bacteria [3]. R. solanacearum invades host plants as a parasite and survives in soil or water as a saprophyte [4,5]. Its broad host range and survival in soil or water for extended periods suggest that this pathogen may adopt a variety of mechanisms to confront both biotic and abiotic stress conditions. Although melanins are not essential for the growth and survival of microorganisms, they provide their producers with advantages in coping with different types of adverse challenges, such as UV radiation [20], toxic free radicals [21], oxidative stress [22], toxic heavy metals [23], iron reduction [24], and extreme cold and hot temperatures [25]. Melanin pigment also has a role in the expression of virulence factors in Vibrio cholerae [26]. Additionally, melanins are known to protect pathogenic microorganisms from the host immune response [27,28]. R. solanacearum produces a melanin-like brown pigment in the stationary phase in tyrosine-containing minimal medium. The genome of R. solanacearum carries genes for the melanin biosynthesis pathway [29] and also has two genes encoding tyrosinase [30]. The presence of a pyomelanin pathway (Fig 1) and multiple tyrosinases signifies the importance of melanin in this bacterium's life cycle. Being a plant pathogen, R. solanacearum encounters oxidative challenge from its host during the plant infection process [31].
Therefore, we hypothesized that melanin confers a protective effect against the oxidative stress response of the host plant. The roles of the oxidative stress response regulator (OxyR) [32] and the DNA-binding protein from starved cells (Dps) [33] in oxidative stress in R. solanacearum have previously been described, but there is scant knowledge about the physiological role of melanin in this species. In this current study, we investigated the production of pyomelanin and its contribution to survival under oxidative stress in R. solanacearum. We also identified different genetic determinants that contribute to the regulation of pyomelanization in this bacterial species. Screening of melanin mutants and identification of the transposon insertion site A previously constructed transposon (Tn)-inserted mutant pool of SL341 [37] was screened to select mutants that produce little or no melanin relative to the wild-type SL341 strain. Mutants grown on MG agar plates were randomly selected and inoculated into 96-well culture plates containing 200 μl of minimal medium supplemented with tyrosine. The 96-well plates were incubated at 30°C in a shaking incubator at 200 rpm for 48 h. The mutant strains showing a reduced or non-pigmented phenotype relative to the wild type were selected and further confirmed on tyrosine-containing MG medium. The Tn insertion site in each mutant was identified according to a previously described method [37]. In brief, genomic DNA was extracted from each Tn mutant and randomly digested with SacI. The digested DNA was ligated into pUC119 [38] that had been restricted with the same enzyme. After completion of ligation, the recombinant plasmid was transformed into E. coli DH5α [39] and transformants were selected on LB agar plates containing kanamycin and ampicillin. Positive clones were selected from the plates and the recombinant plasmids were sequenced with Tn5-specific primers to identify the Tn-flanking DNA sequences.
For complementation of all the selected mutants, the wild-type copy of each mutated gene (the full length with its Shine-Dalgarno sequence) was amplified from genomic DNA of wild-type SL341 using gene-specific primers (S1 Table). PCR amplification was performed according to the following program: an initial denaturation step at 95°C for 5 min; 30 cycles of denaturation at 95°C for 30 s, annealing at the specified temperature (S1 Table) for 30 s, and extension at 72°C for 1 min; and a final extension step at 72°C for 7 min. The amplified PCR product of each gene was first cloned into the pGEM-T Easy vector (Table 1), and the DNA sequence of each gene was confirmed by DNA sequencing. Each gene was excised from the pGEM-T Easy vector with a specific restriction enzyme (Table 1) and subsequently subcloned into pRK415 under the lac promoter [40]. The recombinant plasmid carrying the corresponding gene (Table 1) was introduced into each mutant through triparental mating using E. coli HB101 [41] harboring pRK2013 as a helper plasmid [42]. Triparental mating was performed as previously described [37]. Spectrophotometric analysis of melanin production To investigate melanin production by R. solanacearum, the SL341 and mutant strains were grown in minimal medium supplemented with tyrosine at 30°C with shaking at 200 rpm. A 1 ml culture sample was collected from the main culture of these strains every 24 h and centrifuged at 13,000 ×g for 5 min. The cell-free supernatants were collected and their absorbance was measured at 400 nm using a spectrophotometer (Beckman Coulter, Brea, CA, USA). The experiment was performed in triplicate. High-performance liquid chromatography analysis High-performance liquid chromatography (HPLC) analysis of the culture filtrates of wild-type SL341 and its mutant strains was performed for homogentisate (HGA) detection.
The bacterial strains were grown in minimal medium supplemented with tyrosine at 30°C in a shaking incubator at 200 rpm. Thereafter, 1 ml of culture sample was collected from the main culture of each strain every 24 h. The samples were centrifuged at 13,000 ×g for 5 min and the supernatants were collected and filtered through a sterilized membrane filter (0.2 μm pore size; Corning, Tewksbury, MA, USA). Twenty microliters of each sample was injected onto an Agilent 1100 series HPLC system fitted with a Zorbax Eclipse Plus C18 column (5 μm particle size; 250 mm × 4.6 mm; Agilent, Santa Clara, CA, USA) and a photodiode array detector. The HPLC conditions for detection of tyrosine and HGA were as previously described [43]. In brief, water with 0.1% (v/v) trifluoroacetic acid was used as solvent A, and acetonitrile with 0.1% (v/v) trifluoroacetic acid as solvent B, at a flow rate of 1 ml/min. The following gradient was used: 8% solvent B for 12 min, gradient from 8% to 95% solvent B within 3 min, 95% solvent B for 1 min, gradient from 95% to 8% solvent B within 2 min, and finally 8% solvent B for 5 min. The total elution time was 23 min. Tyrosine and HGA were detected at 280 nm and 290 nm, respectively. Commercially available tyrosine and HGA (Sigma-Aldrich, St. Louis, MO, USA) were used as standards. Reverse transcription quantitative PCR study The expression levels of hppD, encoding 4-hydroxyphenylpyruvate dioxygenase, and hmgA, encoding homogentisate dioxygenase, were determined in the wild-type SL341 and mutant strains SL341D, SL341S, and SL341G using reverse transcription quantitative polymerase chain reaction (RT-qPCR). Total RNA was extracted from a 1 ml culture aliquot from each strain collected at different time points, using an RNA Hybrid-R extraction kit (GeneAll Bio Inc., Seoul, Korea).
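The stepwise elution gradient described above for the HPLC run is a piecewise-linear schedule of solvent B over the 23-min run; the helper below is a hypothetical sketch (not from the paper) that interpolates the %B at any time point:

```python
# (start_min, end_min, %B at start, %B at end), taken from the gradient text
GRADIENT = [
    (0, 12, 8, 8),     # hold 8% B for 12 min
    (12, 15, 8, 95),   # ramp 8% -> 95% B over 3 min
    (15, 16, 95, 95),  # hold 95% B for 1 min
    (16, 18, 95, 8),   # ramp 95% -> 8% B over 2 min
    (18, 23, 8, 8),    # re-equilibrate at 8% B for 5 min
]

def percent_b(t):
    """Percentage of solvent B at time t (minutes) by linear interpolation."""
    for t0, t1, b0, b1 in GRADIENT:
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    raise ValueError("time outside the 23-min run")
```

Halfway through the first ramp (t = 13.5 min), for example, the mobile phase is at 51.5% B.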
The RNA was eluted in 50 μl of RNase-free water and used directly as a template for cDNA synthesis using a Tetro cDNA synthesis kit (Bioline, London, UK), according to the manufacturer's instructions. Before cDNA synthesis, each RNA sample was treated with DNase I to remove any residual DNA contamination. The cDNA concentration and purity of each sample were measured on a NanoDrop 2000 spectrophotometer (Thermo Scientific, Wilmington, DE, USA). The samples were normalized to the V3 region of the reference 16S rRNA gene of R. solanacearum SL341, and amplification reactions were performed using a CFX384 Real-Time system (Bio-Rad, Hercules, CA, USA). The primers used for the RT-qPCR amplification reaction are given in S1 Table. Each reaction mixture contained SYBR Green Supermix (Bio-Rad), 4 μl of diluted cDNA template, 10 μM each of the forward and reverse primers, and RNase-free water. The thermal cycling included two reaction steps: an initial preheat for 3 min at 95°C, followed by 39 cycles of 95°C for 5 s, 55°C for 10 s, and 72°C for 35 s. The RT-qPCR data were displayed using the CFX Manager ver. 3.1 software. Each reaction was performed in triplicate. The RT-qPCR results for the individual genes were evaluated using the iCycler iQ Real-Time PCR detection system (Bio-Rad). The Ct values of the RT-qPCR products of each gene were used to determine the target cDNA concentration, based on relative comparison with the expression of the V3 region. Tukey's multiple comparison tests were used to compare the expression levels of hppD and hmgA in wild-type SL341 and in the mutant strains at the different incubation times. In vitro oxidative stress assay The oxidative stress survival of R. solanacearum was assessed by adding hydrogen peroxide to the bacterial cultures. The wild-type SL341 and SL341D mutant strains were grown in minimal medium with and without tyrosine in a 30°C shaking incubator at 200 rpm.
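The Ct-based relative quantification used in the RT-qPCR analysis above is conventionally computed with the 2^(-ΔΔCt) method; the paper does not spell out its formula, so the sketch below assumes this standard calculation and roughly 100% amplification efficiency:

```python
def fold_change(ct_target_test, ct_ref_test, ct_target_ctrl, ct_ref_ctrl):
    """2^(-delta-delta-Ct) expression of a target gene (e.g. hppD) in a test
    strain relative to a control strain (e.g. wild-type SL341), each
    normalized to a reference (here, the 16S rRNA V3 region).
    Assumes one doubling of product per cycle (assumption, not from the paper)."""
    delta_test = ct_target_test - ct_ref_test   # delta-Ct, test sample
    delta_ctrl = ct_target_ctrl - ct_ref_ctrl   # delta-Ct, control sample
    return 2.0 ** -(delta_test - delta_ctrl)    # delta-delta-Ct -> fold change
```

A fold change below 1 indicates lower expression of the target in the test strain than in the control, consistent with the reduced hppD expression reported for the mutants.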
A 1 ml sample was collected from each culture after 72 h of growth and centrifuged at 12,000 ×g for 5 min. The supernatants were discarded and the bacterial pellets were resuspended in 1 ml of sterile water. Viable cells in each sample were determined by culturing serial dilutions of the suspension on TTC agar plates. To evaluate tolerance to oxidative stress, hydrogen peroxide was added at a final concentration of 5 mM, 15 mM, 30 mM, or 50 mM to the remaining culture of each strain, in minimal medium with and without tyrosine, and the cultures were kept at 30°C in the shaking incubator for 1 h. Thereafter, the cultures were centrifuged, and the pellets were washed twice with sterile water and finally suspended in 1 ml of sterile water. Serially diluted cell suspensions of each culture were spread on TTC agar plates and kept at 30°C to determine the number of viable cells of each strain. The experiment was performed in triplicate. Tukey's multiple range test was used to compare the numbers of viable cells among the treated strains of R. solanacearum.

Results

Selection of melanin-defective mutants of R. solanacearum

R. solanacearum SL341 produced blackish-brown, melanin-like pigments in medium supplemented with tyrosine (Fig 2A). Pigmentation was observed in most of the R. solanacearum strains tested, approximately 37 strains covering various phylotypes (data not shown). Using the race 1 (phylotype I) wild-type SL341 as a reference strain, we screened a total of 4,000 mutants from a Tn-based mutant pool of SL341 [37]. Based on visibly reduced or absent pigmentation after 48 h of growth, 20 mutants were selected and further analyzed.

Genetic determinants involved in pyomelanin production in R. solanacearum

To identify the disrupted gene in each mutant strain, selected subclones of each mutant were sequenced with Tn-specific primers.
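The viable-count and peroxide-dosing arithmetic implied by the assay above can be sketched as follows. The 9.8 M stock concentration (typical for ~30% w/w H2O2) is an assumed handbook value, not taken from the text:

```python
def cfu_per_ml(colonies, dilution_factor, plated_ml):
    """Back-calculate viable cells per ml of the undiluted suspension
    from a plate count on a serial dilution."""
    return colonies / plated_ml * dilution_factor

def h2o2_stock_ul(target_mM, culture_ml, stock_M=9.8):
    """Microliters of H2O2 stock needed to reach target_mM in culture_ml.
    stock_M = 9.8 M is an assumption (typical ~30% w/w H2O2)."""
    mol_needed = target_mM / 1000 * culture_ml / 1000  # mmol/L * L -> mol
    return mol_needed / stock_M * 1e6                  # liters -> microliters

print(cfu_per_ml(85, 1e6, 0.1))          # 8.5e8 CFU/ml: 85 colonies, 10^-6 plate
print(round(h2o2_stock_ul(15, 1.0), 2))  # ~1.53 uL for 15 mM in a 1 ml culture
```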
Based on the sequencing results and a BLAST search comparison, the 20 selected mutants showing no or reduced pigment production were classified into four groups, because some of the mutants carried a mutation in the same gene. When we performed complementation analysis with the 20 mutants, only 11 were successfully complemented for pigmentation with their original gene. Therefore, we used these 11 mutants for further analysis (S1 Fig). Among the selected mutants, one strain (SL341D) had a Tn insertion in the 4-hydroxyphenylpyruvate dioxygenase (hppD) gene, which had already been reported to be involved in the pyomelanin synthesis pathway (Fig 1, S1A Fig) [44], suggesting that R. solanacearum produces pyomelanin. Three other mutants (SL341S, SL341G, and SL341R) had Tn insertions in regulatory genes (i.e., rpoS, hrpG, and oxyR; S1 Fig) that are novel candidates for involvement in pyomelanin production. To confirm whether the observed phenotype was associated with the disrupted gene, complementation studies were performed. Complete restoration of the original pigmentation was achieved in strains SL341D (hppD-), SL341S (rpoS-), and SL341G (hrpG-), whereas strain SL341R (oxyR-) was only partially complemented (Fig 2A). Therefore, the SL341D, SL341S, SL341G, and SL341R strains were chosen for further comparison with the wild-type strain.

Quantitative analysis of melanin production

To investigate the effect of each mutation on the level of pyomelanin produced, the culture supernatants of the four selected mutant strains of SL341 were analyzed at different time intervals, along with that of the wild-type SL341 strain. The wild-type strain initially produced a small amount of pyomelanin at 24 h of incubation, after which production increased slowly up to 48 h and then rapidly at 72 h (Fig 2B).
In contrast, strain SL341D (carrying a Tn insertion in the hppD gene of the pyomelanin pathway) produced only a very small amount of pigment, even after 72 h of growth. Similarly, SL341S showed reduced pigment production even at 72 h. On the other hand, SL341G and SL341R produced higher amounts of melanin at 72 h, although not comparable to the wild-type strain. This suggests that pyomelanin production in R. solanacearum is likely regulated, at least in part, by HrpG and OxyR.

HPLC analysis for HGA detection

To determine production of the pyomelanin intermediate HGA, HPLC analysis of the culture filtrates of the wild-type and mutant strains was performed at different time intervals. The tyrosine and HGA standards generated single peaks at 10 min and 12.5 min, respectively, at their respective detection wavelengths (Fig 3A and 3B). The wild type produced a significant amount of HGA after 48 h of growth (Fig 3C, S2 Fig), which remained stable and was still observed in 72 h cultures (Fig 3D, S2 Fig) but not at 96 h (data not shown). In comparison, no HGA was observed in the culture filtrates of SL341D (Fig 3E and 3F, S2 Fig) or the other three mutant strains over time (S2 and S3 Figs).

Expression analysis of hppD and hmgA in R. solanacearum

The expression levels of the hppD and hmgA genes involved in the pyomelanin pathway were investigated in SL341 and its mutants. The hppD gene encodes 4-hydroxyphenylpyruvate dioxygenase, which converts 4-hydroxyphenylpyruvate into homogentisate, an intermediate in the pyomelanin pathway that is subsequently auto-oxidized to produce pyomelanin. The expression of hppD in the wild-type SL341 was significantly higher (p<0.05) than in the selected mutant strains at 48 h of incubation (Fig 4A). At 72 h, the expression of hppD was also significantly higher in the wild-type SL341 than in the mutant strains SL341D, SL341G, and SL341S.
The hmgA gene encodes homogentisate dioxygenase, which metabolizes HGA, thus eliminating the pyomelanin intermediate. A time-course expression profile of this gene was investigated in SL341 and its mutants in minimal medium supplemented with tyrosine. The hmgA mRNA expression level in the wild type was relatively high at 24 h of incubation, after which a consistent decrease was observed until 72 h (Fig 4B). The expression of hmgA in SL341D at 24 and 48 h was similar to that of the wild type, but was significantly higher (p<0.05) at 72 h compared with SL341 and the other mutant strains. In both SL341S and SL341G, the expression level was high at 24 h compared with the wild type and SL341D, but then decreased consistently until 72 h. Neither hppD nor hrpG was expressed at any time point in its corresponding mutant, owing to transposon insertion and gene inactivation (data not shown). However, SL341S showed slight expression of rpoS at the different incubation times (data not shown).

Role of pyomelanin in the hydrogen peroxide stress response

Different concentrations of hydrogen peroxide were used to investigate the role of pyomelanin in the oxidative stress response of R. solanacearum wild-type SL341 and the hppD-disrupted mutant SL341D. SL341 and SL341D grown in minimal medium with and without tyrosine showed similar survival patterns at 5 mM hydrogen peroxide (Fig 5). The wild-type SL341 grown in minimal medium supplemented with tyrosine, and thus producing pyomelanin, showed significantly higher numbers of viable cells (p<0.05) than wild-type cells grown in minimal medium without tyrosine at 15 mM and 30 mM hydrogen peroxide.
In the case of SL341D, there was no significant difference in the numbers of viable cells in the presence or absence of tyrosine; the cells displayed relatively high numbers at 15 mM hydrogen peroxide, but still significantly lower than wild-type cells grown in tyrosine-containing medium. At 30 mM hydrogen peroxide, only the wild-type strain SL341 exhibited stress tolerance, and only when grown in minimal medium containing tyrosine (Fig 5). Neither the wild-type SL341 nor the mutant SL341D, grown with or without tyrosine, exhibited any noticeable survival after treatment with 50 mM hydrogen peroxide (data not shown).

Discussion

Pyomelanin, a black-brown pigmented heterogeneous compound produced by a number of bacteria, fungi, and other organisms, has been associated with various physiological roles. Although it has been extensively studied in human pathogens such as Pseudomonas aeruginosa, Vibrio cholerae, and Aspergillus fumigatus [22,26,45], there are no reports on the production and physiological role of this pigment in plant pathogenic bacteria. R. solanacearum produces a pyomelanin-like pigment, but the nature of this compound, its possible physiological role, and the genetic determinants involved in its production had not yet been elucidated. To the best of our knowledge, this is the first detailed study of pyomelanin production and regulation, and its possible physiological role, in R. solanacearum. The use of Tn-based mutagenesis to identify the genetic determinants of melanin production was previously described in Aeromonas media WS and P. aeruginosa [13,46]. The genes identified in those two studies differ from those in our current study, except for SL341D, which has a Tn insertion in the hppD gene (reported to be involved in the pyomelanin biosynthesis pathway [44,47]).
These three studies indicate that the genetic determinants of melanin production and regulation are diverse among organisms. In our current study, we selected four mutants (including mutant SL341D) that had no or reduced pigment production. Specifically, the abolished pigment production in the hppD-defective mutant SL341D provides genetic evidence of pyomelanin production by the wild-type R. solanacearum. Our current finding is contrary to a previous study, in which two genes encoding tyrosinases were identified in the genome of R. solanacearum, suggesting that this organism produces DOPA melanin [30]. We concluded that R. solanacearum produces pyomelanin, since disruption of the hppD gene led to a complete loss of melanin production under our culture conditions. However, we concede that this bacterium may produce DOPA melanin under different medium conditions, although it is not clear why it would produce two different types of melanin in a tyrosine-supplemented medium. Our results also revealed that oxyR, rpoS, and hrpG, which are involved in regulating the expression of genes under different stress responses and in pathogenicity [48,49,50], may positively regulate pyomelanin production at the level of transcription in R. solanacearum. Melanin has previously been reported to have different functions in different organisms [22,26,27,28], and it is therefore possible that R. solanacearum produces pyomelanin under different conditions for specific physiological roles. However, it would be premature at this stage to link the regulation of pyomelanin with a particular situation. To support our conclusion that the absence of pigment production by mutant SL341D indicates pyomelanin synthesis ability in wild-type R. solanacearum, we performed HPLC of the culture filtrates of SL341 and its mutant strains.
Although HGA is difficult to detect by HPLC because of its readily oxidizable nature, we were able to detect HGA in the culture filtrates of SL341 at 48 and 72 h, but not at 96 h, by which time it may have been completely oxidized (data not shown). As expected, the hppD mutation in the SL341D strain (Fig 1) impaired its pigment production, and hence HGA was not detected in its culture filtrate. In the other mutants, although a small amount of pyomelanin was detected in their culture filtrates, we could not detect any HGA, possibly because it was present only in undetectable trace amounts and/or was metabolized further. The relatively high expression of hppD in the pyomelanin pathway in wild-type SL341 compared with its mutant strains suggests increased accumulation of HGA and consequently high pyomelanin production. The expression of hmgA in SL341 decreased consistently over time, indicating that a small amount of HGA was converted into maleylacetoacetate in the tyrosine catabolic pathway, while most of the HGA was readily oxidized into pyomelanin. The expression of hmgA in SL341D and the other mutants was unexpected, as we had assumed that HGA would not be produced in these non-pigment-producing mutants. A possible explanation is that hmgA expression is constitutive in R. solanacearum, or that a deficiency of HGA may induce hmgA expression. Melanin pigments have been associated with a variety of functions in different organisms [51]. Melanin from both natural and synthetic sources has efficient reactive oxygen species scavenging ability, protecting the producing organisms from their toxic effects [52]. Oxidative stress can affect the cell wall, nucleic acids, and lipids, and elicits various cellular responses in microorganisms [53]. Our current study showed that pyomelanin provides considerable tolerance to hydrogen peroxide stress. A similar protective effect of pyomelanin against hydrogen peroxide stress was reported in Burkholderia cenocepacia [54].
The pathogenic fungus Cryptococcus neoformans produces a melanin pigment that protects melanized cells from nitrogen- and oxygen-based oxidants [55]. We speculated that pyomelanin would protect R. solanacearum from plant oxidative stress responses and provide additional protection from a plant's initial immune response. However, we could not observe any difference in disease severity between the mutants and the wild-type strain by standard soil-soaking inoculation or petiole-injection inoculation (data not shown). Therefore, further studies are needed to elucidate the role of pyomelanin in R. solanacearum.
Design, Synthesis and Antidiabetic, Cardiomyopathy Studies of Cinnamic Acid-Amino Acid Hybrid Analogs

Diabetes mellitus is a chronic metabolic disorder characterized by hyperglycemia due to insulin deficiency or insulin resistance. Associated complications include myocardial infarction, cardiomyopathy, retinopathy, neuropathy, nephropathy, etc. Cinnamic acid analogs (SSPC0-SSPC20) containing different amino acids were designed and docked into the crystal structures of AMPK and PPARs. Among the 20 designed compounds, five compounds, namely SSPC5, SSPC8, SSPC11, SSPC14, and SSPC15, showed good docking scores using the Glide 5.0 Maestro program and were subjected to ADME prediction using QikProp version 3.1. These were then selected for synthesis, characterized, and their antidiabetic activity was evaluated in the alloxan-induced diabetic rat model by measuring blood glucose levels with a glucometer at 0, 1, 2, 4, 6, 8 and 24 hrs by the tail vein puncture method. SSPC5, SSPC8, SSPC11, and SSPC14 showed reductions in blood glucose of 23.02%, 37.02%, 14.04% and 15.96%, compared with a 33.53% reduction for the standard. As SSPC14 had good and comparable docking scores for both the AMPK and PPARγ receptors, it was subjected to antidiabetic as well as diabetic cardiomyopathy studies by recording the electrocardiograms of both diabetic and control rats. It was found to be very efficient at a low dose and had a prolonged duration of action on the heart (up to 54 hrs). Thus this study indicates that such a hybrid antidiabetic drug, with dual action on hyperglycemia and cardiac function, is desirable and cost effective.
*Corresponding author: Deepanwita Maji, Department of Pharmaceutical Sciences, Birla Institute of Technology, Mesra, Ranchi-835215, India, Tel: +919334870271; Fax: 0651-2276247; E-mail: deepanwita.maji@gmail.com

Received November 16, 2013; Accepted February 25, 2014; Published February 27, 2014

Citation: Prakash S, Maji D, Samanta S, Sinha RK (2014) Design, Synthesis and Antidiabetic, Cardiomyopathy Studies of Cinnamic Acid-Amino Acid Hybrid Analogs. Med chem 4: 345-350. doi:10.4172/2161-0444.1000163

Copyright: © 2014 Prakash S, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Introduction

The epidemic of obesity and sedentary lifestyle is projected to result in over 300 million people with diabetes mellitus by 2025 [1]. Diabetes mellitus is a syndrome of disordered metabolism, usually due to a combination of hereditary and environmental causes, resulting in hyperglycemia (fasting plasma glucose >7.0 mmol/L (126 mg/dL), or plasma glucose >10 mmol/L two hours after a meal) due to insulin deficiency and/or insulin resistance [2]. Diabetes is associated with a number of complications, both microvascular and macrovascular. Microvascular complications include diabetic nephropathy, neuropathy, and retinopathy. Macrovascular complications include coronary artery disease, peripheral arterial disease, and stroke. Diabetic cardiomyopathy is responsible for 80% of deaths among diabetic patients, much of which has been attributed to CAD (coronary artery disease). It was first described in 1972 on the basis of observations in four diabetic patients who presented with HF (heart failure) without evidence of hypertension, CAD, valvular or congenital heart disease [3].
Diabetic cardiomyopathy refers to a disease process that affects the myocardium in diabetic patients, causing a wide range of structural abnormalities eventually leading to LVH [left ventricular (LV) hypertrophy] and diastolic and systolic dysfunction, or a combination of these [4] (Figure 1). Cinnamic acid and its derivatives have been reported to show various pharmacological activities, such as hepatoprotective [6], antidiabetic [7], and antioxidant [8] action. Cinnamic acid is also known to have good cardioprotective activity [9]. Earlier work has shown peptides to have significant antidiabetic activity, for example Exenatide, which is an incretin mimetic [10]. Studies showed that a hexapeptide (Gly-Ala-Gly-Val-Gly-Tyr) improved glucose transport and also exerted beneficial lipid metabolic effects [11]. For these reasons, a series of cinnamic acid-amino acid hybrids was designed and docked using Glide 5.0, and the five best-docked compounds were synthesized. The antidiabetic activity of the five compounds was evaluated in alloxanized rats, and a new non-invasive animal model was developed to study diabetic cardiomyopathy.

Material and Methods

Chemistry

Synthesis was carried out in a Mini Block XT Parallel Synthesizer (Mettler Toledo). TLC was performed using the (BAW) n-butanol:glacial acetic acid:water 4:1:1 solvent system. Compounds were further characterized by melting point using an Optimelt apparatus (Stanford Research Systems), FTIR (FTIR-8400S, SHIMADZU), 1H NMR (data collected on Wormhole-vnmrs 400), and mass spectroscopy.

Synthesis

The cinnamic acid-amino acid hybrid compounds SSPC5, SSPC8, SSPC11, SSPC14 and SSPC15 were synthesized by the liquid phase peptide synthesis method. Cinnamic acid was prepared from benzaldehyde and acetic anhydride.
Biological evaluation

The antidiabetic activity of the synthesized test drugs was assessed in alloxan-induced diabetic rats by measuring the decrease in blood glucose level, analyzed by ANOVA followed by Dunnett's t-test with equal sample sizes. Diabetic cardiomyopathy activity was assessed by recording electrocardiograms of both diabetic and control rats.

Docking studies

The 20 designed compounds were docked into the crystal structures of AMPK (PDB ID: 2Y94) and PPARs (PDB IDs: 3ET0, 3ET1 and 3ET2), the most accurate structures available. The interaction energies between the designed molecules and the receptors were calculated and the results are presented in Table 1. The scores are expressed in terms of Gibbs free energy (∆G). ADME properties of the designed compounds were determined using QikProp version 3.1:

QP log Po/w: predicted octanol/water partition coefficient; range, -2.0 to 6.5.

QP log S: predicted aqueous solubility; S in moles/liter is the concentration of the solute in a saturated solution in equilibrium with the crystalline solid; range, -6.5 to 0.5.

Synthesis

In a dry 250 ml round-bottomed flask fitted with an air condenser carrying a calcium chloride guard-tube, 21 g (20 ml, 0.2 mol) of pure benzaldehyde, 30 g (28 ml, 0.29 mol) of acetic anhydride and 12 g (0.122 mol) of freshly fused and finely powdered potassium acetate were combined (Scheme 1). The mixture was heated on a sand bath at 160°C for 1 hour and at 170-180°C for 3 hours, then poured while still hot (80-100°C) into about 100 ml of water contained in a 1-liter round-bottomed flask which had previously been fitted for steam distillation. A saturated aqueous solution of sodium carbonate was added with vigorous shaking until a drop of the liquid withdrawn on the end of a glass rod turned red litmus a distinct blue. The solution was steam distilled until all the unchanged benzaldehyde was removed and the distillate was clear.
The residual solution was cooled and filtered from resinous by-products. The filtrate was acidified by adding concentrated hydrochloric acid slowly, with vigorous stirring, until the evolution of carbon dioxide ceased. Cinnamic acid was recrystallized from a mixture of 3 volumes of water and 1 volume of rectified spirit. The yield of dry cinnamic acid (colorless crystals), m.p. 133°C, was 18 g (62%).

General procedure for the synthesis of hybrid compounds SSPC5, SSPC8, SSPC11, SSPC14, and SSPC15

Equimolar quantities of cinnamic acid and the amino acid (0.01 mol each) were coupled using 10 ml of CPE reagent, stirred until clear at 0-5°C. To this mixture, triethylamine was added until pH 7, keeping the reaction temperature below 5°C (Scheme 2), and the mixture was kept for 6 hrs at 0°C. The product obtained was filtered, washed with solvent ether, dried, and recrystallized from ethanol. TLC was performed using the solvent system n-butanol:glacial acetic acid:water = 4:1:1. Further characterization was done by melting point using an Optimelt apparatus (Stanford Research Systems), FTIR (FTIR-8400S, SHIMADZU), 1H NMR (data collected on Wormhole-vnmrs 400), and mass spectroscopy.

Anti-diabetic activity

The antidiabetic activity of the synthesized test drugs was assessed in alloxan-induced diabetic rats by measuring the decrease in blood glucose level. Male albino rats of Wistar strain weighing about 145-240 g were used. Animals were maintained at 22 ± 2°C with a 12 hr light : 12 hr dark cycle. Each experimental group consisted of 4 animals. Diabetes was induced in groups II to IX, as shown in Table 3, by injecting freshly prepared alloxan (dissolved in 0.9% NaCl injectable solution to produce a concentration of 40 mg/mL) intraperitoneally at a dose of 40 mg/kg to overnight-fasted animals. Rats were then tested for sufficient levels of hyperglycemia two days after injection and 4 weeks post-injection.
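From the stated dose (40 mg/kg i.p.) and solution strength (40 mg/mL), the per-animal injection volume can be sketched; this is a simple illustration of the dosing arithmetic, not part of the original protocol:

```python
def alloxan_volume_ml(body_weight_g, dose_mg_per_kg=40.0, conc_mg_per_ml=40.0):
    """Injection volume (ml) for the alloxan dose described in the text:
    40 mg/kg body weight from a 40 mg/mL solution."""
    dose_mg = dose_mg_per_kg * body_weight_g / 1000.0  # g -> kg
    return dose_mg / conc_mg_per_ml

# Across the stated 145-240 g weight range of the rats:
print(alloxan_volume_ml(145))  # 0.145 ml
print(alloxan_volume_ml(240))  # 0.24 ml
```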
Cardiomyopathy studies

The cardiomyopathy study was performed with the synthesized test drug SSPC14, as it showed comparable docking scores for both the PPARγ and AMPK enzymes, in alloxan-induced diabetic rats, which developed cardiomyopathy symptoms after 14 days of alloxan treatment. Rats were tested for sufficient levels of cardiomyopathy (Figure 4).

Cardiomyopathy activity

The diabetic cardiomyopathy activity of the synthesized test drug SSPC14 was assessed in alloxan-induced diabetic rats by recording the electrocardiograms of both diabetic and control rats, shown below. ECG recordings for the normal control and diabetic control groups were made at two-hour intervals for two days. For the treatment group, the ECG was recorded continuously for six hours after drug treatment and then at six-hour intervals up to 54 hours (Figures 5 and 6). After administration of the treatment drug SSPC14, the rats' heart rate appeared to normalize within 15 min and normalized completely within one hour. The heart rate remained normal up to 54 hr after the drug treatment.

HRV analysis

The HRV analysis was difficult. The HRV spectrum analysis suggests no variation in the sympathetic and parasympathetic systems related to cardiac function.

Elevation in the S-T segment

The ST segment represents the period when the ventricles are depolarized. The average S-T prolongation for normal healthy rats was 34.8 msec; for diabetic rats, 44.6 msec; and for drug-treated rats, 32.6 msec after 54 hr.

ECG power spectrum analysis

The overall power of the ECG frequency spectra increased just after the oral dose of the compound (SSPC14) and was sustained for 1 hour, then started to decline.

QRS interval analysis

The QRS interval of drug-treated diabetic rats was longer (expanded), i.e., its duration was increased. The QRS interval for control (normal healthy) rats was 17.6 msec; for control diabetic rats it was 25.0 msec.
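Using the interval values reported above (taking the S-T figures as milliseconds, which is an assumption on our part), the relative prolongation in the diabetic state can be computed as a simple illustration; this calculation is not part of the original analysis:

```python
def pct_change(baseline, value):
    """Percent change of an ECG interval relative to the healthy-control baseline."""
    return 100.0 * (value - baseline) / baseline

qrs_ms = {"normal": 17.6, "diabetic": 25.0}                     # reported values
st_ms = {"normal": 34.8, "diabetic": 44.6, "treated_54h": 32.6}  # msec assumed

print(round(pct_change(qrs_ms["normal"], qrs_ms["diabetic"]), 1))   # 42.0 (% longer)
print(round(pct_change(st_ms["normal"], st_ms["diabetic"]), 1))     # 28.2 (% longer)
print(round(pct_change(st_ms["normal"], st_ms["treated_54h"]), 1))  # -6.3 (below baseline)
```

The negative value for the treated group reflects the text's observation that the S-T duration after 54 hr of treatment fell slightly below the healthy-control average.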
The above docking, ADME, and antidiabetic studies show that compounds SSPC5, SSPC8, SSPC11, SSPC14 and SSPC15 produce a significant decrease in blood glucose levels. The cardiomyopathy study with SSPC14 showed significant activity on the heart and brought the ECG back to almost normal after 54 hrs (Figures 7-12).

Results and Discussion

Designing

Among the 20 hybrid compounds designed using GLIDE, good binding can be seen for the above compounds with the amino-acid residues GLN286, TYR473, SER289, HIS449, LEU330, SER289, TYR327, HIS449, and TYR473 of the PPARγ receptor, also through H-bonding. SSPC14 showed almost equal scores of -8.66 and -8.39 for the AMPK and PPARγ receptors, respectively, so it was selected for both the antidiabetic and the diabetic cardiomyopathy study. ADME studies showed a human oral absorption rating of 3 (good) for SSPC5, SSPC8, SSPC11 and SSPC14, and medium for SSPC15. Their % oral absorption values were 85.34, 81.14, 69.57, 78.70 and 71.38, respectively, which is also significant.

Synthesis

The hybrid compounds combining cinnamic acid with the amino acids phenylalanine, proline, glycine, cysteine, and tyrosine were synthesized by the liquid phase method with chlorophosphate ester as the condensing reagent. Yields were about 80%, and the products showed good crystalline nature. Physicochemical properties such as melting point and Rf value, and spectral studies (FT-IR, NMR and mass), were used for characterization and confirmation of all synthesized compounds. Compound SSPC14 was subjected to the diabetic cardiomyopathy study, the condition being induced in rats after 14 days of alloxan treatment; it was found to be very effective at the given dose and had a prolonged duration of action on the heart (up to 54 hrs). The ECG pattern was normal for the first 15 mins, but a very distinct inversion was observed in the ECG pattern from 30 mins to 24 hrs after administration of the drug, which started normalizing after 30 hrs until 54 hrs of the study. The heart rate was normal up to 54 hr after the drug treatment.

Conclusion

We conclude that this approach of designing, synthesizing and screening cinnamic acid-amino acid hybrid analogues for antidiabetic and cardiomyopathy activities gives a new direction for the development of compounds that would be beneficial for both diabetes and cardiomyopathy and help retain normal cardiac function. The non-invasive cardiomyopathy screen is a new animal model in addition to conventional pharmacological screening.
Synthesis and Characterization of Surfactant for Retarding Acid–Rock Reaction Rate in Acid Fracturing

Acid fracturing is an effective method for developing ultra-low permeability reservoirs. However, a fast reaction rate reduces the effectiveness of acid fracturing and increases the near-well collapse risk. Therefore, it is necessary to retard the acid–rock reaction rate. In this work, we synthesized an acid-resistant Gemini zwitterionic viscoelastic surfactant (named VES-c), which has good temperature, salt, and shear resistance. In addition, a low concentration of VES-c increases the viscosity of the acid solution. The CO2 drainage method was used to measure the reaction rate between the dibasic acid and dolomite/broken core. We find that dibasic acid containing 0.3% VES-c retards the dolomite reaction rate by a factor of 3.22 compared with dibasic acid alone. Furthermore, dibasic acid containing 0.3% VES-c exhibits uniform distribution and does not easily adhere to the solid surface. VES-c is also favorable for reducing the formation of amorphous calcium carbonate. The retardation of the acid–rock reaction rate and the enhancement of acidification are mainly attributed to VES-c's salt tolerance, anti-adsorption, and viscosity-increasing properties. We hope this kind of rate-retarding surfactant can be applied to other acid–rock reactions.

INTRODUCTION

Conventional oil fields have entered the middle and late stages of exploitation after years of development, but low-permeability, difficult-to-exploit reserves still hold potential. The proportion of reserves in ultra-low permeability oil fields has been increasing year by year. Therefore, the development of ultra-low permeability reservoirs has become important (Guo et al., 2017), and the main means of development is fracturing. The hydraulic fracturing technique was first used in 1947 in the Hugoton field, Kansas (Pak
and Chan, 2004), and since then the fracturing fluid technique has received considerable attention. Subsequently, other fracturing fluid techniques were greatly developed, for example, hydraulic fracturing fluids (Zhang et al., 2018a; Zhou et al., 2019), oil-based fracturing fluid (Zhang et al., 2018b), emulsified fracturing fluid (Buijse and van Domelen, 2000; Sayed et al., 2012; Zakaria and Nasr-El-Din, 2015), foam fracturing fluid (Sayed and Al-Muntasheri, 2016; Dehdari et al., 2020; Qu et al., 2020), thickening fracturing fluid (Liu and Li, 2016; Cai et al., 2018), alcohol-based hydraulic fracturing (Marrugo-Hernandez et al., 2018), and surfactant fracturing fluid (Zhang et al., 2018c; Lu et al., 2019; Mejia et al., 2019; Tangirala and Sheng, 2019; Zhang et al., 2019). Although hydraulic fracturing fluid has been widely used, it still shows poor shear stability and serious filtration loss. For this reason, surfactant fracturing fluid has been developed in recent decades. This kind of surfactant fracturing fluid shows good performance, such as shear resistance, temperature resistance, salt resistance, and harmlessness to reservoirs (Yu et al., 2019a; Chen et al., 2019). Acid fracturing is a widely used technique in both new and existing wells to increase production in ultra-low permeability reservoirs (Rbeawi et al., 2018). Usually, the mineral composition of the reservoir mainly includes illite, chlorite, montmorillonite, kaolinite, calcite, laumontite, dolomite, quartz, feldspar, and muscovite. The corresponding chemical composition of each mineral is summarized in Supplementary Table S1. Most minerals react with acid, especially the carbonate minerals (e.g., calcite and dolomite), leading to dissolution of fillings in the reservoirs and reducing the compressive strength of reservoir rocks (Zhang and Fang, 2020).
In heterogeneous tight reservoirs, a large permeability contrast causes fluids to flow into the highly permeable zone, so the tight target reservoir is not effectively covered and the overall efficiency of acid fracturing measures is reduced. To improve the cleaning efficiency of acid on reservoir interstitial materials, polymers and viscoelastic surfactants (Afra et al., 2020) are used to increase the viscosity of the acid solution, reduce fluid loss, and prolong the acid-etching distance (Jones and Dovle, 1996). Polymers have good temperature resistance and shear resistance, but polymer solutions need strong oxidants (e.g., ammonium persulfate and potassium persulfate) as gel breakers (Wang et al., 2016). Gel-breaking oxidants oxidize Fe2+ and produce precipitates, causing secondary damage to the reservoir. In addition, polymers do not break gel easily and adhere to the rock surface, damaging the reservoir. Therefore, surfactants have attracted researchers' attention as a basis for clean fracturing fluids because of their small molecular weights and because no gel breaker is needed.

SCHEME 1 | Synthesis route of VES-c surfactant and the VES-c sample.
SCHEME 2 | Measuring the reaction rate using the volume of drained water by the acid-rock reaction.

Frontiers in Chemistry | www.frontiersin.org August 2021 | Volume 9 | Article 715009

In this work, an acid-resistant Gemini zwitterionic viscoelastic surfactant (VES-c) was synthesized. A low concentration of VES-c can effectively retard the acid-rock reaction rate and increase the effect of rock acidification. Moreover, VES-c does not adhere easily to the rock surface.

VES-c Synthesis

Scheme 1 shows the synthesis route of VES-c. First, the intermediate (glycinate) was synthesized by the reaction of epichlorohydrin and glycine. Glycine (3.75 g, 50.0 mmol) was dissolved in 100 ml of deionized water in a 500 ml flask.
Epichlorohydrin (7.87 ml, 100.48 mmol) dissolved in 30 ml of ethanol was quickly poured into the glycine solution, and sodium hydroxide (50.0 mmol) was added to the mixture. The flask was moved to an oil bath and heated at 60°C with stirring for 14 h. Second, erucamidopropyl dimethylamine (42.24 g, 100.1 mmol) was dissolved in 40 ml of ethanol in a beaker, and the ethanol solution was poured into the flask containing the intermediate solution. The beaker was washed three times with 30 ml of ethanol, and the washings were also poured into the flask. The flask was moved to the oil bath and heated at 80°C with stirring for 24 h. Finally, negative-pressure rotary evaporation was conducted to remove the solvent (water and ethanol) and obtain the crude product, which was then purified by three recrystallizations from an ethanol/acetone mixture (volume ratio 1/3). The other materials and methods are collected in the Supplementary Material.

Measuring the Rate of the Acid-Rock Reaction

In acid fracturing, one or more acids such as hydrofluoric acid, hydrochloric acid, fluoroboric acid, acetic acid, and formic acid are usually used (Yang et al., 2006; Assem et al., 2019; Jeffry et al., 2020). The F− in hydrofluoric acid and fluoroboric acid reacts with Ca2+, Mg2+, and Fe3+ to form water-insoluble salts, so these acids were not selected. Formic acid is toxic and not easy to handle. Hydrochloric acid and acetic acid react with carbonate minerals, and Cl− and CH3COO− do not form insoluble salts with the released cations; therefore, we selected hydrochloric acid and acetic acid. Because hydrochloric acid reacts quickly with rock minerals, fragments or broken particles of rock minerals, and hence reservoir damage, could be produced during the acidification process.
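As a quick consistency check, the reagent quantities reported for the synthesis above can be converted to molar amounts. This is an illustrative sketch: the molar masses are standard values, and the epichlorohydrin density (~1.18 g/mL) is an assumed handbook value that is not stated in the text.

```python
# Cross-check of the reported reagent amounts for the VES-c synthesis.
# Molar masses are standard values; the epichlorohydrin density is an
# assumed handbook value (~1.18 g/mL), not taken from the text.

M_GLYCINE = 75.07           # g/mol
M_EPICHLOROHYDRIN = 92.52   # g/mol
D_EPICHLOROHYDRIN = 1.18    # g/mL (assumption)
M_AMINE = 422.74            # g/mol, erucamidopropyl dimethylamine (C27H54N2O)

n_gly = 3.75 / M_GLYCINE * 1000                              # mmol, ~50.0
n_epi = 7.87 * D_EPICHLOROHYDRIN / M_EPICHLOROHYDRIN * 1000  # mmol, ~100.4
n_amine = 42.24 / M_AMINE * 1000                             # mmol, ~99.9

print(f"glycine {n_gly:.1f} mmol, epichlorohydrin {n_epi:.1f} mmol, "
      f"amine {n_amine:.1f} mmol")
# The roughly 1:2:2 glycine/epichlorohydrin/amine ratio is consistent with a
# Gemini (twin-tailed) product built around a single glycinate spacer.
```

The computed values agree with the reported 50.0, 100.48, and 100.1 mmol to within the precision of the assumed density.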
Acetic acid is a good choice for acid fracturing because it reacts with rock minerals more slowly than hydrochloric acid. To improve the efficiency of acid fracturing, we used a dibasic acid containing both hydrochloric acid and acetic acid. Acidizing tests usually use acid-resistant core displacement devices (Wang et al., 2020) or acid-resistant simulated fracturing devices (Asadollahpour et al., 2019). Such devices are expensive and inconvenient in the laboratory, so in this study the reaction rate was measured with a self-assembled device (see Scheme 2). The reaction rate between the 0.3% VES-c dibasic acid and rock was calculated by measuring the amount of CO2 produced. The CO2 enters the sealed container B, where oil is added on top of the water to prevent the CO2 from dissolving in it. As the reaction proceeds, the pressure in bottle B increases and water is drained into the measuring cylinder. The reaction rate is calculated as the volume of drained water divided by the reaction time. Compared with a core displacement device or a simulated acidizing fracturing device, the disadvantage of our self-assembled device is that it cannot be pressurized during the experiment or simulate the fracturing operation. The advantages are that the device is cheap, simple, convenient, easy to operate, and can be assembled at any time in the laboratory. In addition, the reaction of the acid and solid can be observed directly, which makes the device well suited to studying whether the synthesized VES-c retards the reaction rate.

FT-IR

To confirm the structure of VES-c, an FT-IR spectrum was recorded; the result is shown in Figure 1. We observe the stretching vibration peak of C=O at 1647.32 cm−1 and the stretching vibration peak of C-O at 1255.60 cm−1, indicating that the carboxylate was successfully attached. The peak at 3278.84 cm−1 is the stretching vibration peak of O-H.
The peaks at 3440.50 and 1548.61 cm−1 are the stretching and bending vibration peaks of the amide N-H. The peaks at 3005.38, 2927.44, and 2854.72 cm−1 are the stretching vibration peaks of C-H, -CH3, and -CH2-, respectively, and the peaks at 964.37, 727.13, and 576.69 cm−1 are the corresponding bending vibration peaks.

VES-c Surface Tension

The surface tension (γ) of VES-c solutions of various concentrations was measured at 25°C. In the low concentration range (9.7 × 10−7 to 3.9 × 10−6 mol/L), γ is close to that of deionized water (γ = 72.286 mN/m); as the VES-c concentration increases, γ decreases sharply and finally approaches a constant value, as shown in Figure 3. The critical point is obtained from the intersection of two linear fits. The concentration of VES-c at this point is the critical micelle concentration (CMC), at which the surfactant molecules in the solution begin to form micelles. The CMC of VES-c was 89.2 μmol/L, and the corresponding γCMC was 32.8 mN/m, indicating that VES-c is strongly surface-active and aggregates at low concentrations.

VES-c Dissolution

The dissolution of 0.3%, 1.5%, and 2.7% VES-c in hydrochloric acid solutions and NaCl solutions of different concentrations was examined. The results show that all these concentrations of VES-c dissolve well in deionized water, hydrochloric acid solutions, and NaCl solutions. In addition, a vial inversion test shows that the viscosity of the VES-c acid solutions and VES-c NaCl solutions is higher than that of deionized water. The detailed figures are collected in Supplementary Figure S2.

VES-c Viscosity

The viscosities of VES-c and polyacrylamide solutions were compared at the same concentrations (see Figure 4). Above a concentration of 0.2%, the viscosity of the VES-c solutions exceeded that of the polyacrylamide solution.
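The two-line-fit CMC construction described above can be sketched numerically. The isotherm below is synthetic, built only so that it breaks at the reported CMC of 89.2 μmol/L; the pre-CMC slope and the breakpoint search are illustrative assumptions, not the paper's fitting procedure.

```python
# Sketch of the CMC determination: fit gamma vs log10(c) as two straight
# lines (pre-CMC descent and post-CMC plateau) and take their intersection.
# The data here are synthetic, for illustration only.
import numpy as np

def cmc_from_two_fits(conc, gamma):
    """Two-segment linear fit of gamma vs log10(c); the intersection is the CMC."""
    x = np.log10(conc)
    best = None
    for split in range(3, len(x) - 3):  # try every candidate breakpoint
        lo = np.polyfit(x[:split], gamma[:split], 1)
        hi = np.polyfit(x[split:], gamma[split:], 1)
        resid = (np.sum((np.polyval(lo, x[:split]) - gamma[:split]) ** 2) +
                 np.sum((np.polyval(hi, x[split:]) - gamma[split:]) ** 2))
        if best is None or resid < best[0]:
            best = (resid, lo, hi)
    _, lo, hi = best
    return 10 ** ((hi[1] - lo[1]) / (lo[0] - hi[0]))  # intersection of the fits

true_cmc = 89.2e-6  # mol/L, the reported value
c = np.logspace(-6, -3, 40)
g = np.where(c < true_cmc, 32.8 - 15 * np.log10(c / true_cmc), 32.8)
print(cmc_from_two_fits(c, g))  # recovers ~8.9e-05 mol/L
```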
More importantly, VES-c (relative molecular mass 1032) is benign to the reservoir and environment compared with polyacrylamide (relative molecular mass 8-10 million).

VES-c Shear Resistance

The injected liquid is subjected to frictional shear from the pipe wall and the reservoir rock, so the solution must have good shear resistance. The shear resistance of 0.3% and 1% VES-c solutions was measured with a dynamic rheometer from 25°C to 95°C at a shear rate of 170 s−1 (see Figure 5). Below 60°C, the shear viscosities of both the 0.3% and 1% VES-c solutions are small (close to 0 Pa·s). When the temperature exceeds 60°C, their shear viscosities increase greatly with rising temperature, because higher temperature facilitates the entanglement of VES-c molecules. After 75 min, the viscosity values still fluctuate only within a small range, which shows that VES-c has good shear resistance.

VES-c Temperature Resistance

To explore the temperature resistance of VES-c, a synchronous thermal analyzer was used to measure the VES-c solution from 40°C to 400°C. Figure 6 shows that the first peak of the DTA curve appears at 235.0°C, indicating the first endothermic decomposition of VES-c. When the temperature reached 400°C, the mass of VES-c had decreased by 56.96%. Thus, VES-c has good temperature resistance and can be applied to high-temperature reservoirs. Whether the fragments formed after chain scission continue to act as surfactants requires future experimental verification, which is beyond the scope of this work.

VES-c Microstructure

Previous reports describe three main forms of surfactant aggregates in dilute solutions: spherical micelles, rod-shaped micelles, and spherical bilayer vesicles (Israelachvili and Mitchell, 1976).
As the concentration increases, large numbers of surfactant molecules aggregate to form densely structured worm-like micelles (see Supplementary Figure S3). The worm-like micelles entangle with each other and increase the viscoelasticity of the solution (Bulgakova et al., 2013; Yang and Hou, 2020). To analyze the microstructure of VES-c, 0.3%, 1%, and 3% VES-c solutions were observed by cold-field SEM. Figure 7A shows that 0.3% VES-c is randomly stacked in the solution as small flakes and slender columns. In the longitudinal direction, the structure is densely stacked and layered (see Figure 7B). The densely layered accumulations connect to the sheets, forming large gaps between the sheets but no worm-like structure (see Figure 7C). In the 1% VES-c solution, the aggregation state of VES-c changes from chaotic accumulation to long strips formed from small flakes and slender columns (see Figure 7D), and the overlap of the long strips forms a layered grid structure (Figure 7E). The overall structure consists of long strips (some worm-like in shape) interconnected into a layered network with dense holes (Figure 7F). When the concentration is increased to 3%, the molecules aggregate as a large number of small flakes and slender columns forming a folded membrane (Figure 7G). Magnified observation shows a clear worm-like structure (Figure 7H). In the horizontal direction, the structures are entangled and connected, and in the longitudinal direction the structure is densely layered (Figure 7I). No worm-like micelles formed in the 0.3% VES-c solution, but the long-chain tail and Gemini structure of the molecule still effectively increase the viscosity of the solution. In the 1% and 3% VES-c solutions, worm-like micelles formed and connected horizontally into longitudinal layers; this structure is stable and dense.
This suggests that VES-c provides a good viscosity-increasing effect together with temperature resistance and shear resistance.

The Effect of VES-c on Retarding the Acid-Rock Reaction

To explore the retarding effect of VES-c on the acid-rock reaction, we studied four groups of acid-rock reactions; Table 1 summarizes the rock dissolution rate (k) and the liquid pH after reaction for each group. Group 1 is the reaction between dolomite and dibasic acid (3% HCl and 5% CH3COOH). Group 2 is the reaction between dolomite and 0.3% VES-c dibasic acid. Group 3 is the reaction of broken core and dibasic acid. Group 4 is the reaction between broken core and 0.3% VES-c dibasic acid. After the dolomite reactions (Groups 1 and 2), the pH of the solutions is 4.5, and the k of Group 2 is 4.45% higher than that of Group 1, indicating that VES-c does not adhere to the dolomite surface and hinder the reaction. For the broken-core reactions (Groups 3 and 4), the pH of the solutions is 0.5, and the k of Group 4 is 1.29% higher than that of Group 3, indicating that VES-c does not adhere to the surface of the broken core and is beneficial to the reaction between the acid and the broken core. Figure 8A shows the drainage volume over time for Groups 1 and 2. The reaction between dolomite and dibasic acid (black line) was very rapid in the first 90 min, draining about 150 ml of water, and the reaction ended after 310 min. For the Group 2 reaction (red line), only about 50 ml of water was collected in the first 90 min, and the reaction lasted about 1000 min. The early reaction rate of Group 2 was thus reduced by about 66.67% compared with that of Group 1. Clearly, 0.3% VES-c retards the reaction between dolomite and the dibasic acid. Similarly, Figure 8B shows the drainage volume over time for Groups 3 and 4. In the first 90 min, the drainage volume of Group 3 (black line) was slightly greater than that of Group 4 (red line).
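The Group 1/Group 2 figures just quoted can be reproduced with a few lines of arithmetic. This is a minimal sketch: the drainage volumes and end times are the approximate readings of Figure 8A reported in the text, and the ideal-gas conversion of drained volume to moles of CO2 (25°C, 1 atm) is an added assumption.

```python
# Back-of-envelope check of the retardation figures for Group 1 (dibasic
# acid + dolomite) and Group 2 (0.3% VES-c dibasic acid + dolomite).

early_g1, early_g2 = 150.0, 50.0    # mL of water drained in the first 90 min
t_end_g1, t_end_g2 = 310.0, 1000.0  # approximate reaction end times, min

# Average early-stage rates, as defined in the text (drained volume / time).
rate_g1 = early_g1 / 90.0           # ~1.67 mL/min
rate_g2 = early_g2 / 90.0           # ~0.56 mL/min
print(f"early-stage rate reduction: {1 - rate_g2 / rate_g1:.2%}")  # 66.67%
print(f"overall slowdown factor: {t_end_g2 / t_end_g1:.2f}x")      # ~3.2 (abstract: 3.22)

# Assumed ideal-gas conversion of the displaced volume to moles of CO2.
R, T, P = 0.082057, 298.15, 1.0     # L*atm/(mol*K), K, atm
n_co2 = P * (early_g1 / 1000.0) / (R * T) * 1000.0
print(f"CO2 produced in the first 90 min of Group 1: {n_co2:.1f} mmol")
```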
However, after 90 min the drainage volume of Group 3 was almost unchanged, whereas Group 4 continued to react for 234 min. The reaction of the broken core with the dibasic acid was therefore retarded as well.

ICP-MS Analysis

To analyze the element contents of the solutions after the acid-rock reactions, an inductively coupled plasma mass spectrometer (ICP-MS) was used to determine the types and contents of the elements. Figure 9A shows that the contents of Ca and Mg in the Group 2 solution are higher than those of Group 1, consistent with the dissolution ratios (see Table 1). Notably, the Mg/Ca ratio of Group 2 (Mg/Ca = 6.2) is lower than that of Group 1 (Mg/Ca = 6.5). In the acid solution of pH ≈ 4.5, VES-c may suppress the formation of amorphous calcium carbonate (ACC) (Rodriguez-Blanco et al., 2012; Rao et al., 2016), which increases the Ca2+ content in the solution and is beneficial for reducing reservoir damage. The nominal chemical composition of dolomite is CaMg(CO3)2; partial substitution of Mg by Fe gives CaMg0.77Fe0.23(CO3)2, so a small amount of Fe is present. The reaction equation is:

CaMg0.77Fe0.23(CO3)2 + 4H+ → Ca2+ + 0.77Mg2+ + 0.23Fe2+ + 2CO2↑ + 2H2O (2)

In Figure 9B, the main elements are Fe, Mg, Ca, and Al. The carbonate minerals in the core are iron-bearing calcite and iron-bearing dolomite, so the content of Fe is the highest. The content of each element in Group 4 is higher than in Group 3, which is consistent with the dissolution ratios. The reactions between the main minerals in the core and the acid solution are as follows. For the feldspar minerals:

(Na,K)AlSi3O8 + 4H+ + 4H2O → 3H4SiO4 + (Na,K)+ + Al3+ (3)
CaAl2Si2O8 + 8H+ → 2H4SiO4 + Ca2+ + 2Al3+ (4)

The reactions of the carbonate minerals with the acid solution are shown in Eqs.
2 and 5. The clay mineral illite is relatively stable and hardly reacts with acid at room temperature. The chemical components of chlorite are (Mg,Fe,Al)3[(Si,Al)4O10](OH)8 and (Mg,Fe,Al)3(OH)6, and its main chemical constituents are SiO2, Al2O3, FeO, and MgO. The corresponding reactions are shown in the following equations.

SEM-EDS Analysis

Comparing Figure 11B with Figure 11C, the core surface after the dibasic acid treatment looks messy, loose, and fragile, whereas the core surface after the 0.3% VES-c dibasic acid treatment is relatively regular and firm. This suggests that VES-c favors deep acidification of the rock and prevents loose particles from clogging pores. In addition, comparison of the EDS figures shows that the Ca, Fe, and Mg contents of the reacted cores are reduced while the difference in C is small, which confirms that the carbonate minerals reacted and that VES-c does not readily adhere to the core surface.

XRD Analysis

The effect of VES-c was analyzed from a microscopic view by SEM-EDS; to investigate it fully, XRD analysis was also performed from a macroscopic view. The results show that the peak intensity of dolomite is reduced after the reaction for Groups 1 and 2, and some peaks disappear, because the contents of Ca, Mg, and Fe in the dolomite changed (Figure 12A). For Groups 3 and 4, the carbonate mineral peaks of the cores disappeared after the reaction, indicating that the reaction was complete. Adding 0.3% VES-c does not affect the dissolution of the dolomite and core by the dibasic acid (see Figure 12B).

The Mechanism of VES-c Retarding the Acid-Rock Reaction

0.3% VES-c retards the acid-rock reaction because the solution viscosity is increased without the surfactant adhering to the core surface. The viscous 0.3% VES-c acid solution acts in three ways. First, the movement of H+ in the solution is slowed.
Second, the viscous liquid reduces fluid loss and increases the spreading area of the liquid (Yu et al., 2019b), resulting in uniform and deep acidification. Third, the viscous liquid restrains the escape of CO2 (see Supplementary Figure S4). The CO2 retained in the solution lengthens the path of H+ to the solid surface and is tethered at the solid surface, reducing the contact efficiency of H+. In addition, as the amount of CO2 in the solution increases, the rising product concentration itself reduces the reaction rate. VES-c, with its good salt tolerance, does not precipitate as the ion concentration increases during the reaction; the total concentration of Ca2+, Mg2+, Fe2+, and Al3+ in the solution after the Group 4 reaction reached 3085 mg/L, indicating that VES-c has good resistance to high-valence ions.

CONCLUSION

In this work, we synthesized a Gemini zwitterionic viscoelastic surfactant (VES-c) with good acid resistance, salt resistance, temperature resistance, and shear resistance. Although no worm-like micelle structure formed in the 0.3% VES-c solution, the viscosity of the 0.3% VES-c dibasic acid (3% HCl + 5% CH3COOH) increases because the special molecular structure forms a layered structure with pores. A self-assembled device was used to verify the effect of 0.3% VES-c on retarding the reaction between the dibasic acid and rock, and ICP-MS, SEM-EDS, and XRD were used to determine the element contents and structures after the acid-rock reaction. The main conclusions are as follows:

1) 0.3% VES-c prolongs the reaction time, and the ICP-MS results show that the ion concentrations in the solutions of Groups 1/2 and Groups 3/4 are similar, suggesting that VES-c does not readily adhere to the solid surface. In addition, VES-c decreases the formation of ACC.

2) SEM-EDS shows directly that 0.3% VES-c dibasic acid dissolves dolomite and cores more effectively; the dissolution is more uniform and produces more pores, with harmless solids.
XRD also verifies the effect of 0.3% VES-c in enhancing acid-rock dissolution.

3) The mechanism by which VES-c retards the acid-rock reaction was analyzed. First, 0.3% VES-c increases the viscosity of the dibasic acid and does not adhere to the solid surface. Second, the viscous VES-c solution slows the movement of H+, reduces fluid loss, and expands the spreading area of the liquid. Third, the viscous VES-c solution restrains CO2 from escaping from the liquid, thereby lengthening the path of H+, reducing its contact area with the solid, and thus reducing the reaction rate.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors.
v3-fos-license
2014-10-01T00:00:00.000Z
2006-05-23T00:00:00.000
16477743
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosmedicine/article/file?id=10.1371/journal.pmed.0030208&type=printable", "pdf_hash": "04c0c7deb5267b851bce5dbd83bbdb0b95ad2fcf", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1635", "s2fieldsofstudy": [ "Medicine", "Philosophy", "Biology" ], "sha1": "69ac3a9abf46ccf06f233344ddd3c7dfaed3a5b6", "year": 2006 }
pes2o/s2orc
The Limits of Reductionism in Medicine: Could Systems Biology Offer an Alternative?

In the first of a two-part series, Ahn and colleagues discuss the reductionist approach pervading medicine and explain how a systems approach (as advocated by systems biology) may complement reductionism.

Since Descartes and the Renaissance, science, including medicine, has taken a distinct path in its analytical evaluation of the natural world [1,2]. This approach can be described as one of "divide and conquer," and it is rooted in the assumption that complex problems are solvable by dividing them into smaller, simpler, and thus more tractable units. Because the processes are "reduced" into more basic units, this approach has been termed "reductionism" and has been the predominant paradigm of science over the past two centuries. Reductionism pervades the medical sciences and affects the way we diagnose, treat, and prevent diseases. While it has been responsible for tremendous successes in modern medicine, there are limits to reductionism, and an alternative explanation must be sought to complement it. The alternative explanation that has received much recent attention, due to systems biology, is the systems perspective (Table 1). Rather than dividing a complex problem into its component parts, the systems perspective appreciates the holistic and composite characteristics of a problem and evaluates the problem with the use of computational and mathematical tools. The systems perspective is rooted in the assumption that the forest cannot be explained by studying the trees individually. In order for a systems perspective to be fully appreciated, however, we must first recognize the reductionist nature of medical science and understand its limitations. For this reason, the first article in this series is dedicated to examining the reductionist approach that pervades medicine and to explaining how a systems approach (as advocated by systems biology) may complement it.
In the second article, we aim to provide a more practical discussion of how a systems approach would affect clinical medicine. We hope that these discussions can stimulate further inquiry into the clinical implications of systems principles.

Current Medical Science

While the implementation of clinical medicine is systems-oriented, the science of clinical medicine is fundamentally reductionist. This is shown in four prominent practices in medicine: (1) the focus on a singular, dominant factor, (2) emphasis on homeostasis, (3) inexact risk modification, and (4) additive treatments.

Focus on a singular factor. When the human body is viewed as a collection of components, the natural inclination of medicine is to isolate the single factor that is most responsible for the observed behavior. Much like a mechanic who repairs a broken car by locating the defective part, physicians typically treat disease by identifying that isolatable abnormality. Implicit within this practice is the deeply rooted belief that each disease has a potential singular target for medical treatment. For infection, the target is the pathogen; for cancer, it is the tumor; and for gastrointestinal bleeding, it is the bleeding vessel or ulcer. While the success of this approach is undeniable, it leaves little room for contextual information. A young immunocompromised man with pneumococcal pneumonia usually gets the same antibiotic treatment as an elderly woman with the same infection. The disease, and not the person affected by it, becomes the central focus. Our contemporary analytical tools are simply not designed to address more complex questions, and thus questions such as "how do a person's sleeping habits, diet, living condition, comorbidities, and stress collectively contribute to his or her heart disease?" remain largely unanswered.

Emphasis on homeostasis. For decades, homeostasis has been a vital, guiding principle for medicine. Claude Bernard in 1865 and later Walter B.
Cannon popularized this principle, expounding on the body's remarkable ability to maintain stability and constancy in the face of stress [3]. Since then, homeostasis has been incorporated into clinical practice. Illness is defined as a failed homeostatic mechanism, and treatment requires physicians to substitute for this failed mechanism by correcting deviations and placing parameters within the normal range. This corrective treatment approach is true for a range of medical conditions, from hypothyroidism to hypokalemia to diabetes. This interpretation of homeostasis, however, is biased by a reductionist viewpoint in two ways. First, the emphasis on correcting the deviated parameter (e.g., low potassium) belies the importance of systems-wide operations. Either alternate, less intuitive targets may be more effective, or correction of the deviated parameter may itself have harmful system-wide effects. Existing evidence demonstrating adverse effects of calcium for hypocalcemia [4,5] or blood pressure control for stroke-related hypertension [6] points to the limitations of this interpretation of homeostasis as a universal principle. Second, the exclusive focus on normal ranges belies the importance of dynamic stability. Because reductionism often disregards the dynamic interactions between parts, the system is often depicted as a collection of static components. Consequently, emphasis is placed on static stability and normal ranges and not on dynamically stable states, such as oscillatory or chaotic (seemingly random but deterministic) behavior. Circadian rhythms [7] are an example of oscillatory behavior, and complex heart rate variability [8-10] is an example of chaotic behavior. Failure to include these dynamic states in the homeostasis model may lead to treatments that are either ineffective or even detrimental.

Inexact risk modification. Since disease cannot always be predicted with certainty, health professionals must identify and modify risk factors.
The common, unidimensional, "one risk factor to one disease" approach used in medical epidemiology, however, has certain limitations. An example is hypertension, a known risk factor for coronary heart disease. Guidelines suggest pharmacological and lifestyle treatment for individuals with systolic blood pressure greater than 140. This strategy is supported by evidence from the Framingham Study, which showed that men between 35 and 64 years of age with systolic blood pressures greater than 140 were twice as likely to develop heart disease as individuals with systolic blood pressure less than 140 [11]. However, given that nearly 70% of the American population is not affected by hypertension, up to 30% of coronary artery disease develops in individuals with normal blood pressure [11]. Conceivably, a large number of people at small risk may give rise to more cases of disease than a small number of people at high risk. This observation is termed the prevention paradox [12]. To capture these missed cardiac events, the natural recourse is to progressively lower the blood pressure threshold for treatment. Consequently, the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure lowered its initial diastolic blood pressure threshold of 105 in 1977 to 90 in 1980, to 85 (for high normal) in 1992, and to 80 (for prehypertension) in 2003. The cost of such a strategy is the unnecessary treatment of individuals who would not have developed coronary disease in the first place. This problem originates from the constraints imposed by a one-risk to one-disease analysis and the inability to work with multiple risk factors and calculate their collective influences. If a more multidimensional analytical method were used, then more precise risk projections for individuals could be devised.

Additive treatments. In reductionism, multiple problems in a system are typically tackled piecemeal.
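The prevention paradox described above is easy to reproduce with round numbers. The prevalence and baseline risk below are assumed illustrative values, not the Framingham estimates; only the doubling of risk above the threshold is taken from the text.

```python
# Round-number illustration of the prevention paradox: a large low-risk group
# can contribute more cases than a small high-risk group, even at a relative
# risk of 2. Prevalence and baseline risk are assumed values for illustration.

pop = 1_000_000
p_hyper = 0.30          # assumed prevalence of hypertension
baseline_risk = 0.05    # assumed absolute CHD risk in normotensives
relative_risk = 2.0     # reported doubling of risk at SBP > 140

cases_normo = pop * (1 - p_hyper) * baseline_risk
cases_hyper = pop * p_hyper * baseline_risk * relative_risk

share_normo = cases_normo / (cases_normo + cases_hyper)
print(cases_normo, cases_hyper)  # 35000 vs 30000 cases
print(f"{share_normo:.0%} of cases arise in the low-risk majority")
```

Even with a doubled individual risk in the hypertensive minority, most cases come from the normotensive majority, which is exactly why lowering treatment thresholds captures more events at the cost of treating many who would never develop disease.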
Each problem is partitioned and addressed individually. In coronary artery disease, for example, each known risk factor is addressed individually, whether it be hyperlipidemia or hypertension. The strategy is also extended to coexisting diseases, such as hypothyroidism, diabetes, and coronary artery disease. Each disease is treated individually, as if the treatment of one disorder (such as coronary artery disease) had minimal effects on the treatment of another (such as hypothyroidism). While this approach is easily executable in clinical practice, it neglects the complex interplay between disease and treatment. The assumption is that the results of treatments are additive rather than nonlinear.

Limitations of Current Medical Science

The science underlying our medical practices, from diagnosis to treatment to prevention, is based on the assumption that information about individual parts is sufficient to explain the whole. But there are circumstances in which the complex interplay between parts yields a behavior that cannot be predicted by investigation of the parts alone. The failure to account for these circumstances is the common denominator in the explanations of why the aforementioned practices are, in many cases, inadequate. So how should these complexities be addressed? Is there a formal method that can explain how the pieces create the whole? How do we shift our lens from the parts to the system? The answers to these questions may come from a relatively new branch of science called systems biology [13-16]. Systems biology was conceived to address the molecular complexities seen in biological systems. One major impetus for its creation was the human genome project.
Human Genome Project

The completion of the human genome project in 2003, together with the development of high-throughput technologies such as DNA array chips, has led scientists to confront a challenge they could not address before; namely, how do genes interact to collectively create a system-wide behavior? The human genome contains 30,000 to 35,000 genes [17]. Although this number is just five times the number of genes in a unicellular eukaryote (e.g., approximately 6,000 genes in Saccharomyces cerevisiae) [18], the human genome encodes for nearly 100 trillion cells in the human body [19]. The richness of information derives not only from the genes themselves but also from the interactions between genes and between their respective products. The genes encode messenger RNAs, the messenger RNAs encode proteins, and the proteins act as catalysts or secondary messengers, among other diverse functions. Between each hierarchical level, modifications (e.g., alternative splicing) are made, and at each hierarchical level (e.g., transcription), thousands of molecules interact with other molecules to create a complex regulatory network. What becomes evident from these molecular analyses is that phenotypic traits emerge from the collective action of multiple individual molecules [20]. Therefore, the previous notion that a single genetic mutation is responsible for most phenotypic defects is overly simplistic. Complex diseases such as cancer, asthma, or atherosclerosis cannot generally be explained by a single genetic mutation.

Systems Biology: An Introduction

The need to make sense of complex genetic interactions has led some researchers to shift from a component-level to a system-level perspective. This novel approach incorporates the technical knowledge obtained from systems engineering, which began with Norbert Wiener's "cybernetics" in 1948 and Ludwig von Bertalanffy's "General Systems Theory" in 1969 [21,22].
The developing fields of chaos theory, nonlinear dynamics, and complex systems science, along with computational science, mathematics, and physics, have also contributed to the analytical armamentarium used by systems analysts. The intention of applying these theories to biological systems (termed "systems biology") is to understand how properties emerge from the nonlinear interaction of multiple components (Table 2). How does consciousness arise from the interactions between neurons? How do normal cellular functions such as cellular division, cell activation, differentiation, and apoptosis emerge from the interaction of genes? These questions highlight the difficulty of understanding complex biological systems: the moment the lens is directed toward the components of a biological system, the behaviors and properties of the whole system become obscure. Plainly said, one loses sight of the forest for the trees. Systems biology is an integrative approach that combines theoretical modeling and direct experimentation. Theoretical models provide insights into experimental observations, and experiments can provide data needed for model creation or can confirm or refute model findings. With this integrative approach, it becomes apparent that no single discipline is ideal to address systems biology. Scientists from molecular biology, computational science, engineering, physics, statistics, chemistry, and mathematics need to cooperate in order to explain how the biological whole materializes [23]. While the field of systems biology is young, it has been received with substantial enthusiasm. Many believe that, without a system-level understanding, the benefits of the genomic information cannot be fully realized. The perceived importance of this understanding is reflected in the investments made by major academic and industrial centers within the past few years [24].

Importance of Context, Space, and Time

How is systems-level understanding achieved? The answer likely lies in the dynamic and changing nature of biological networks. Unlike the static depiction of many wiring network representations, both the molecular concentrations and enzyme activities are continually changing as a result of influences from other molecular substrates. The network is an interactive and dynamic web in which the properties of a single molecule are contingent on its relationship to other molecules and the activities of those other molecules within the network. Therefore, the behavior of the system arises from the active interactions of these biological components. To elicit the system-wide behavior, three factors need to be considered: (1) context, which values the inclusion of all components partaking in a process; (2) time, which considers the changing characteristics of each component; and (3) space, which accounts for the topographic relationships between and among components. Box 1 and Figure 1 show an example of how systems methods, incorporating context, time, and space, allowed researchers to provide a mechanistic explanation for Escherichia coli chemotaxis. The three factors of context, time, and space play a vital role in systems science.

Box 1. Chemotaxis as an Example of Systems Biology's Application

E. coli chemotaxis is an example of systems biology's application (see Figure 1). Chemotaxis is defined as directed motion of a cell toward increasing (or decreasing) concentrations of a particular chemical substance. E. coli has been observed to migrate toward areas of higher aspartate concentrations through a series of "runs" and "tumbles." The "runs" are linear paths taken by the bacteria, while the "tumbles" are random rotations that reorient the bacteria. When bacteria reach higher concentrations of aspartate, time spent "running" in proportion to "tumbling" increases, the logic being that if higher concentrations of aspartate are encountered, the bacterium is on the right track and should continue in that direction. If the E. coli fails to detect increasing aspartate concentrations, the bacterium eventually exhibits "adaptation," where it returns to the baseline "tumble and run" activities. This ensures that it does not continually head in the wrong direction. Conventional medical methods have, for more than a decade, been able to identify the enzymes and molecules involved in the chemotactic pathway. Despite this, little was known about how the interactions in this pathway translated into its known chemotactic behavior, namely the ability of E. coli to "adapt" over a large range of aspartate concentrations. Spiro et al. [31] used systems methods in 1997 to provide a mechanistic explanation. They placed the involved enzymes into a mathematical equation (context), considered the relationship between these enzymes (space), and analyzed the activities of each enzyme with the use of computational tools (time). Increased temporal detection of aspartate led to a reduced autophosphorylation rate of the aspartate receptor. This effect reduced the tumbling rate and increased the running time. When there was no increased detection of aspartate, methylation of the aspartate receptor occurred, which increased the autophosphorylation rate and caused the E. coli to return to prestimulus tumble-and-run activities (adaptation). Importantly, this adaptive behavior occurred at different aspartate concentrations, explaining how E. coli does not perpetually exist in an excited state, even at higher aspartate concentrations. Similar conceptual breakthroughs have been obtained with the use of systems methods in other biological phenomena, such as bacteriophage lysis-lysogeny [32], biological oscillations [33,34], circadian rhythms [35,36], and Drosophila development [37][38][39]. In these situations, the incorporation of context, time, and space into the equation has provided information not otherwise obtained through structural information alone.
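The exact adaptation described in Box 1 can be illustrated with a toy integral-feedback model: a fast activity variable responds instantly to the attractant signal, while a slow methylation-like variable integrates the deviation from baseline and returns activity to its prestimulus level. This is a deliberately minimal sketch for intuition, not the Spiro et al. model; all variable names and rate constants are invented.

```python
import numpy as np

def simulate_adaptation(attractant, k=0.5, dt=0.01, t_end=60.0):
    """Toy exact-adaptation model: activity a responds instantly to the
    attractant signal, while a slow variable m integrates activity away
    from baseline (integral feedback)."""
    a, m = 0.0, 0.0
    activity = []
    for t in np.arange(0.0, t_end, dt):
        a = m - attractant(t)   # fast response: more attractant -> lower activity
        m += dt * (-k * a)      # slow feedback: drives activity back to zero
        activity.append(a)
    return np.array(activity)

# Step increase in attractant at t = 10: activity (tumbling signal) drops,
# then adapts back toward its prestimulus baseline
act = simulate_adaptation(lambda t: 1.0 if t >= 10.0 else 0.0)
```

A step increase in attractant produces a transient drop in activity (fewer tumbles, longer runs) followed by a return to baseline regardless of the size of the step, which is the hallmark of exact adaptation.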
Systems biologists consequently use tools such as differential equations, diffusion functions, computational models, and high-throughput tools to incorporate one or more of these factors to address a research question. This approach differs from traditional medical methods, where the central focus is elaborating the instantaneous property of a component involved in a disease process. In many medical models, the process of data extraction, such as obtaining a serum glucose level or blood pressure, can lead to loss of information on time, space, or context. Systems biologists contend that this loss entails loss of rich information that would otherwise contribute to a better understanding of the systemic and dynamic behavior of the human body.

[Figure 1. E. coli has been observed to migrate toward areas of higher aspartate concentrations through a series of "runs" and "tumbles" (see Box 1). Autophosph, autophosphorylation.]

Systems Biology Concepts

Several concepts have emerged in systems biology to describe properties occurring at the systems level. One prominent concept is robustness, defined as the ability to maintain stable functioning despite various perturbations [25,26]. Natural systems specifically demonstrate an uncanny penchant for robustness, which, as many have argued, is necessary for natural systems to survive and procreate [27]. Robustness is attained by five described mechanisms: feedback control, structural stability, redundancy, modularity, and adaptation (see Box 2) [13,28]. Biological systems across all scales, from cells to organisms, rely on a combination of these mechanisms to maintain a semblance of stability. The human body is no exception. The stability discussed in systems biology is distinct from the stability commonly perceived in clinical medicine. Medical practitioners often picture stability as an unwavering entity such that values are maintained within a specific, confined range.
But stability in systems biology is revealed dynamically, and it is the behavior of the system rather than the state of the system that remains consistent. This dynamic stability can assume many forms, including homeostatic, bistable (having two stable states), oscillatory, or chaotic [29]. Normal biological functions can be classified into one of these dynamic behaviors: for instance, bacteriophage lysis-lysogeny as bistable, circadian rhythms as oscillatory, or heart rate variability as chaotic. This varied perspective of stability is more extensive than the commonly accepted notion of homeostasis and may ultimately influence how treatments are deliberated.

Lessons from Systems Biology

The fundamental disconnect that exists between clinical medicine and systems biology largely stems from their disparate worldviews: one focuses on the parts and the other on the systems. As a consequence, the factors of time, space, and context, which are considered vital for a system-level understanding, are not assigned the same level of importance in medicine as they are in systems biology. Moreover, system-level concepts such as robustness, stability, and variability do not have meaningful equivalents in the medical vernacular. The incorporation of such concepts into medicine may help address certain limitations and greatly enhance its therapeutic potential. The second article in this series will explore how systems medicine may be realized in practice.

Box 2

Feedback control: Serves to correct deviations and restores the system to its natural behavior.

Structural stability: Accounts for the stability that arises from the very nature of the network structure. For instance, the World Wide Web was shown to be resistant to random attacks on Web sites by virtue of its organization [30].

Redundancy: Allows functionally equivalent units to substitute for one another in the event of a failure.
Modularity: Prevents amplification of a perturbation by dividing function or structure into subunits or modules.

Adaptation: Promotes survival and functioning in a variety of environmental conditions.
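Feedback control, the first of the robustness mechanisms listed in Box 2, can be sketched in a few lines: a controller measures the deviation of a state variable from its set point and applies a proportional correction each step. The set point, gain, and perturbation below are invented for illustration and are not from the article.

```python
def simulate_feedback(setpoint=37.0, gain=0.8, steps=200, perturb_at=50):
    """Toy proportional feedback: each step applies a correction proportional
    to the deviation from the set point; a one-off perturbation is added
    midway to show the system being restored to its natural behavior."""
    x = setpoint
    trace = []
    for t in range(steps):
        if t == perturb_at:
            x += 3.0                  # external disturbance (e.g. a heat load)
        x += gain * (setpoint - x)    # feedback correction toward the set point
        trace.append(x)
    return trace

trace = simulate_feedback()
```

After the disturbance, the deviation shrinks geometrically (by a factor of 1 - gain per step), so the trace returns to the set point: the deviation is corrected, not merely tolerated.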
Population pharmacokinetics and electrocardiographic effects of dihydroartemisinin–piperaquine in healthy volunteers

Aims: The aims of the present study were to evaluate the pharmacokinetic properties of dihydroartemisinin (DHA) and piperaquine, potential drug–drug interactions with concomitant primaquine treatment, and piperaquine effects on the electrocardiogram in healthy volunteers.

Methods: The population pharmacokinetic properties of DHA and piperaquine were assessed in 16 healthy Thai adults using an open-label, randomized, crossover study. Drug concentration–time data and electrocardiographic measurements were evaluated with nonlinear mixed-effects modelling.

Results: The developed models described DHA and piperaquine population pharmacokinetics accurately. Concomitant treatment with primaquine did not affect the pharmacokinetic properties of DHA or piperaquine. A linear pharmacokinetic–pharmacodynamic model satisfactorily described the relationship between the individually corrected QT intervals and piperaquine concentrations; the population mean QT interval increased by 4.17 ms per 100 ng ml–1 increase in piperaquine plasma concentration. Simulations from the final model showed that monthly and bimonthly mass drug administration in healthy subjects would result in median maximum QT interval prolongations of 18.9 ms and 16.8 ms, respectively, and would be very unlikely to result in prolongation of more than 50 ms. A single low dose of primaquine can be added safely to the existing DHA–piperaquine treatment in areas of multiresistant Plasmodium falciparum malaria.

Conclusions: Pharmacokinetic–pharmacodynamic modelling and simulation in healthy adult volunteers suggested that therapeutic doses of DHA–piperaquine in the prevention or treatment of P. falciparum malaria are unlikely to be associated with dangerous QT prolongation.
Introduction

Dihydroartemisinin (DHA)-piperaquine is currently one of five artemisinin-based combination therapies (ACTs) recommended by the World Health Organization (WHO) for the treatment of Plasmodium falciparum malaria [1][2][3]. It has also proved to be well tolerated and effective in mass treatments and intermittent preventive therapies [4,5]. DHA is a potent antimalarial compound but it is rapidly eliminated from the systemic circulation (elimination half-life 1-2 h) [6][7][8]. By contrast, piperaquine has a large apparent volume of distribution and a long terminal elimination half-life (20-30 days). Thus, in the DHA-piperaquine ACT, the slowly eliminated piperaquine removes those parasites remaining after the 3-day course of DHA [9]. Artemisinin resistance in P. falciparum has emerged in South-East Asia [10,11], threatening current elimination efforts and leading to partner drug resistance. Mass drug administration with DHA-piperaquine is one approach to resistance containment, but proposed extensive use in healthy people emphasizes the need to assess potential cardiovascular toxicity risks [12,13]. Primaquine is the only available drug for the radical cure of Plasmodium vivax malaria. A single low dose of primaquine is also recommended by the WHO as a gametocytocide in acute P. falciparum malaria [4]. This single 0.25 mg base kg-1 dose is considered unlikely to cause serious toxicity in patients with glucose-6-phosphate dehydrogenase deficiency, so it should be given to all nonpregnant patients above 6 months of age with P. falciparum malaria in low transmission settings [14]. The potential for high doses of quinoline-related compounds to cause cardiovascular toxicity has been recognized since the first introduction of the cinchona alkaloids.
Quinidine, the diastereomer of quinine, is the prototype for medicines causing delayed ventricular repolarization, which is manifest as marked QT prolongation (once termed the 'quinidine effect') on the electrocardiogram (ECG). This results in both antiarrhythmic and proarrhythmic effects. QT prolongation may be associated with potentially lethal polymorphic ventricular tachycardia (i.e. torsades de pointes), particularly in patients with congenitally long QT intervals or those with other predisposing factors. The most extreme effects caused by antimalarial drugs occurred with halofantrine, which was clearly associated with sudden death [15]. Although QT prolongation is associated with several structurally related antimalarial agents, halofantrine is the only compound that has been associated with sudden unexplained death. Piperaquine is structurally similar to chloroquine, which also causes consistent QT prolongation [16]. Concerns have been raised regarding the potential for DHA-piperaquine to cause cardiotoxicity. Several studies have reported a significant QT prolongation associated with DHA-piperaquine treatment [17][18][19]. A recent study of a high piperaquine dose (50% increased dosage compared with standard treatment) in Cambodian soldiers reported a substantial prolongation of the Fridericia-corrected QT (QTcF) interval [20] and the study was halted because of cardiovascular safety concerns, although the machine read the QU rather than the QT intervals. A study in Cambodian children and adults with uncomplicated P. falciparum malaria showed a small but significant prolongation of the Bazett-corrected median QT (QTcB) interval of 11 [95% confidence interval (CI) 4, 18] ms after receiving a standard age-based dosage of DHApiperaquine [21]. 
A large multicentre, prospective, observational study in African patients receiving a standard 3-day treatment of DHA-piperaquine showed that only three out of 1002 evaluated patients had a QTcF interval above 500 ms, and less than 10% of patients had a maximum QTcF prolongation above 60 ms [22]. The interpretation of electrocardiographic changes during the treatment of malaria is confounded by systematic changes that occur during recovery and result in QT lengthening, so drug effects are better assessed in healthy subjects, who are also more representative of populations receiving mass treatments. The present study aimed to investigate the population pharmacokinetic properties of DHA and piperaquine, identify potential drug-drug interactions with primaquine and quantify the relationship between piperaquine exposure and QT prolongation in healthy volunteers, using a nonlinear mixed-effects modelling approach.

Study design

The study was conducted at the Faculty of Tropical Medicine, Mahidol University, Bangkok, Thailand. The clinical details and noncompartmental pharmacokinetic results of the study have been reported in full elsewhere [23]. Study approval was obtained from the ethics committee of the Faculty of Tropical Medicine, Mahidol University, Bangkok, Thailand (reference number TMEC 12-004, approval number MUTM 2012-009-01), and from the Oxford University Tropical Research Ethics Committee (OXTREC 58-11). The study was registered at Clinicaltrials.gov (NCT01525511, 16 January 2012). The methods used were in accordance with the approved guidelines. The study aims were explained in full to the volunteers, and written informed consent was obtained from all subjects before their participation. At admission, a full medical history was taken, a physical examination and complete blood count were carried out and blood glucose levels were measured.
Participants with malaria or with glucose-6-phosphate dehydrogenase (G6PD) deficiency, pregnant women and lactating women were excluded from the study. Safety was analysed based on adverse events, physical examination, vital signs, clinical laboratory parameters, 12-lead ECG and methaemoglobin levels. The study had an open-label, randomized, three-way, crossover design and was conducted in 16 healthy Thai volunteers. It was a descriptive pharmacokinetic-pharmacodynamic study and no formal sample size calculations were performed. However, 16 subjects were chosen on the basis of the observed variability in the pharmacokinetic parameters of the study drugs, and were therefore assumed to generate a reasonable degree of accuracy in parameter estimates. All volunteers received primaquine alone in the first phase, followed by a washout period of 1 week. In the second and third phases, volunteers received DHA-piperaquine alone and DHA-piperaquine coadministered with primaquine, in random order, with an intervening washout period of 8 weeks. Study drug regimens comprised two tablets of primaquine (each tablet containing 15 mg primaquine base) and three tablets of co-formulated DHA-piperaquine (each tablet containing 40 mg DHA and 320 mg piperaquine phosphate). Study drugs were administered in the morning, 30 min after a light meal (~200 kcal and 8 g fat), with a glass of water. Subjects rested for at least 20 min before ECG measurements were taken (ECG-1250 K, Nihon Kohden, Japan). 12-lead ECG measurements were performed twice before drug administration, and at 1, 2, 4, 8, 12 and 24 h after each study drug administration. The ECGs were recorded at 10 mm mV-1 sensitivity and 25 mm s-1 paper speed. Automatic readouts of all ECG measurements were collected, but all ECGs with a reported QT interval greater than 450 ms were manually adjudicated by a research physician (unblinded) and a cardiologist (blinded). Other abnormal ECG waveforms were read by a cardiologist.
Observed QT intervals were later corrected for heart rate by both the Fridericia and Bazett formulae [24]. Data-driven individual and study population correction factors were also evaluated (see section on methodology, below).

Drug quantification

Plasma concentrations of DHA and piperaquine were measured using solid-phase extraction followed by liquid chromatography coupled with tandem mass spectrometry [25,26]. Quality control samples at low, middle and high concentrations (5.87, 117 and 1880 ng ml-1 for DHA and 4.50, 20.0 and 400 ng ml-1 for piperaquine) were analysed in triplicate within each batch of study samples, to ensure the accuracy and precision of the drug assay. The relative standard deviations (% CV) were 3.49%, 2.54% and 1.87% for the DHA quality control samples and 4.76%, 2.60% and 2.82% for the piperaquine quality control samples. The lower limit of quantification (LLOQ) was set to 2.00 ng ml-1 for DHA and 1.50 ng ml-1 for piperaquine. The laboratory is a participant in the QA/QC proficiency testing programme supported by the Worldwide Antimalarial Resistance Network [27].

Population pharmacokinetic analysis

DHA and piperaquine plasma concentrations were transformed into their natural logarithms and analysed using a nonlinear mixed-effects modelling approach in NONMEM version 7.3 (Icon Development Solution, Ellicott City, MD, USA). Pirana version 2.9.0 [28], Perl-speaks-NONMEM version 3.5.3 (PsN) [29] and Xpose version 4.0 [30] were used for automation, model evaluation and diagnostics during the model-building process. The first-order conditional estimation method with interactions (continuous data only) or the Laplacian estimation method (a combination of continuous and categorical data) was used throughout modelling and simulation. Piperaquine concentrations below the LLOQ were omitted, as only 2.3% of the samples were measured to be below this level.
However, a relatively large fraction of DHA concentrations was below the LLOQ (15% of all data, and 7.0% of data in the elimination phase). Therefore, two LLOQ methods were evaluated during the model-building process [31]. Data below the LLOQ were either omitted (M1 method) or modelled as categorical data (M3 method). Model fitness was evaluated primarily by the objective function value (OFV; calculated by NONMEM as proportional to -2 × log-likelihood of the data). Model discrimination between two hierarchical models was determined by a likelihood ratio test, based on the chi-square distribution of the OFV (i.e. P < 0.05 corresponds to ΔOFV > 3.84 at 1 degree of freedom difference). One-, two-, three- and four-compartment structural disposition models were evaluated for DHA and piperaquine. The best-performing model was used to evaluate the absorption characteristics of DHA and piperaquine (i.e. first-order absorption with and without lag time, zero-order absorption and transit absorption). The transit compartment absorption model is a more mechanistic description of delayed absorption compared with the dichotomous properties of a lag-time model [32]. Pharmacokinetic parameters were assumed to be log-normally distributed and were therefore implemented with an exponential between-subject variability, as in Equation 1:

θ_i = θ × exp(η_i,θ) (1)

where θ_i is individual i's parameter estimate, θ is the typical parameter estimate of the population and η_i,θ is the between-subject variability for individual i, which is normally distributed with a zero mean and variance ω². The between-occasion variability (the variability between administrations of the study doses) was also investigated, as in Equation 2:

θ_i,j = θ × exp(η_i,θ + κ_j,θ) (2)

where κ_j,θ is the between-occasion variability of the pharmacokinetic parameter θ at the jth dosing occasion.
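The likelihood ratio test used for model discrimination can be computed directly: for nested models differing by one parameter, the drop in OFV is chi-square distributed with 1 degree of freedom under the null hypothesis, and the chi-square(1) survival function has the closed form erfc(sqrt(x/2)). A sketch (the function name is ours):

```python
import math

def lrt_pvalue_df1(ofv_reduced, ofv_full):
    """P-value of the likelihood ratio test between nested models differing
    by one parameter: the drop in OFV (-2 x log-likelihood) is chi-square(1)
    distributed under the null, with survival function erfc(sqrt(x / 2))."""
    delta_ofv = max(ofv_reduced - ofv_full, 0.0)  # full model fits at least as well
    return math.erfc(math.sqrt(delta_ofv / 2.0))

# A drop of 3.84 OFV units sits right at the 5% significance threshold
p = lrt_pvalue_df1(1000.0, 996.16)
```

This is why a ΔOFV greater than 3.84 (at one extra parameter) was taken as significant at the 5% level; larger drops, such as the ΔOFV of 26.0 reported below for DHA, correspond to far smaller P-values.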
Between-subject and between-occasion variability was also evaluated on the relative bioavailability, which was fixed to unity for the population, to allow for the observed high variability in the absorption of the study drugs. Estimated between-subject and between-occasion variabilities below 10%, or those estimated with poor precision (RSE > 50%), were fixed to zero. Residual unexplained variability was modelled as an additive error on the log-transformed observed concentrations (equivalent to an exponential error on an arithmetic scale). Body weight was introduced into the pharmacokinetic model as a fixed allometric function on all volume, clearance and distribution parameters, centred on the median body weight (64 kg) of the study population, as in Equations 3 and 4 [33]:

CL_i = CL × (BW_i / 64)^0.75 (3)

V_i = V × (BW_i / 64)^1.0 (4)

where CL_i represents the individual clearance value, CL represents the typical population value of clearance, BW_i represents the individual body weight, V_i represents the individual volume of distribution, and V represents the typical population value of the volume of distribution. All continuous and categorical covariates (aspartate aminotransferase, alanine aminotransferase, alkaline phosphatase, haemoglobin, blood urea nitrogen, serum creatinine level, albumin level, primaquine coadministration and age) were investigated using a stepwise forward inclusion (P < 0.05), followed by stepwise backward elimination (P > 0.001). A strict P-value of 0.001 for the backward elimination was used as there were relatively few subjects in the present study [34]. Gender was not evaluated as a covariate owing to the substantial imbalance between male and female subjects (five males out of 16 subjects).
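Equations 1-4 together define how individual, occasion-specific parameter values are generated: allometric scaling of the typical value, multiplied by log-normal between-subject and between-occasion terms. A minimal sketch; the 64 kg reference weight is from the study, but the population values and variabilities below are invented placeholders, not the fitted estimates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented population values for illustration (not the fitted estimates)
theta_cl, theta_v = 60.0, 800.0   # typical clearance (l/h) and volume (l)
omega_cl, kappa_cl = 0.30, 0.15   # between-subject / between-occasion SDs
bw_ref = 64.0                     # median body weight of the study population

def individual_cl(bw, eta, kap):
    # Equations 2 and 3: allometric scaling times log-normal variability
    return theta_cl * (bw / bw_ref) ** 0.75 * np.exp(eta + kap)

def individual_v(bw, eta):
    # Equations 1 and 4: volume scales linearly with body weight
    return theta_v * (bw / bw_ref) ** 1.0 * np.exp(eta)

# One 70 kg subject sampled on two dosing occasions: eta is shared across
# occasions, while a fresh kappa is drawn per occasion
eta_cl = rng.normal(0.0, omega_cl)
cl = [individual_cl(70.0, eta_cl, rng.normal(0.0, kappa_cl)) for _ in range(2)]
```

Because the variability terms enter through an exponential, the individual parameters remain strictly positive and are log-normally distributed around the allometrically scaled typical value.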
The effect of primaquine coadministration was also modelled separately, using a full covariate approach in which primaquine coadministration was implemented as a categorical covariate on all pharmacokinetic parameters (except relative bioavailability, owing to identifiability issues) in the final pharmacokinetic model. The full covariate models were bootstrapped (n = 1000) to determine a potentially influential drug-drug interaction on primary and secondary pharmacokinetic parameters. A primaquine-dependent change of more than ±25% in parameter estimates was deemed a clinically relevant drug-drug interaction. Potential model misspecification and systematic errors were evaluated by basic goodness-of-fit diagnostics. Eta and epsilon shrinkages were used to assess the ability to detect model misspecifications in goodness-of-fit diagnostics [35]. Model robustness and nonparametric confidence intervals were evaluated by bootstrap diagnostics (n = 1000). Predictive performances of the final models were illustrated by prediction-corrected visual and numerical predictive checks (n = 2000) [36]. The 5th, 50th and 95th percentiles of the observed concentrations were overlaid with the 95% CIs of each simulated percentile, to detect model bias.

Population cardiac electrophysiological pharmacodynamics of piperaquine

The observed QT interval must be corrected for heart rate in order to compare QT intervals between and within patients. Observed QT measurements were corrected by the traditionally used Bazett and Fridericia formulae (i.e. fixing the exponent α to 1/2 and 1/3, respectively, in Equation 5):

QTc = QT / RR^α (5)

Furthermore, all observed individual QT and RR intervals from the placebo arm (i.e. the primaquine-alone arm) in a subject were used to determine the optimal individual QT correction factor (α) for each subject, using an ordinary least-squares fit of Equation 5.
The calculated individual correction factor for a particular subject was then applied to all measured QT intervals for that subject, in order to generate corrected QT intervals (QTc) [24]. The appropriateness of the applied correction methods was evaluated by individual linear regression analysis of QTc vs. RR. The relationship between piperaquine drug concentrations and the QRS, JT (i.e. QT - QRS) and QT intervals was evaluated with ordinary linear regression to assess the most appropriate modelling approach. The individually corrected QT interval was deemed the most appropriate measurement (see results section, below) in this particular analysis and was therefore carried forward throughout modelling and simulation. QTc prolongations (ΔQTc) were calculated by subtracting the baseline QTc interval (QTc_Baseline) from the observed QTc intervals after study drug administration (QTc_Post-dose), as in Equation 6:

ΔQTc = QTc_Post-dose - QTc_Baseline (6)

Double-delta corrections are commonly performed to adjust for the observed circadian rhythm of ECG measurements [37]. Thus, double-delta-corrected QTc prolongations (ΔΔQTc) were calculated by subtracting the placebo-arm ΔQTc from the treatment-arm ΔQTc, as in Equation 7:

ΔΔQTc = ΔQTc_treatment - ΔQTc_placebo (7)

The primaquine-alone arm was used as the placebo arm. Although primaquine can be shown in experimental conditions to affect ion channels, and notably to block the human ether-à-go-go-related gene (hERG) potassium channel, the active concentrations are substantially higher than those likely to occur in humans taking low oral doses [38,39]. Furthermore, there was no correlation between ΔQTc and primaquine concentrations in the primaquine-alone arm when evaluated using ordinary linear regression (i.e. the slope did not deviate significantly from zero). The calculated ΔΔQTc prolongations were used as the pharmacodynamic endpoint. Individually predicted piperaquine concentrations (C_P) were obtained by imputing individual pharmacokinetic parameter estimates directly into the pharmacodynamic model.
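The heart-rate correction of Equation 5 (QTc = QT / RR^α, RR in seconds) and the subject-specific exponent fit can be sketched as follows: taking logarithms makes the relationship linear, so an ordinary least-squares fit of log(QT) against log(RR) recovers the individual α. The synthetic QT-RR recordings below are invented for illustration:

```python
import numpy as np

def qtc(qt_ms, rr_s, alpha):
    # Equation 5: QTc = QT / RR^alpha, with RR in seconds
    # (Bazett: alpha = 1/2; Fridericia: alpha = 1/3)
    return qt_ms / rr_s ** alpha

def individual_alpha(qt_ms, rr_s):
    # OLS fit of log(QT) on log(RR): the slope is the subject-specific
    # heart-rate correction exponent
    slope, _intercept = np.polyfit(np.log(rr_s), np.log(qt_ms), 1)
    return slope

# Synthetic drug-free recordings (invented) following QT = 400 * RR^0.4
rr = np.array([0.6, 0.8, 1.0, 1.2])
qt = 400.0 * rr ** 0.4
alpha = individual_alpha(qt, rr)   # recovers the exponent used to generate the data
```

At an RR interval of 1 s (heart rate 60 beats/min) every choice of α leaves the QT interval unchanged; the corrections only matter, and diverge, away from that heart rate.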
The relationship between drug exposure and QT prolongation was evaluated initially with a linear direct-response pharmacodynamic model, as in Equation 8:

ΔΔQTc_i = (θ_1 + η_1) + θ_2 × C_P + ε_i (8)

where θ_1 represents the typical baseline ΔΔQTc prolongation, η_1 is the normally distributed between-subject variability, θ_2 is the slope of the exposure-response relationship and ε_i is the normally distributed residual error. Different exposure-response relationships (i.e. power model and E_max model) were also investigated during the model development process. Hysteresis was investigated to account for a possible delayed exposure-response relationship (i.e. turnover and link models). Age, gender, electrolyte levels (i.e. potassium and sodium) at admission and concomitant primaquine administration were evaluated as linear covariates on the piperaquine-related ΔΔQTc prolongation, using a stepwise addition-deletion approach (as described above). Model evaluation and diagnostics were performed in the same manner as for the pharmacokinetic modelling approach. The final population pharmacokinetic-pharmacodynamic model was used to simulate QTc prolongation at different piperaquine concentrations. Single piperaquine doses ranging from 100 mg to 2000 mg were simulated (a total of 20 000 simulated subjects) in order to cover a wide range of possible piperaquine concentrations. Simulated piperaquine concentrations and the associated ΔΔQTc prolongations were overlaid with the observed data to determine the piperaquine concentrations resulting in predicted clinically important QT prolongations (>60 ms). The final pharmacokinetic-pharmacodynamic model of piperaquine was also used to simulate the expected QT prolongation in mass drug administration scenarios. A total of 1000 healthy subjects (body weight of 60 kg), receiving a standard 3-day treatment regimen every 4 weeks or every 8 weeks for a total duration of 1 year, were simulated.
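A simulation from a linear model of this form can be sketched with the population slope reported in the abstract (4.17 ms QTc prolongation per 100 ng ml-1 piperaquine); the baseline, between-subject and residual variability values below are invented placeholders, not the fitted estimates:

```python
import numpy as np

rng = np.random.default_rng(7)

theta1 = 0.0              # typical baseline delta-delta QTc (ms), placeholder
theta2 = 4.17 / 100.0     # reported slope: 4.17 ms per 100 ng/ml piperaquine
omega1, sigma = 4.0, 5.0  # between-subject and residual SDs (ms), placeholders

def simulate_ddqtc(cp_ng_ml, n=1000):
    # Equation 8: ddQTc_i = (theta1 + eta1) + theta2 * Cp + eps_i
    eta1 = rng.normal(0.0, omega1, n)
    eps = rng.normal(0.0, sigma, n)
    return (theta1 + eta1) + theta2 * cp_ng_ml + eps

dd = simulate_ddqtc(500.0)
```

At a piperaquine concentration of 500 ng ml-1 the typical effect is 4.17 × 5 ≈ 20.9 ms of prolongation, with the spread around that value governed entirely by the assumed variability terms.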
The maximum QT prolongation in each simulated subject after each round of drug administration was visualized in order to characterize the likely effects of DHA-piperaquine in malaria elimination campaigns. A total of 1000 hypothetical patients (body weight of 60 kg), receiving a standard 3-day treatment regimen of DHA-piperaquine [4], were also simulated based on a population pharmacokinetic model in nonpregnant women with uncomplicated P. falciparum malaria, to evaluate the expected QT prolongation in a patient population [8].

Nomenclature of targets and ligands

Key protein targets and ligands in this article are hyperlinked to corresponding entries in http://www.guidetopharmacology.org, the common portal for data from the IUPHAR/BPS Guide to PHARMACOLOGY [40], and are permanently archived in the Concise Guide to PHARMACOLOGY 2015/16 [41].

Results

The frequent sampling and the crossover design produced ideal data for pharmacokinetic-pharmacodynamic modelling. All 16 volunteers completed the study protocol and tolerated the treatments well, with no reported serious adverse events. The study was conducted between 18 June 2012 and 2 November 2012. The clinical safety results have been published in full elsewhere [23]. The full demographic characteristics are presented in Table 1.

Population pharmacokinetic properties of DHA

A total of 384 DHA plasma samples were collected. A two-compartment disposition model proved superior to a one-compartment model, both when omitting concentrations measured below the LLOQ (ΔOFV = -26.0) and when implementing them as categorical data using the M3 method (ΔOFV = -12.3). This confirmed that the improved model fit was not because of data censoring. Adding an extra third disposition compartment resulted in a minor improvement in model fit (ΔOFV = -6.55; P > 0.01).
In addition, the terminal half-life estimated from the three-compartment model was somewhat long (median half-life of 3.11 h) compared with previous reports (0.145-2.5 h) [7,8]. Therefore, the two-compartment disposition model was carried forward. Omitting concentrations below the LLOQ did not show any model misspecification in the fraction of censored observations and resulted in similar model performance to that using the M3 method. The approach of omitting concentrations below the LLOQ was therefore deemed appropriate. A transit compartment absorption model with six transit compartments was superior to all other absorption models evaluated (ΔOFV > −258). Estimating both the transit rate between transit compartments and the absorption rate from the last transit compartment to the central compartment resulted in a significantly improved model fit compared with when setting them to be equal (ΔOFV = −17.6). Implementing body weight as a fixed allometric function on all clearance and volume parameters did not improve the model fit (ΔOFV = 0.819). However, it was retained in the final model based on the strong biological prior and previously published results [7]. No significant covariates were identified in the stepwise covariate approach. The observed data showed substantial between-occasion variability in the absorption of DHA, with additional between-subject variability in the elimination clearance of DHA. The final model showed a satisfactory goodness of fit (Figure 1) and predictive performance, as illustrated by the visual predictive check (Figure 2A). Eta and epsilon shrinkages were generally low (<20%) except for the absorption rate constant (37.6% and 23.7% shrinkage on study occasions 1 and 2, respectively). A numerical predictive check (n = 2000) resulted in 1.84% (95% CI 1.23%, 10.4%) and 3.99% (95% CI 1.53%, 10.1%) of DHA observations below and above, respectively, the simulated 90% prediction interval.
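The transit-compartment absorption structure described above can be sketched as a simple forward simulation. This is an illustration only: the disposition part is collapsed to a single compartment with first-order elimination (the study model is two-compartment), and all rate constants are hypothetical:

```python
import numpy as np

def simulate_transit_absorption(dose, ktr, ka, ke, n_transit=6,
                                t_end=20.0, dt=0.001):
    """Euler simulation of a transit-compartment absorption model:
    dose -> n_transit transit compartments (rate ktr between them)
    -> central compartment via ka from the LAST transit compartment
    -> first-order elimination ke from the central compartment.
    Note that ktr and ka are estimated separately, as in the text.
    Returns the time grid and central-compartment amount."""
    n_steps = int(t_end / dt)
    transit = np.zeros(n_transit)
    transit[0] = dose                 # dose enters the first transit compartment
    central = 0.0
    t = np.linspace(0.0, t_end, n_steps + 1)
    central_amount = np.zeros(n_steps + 1)
    for k in range(1, n_steps + 1):
        d_transit = np.empty(n_transit)
        d_transit[0] = -ktr * transit[0]
        d_transit[1:-1] = ktr * (transit[:-2] - transit[1:-1])
        d_transit[-1] = ktr * transit[-2] - ka * transit[-1]
        d_central = ka * transit[-1] - ke * central
        transit += dt * d_transit
        central += dt * d_central
        central_amount[k] = central
    return t, central_amount
```

With ke set to zero the chain simply delivers the full dose to the central compartment, which makes the mass balance easy to check; the number of transit compartments controls how sharp or smeared the absorption delay is.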
Pharmacokinetic parameter estimates from the final model and corresponding secondary parameters are summarized in Table 2 and Table 3, respectively.

Population pharmacokinetic properties of piperaquine

A total of 623 piperaquine plasma samples were collected in the study. A three-compartment disposition model resulted in a significantly improved model fit compared with a two-compartment disposition model (ΔOFV = −297). No further improvement was seen with an additional disposition compartment (ΔOFV = −0.500). A transit compartment absorption model with two transit compartments was superior to all other models evaluated (ΔOFV > −452). There was no significant change in model fit when the transit rate between transit compartments and the absorption rate from the last transit compartment to the central compartment were set to be equal (ΔOFV = 0.564). Implementing body weight as a fixed allometric function on all clearance and volume parameters resulted in an improved model fit (ΔOFV = −5.95). No other covariates were significant in the stepwise covariate approach. The observed data showed substantial between-subject and between-occasion variability in the absorption of piperaquine, with additional between-subject variability in the elimination clearance, the inter-compartmental clearance and the central volume of distribution of piperaquine. The final model showed a satisfactory goodness of fit (Figure 1) and predictive performance, as illustrated by the visual predictive check (Figure 2B). Moderate eta and epsilon shrinkages were seen in the final model (i.e. between 20% and 30%) except for clearance, which showed a somewhat higher shrinkage of 35.3%. A numerical predictive check (n = 2000) resulted in 3.79% (95% CI 2.14%, 8.73%) and 4.12% (95% CI 2.31%, 8.73%) of piperaquine observations below and above, respectively, the simulated 90% prediction interval.
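The fixed allometric function on body weight takes the standard power-law form; a minimal helper (the parameter values in the usage note are illustrative, and the 64 kg reference corresponds to the typical subject described in Table 2):

```python
def allometric(theta_pop, body_weight, ref_weight=64.0, exponent=0.75):
    """Fixed allometric scaling of a structural parameter: clearances
    scale with (WT/ref)^0.75 and volumes with (WT/ref)^1.0, so the
    exponent argument is 0.75 for CL/F and Q/F, 1.0 for V/F."""
    return theta_pop * (body_weight / ref_weight) ** exponent
```

For example, with a hypothetical population clearance of 10 units, a 32 kg individual would be assigned a clearance of 10 × (32/64)^0.75 ≈ 5.95 units, while volume terms would simply halve.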
Pharmacokinetic parameter estimates from the final model and corresponding secondary parameters are summarized in Table 2 and Table 3, respectively.

Drug-drug interactions

Primaquine coadministration did not have a significant impact on the pharmacokinetic properties of DHA or piperaquine when evaluated with a stepwise covariate approach. In the full covariate approach for DHA, the impact of primaquine coadministration was less than ±25% on primary pharmacokinetic parameters (Figure 3A). The full covariate approach for piperaquine resulted in a median 37.3% (95% CI −67.6%, 33.7%) decrease in central volume of distribution and a median 26.8% (95% CI −21.2%, 62.5%) increase in mean transit absorption time during concomitant administration of primaquine (Figure 3B). However, the 95% CIs for these covariate effects included a zero effect, so a lack of effect could not be excluded. The impact on other primary pharmacokinetic parameters was less than ±25%. Furthermore, no substantial differences were evident in secondary exposure parameters of DHA and piperaquine in the full covariate approach (Figure 3A and B).

Electrocardiographic effects of piperaquine

Individually estimated subject-specific QT corrections were slightly less affected by heart rate compared with the standard Bazett and Fridericia corrections. Individual regression of QTc and RR intervals resulted in 6/16, 6/16 and 5/16 individuals with regression slopes significantly different from zero using Bazett, Fridericia and individually determined corrections, respectively. Therefore, individual corrections were applied to the observed QT interval. The initial concentration-response analysis showed no significant relationship between piperaquine concentrations and ΔQRS (P = 0.520).
Hence, ΔJTc and ΔQTc showed an almost identical concentration-response relationship (data not shown), and ΔQTc was therefore carried forward in the analysis as this measurement is commonly reported in the literature. A linear direct-response model resulted in an adequate description of the relationship between piperaquine exposure and QTc prolongation. The linear model showed better model fit and predictive performance compared with the other models evaluated (i.e. the power model and E_MAX model). The implementation of a delayed response model was not supported by the observed data and resulted in low parameter precision. The population baseline ΔΔQTc prolongation was estimated close to zero and was therefore fixed to zero, while between-subject variability in this parameter was retained. No major between-subject variability was observed in the other pharmacodynamic parameters in the final model. Primaquine did not affect the relationship, and no other significant covariates (age, gender and electrolyte levels) were identified in the stepwise covariate approach. Within the concentration range measured, the final model resulted in a population mean increase in ΔΔQTc of 4.17 (95% CI 0.973, 43.1) ms with every 100 ng ml-1 increase in piperaquine plasma concentration. The final model showed a satisfactory goodness of fit (Figure 1) and predictive performance, as illustrated by the visual predictive check (Figure 2C and 2D). Eta shrinkage of the slope parameter was moderate (26.5%) and epsilon shrinkage was low (2.39%). A numerical predictive check (n = 2000) resulted in 4.50% (95% CI 1.80%, 8.56%) and 4.05% (95% CI 1.80%, 8.56%) of observed ECG measurements below and above, respectively, the simulated 90% prediction interval. Pharmacodynamic parameter estimates from the final model are summarized in Table 2.
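The subject-specific heart-rate correction used above can be sketched as a per-subject regression of log(QT) on log(RR) in drug-free baseline data, giving an individual correction exponent. This is an illustration of the general individual-correction approach, not necessarily the study's exact procedure:

```python
import numpy as np

def individual_qt_correction(qt_ms, rr_s):
    """Estimate a subject-specific correction exponent alpha by linear
    regression of log(QT) on log(RR) (drug-free baseline beats), then
    return the corrected interval QTcI = QT / RR**alpha together with
    alpha. Bazett and Fridericia fix alpha at 0.5 and 1/3 instead."""
    qt = np.asarray(qt_ms, dtype=float)
    rr = np.asarray(rr_s, dtype=float)
    alpha, _intercept = np.polyfit(np.log(rr), np.log(qt), 1)
    return qt / rr ** alpha, alpha
```

A well-chosen individual exponent leaves QTcI essentially flat across RR, which is exactly the property checked in the regression-slope comparison above.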
Pharmacokinetic-pharmacodynamic model simulations, based on the assumption that a linear concentration-effect relationship continued at piperaquine plasma levels over 500 ng ml-1, showed that 95% of all subjects (i.e. 95% prediction interval) had a predicted QT prolongation below 60 ms at piperaquine concentrations below 1000 ng ml-1 (Figure 4A). Pharmacokinetic-pharmacodynamic model simulations, using previously published pharmacokinetic parameter estimates in patients with uncomplicated P. falciparum malaria, resulted in a predicted median QT prolongation of 11.2 (95% CI −15.6, 41.2) ms after standard 3-day DHA-piperaquine treatment, which is consistent with the current study (Figure 4B). Simulations of monthly and bimonthly mass drug administration regimens over a total duration of 1 year suggested that individually predicted maximum QT prolongations did not reach 50 ms in any subject [median 18.9 (95% CI −6.44, 49.0) ms after monthly treatment; median 16.8 (95% CI −11.0, 45.1) ms after bimonthly treatment] (Figure 4C and D).

Discussion

The antimalarial combination treatment of DHA-piperaquine has been used extensively and has shown excellent efficacy and tolerability [1,42]. However, concerns have been raised recently regarding the potential for cardiotoxicity because piperaquine, like many drugs in this class, causes delayed ventricular repolarization (manifested as electrocardiographic prolongation of the QT interval). The present study in healthy subjects assessed the pharmacokinetic properties of DHA and piperaquine, potential drug-drug interactions with concomitant primaquine treatment, and the QT prolongation associated with piperaquine treatment. The results were generally reassuring, and suggested that marked QT prolongation is highly unlikely to occur following standard doses of DHA-piperaquine.
Limitations of the study included the small number of participants, the fact that only Thai volunteers were included, and the gender imbalance (three male and 13 female). Thus, the modelling and simulation results should not be extrapolated directly to patients with malaria, and especially not to young children, without considering disease effects, body size differences and enzyme maturation in very young children. Furthermore, the relationship between QT prolongation and the risk of sudden death is not straightforward; the risk of arrhythmia associated with a long QT interval is clearly greater with some drugs than with others. Larger population-based pharmacokinetic-pharmacodynamic studies in patients with malaria and in healthy subjects are needed before final conclusions can be reached on the safety of DHA-piperaquine. The population pharmacokinetic properties of DHA were best described by a two-compartment disposition model with six transit compartments in the absorption phase. In previous studies, both one- and two-compartment disposition models have been used to describe the pharmacokinetic properties of DHA [8,[43][44][45]. The difference in the disposition models reported most likely results from the rapid disposition phase and the different sampling frequencies in the absorption and disposition phases. Sparse sampling is likely to mask an early disposition phase. However, the clinical impact of using a one- or two-compartment structure may well be very small, as long as the terminal elimination half-life is characterized accurately. The implementation of body weight as an allometric function on clearance and volume parameters has been reported in previous studies [7,8]. Even though body weight did not provide an improved model fit in the present study, it was retained as a covariate in the final model based on prior biological knowledge and to allow for extrapolation of the developed model to other populations, such as children.
No significant covariates were found in the present study, using a stepwise covariate approach. Modelling performed here demonstrated large variability in the absorption characteristics of DHA: large between-occasion variability in mean transit time (52.6%), absorption rate constant (89.0%) and relative bioavailability (35.9%). This might be due to the lipophilic physicochemical properties of DHA, resulting in variable absorption characteristics on different dosing occasions [7]. The final model showed a satisfactory goodness of fit and predictive performance (Figure 1 and 2A). Overall, pharmacokinetic parameter estimates were in agreement with those previously reported in healthy volunteers and patients with uncomplicated P. falciparum malaria [7,[43][44][45][46]. Piperaquine was described by a three-compartment disposition model, which is in agreement with recently published studies [8,[47][48][49][50]. The variable absorption characteristics of piperaquine were best described with two transit absorption compartments, compared with three or five in previous studies [8,48]. The difference in the number of transit compartments might be explained by different study designs and sampling frequencies during the absorption phase. Body weight, implemented as an allometric function on clearance and volume parameters, improved the model fit. It also has a strong biological prior and has been identified in previous studies [8,33,48]. No other significant covariates were found in the present study, using a stepwise covariate approach. The final model showed overall satisfactory goodness of fit and predictive performance (Figure 1 and 2B).
Modelling performed here demonstrated moderate variability in the absorption characteristics of piperaquine (below 35%). The overall pharmacokinetic parameter estimates were in agreement with previous studies in healthy volunteers and patients with P. falciparum malaria [8,23,48,50,51].

Table 2. Parameter estimates from the final population pharmacokinetic-pharmacodynamic model of dihydroartemisinin and piperaquine in healthy volunteers. ΔΔQTc, double-delta-corrected QTc prolongation; BASE, baseline; BOV, between-occasion variability; BSV, between-subject variability; CI, confidence interval; CL/F, oral clearance; %CV, coefficient of variation; F, relative bioavailability; ka, absorption rate constant from last transit compartment to central compartment; MTT, mean transit time; QP/F, inter-compartmental clearance; σPK, residual exponential error variance of drug measurements; σPD, residual additive error variance of ΔΔQTc prolongation; %RSE, relative standard error; SLOPE, slope parameter of the relationship between piperaquine concentration and ΔΔQTc prolongation; VC/F, apparent central volume of distribution; VP/F, apparent peripheral volume of distribution. *Between-occasion variability. (a) Computed population mean parameter estimates from NONMEM; parameter estimates are based on the typical individual in the population with a body weight of 64 kg. BSV and BOV are presented as the %CV, calculated as 100 × sqrt(exp(estimate) − 1). (b) Based on nonparametric bootstrap diagnostics (n = 1000 samples); parameter precision is presented as %RSE, calculated as 100 × (standard deviation/mean), and the 95% CI is calculated as the 2.5th to 97.5th percentiles of the bootstrap estimates.

The WHO suggested recently that a single low dose of primaquine (0.25 mg kg-1) be added to ACTs in order to reduce malaria transmission in low-transmission areas [4].
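The %CV convention given in the Table 2 footnote for lognormally distributed random effects, 100 × sqrt(exp(estimate) − 1), can be written as a one-line helper:

```python
import math

def omega2_to_cv_percent(omega2):
    """Convert a NONMEM-style variance estimate (omega^2) for a
    lognormally distributed random effect into a coefficient of
    variation in percent: 100 * sqrt(exp(omega^2) - 1)."""
    return 100.0 * math.sqrt(math.exp(omega2) - 1.0)
```

For small variances this is close to 100 × sqrt(omega²) (e.g. omega² = 0.1 gives about 32.4%CV versus the naive 31.6%), but the two diverge as the variance grows, which is why the exact lognormal formula is the one reported.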
The safety of a single low dose of primaquine has been demonstrated in both G6PD-deficient and G6PD-normal patients [52,53], and this regimen might be an important tool in malaria elimination efforts [54]. To the best of our knowledge, potential pharmacokinetic drug-drug interactions have not previously been evaluated formally with a modelling approach. This was assessed here with two different approaches. First, a bottom-up approach was performed by characterizing the impact of primaquine coadministration on each pharmacokinetic parameter using a stepwise addition and elimination approach. In the second approach, a top-down analysis was employed by including a categorical primaquine coadministration effect on all pharmacokinetic parameters simultaneously (i.e. the full covariate approach). Neither approach found any clinically significant drug-drug interactions between primaquine and DHA or piperaquine. However, the full covariate approach indicated a trend of decreasing inter-compartmental clearance and absorption rate constant of DHA when coadministered with primaquine. Similarly, a trend of decreasing central volume of distribution and increasing mean transit absorption time of piperaquine was seen when coadministered with primaquine. However, the 95% CIs of these effects spanned zero, and a lack of effect could not be excluded. The absence of clinically relevant drug-drug interactions with primaquine was further supported by the lack of substantial differences in secondary exposure parameters of DHA and piperaquine, with and without coadministration of primaquine, when using the full covariate approach. These results were expected as primaquine does not induce or inhibit any enzymes and the test compounds are metabolized through different enzymatic pathways [55][56][57][58]. The results of the present study were also in agreement with the noncompartmental analysis of the data, which did not identify any drug-drug interactions with primaquine [23].
Many antimalarial drugs have been associated with QT prolongation, which reflects a delay in the repolarization of the ventricular myocytes during the cardiac cycle [16]. This can predispose to the development of ventricular arrhythmias, most notably torsade de pointes, and sudden death. Drugs can increase the risk of QT prolongation by several mechanisms, most commonly by blocking the hERG potassium channel and other cardiac ion channels (i.e. those carrying calcium and sodium). The antimalarial drug halofantrine was withdrawn from clinical use because it induced marked QT prolongation and was associated with an increased risk of sudden death [59]. On the other hand, amiodarone blocks the hERG potassium and calcium/sodium channels, resulting in substantial QT prolongation, but carries a very low risk of degenerating into torsade de pointes [60]. The exact relationship between electrophysiological events, QT prolongation and the development of torsade de pointes has not been well characterized. DHA-piperaquine treatment has been associated with QT prolongation both in patients and healthy volunteers but not with torsade de pointes or sudden death [18][19][20][21][22][61]. Yet, few studies have investigated the relationship between piperaquine exposure and QT prolongation, and no previous studies have quantified this relationship using population pharmacokinetic-pharmacodynamic modelling [20,62]. No significant QT prolongation has been seen previously with the administration of primaquine [63,64]. The lack of a concentration-response relationship between primaquine concentrations and ΔQTc in the present study confirmed that primaquine, at these doses, has no impact on ventricular repolarization.

Table 3. Secondary parameter estimates of dihydroartemisinin and piperaquine in healthy volunteers with and without primaquine coadministration.
Although there is some evidence from experimental studies that artemether may prolong the QT interval, the general consensus is that the artemisinin derivatives at currently used doses have no significant effect. Thus, only piperaquine plasma concentrations were used to drive the pharmacodynamic QT prolongation in the present modelling exercise, and the administration of primaquine alone was used as a negative control arm. ΔΔQTc intervals were used in the pharmacodynamic model, to minimize the impact of heart rate and the naturally occurring circadian rhythm of the QT interval [65]. This also reduces regression towards the mean of the baseline QT interval, by subtracting the average of the individual baseline values of the QT intervals from the QT measurements. Therefore, a change in the ΔΔQTc interval should be attributed solely to the exposure to piperaquine. In the present study, a significant relationship between QT prolongation and piperaquine concentration was described accurately by a linear exposure-response model, which has also been seen previously [20,62]. Inclusion of electrolytes (potassium and sodium) or any other covariate did not have a significant effect in the model, most likely due to the fact that healthy volunteers were studied here. The final pharmacokinetic-pharmacodynamic model showed overall good diagnostic/predictive performance ( Figure 1 and Figure 2C) and the estimated slope was in agreement with that in previous studies [20,62], indicating that this model was suitable for simulations. There were no significant changes in other electrocardiographic intervals associated with drug administration. A drug-induced QT prolongation of less than 60 ms is generally accepted as a clinical cardiac safety stopping rule according to the US Food and Drug Administration (FDA) [66]. 
Simulations, using the final pharmacokinetic-pharmacodynamic model and assuming a continuous linear concentration-effect relationship, predicted that piperaquine concentrations below 1000 ng ml-1 would result in a QT prolongation of less than 60 ms in healthy volunteers (i.e. upper end of the 95% CI below 60 ms). A standard 3-day dosing regimen of 50 mg kg-1 of DHA-piperaquine given to pregnant and nonpregnant women with uncomplicated P. falciparum malaria reported a median maximum piperaquine concentration of 244 ng ml-1 (interquartile range 173-344 ng ml-1) [8]. Thus, standard treatment regimens should result in QT prolongations well below 60 ms and should be safe in a clinical setting. This was further supported by simulations [8], using the developed exposure-response model for QT prolongation.

Figure 3. Effect of primaquine coadministration on the pharmacokinetic parameters of dihydroartemisinin (A) and piperaquine (B) when using a full covariate approach. The top panels illustrate primary pharmacokinetic parameters and the lower panels illustrate secondary derived pharmacokinetic parameters. The y-axes represent the density of parameter estimates from 1000 bootstraps. The vertical dashed lines represent a covariate effect of ±25%, assumed to be clinically insignificant. conc., concentration; AUC0-24, area under the concentration-time curve from time zero to 24 h; AUC60 days, area under the concentration-time curve from time zero to 60 days; CMAX, maximum concentration; CL/F, oral clearance; Day 7 conc., day 7 concentration of piperaquine; F, relative bioavailability; ka, absorption rate constant from last transit compartment to central compartment; MTT, mean transit time; QP/F, inter-compartment clearance; TMAX, time to maximum concentration; t1/2, terminal half-life; VC/F, apparent central volume of distribution; VP/F, apparent peripheral volume of distribution.
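A back-of-the-envelope check of these figures using the estimated population slope (4.17 ms per 100 ng ml-1 of piperaquine):

```python
slope_ms_per_ng_ml = 4.17 / 100.0   # estimated population slope

# Typical-subject prolongation at the reported median patient Cmax
# of 244 ng/ml:
ddqtc_at_cmax = slope_ms_per_ng_ml * 244.0   # about 10.2 ms

# Concentration at which the typical prediction reaches the 60 ms
# threshold; between-subject variability is what pulls the conservative
# bound down to ~1000 ng/ml (upper end of the 95% prediction interval):
conc_at_60ms = 60.0 / slope_ms_per_ng_ml     # about 1440 ng/ml
```

So the typical prediction stays far below the stopping rule at clinically observed concentrations, and the margin quoted in the text (1000 ng ml-1) is the more conservative bound that also accounts for individual variability.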
Simulations of standard oral DHA-piperaquine 3-day treatment in patients with uncomplicated P. falciparum malaria resulted in a median QT prolongation of 6.50 (95% CI −18.6, 35.2) ms. The FDA threshold for regulatory concern is 5 ms (with the upper limit of the 95% CI being 10 ms) for novel drugs. Even though the QT prolongation of piperaquine shows a somewhat inflated confidence interval, it should not pose a clinical concern at therapeutic concentrations [66]. DHA-piperaquine is a promising candidate for mass drug administration and malaria elimination strategies because of its long terminal elimination half-life and subsequent long post-dose prophylactic effect [9,67]. However, the long half-life of piperaquine results in accumulation, with a 336% (range 271-402%) and 267% (range 146-381%) increase in piperaquine trough concentrations at week 36 compared with week 4 after repeated monthly and bimonthly treatment doses, respectively [67]. It is therefore necessary to evaluate long-term cardiac safety before implementation in clinical settings. Simulations of mass drug administration in 1000 healthy subjects in South-East Asia receiving standard 3-day DHA-piperaquine treatment, either monthly or bimonthly, predicted QT prolongations of less than 60 ms in all subjects (Figure 4C and D). The simulations predicted minimal accumulation of QT prolongation over the 12 months, owing to the relatively flat slope of the exposure-response relationship (a 4.17 ms increase for every 100 ng ml-1 increase in piperaquine concentration). A small difference between the regimens was noted, with a median QT prolongation with the monthly and bimonthly regimens of 18.9 (95% CI −6.44, 49.0) ms and 16.8 (95% CI −11.0, 45.1) ms, respectively.
In summary, simulations performed here with use of the developed pharmacokinetic-pharmacodynamic model suggest that the standard treatment regimen of DHA-piperaquine in patients and mass drug administration over 1 year in healthy subjects are likely to be safe according to standard cardiac safety criteria. In conclusion, the pharmacokinetic properties of DHA and piperaquine, the influence of concomitant primaquine administration and the relationship between piperaquine exposure and electrocardiographic measurements were successfully characterized using nonlinear mixed-effects modelling. Concomitant primaquine administration did not affect the pharmacokinetic properties of DHA-piperaquine, supporting the concomitant use of a single low dose of primaquine as a transmission-blocking agent in the treatment of malaria. Piperaquine administration resulted in a significant prolongation of the QT interval, but the effect was modest and simulations suggest that mass treatments are unlikely to result in dangerous QT prolongation.

Figure 4. Simulations of QT prolongations in healthy volunteers at different piperaquine (PQ) concentrations (A), after standard 3-day treatment in patients with uncomplicated Plasmodium falciparum malaria (B), after monthly mass drug administration of the standard 3-day regimen (C), and after bimonthly mass drug administration of the standard 3-day regimen (D). Box and whisker plots represent the interquartile range and the 2.5th to 97.5th percentiles. ΔΔQTc, double-delta-corrected QTc prolongation.

Competing Interests

This study was a part of the Wellcome Trust-Mahidol University-Oxford Tropical Medicine Research Programme supported by the Wellcome Trust of Great Britain. Part of this work was supported by the Bill and Melinda Gates Foundation. The funding bodies did not have any role in the collection, analysis or interpretation of the data, writing of the manuscript, or in the decision to submit the manuscript for publication.
All authors have completed the Unified Competing Interest form at http://www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare no financial relationships with any organization that might have an interest in the submitted work in the previous 3 years, and no other relationships or activities that could appear to have influenced the submitted work.
A Food Package Recognition and Sorting System Based on Structured Light and Deep Learning

A vision-based robotic arm grasping system can be applied to a wide range of scenarios. It uses algorithms to automatically identify the location of the target and guide the robotic arm to grasp it, making it more flexible than a teach-programmed grasping system. However, some food packages use transparent or reflective materials, which challenge vision-based recognition, and traditional vision algorithms cannot achieve high accuracy for these packages. In addition, during robotic arm grasping, positioning along the z-axis still requires manually set parameters, which may introduce errors. To address these two problems, we designed a sorting system for food packaging that combines deep learning and structured light 3D reconstruction. A pre-trained MASK R-CNN model recognizes the class of the object in the image and provides its 2D coordinates; structured light 3D reconstruction then yields its 3D coordinates, which are finally transformed into the robot coordinate system to guide the robotic arm for grasping. Testing shows that the method can fully automate the recognition and grasping of different kinds of food packages with high accuracy. This method can help food manufacturers reduce production costs and improve production efficiency.
INTRODUCTION

With the support of modern science and technology, the use of industrial robots is gradually gaining popularity. They have the advantages of reliability, stability and high precision, which can reduce the intensity of manual work and improve the quality of work while guaranteeing operational efficiency. As a result of these advantages, industrial robots have gradually developed into a core force in the manufacturing industry [1][2][3]. Existing industrial robots usually work according to a preset position; once the position changes, they cannot continue to work and must be reprogrammed [4]. Along with the deepening development of artificial intelligence, ChatGPT and other intelligent machine technologies, intelligent machines are continuously reshaping the manufacturing system [5]. In recent years, computer vision algorithms have been widely used in robotics, object recognition and other fields, and some recent studies have applied computer vision to robotic arm grasping tasks [6][7][8]. Yao et al. [9] designed a machine-vision-based intelligent grasping system for a robotic arm, using a matching algorithm based on contour features to identify target objects and capable of completing grasping and handling tasks. Liu et al. [10] designed a vision-based mobile sorting robot system, which likewise uses image binarization, edge detection and other algorithms to determine the location of the target and then guide the robot to grasp it.
However, for the sorting of food packages, the existing technology suffers from two problems. The first is the lack of robustness of traditional vision algorithms [11]. Food packaging usually uses reflective or transparent materials, which reflect or transmit light, making many pixels in the image captured by the camera overexposed or too dark; the parameters of traditional vision algorithms cannot fit these pixels well. The second is the lack of a simple, automatic method to determine the height of the target on the z-axis when guiding the robot arm to grasp. In order to solve these two problems, we designed a fully automatic food package recognition and sorting system that achieves a high sorting accuracy for food packages. Figure 1 shows the whole system and the objects we want to detect. In Figure 1(a), the depth camera is located at a height of 860 mm directly above the target, and the initial position of the robotic arm is outside the field of view of the camera. The difficulty of identification lies in the detection of the powder packets and vegetable packets. These two types of packaging are highly reflective and may be prone to detection errors and omissions when randomly stacked.
METHODOLOGY

In Figure 1, we show the system we designed and the three kinds of food packages we used for testing. The detailed workflow of the system is shown in Figure 2; it consists of three modules: a structured light module, a deep learning module and a grasping module. Initially, the target within the field of view is scanned using a depth camera to generate a 3D point cloud, while the 2D image is transmitted to the deep learning module. In this module, the pre-trained MASK R-CNN model is utilized to classify the target's class and provide the predicted 2D coordinates to the point cloud reconstruction module. Subsequently, based on the reconstruction outcomes, the 3D coordinates corresponding to the 2D points in the point cloud are determined. Ultimately, the coordinates are transformed through hand-eye calibration and computed as points within the robot arm coordinate system for the purpose of grasping.

Structured light module

In order to reconstruct the 3D information of the target while obtaining a 2D image of it, we use a structured light-based depth camera to scan the target. A projector projects a pre-designed structured light pattern onto the surface of the object, and a camera photographs the deformed pattern; the 3D information of the object to be measured is obtained by analyzing the distortions produced by the projected pattern on the object's surface. Using the projector's specific encoding scheme, the three-dimensional coordinates of points on the target are then computed by the triangulation principle of the camera's pinhole imaging model. In this system, the encoded stripe pattern we use is an 18-sheet stripe pattern made by Gray code and line shift code [12]. Figure 3 shows the mathematical model of the 3D reconstruction.
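As a small illustration of the Gray-code half of this encoding scheme (the line-shift refinement is omitted), the bit pattern a camera pixel observes across the projected stripe images can be decoded back to a binary stripe index:

```python
def binary_to_gray(b: int) -> int:
    """Gray code of a stripe index: adjacent indices differ in exactly
    one bit, which makes decoding robust at stripe boundaries."""
    return b ^ (b >> 1)

def gray_to_binary(g: int) -> int:
    """Decode a Gray-code word (the observed bit pattern across the
    stripe images) back to the projector stripe index."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b
```

The decoded stripe index identifies the projector column seen by each camera pixel, which is the correspondence that the triangulation step then turns into a 3D point.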
Let P be a point on the object to be measured; it corresponds to the point p_p = (u_p, v_p) in the projector image and to the point p_c = (u_c, v_c) in the camera image. Figure 4 shows the results of the 3D reconstruction in the form of a depth map: the colored pixels represent successfully reconstructed 3D points, and the black pixels represent missing 3D points.

Once the 2D coordinate p_2 = (x_2, y_2) of the target is obtained, its corresponding 3D coordinate p_3 = (x_3, y_3, z_3) can be calculated as follows:

i = y_2 · W + x_2,  x_3 = X(i),  y_3 = Y(i),  z_3 = Z(i)   (1)

where i is the index of the 2D coordinate, W is the width of the 2D image, and X, Y, and Z store the 3D point cloud data of the image.

Deep learning module
With the introduction of convolutional neural networks and the continuous development of deep learning methods in computer vision, object detection methods based on convolutional neural networks have gradually replaced traditional methods as the mainstream. Referring to the results of Luo's team [13], Yuan's team [14] and Wang's team [15], we used the MASK R-CNN [16] instance segmentation algorithm, instead of a traditional vision algorithm, to detect food packaging made of reflective and transparent materials. Figure 5 shows the flow of the algorithm. It uses ResNet [17] as the CNN [18] backbone and extends Faster R-CNN [19] with a segmentation branch for generating object masks. The input image is first fed into a CNN backbone, usually ResNet50 or ResNet101, for feature extraction to obtain a feature map. The RPN (Region Proposal Network) then generates regions that may contain targets, and each region is projected onto the corresponding feature map using ROI Align to produce a fixed-size feature map. Finally, the feature map is fed into a Fully Convolutional Network (FCN), which segments the instances at the pixel level, generates a mask for each instance, and outputs the class, location, and mask
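Eq. (1) amounts to a flat-array lookup in scanline order. A minimal sketch of this mapping follows; the function and array names, and the row-major layout, are our assumptions rather than details taken from the paper:

```python
import numpy as np

def lookup_3d(u, v, width, X, Y, Z):
    """Map a 2D pixel (u, v) to its reconstructed 3D point.

    X, Y, Z are flat arrays storing the point cloud in row-major
    (scanline) order, so the flat index is i = v * width + u.
    In a real scan, missing points could be marked with NaN.
    """
    i = v * width + u
    return X[i], Y[i], Z[i]

# Toy 2x3 "image": six reconstructed points
W = 3
X = np.arange(6, dtype=float)         # x coordinates
Y = np.arange(6, dtype=float) * 10    # y coordinates
Z = np.arange(6, dtype=float) * 100   # z coordinates
point = lookup_3d(u=1, v=1, width=W, X=X, Y=Y, Z=Z)  # index 1*3 + 1 = 4
```

The same index arithmetic works for any image stored row by row; only the width of the image is needed.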
information for each instance. By deploying the model in the system and feeding it the grayscale image obtained from the depth camera scan, the 2D position and class of each target in the image can be predicted.

For the 2D coordinates, we take the midpoint of the predicted top-left and bottom-right corners of the bounding box as the center of the object. Since the 3D coordinate of this center point may be missing, for each target we take a small area around the midpoint, compute the 3D coordinates of all points in this area, sort them by their z values, and select the point with the smallest z as the final result to pass to the next module.

Since MASK R-CNN is a supervised learning network, it must be trained on a labeled dataset in order to classify data or predict results accurately. In this system, we likewise use the depth camera to capture grayscale images of the objects, label them to form a dataset, and feed the dataset into a MASK R-CNN network model trained in a GPU environment. We choose grayscale images as the network input because they have better morphological characteristics than RGB images [20] and have only one channel, which makes them easy to transmit and process in the system. Figure 6 shows part of the dataset after labeling. The colored parts of the images are manually labeled with different instance labels, and this label information is used for network model training together with the pixel information of the original images.

Grasping module
In the structured light module, the 3D coordinates obtained are expressed in the camera coordinate system and must be converted to the robot arm base coordinate system in order to command the robot arm to move to that point for gripping.
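The per-target selection described above (box midpoint, then the valid neighbor with the smallest z) can be sketched as follows; the window size, the depth-image layout, and the NaN convention for missing depths are our assumptions:

```python
import numpy as np

def pick_grasp_point(box, Z, window=5):
    """Return the pixel near the center of a predicted box whose depth
    is smallest, i.e. the topmost reconstructed point under the camera.

    box    -- (x1, y1, x2, y2) corners predicted by the detector
    Z      -- depth image, NaN where 3D reconstruction failed
    window -- side length of the square search area around the midpoint
    (Border clamping is omitted for brevity.)
    """
    x1, y1, x2, y2 = box
    cu, cv = (x1 + x2) // 2, (y1 + y2) // 2   # box midpoint
    h = window // 2
    patch = Z[cv - h:cv + h + 1, cu - h:cu + h + 1]
    if np.all(np.isnan(patch)):
        return None                           # no valid depth nearby
    dv, du = np.unravel_index(np.nanargmin(patch), patch.shape)
    return int(cu - h + du), int(cv - h + dv)
```

Taking the minimum z over a small window, rather than only the exact midpoint, makes the grasp robust to isolated reconstruction holes at the center pixel.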
Before the coordinate conversion, hand-eye calibration of the robot arm is needed to obtain the relative pose between the robot arm and the camera. We use a calibration plate for hand-eye calibration: an algorithm identifies the corner points on the calibration plate, the end-effector of the robot arm is manually moved to each corner point, and the transformation between the two frames is computed from the coordinates of the end-effector and the coordinates of the corner points identified by the camera.

After calibration is completed, the coordinates of a point in the camera coordinate system can be converted to the corresponding coordinates in the robot arm base coordinate system. For a 3D point p_3 = (x_3, y_3, z_3) in the camera coordinate system, its corresponding point in the robot base coordinate system is obtained through a transformation matrix T, where R is called the rotation matrix and t is called the translation vector. Once the transformation matrix has been constructed, the corresponding point in the base coordinate system can be computed. That is, for a point p in the camera coordinate system:

p' = R · p + t

where p' is the corresponding point of p in the base coordinate system. Passing p' as the final result to the robot arm, the arm can be commanded to move to that point for grasping.
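The conversion p' = R·p + t can be applied directly once R and t are known from hand-eye calibration. A small sketch follows; the numerical R and t below are made-up illustrative values, not calibration results:

```python
import numpy as np

def camera_to_base(p_cam, R, t):
    """Transform a point from the camera frame to the robot base frame
    using the rotation matrix R (3x3) and translation vector t (3,)."""
    return R @ np.asarray(p_cam, dtype=float) + t

# Illustrative transform: 90-degree rotation about z plus a translation
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([100.0, 0.0, 50.0])
p_base = camera_to_base([10.0, 0.0, 0.0], R, t)
```

Stacking R and t into a 4×4 homogeneous matrix gives the same result and lets several transforms be chained by matrix multiplication.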
Model training results
We divided the 300 dataset images into a training set and a validation set in the ratio 7:3. Each image is a 2044×1536 single-channel grayscale image. A server with eight GeForce RTX 3090 graphics cards was used for the training experiments. Several hyperparameters had to be set for training. We set the batch size of the network to 16, meaning that two images are read at once on each GPU. For back propagation, the SGD optimizer was chosen, with an initial learning rate of 0.02, a momentum of 0.9, a weight decay of 0.0001, and a maximum of 100 epochs. A linear warmup strategy was used for the learning rate: at the beginning of training, the learning rate starts at 0.001 and grows linearly to 0.02 over the first 500 iterations, and it is then multiplied by 0.1 at the 80th and 90th epochs. ResNet101 was chosen as the backbone of the network, and the number of classes was set to 3 (excluding the background).

The change in loss and the improvement in accuracy during training are shown in Figure 7. The loss finally converges at about 0.06 and the accuracy curve eventually stabilizes at about 99.63%.

Testing the model on a test set
We tested the model using images from the validation set. We set the prediction threshold to 0.8 because, for the targets we expect to be detected, the predicted probability lies between 0.8 and 1, while for targets we do not expect to be detected the predicted probability is mostly below 0.8. The masks of such targets usually have large errors, or their locations are unfavorable for grasping, so we reduce the uncertainty of the grasp by setting a higher threshold. Figure 8 shows the prediction results of the model.
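The learning-rate schedule described in the training setup (linear warmup from 0.001 to 0.02 over 500 iterations, then 0.1 decays at epochs 80 and 90) can be sketched as a simple function; the exact implementation details are our assumptions:

```python
def learning_rate(iteration, epoch):
    """Learning rate for a given global iteration and epoch:
    linear warmup from 0.001 to 0.02 over the first 500 iterations,
    then multiplied by 0.1 at epoch 80 and again at epoch 90."""
    base, warmup_start, warmup_iters = 0.02, 0.001, 500
    if iteration < warmup_iters:
        lr = warmup_start + (iteration / warmup_iters) * (base - warmup_start)
    else:
        lr = base
    if epoch >= 90:
        lr *= 0.01   # two cumulative 0.1 decays
    elif epoch >= 80:
        lr *= 0.1
    return lr
```

Warmup of this kind avoids unstable early updates when training with a comparatively large base learning rate.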
Final test results
Tests show that the method achieves automatic recognition and grasping for a wide range of food packages. Figure 9 shows one set of test results, in which we placed four sauce packages, four powder packages and five vegetable packages. The system first performs a scan and grabs a vegetable packet, then a second scan and grabs a sauce packet, then a third scan and grabs a powder packet. The process repeats until there are no targets left in the field of view. The system finally achieved a 100% correct grasping rate.

CONCLUSION AND FUTURE WORK
Through experiments, we demonstrated that training and prediction with the MASK R-CNN network model can solve the recognition problem for reflective and transparent packaging. The method can identify most of the clearly visible targets on the surface of the stack, and the two-dimensional results are then lifted to three dimensions using the point cloud, which improves the grasping accuracy in the z-axis direction. However, since our training dataset is small, there is still much room for improvement in model prediction; in the future we plan to build a larger dataset for model training. In addition, we recognize the importance of polarization imaging techniques, which exploit the difference in polarization between light reflected from the object and stray light from the background to improve imaging quality and detection probability. Polarization imaging can remove strong specular reflections from highly reflective surfaces, thus improving the quality of the camera scans. In subsequent work, we will try to modify the depth camera with polarized lenses, which should improve the quality of the reconstructed point cloud as well as the probability of object detection.
Figure 1(b) shows the three kinds of food packages we used. To increase the difficulty of identification, we chose three packages with different degrees of reflectivity and transparency, and placed the packages in a random stack.

Figure 1: The system we designed (a) and the three kinds of packages we detected (b).
Figure 2: Framework of the system.
Figure 4: The results of the partial 3D reconstruction, presented as a depth map.
Figure 7: The loss curve and accuracy curve.
Figure 8: Some prediction results of the MASK R-CNN model. A 100% correct rate was achieved, with no false detections.
BUILDING AN EFFECTIVE SALES FORCE
Building an effective sales force starts with selecting good salespeople, but good salespeople are very difficult to find. The reason for this is that most sales jobs are very demanding and require a great deal from the salesperson. There are many different types of sales jobs. Before it can hire salespeople, each company must do a careful job analysis to see what particular types of selling and other skills are necessary for each sales job. One task of the market planner is to establish clear objectives each year for the entire sales force, for each region, each sales office, and each salesperson. Sales jobs differ from in-house jobs in some significant ways. Nevertheless, each company must continually work on building and maintaining an effective sales force using the following steps: recruitment, selection, training, compensation and evaluation of each salesperson.

Introduction
One of the reasons personal selling is difficult is that the salesperson is caught in a squeeze between the company and the customer or potential customer. On the one hand, the salesperson is paid by the business firm to represent its interests when dealing with customers. On the other hand, the salesperson must represent the customer's interests in attempting to get from the company such things as quick deliveries, good credit terms, immediate repairs if equipment breaks down, and extra allocations of products or supplies in times of shortages. In this role, the salesperson must fight for the customer's interests. Representing both the company and the customer is often a rough situation to be in, and the salesperson must use tact, good judgment, and emotional maturity to keep both sides as satisfied as possible. This also requires the ability to think clearly and with imagination, often under extreme pressure. The salesperson must also be a self-starter, since there is no steady flow of work that presents itself to be solved or accomplished. The
salesperson must schedule sales calls on present customers, and must also call frequently on potential new customers. This requires a great deal of initiative and internal motivation. The most difficult problem of all may be the constant discouragement that every salesperson faces. It takes a great deal of "ego strength" to accept defeats gracefully and to resolve to try even harder the next time.

Studies have shown that in many industries only a third to a half of new salespeople survive more than a single year. For those who do survive, the financial rewards can be very substantial. There is usually no faster way for new college graduates to make a good income than by getting a sales job that requires creative and persuasive selling.

Several steps or phases are involved in building and maintaining an effective sales force in a business firm: recruitment, selection, training, compensation, and evaluation. These steps are not strictly sequential for established companies, since such companies continually engage in all of them in order to maintain a productive sales force.

Selecting the right salespeople
An early, and often the most important, step in building an effective sales force is to select good salespeople. The selection of salespeople is usually done by professional personnel managers and consultants, especially in large business firms. These people know how to do the job analysis necessary to find out exactly what is required for each type of sales position.
Screening tools and techniques
A wide variety of tools and techniques are available to help both the sales manager and the company personnel department in selecting the best salespeople from all applicants, for example: the company application form, mental ability tests, personality and interest tests, interviews with company management, and interviews with outside personnel evaluation specialists. Which tools best predict sales success varies from one selling job to another and from one evaluator to another. Some personnel specialists stress the interview; others put primary emphasis on mental ability or personality tests; and still others feel the application form is the best indicator, because it shows what a person has already done in his or her life to develop good selling skills. The most reasonable conclusion is that all of these tools are potentially valuable for screening applicants. Most selling jobs require a great deal of customer contact, and it helps greatly if the applicant is the type of person who likes to be with people. The personal history on the application form shows what previous sales or customer-contact jobs a person has had, what social clubs they belong to, and what leisure-time activities they engage in. An applicant's personal history can also show the degree of commitment he or she is likely to bring to a sales job.

Recruiting an applicant pool
In many cases, the key requirement for a good selection process is having a good pool of applicants to choose from. This means that recruitment procedures must be effective. A good recruiting system in large companies must operate continually, even when there are no immediate job openings.
Training salespeople
New sales employees almost always require a training period before being sent out into the field. For some sales jobs this training can be done in only a few days or weeks; for others it takes much longer. Among the many points that need to be covered in sales training are such topics as:
 What the company stands for, including its mission and objectives
 What its products or services will do
 How the company's products compare to those of the competition
 The different needs and characteristics of various customers
 Specific selling and prospecting techniques
 Company office reporting procedures
 How the salesperson is expected to dress and act
 How to interface with the different parts of the company on behalf of the customer
The list could go on and on. It is clear that creative and persuasive types of selling require a great deal of training for new sales employees. Part of this training must be done in-house; another large part must be done in the field, under real-world conditions. Most well-run large business firms now have regular training sessions for all their salespeople, to keep them informed and up to date.
Compensating salespeople
Because sales jobs differ in some important ways from internal jobs, they need a different kind of compensation arrangement, one that provides a more direct incentive for a job well done. It is possible to provide this direct incentive because salespeople usually have a very tangible record of achievement: the volume of sales or profits generated. This is usually the primary basis for rewarding the sales force in most companies. There are two major types of compensation for salespeople: financial and nonfinancial.
Financial incentives
There are three principal approaches to compensating the sales force financially: straight salary, straight commission, and a combination of salary plus commission. The straight-salary approach has some advantages, particularly for new salespeople, and some companies use straight-salary compensation for experienced salespeople as well as for new ones. Straight-commission compensation often provides the greatest incentive for sales employees: the employee receives a percentage of the total sales volume he or she generates. Because both straight salary and straight commission have serious weaknesses, most companies pay on the basis of salary plus commission, which provides a balance of the best features of both types of compensation.
Nonfinancial incentives
There are many different types of nonfinancial sales incentives. These include trophies and plaques, free trips to conferences in resort areas, extra amounts of secretarial and other internal support, and advancement up the ladder into sales management. Nonfinancial incentives are an important part of the job of building and maintaining an effective sales force.

Evaluating the sales force
In most large companies, each salesperson develops a written individual plan of action for the next month, quarter, or year. This plan includes a variety of specific objectives, both quantitative and qualitative.
Market intelligence
Most salespeople now carry a much greater responsibility for keeping company management informed, accurately, currently, and in detail, about what is going on in the marketplace. Part of this information comes from the sales call reports they submit regularly; other parts are developed in the process of making a proposal to win a particular order. Competitive information in particular has become especially important in recent years. Company salespeople can provide a great deal of useful competitive information from their discussions with customers and prospects. This information becomes part of the marketing intelligence system of the company.

Telemarketing
Phone calls to present customers can often accomplish the following: obtain reorders of supplies or equipment the customer has bought before, inform customers of new products or services that are now available, find out whether there are any changes in the customer's business that would offer opportunities for additional sales, ask whether deliveries are on time and whether the supplies or equipment are working properly, and ask customers to estimate future needs, for sales forecasting purposes. Telemarketing can also be used very effectively in identifying and qualifying prospects and in soliciting new business. Today's salesperson works from referrals or prospect lists. Each prospect can be called on the telephone to see whether there is enough real interest to justify a sales call. In a growing number of cases, the sale itself can be made over the telephone.
Team selling
In today's business world, most companies selling very complex equipment to other business firms no longer expect a single salesperson to make all the contacts and presentations necessary to win a large order or contract. They form selling teams, which consist of representatives from each part of the company involved in a sale. The team is coordinated by a salesperson, who calls in specialists whenever needed and who often includes them in major sales presentations.

Call routing by computer
A few large companies with "management sciences" departments or capabilities have developed computer models that plan the proper number of sales calls each salesperson should make on each customer or within each sales territory. The computer is given all this information for each customer and each salesperson, and it calculates the number of times each customer should be contacted each year or quarter by a salesperson. These models also provide an estimate of the sales or profits that should be generated by each salesperson or in each sales territory, which is useful in evaluating the performance of the sales force. Salespeople have to fulfill one or more of the following tasks:
 Exploration: salespeople find new customers and cultivate relationships with them;
 Targeting: salespeople decide how to divide their limited time between actual customers and potential customers;
 Communication: salespeople communicate professionally information about the products and services offered by the company;
 Sale: salespeople master the "art of selling": approaching the customer, making the presentation, answering objections, and closing the sale;
 Offering: salespeople offer customers various services, such as consultations concerning their problems, technical assistance, financing, and delivery of goods without delay;
 Gathering information: salespeople perform market research, gather information, and complete reports on the sales visits they have carried out;
 Allocation:
salespeople decide which customers should be allocated certain products with priority, in cases where the company has insufficient quantities of them.

Conclusions
There are many different types of sales jobs. Before it can hire salespeople, each company must do a careful job analysis to see what particular types of selling and other skills are necessary for each sales job. One task of the market planner is to establish clear objectives each year for the entire sales force, for each region, each sales office, and each salesperson. Quantitative objectives can be specified for sales volume, number of units, and gross profit margins; qualitative objectives include signing up new dealers, conducting training sessions, setting up displays, and improving selling skills.

As companies increasingly show a strong market orientation, their sales forces should focus increasingly on market and customer needs. The classical vision is based on the idea that salespeople should be concerned only with sales, with the marketing department assuming the task of dealing with profitability and marketing strategy. According to the new vision, however, salespeople must know how to produce both customer satisfaction and profit for the company. The attributes that salespeople must have are: energy and personal initiative; organizational and planning capacity; a satisfactory level of schooling and culture; the ability to adapt to a variety of personalities and behaviors; concern for personal and professional development; and the desire and need for professional recognition. Sales jobs differ from in-house jobs in some significant ways. Nevertheless, each company must continually work on building and maintaining an effective sales force.

Qualitative objectives usually include:
 Better prospecting approaches
 Developing better personal relationships with people within the company that provide selling and service support
 Improving one's knowledge of products (both the company's and competitors')
 More information about customer needs and current buying practices
 Improving personal selling techniques

The new sales force
Today's sales job is very different in some significant ways from what it was years ago.
Measurement of the coupling constant in a two-frequency VECSEL
We measure the coupling constant between the two perpendicularly polarized eigenstates of a two-frequency Vertical External Cavity Surface Emitting Laser (VECSEL). This measurement is performed for different values of the transverse spatial separation between the two perpendicularly polarized modes. The consequences of these measurements on the two-frequency operation of such class-A semiconductor lasers are discussed. © 2010 Optical Society of America
OCIS codes: (140.7260) Vertical cavity surface emitting lasers; (140.3460) Lasers; (060.5625) Radio frequency photonics.
References and links
1. G. Pillet, L. Morvan, M. Brunel, F. Bretenaker, D. Dolfi, M. Vallet, J.-P. Huignard, and A. Le Floch, "Dual-frequency laser at 1.5 μm for optical distribution and generation of high-purity microwave signals," J. Lightwave Technol. 26, 2764-2773 (2008).
2. R. Czarny, M. Alouini, C. Larat, M. Krakowski, and D. Dolfi, "THz-dual-frequency Yb:KGd(WO4)2 laser for continuous-wave THz generation through photomixing," Electron. Lett. 40, 942-943 (2004).
3. L. Morvan, N. D. Lai, D. Dolfi, J.-P. Huignard, M. Brunel, F. Bretenaker, and A. Le Floch, "Building blocks for a two-frequency laser lidar-radar: a preliminary study," Appl. Opt. 41, 5702-5712 (2002).
4. K. Otsuka, P. Mandel, S. Bielawski, D. Derozier, and P. Glorieux, "Alternate time scales in multimode lasers," Phys. Rev. A 46, 1692-1695 (1992).
5. M. Brunel, A. Amon, and M. Vallet, "Dual-polarization microchip laser at 1.53 μm," Opt. Lett. 30, 2418-2420 (2005).
6. M. Brunel, F. Bretenaker, S. Blanc, V. Crozatier, J. Brisset, T. Merlet, and A. Poezevara, "High-spectral purity RF beat note generated by a two-frequency solid-state laser in a dual thermooptic and electrooptic phase-locked loop," IEEE Photon. Tech. Lett. 16, 870-872 (2004).
7. L. Morvan, D. Dolfi, J.-P. Huignard, S. Blanc, M. Brunel, M. Vallet, F. Bretenaker, and A.
Le Floch, "Dual-frequency laser at 1.53 μm for generating high-purity optically carried microwave signals up to 20 GHz," in Conference on Lasers and Electro-Optics/International Quantum Electronics Conference and Photonic Applications Systems Technologies, Technical Digest (CD) (Optical Society of America, 2004), paper CTuL5.
#122427 $15.00 USD Received 8 Jan 2010; revised 19 Feb 2010; accepted 24 Feb 2010; published 25 Feb 2010 (C) 2010 OSA 1 March 2010 / Vol. 18, No. 5 / OPTICS EXPRESS 5008
8. G. Baili, M. Alouini, D. Dolfi, F. Bretenaker, I. Sagnes, and A. Garnache, "Shot-noise limited operation of a monomode high cavity finesse semiconductor laser for microwave photonics applications," Opt. Lett. 32, 650-652 (2007).
9. G. Baili, F. Bretenaker, M. Alouini, D. Dolfi, and I. Sagnes, "Experimental investigation and analytical modeling of excess intensity noise in semiconductor class-A lasers," J. Lightwave Tech. 26, 952-961 (2008).
10. G. Baili, L. Morvan, M. Alouini, D. Dolfi, F. Bretenaker, I. Sagnes, and A. Garnache, "Experimental demonstration of a tunable dual-frequency semiconductor laser free of relaxation oscillations," Opt. Lett. 34, 3421-3423 (2009).
11. M. Sargent III, M. O. Scully, and W. E. Lamb, Jr., Laser Physics (Addison-Wesley, 1974).
12. M. M.-Tehrani and L. Mandel, "Coherence theory of the ring laser," Phys. Rev. A 17, 677-693 (1978).
13. A. Laurain, M. Myara, G. Beaudoin, I. Sagnes, and A. Garnache, "High power single-frequency continuously tunable compact extended-cavity semiconductor laser," Opt. Express 17, 9503-9508 (2009).
14. A. E. Siegman, Lasers (University Science Books, 1986), pp. 992-999.
15. M. Brunel, M. Vallet, A. Le Floch, and F. Bretenaker, "Differential measurement of the coupling constant between laser eigenstates," Appl. Phys. Lett. 70, 2070-2072 (1997).
16. M. Alouini, F. Bretenaker, M. Brunel, A. Le Floch, M. Vallet, and P. Thony, "Existence of two coupling constants in microchip lasers," Opt. Lett. 25, 896-898 (2000).
17. A. McKay, J. M. Dawes, and J.-D. Park, "Polarisation-mode coupling in (100)-cut Nd:YAG," Opt. Express 15, 16342-16347 (2007).
18. S. Schwartz, G. Feugnet, M. Rebut, F. Bretenaker, and J.-P. Pocholle, "Orientation of Nd3+ dipoles in yttrium aluminum garnet: Experiment and model," Phys. Rev. A 79, 063814 (2009).
19. J. Talghader and J. S. Smith, "Thermal dependence of the refractive index of GaAs and AlAs measured using semiconductor multilayer optical cavities," Appl. Phys. Lett. 66, 335-337 (1995).
20. M. San Miguel, Q. Feng, and J. V. Moloney, "Light-polarization dynamics in surface-emitting semiconductor lasers," Phys. Rev. A 52, 1728-1739 (1995).
21. M. P. van Exter, R. F. M. Hendriks, and J. P. Woerdman, "Physical insight into the polarization dynamics of semiconductor vertical-cavity lasers," Phys. Rev. A 57, 2080-2090 (1998).
22. D. Burak, J. V. Moloney, and R. Binder, "Microscopic theory of polarization properties of optically anisotropic vertical-cavity surface-emitting lasers," Phys. Rev. A 61, 053809 (2000).

Introduction
Two-frequency lasers have proved to be interesting sources for the optical distribution and generation of radar local oscillators [1], the optical generation of high-spectral-purity CW THz radiation [2], and pulsed or CW lidar-radar systems [3]. In all these applications, the two-frequency lasers were based on solid-state active media, such as Er-Yb doped glasses or Nd doped crystals. Due to the long lifetime of the upper-level population, typically in the 100 µs to 10 ms range, these lasers are so-called class-B lasers, i.e., their photon lifetime is much shorter than the population inversion lifetime. This means that these lasers exhibit relaxation oscillations. In a two-oscillating-mode laser, there are two relaxation oscillation frequencies, corresponding respectively to the in-phase and antiphase oscillations of the intensities of the two modes [4]. The presence of these resonances leads to a strong intensity noise in the kHz to MHz frequency range [5], which constitutes the actual limitation of the noise of these lasers when they are used to generate low-noise microwave signals [6,7]. However, it has recently been shown that Vertical External Cavity Surface Emitting Lasers (VECSELs) can belong to the class-A dynamical regime and exhibit a very low intensity noise [8,9]. This is why we recently investigated the possibility of reaching two-frequency oscillation in a VECSEL [10]. However, the simultaneous oscillation of the two modes of the laser relies on the value of the nonlinear coupling constant C between these modes [11]. In order to decrease C well below the critical value of 1, we implemented a spatial separation of the two polarization modes in the active medium [10]. It therefore seems important, in the present context, to control the value of C in order to reach a stable two-frequency regime with a minimum spatial separation of the two modes. Indeed, a very large spatial separation would allow differential noises to appear, thus increasing the beat-frequency jitter. Moreover, it is well known that the value of C plays an important role in the noise correlations between the two modes in class-A lasers [12]. The aim of the present work is consequently to measure the value of C in a two-frequency VECSEL for different values of the spatial separation of the two orthogonally polarized modes, as a first step in optimizing this kind of laser.

Description of the experiment
The experimental setup is schematized in Fig.
1. The laser is based on a 1/2-VCSEL grown by Metal Organic Chemical Vapor Deposition (MOCVD) [13], consisting of a 27.5-period GaAs/AlAs Bragg mirror (99.9% reflectivity). Gain at 1 µm is provided by six strain-balanced InGaAs/GaAsP quantum wells covered by an anti-reflection coating. The structure is bonded to a SiC substrate and maintained at 20 °C thanks to a Peltier thermo-electric cooler. The pump system consists of an 808 nm pigtailed diode laser delivering up to 3 W and focused on the gain chip to a 100 µm diameter spot with an incidence angle of about 30°. The cavity is closed with a 50 mm radius-of-curvature concave mirror reflecting 99% of the intensity. The cavity length is about 47 mm. In these conditions, the photon lifetime is of the order of 15 ns, which is much longer than the carrier lifetime (of the order of 3 ns), thus ensuring class-A dynamical behavior for the laser [8]. In these conditions, the laser threshold corresponds to a pump power of about 270 mW. Dual-polarization oscillation is achieved by introducing a birefringent YVO4 crystal (BC) inside the cavity. This crystal, which is anti-reflection coated at 1 µm, introduces a polarization walk-off d proportional to its thickness. We have three different crystals of thicknesses 1, 0.5, and 0.2 mm, corresponding to d = 100, 50, and 20 µm, respectively. When the horizontal position of the pump spot is carefully adjusted to provide equal pump power to the two spatially separated beams, one can achieve the simultaneous oscillation of the two perpendicularly linearly polarized beams, which correspond to the ordinary and extraordinary polarizations of BC and are separated by d in the active structure. Perfect spatial overlapping of the beams between the crystal BC and the cavity mirror M is ensured by the fact that M is concave while the Bragg mirror is plane. To make each polarization oscillate at a single frequency, a 150-µm thick uncoated glass étalon is introduced inside the cavity. The
orientation of the étalon is adjusted to make the two cross-polarized modes oscillate in the same longitudinal mode of the cavity, as checked using an optical spectrum analyzer. The frequency difference between the two polarizations, which is thus smaller than the 3.2 GHz free spectral range (FSR) of the cavity, can then be measured using either a 10 GHz FSR Fabry-Perot interferometer or a fast photodiode followed by an electrical spectrum analyzer. We also use these instruments to make sure that no high-order transverse mode is oscillating. The powers of the two polarizations can be equalized by adjusting the horizontal position of the pump beam. In these conditions, with the 200-µm thick crystal for example, the laser threshold increases up to 1.1 W. An output power of 220 mW with a stable dual-frequency behavior is obtained with a pump power of 2600 mW. For higher pump powers, the dual-frequency behavior of the laser is no longer stable.

Principle of the measurement

In a class-A laser, the intensities $I_o$ and $I_e$ of the two ordinary- and extraordinary-polarized modes obey the following differential equations [14]:

$$\frac{dI_o}{dt} = \frac{I_o}{\tau_o}\left[\frac{r_o}{1+(I_o+\xi_{oe}I_e)/I_{sat}}-1\right], \quad (1)$$

$$\frac{dI_e}{dt} = \frac{I_e}{\tau_e}\left[\frac{r_e}{1+(I_e+\xi_{eo}I_o)/I_{sat}}-1\right], \quad (2)$$

where $\tau_o$ and $\tau_e$ are the lifetimes of the ordinary and extraordinary polarized photons, $r_o$ and $r_e$ are the excitation ratios of the two modes (ratio of the unsaturated gain to the losses), $I_{sat}$ is the saturation intensity of the active medium, and $\xi_{oe}$ and $\xi_{eo}$ are the ratios of the cross- to self-saturation coefficients for the two modes. The steady-state solution of Eqs. (1) and (2) corresponding to the simultaneous oscillation of the two modes is given by:

$$\frac{I_o}{I_{sat}} = \frac{(r_o-1)-\xi_{oe}(r_e-1)}{1-C}, \quad (3)$$

$$\frac{I_e}{I_{sat}} = \frac{(r_e-1)-\xi_{eo}(r_o-1)}{1-C}, \quad (4)$$

where

$$C = \xi_{oe}\,\xi_{eo} \quad (5)$$

is the nonlinear coupling constant, as defined by Lamb [11]. The solution given by Eqs. (3) and (4) may be stable only if C < 1 [11]. Different methods have been used to measure C [15,16,17]. Here, we choose to observe the response of the intensities of the two modes to the introduction of extra losses for only one of these modes. From Eqs.
(3) and (4), we can see that the values of $\xi_{oe}$ and $\xi_{eo}$ can be deduced from the observation of the response of the mode intensities to a modification of the losses and/or of the gain of the modes:

$$\xi_{eo} = -\left.\frac{\delta I_e}{\delta I_o}\right|_{\delta r_o}, \quad (6)$$

$$\xi_{oe} = -\left.\frac{\delta I_o}{\delta I_e}\right|_{\delta r_e}. \quad (7)$$

Thus, by modulating the losses of the ordinary mode and by measuring the modulation amplitudes of the intensities of the two modes, we can use Eq. (6) to determine $\xi_{eo}$. The same procedure can be performed using Eq. (7) to measure $\xi_{oe}$ by modulating the losses of the extraordinary mode.

Results and discussion

In our experiment, we perform this modulation of the losses experienced by one mode only by using a knife-edge introduced in the part of the cavity in which the two modes are spatially separated (see Fig. 1). This knife-edge is mounted on a piezo-electric transducer in order to modulate the amount of diffraction losses introduced. The truncation losses introduced by the knife-edge are much smaller than 1%, allowing us to neglect any modification of the spatial profile of the beam. Figure 2 reproduces typical experimental traces obtained by this method. In this example, the separation between the ordinary and extraordinary modes is d = 20 µm. Without the knife-edge, the laser threshold corresponds to a pump power of 1.14 W. The introduction of the knife-edge inside one of the beams increases this threshold by about 50 mW. This figure was obtained with a pump power of 2.21 W. The output powers of the two modes are about 100 mW. To obtain the results of Fig. 2, the position of the knife-edge is modulated at 227 Hz. These results show that, as expected from Eqs.
(6) and (7), the modulations of the intensities of the two modes are in antiphase. The modulation of the losses is slow enough compared to the time constants of the system, namely the photon lifetime in the cavity and the carrier lifetime, to allow us to consider that the steady-state analysis of Section 3 is valid. Then, using Eqs. (6) and (7), the results of Figs. 2(a) and 2(b) lead to $\xi_{eo}$ = 0.76 and $\xi_{oe}$ = 0.85, respectively. This leads to the following value of the coupling constant in this case:

$$C = \xi_{eo}\,\xi_{oe} \simeq 0.65. \quad (8)$$

We have checked experimentally that the value of C is independent of the pump power and of the amount of losses introduced by the knife-edge. The values of $\xi_{eo}$ and $\xi_{oe}$ vary slightly from one measurement to the other, as already observed in the cases of Er,Yb-doped glass [16] and Nd:YAG [18], but their product C remains constant to ±0.05. We then performed the same measurement for a larger spatial separation d = 50 µm (see Fig. 3). These measurements were performed in conditions similar to those of Fig. 2. By comparing Fig. 3 with Fig. 2, one can clearly see that the intensity of the mode whose losses are not modulated is less affected by the modulation of the losses of the other mode, illustrating the decrease in the coupling allowed by the increase of the spatial separation d. By using again Eqs. (6) and (7), the curves of Figs. 3(a) and 3(b) lead to $\xi_{eo}$ = 0.34 and $\xi_{oe}$ = 0.53, respectively. Their product leads to:

$$C \simeq 0.18. \quad (9)$$

We eventually used the 1-mm-long birefringent crystal, which introduces a transverse spatial separation d = 100 µm between the ordinary and extraordinary modes. With this crystal, we could detect absolutely no variation of the intensity of one mode when the losses of the other mode are modulated. We thus conclude that:

$$C \simeq 0. \quad (10)$$

These measurements are indicated as filled circles in Fig. 4.
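As a sanity check on this differential technique, the steady-state relations and the sign of the cross-response can be reproduced numerically. The following sketch is illustrative only (the parameter values are arbitrary, not those of the experiment): it integrates the two-mode class-A rate equations and recovers the cross-saturation coefficient from the ratio of the intensity responses to a small change in the excitation ratio of the ordinary mode, mirroring the knife-edge measurement.

```python
import numpy as np

def simulate(r_o, r_e, xi_oe, xi_eo, tau=1.0, I_sat=1.0, dt=1e-3, t_end=60.0):
    """Euler integration of the two-mode class-A rate equations
    dI/dt = (I/tau) * [r / (1 + (I + xi*I') / I_sat) - 1]."""
    I_o = I_e = 0.1 * I_sat  # small seed intensities
    for _ in range(int(t_end / dt)):
        dI_o = (I_o / tau) * (r_o / (1 + (I_o + xi_oe * I_e) / I_sat) - 1)
        dI_e = (I_e / tau) * (r_e / (1 + (I_e + xi_eo * I_o) / I_sat) - 1)
        I_o += dt * dI_o
        I_e += dt * dI_e
    return I_o, I_e

def steady_state(r_o, r_e, xi_oe, xi_eo, I_sat=1.0):
    """Analytic steady state; valid only when the coupling constant C < 1."""
    C = xi_oe * xi_eo
    I_o = I_sat * ((r_o - 1) - xi_oe * (r_e - 1)) / (1 - C)
    I_e = I_sat * ((r_e - 1) - xi_eo * (r_o - 1)) / (1 - C)
    return I_o, I_e

# Weak coupling (C = 0.25): the integration converges to the analytic steady state.
num = simulate(2.0, 2.0, 0.5, 0.5)
ana = steady_state(2.0, 2.0, 0.5, 0.5)

# Differential response: a small change of r_o (i.e., of the ordinary-mode losses)
# moves I_e by -xi_eo times the change of I_o, as in Eq. (6).
I_o1, I_e1 = steady_state(2.00, 2.0, 0.5, 0.5)
I_o2, I_e2 = steady_state(2.02, 2.0, 0.5, 0.5)
xi_eo_recovered = -(I_e2 - I_e1) / (I_o2 - I_o1)
```

For C < 1 the trajectories settle onto the analytic steady state, and the recovered ratio equals the cross-saturation coefficient used as input, which is the essence of the knife-edge measurement.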
In principle, the coupling constant should evolve like the overlap integral of the two modes in the active medium, namely

$$C(d) = C_0 \frac{\iint I_1(x,y)\, I_2(x,y)\, dx\, dy}{\sqrt{\iint I_1^2(x,y)\, dx\, dy \iint I_2^2(x,y)\, dx\, dy}}, \quad (11)$$

where $C_0$ is the coupling constant for superimposed modes, and $I_1(x,y)$ and $I_2(x,y)$ are the mode intensity profiles in the active medium. By taking two Gaussian profiles of equal radii $w_0$ separated by a distance d, Eq. (11) becomes:

$$C(d) = C_0\, e^{-d^2/w_0^2}. \quad (12)$$

To compare Eq. (12) with experiments, we tried to determine $w_0$ by two means. The first method consists of carefully measuring the cavity length by measuring the beat-note frequency between two successive longitudinal modes when the laser is multimode. We then obtain a cavity optical length L = 4.73 cm. Since the thicknesses of the 1/2-VCSEL, the étalon, and the birefringent crystal are very small compared with the cavity length, we use this value as the cavity geometrical length to compute the size of the waist for a simple planar-concave cavity. This leads to $w_0$ = 60 µm. We then adjust the value of $C_0$ to 0.71 in order to find the correct value of C for d = 20 µm. This leads to the dot-dashed blue line in Fig. 4, which provides only poor agreement with the measurements for d = 50 µm and d = 100 µm. We then tried to determine $w_0$ by measuring the divergence of the beam at the output of the laser. Then, by taking into account the divergent-lens effect due to the traversal of the output mirror, we can deduce the divergence of the intracavity beam and thus, finally, the value of $w_0$. With that method, we find $w_0$ = 50 µm. By adjusting the value of $C_0$ to 0.75, we then obtain the dashed green line in Fig. 4. The agreement with measurements is better than before but is still not perfect. A perfect agreement can be obtained for $w_0$ = 41 µm and $C_0$ = 0.8, as evidenced by the full red line in Fig. 4. The reasons why the agreement cannot be better with the two other curves are that i) Eq.
(12) leads to a very fast variation of C with $w_0$, and ii) our methods to determine $w_0$ lack precision. In particular, the first one, which supposes that our cavity is a planar-concave resonator, is not exact because the thermal lens induced by the pump in the active medium is strong, preventing the structure from behaving as a simple planar mirror. However, since the thermo-optic coefficients of GaAs and AlAs are positive [19], we expect the thermal lens effect to be positive, which should lead, for our cavity configuration, to an increase of the mode radius inside the structure instead of the observed decrease. This point thus requires further investigation.

One potentially important conclusion of the present work is that the high value of $C_0$ (of the order of 0.8) explains why it is so difficult to obtain robust oscillation of the two modes without performing a spatial separation. However, since $C_0$ is smaller than 1, the simultaneous oscillation of the two modes is in principle possible even in the absence of spatial separation. It nevertheless requires a careful balance of the losses and gains of the two modes to force them to oscillate simultaneously.
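The comparison above can be reproduced directly from the Gaussian-overlap form of Eq. (12). The short sketch below is illustrative only; it assumes the best-fit values $w_0$ = 41 µm and $C_0$ = 0.8 quoted in the text, and its predictions are consistent with the measured values 0.65, 0.18, and ≈0 within the stated ±0.05 uncertainty:

```python
import math

def coupling(d_um, w0_um, C0):
    """Gaussian-overlap prediction of the coupling constant, Eq. (12):
    C(d) = C0 * exp(-d^2 / w0^2) for two equal-radius Gaussian modes
    separated by a distance d in the active medium."""
    return C0 * math.exp(-(d_um / w0_um) ** 2)

# Best-fit parameters quoted in the text: w0 = 41 um, C0 = 0.8.
for d in (20, 50, 100):
    print(d, round(coupling(d, 41, 0.8), 3))
```

The same function, evaluated with the other (w0, C0) pairs quoted in the text, reproduces the dot-dashed and dashed curves of Fig. 4, which illustrates how sensitively C depends on the assumed waist.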
Conclusion

In conclusion, we have measured the coupling constant for the two perpendicularly polarized modes of a laser based on strain-balanced InGaAs/GaAsP quantum wells. The values of the coupling constant that we obtained are useful for the design of relaxation-oscillation-free two-frequency lasers [10]. In particular, it is found that the simultaneous oscillation of two perpendicularly polarized modes is in principle possible without spatially separating them in the active medium. In the future, we plan to gain a better understanding of the values we obtain for this coupling constant using either models based on the introduction of carrier spin dynamics [20,21] or a more complete microscopic theory of the anisotropy of gain saturation in such quantum wells [22]. This should help us to optimize these lasers and, in particular, reduce their noise.

Fig. 2. Experimental results for a spatial separation d = 20 µm. (a) Evolution of the powers of the ordinary and extraordinary modes when the losses of the ordinary mode are modulated at 227 Hz. (b) Same as (a) when the losses of the extraordinary mode are modulated.
The Choice of Characters under the Collapse of Values--Interpreting A Road to the Big City from Psychoanalytic Theory

— "A Road to the Big City" is one of Doris Lessing's classic short stories, which depicts three typical characters in the society of that time. This article argues that the three characters correspond to the three levels of Freud's personality theory, and on that basis analyzes and explores the reasons behind the characters' behavior.

INTRODUCTION

A Road to the Big City starts with the story of Jansen, a traveler passing through the city, leaving by train at midnight. During the six hours between trains, Jansen meets two sisters. The elder sister, Lilla, is a prostitute who has had her career for a year, and her younger sister, Marie, is a rural girl who has just arrived in the city that morning. Obviously, Marie wants to come to the city to take refuge with her sister. But she does not know what her sister does for a living, and her sister even wants her to take up the same work. Jansen, however, knows. Facing the doom that is coming to Marie, Jansen's inner humanity is touched. He wants to persuade Marie to return to the countryside, to preserve Marie's innocence and keep her from being polluted by the turbidity of the city. However, Jansen's persuasion seems weak against the strong material temptation of the city, and Marie does not believe anything he says. Finally, out of desperation, Jansen forcibly sends Marie to the train back to the countryside, but when his own train starts, he finds that Marie has got off the train and gone back to the city. Obviously, Marie chooses to stay in the city at the last moment and face a future she does not know. This short story is one of Lessing's classic works, which is short but thought-

THEORETICAL SOURCES

Personality refers to important and relatively stable aspects of behavior (Ewen, 2009:3). According to Freud, the founder of the psychoanalytic school, the goal of all behavior is to obtain pleasure and avoid unpleasure or pain. The Ego is a part of the Id
transformed by the direct influence of the external world through the mediation of perceptual consciousness. The Ego is guided by the "reality principle" rather than the "happiness principle". The purpose of the reality principle is to delay the release of energy until something satisfying is found or produced. Following the principle of reality does not mean giving up the principle of happiness, but merely requires it to be put aside for a time according to practical needs (Behrendt, 2016:46). Under the influence of the Ego's self-preservation drive, the pleasure principle is displaced by the reality principle, which, without abandoning the aim of ultimately achieving pleasure, nonetheless demands and procures the postponement of gratification, the rejection of sundry opportunities for such gratification, and the temporary toleration of unpleasure on the long and circuitous road to pleasure (Freud, 2003). The implementation of the principle of reality, the operation of the secondary process, and the more important role of the external world in one's life all stimulate the development and maturation of such mental processes as perception, memory, thinking and action (Hall, 1986:30).

The Ego represents what we call reason and sanity, in contrast to the Id, which contains the passions (Freud, 1989:08). Because it follows the "principle of reality", the Ego can adjust the contradiction between the Id and the Superego according to the actual conditions of the surrounding environment, and take rational action, striving to satisfy the Id's desires in realistic and socially appropriate ways.
The Superego is the representative of every moral constraint and the advocate of the pursuit of perfection. The Superego, at the highest level of the personality, is the moralized Ego. Its power comes from its capacity to create guilt and the bad feelings connected with guilt, and it can dictate our behavior and even our thoughts. While the Superego can help the individual conform to the basic rules and laws of the society he lives in, it can also sometimes become the most powerful and even the most destructive part of his personality (Roth, 1997:13).

Lilla

In the short story, the elder sister Lilla can be seen as the embodiment of the Id. Her behavior follows only the "happiness principle", that is, avoiding pain and seeking pleasure, and eliminating the stressful experiences that make people feel painful and uncomfortable. In life, she ignores the external norms of social morality and constantly induces the "Ego" to satisfy her desire for pleasure. Lilla is dressed in a stylish and exquisite way: "She wore a tight short black dress, several brass chains, and high shiny black shoes. She was a tall broad girl with colorless hair ridged tightly round her head, but given a bright surface so that it glinted like metal. She immediately lit a cigarette…" (Lessing, 1). In the process of speaking with Jansen, Lilla also has her own routines, such as asking the time of the train and finding opportunities to give Marie and Jansen time alone. From these details, we can see the nature of Lilla's work and her proficiency in it.
In the patriarchal society of the time, women generally undertook housework at home and did not go out easily. The nature of the work of women who constantly appeared in public can be imagined. Men were better educated and found better jobs in society; this is also reflected in the text, where the men are gentlemen in suits. In a patriarchal society, most women could only rely on men, either as housewives or as prostitutes like Lilla. Under the temptation of the big city, Lilla, who comes from the countryside and longs for the brilliance of big-city life, never considered the practical possibility or morality of her desire. As a prostitute, Lilla satisfies not only her physical desires but also her material desires. As long as she thought something could bring her pleasure, psychologically or physically, she would do it, even if it was irrational. Social ethics and norms are not within Lilla's consideration; she only needs to meet her own needs. In order to earn more money, she did not even hesitate to teach her sister to be a prostitute. Lilla took her to the railway station, specially selected men to stay overnight, helped her choose men, and created opportunities for her to have relationships with strange men. Lilla is a victim of a patriarchal society and a full manifestation of the dominant position of the Id in consciousness.
Marie

Marie embodies the "Ego", and she follows the principle of reality. As a rural girl, Marie also wants to live a free life with money and "love", like her sister Lilla. We learn that she always thought her sister was a typist, and that she too could do a serious job as a typist and live the life she wanted. She is eager to realize her dream by working hard and making a living in the big city. That is why she came to the city from the countryside and wanted her sister to introduce her to a suitable job. But Marie, whose ideas are pure and simple, never thought of doing work that broke social norms and values, which can be seen from her formal dress and her utter lack of urbanism: "Plump, childish, with dull hair bobbing in fat rolls on her neck, she wore a flowered and flounced dress and flat white sandals on bare and sunburned feet. Her face had the jolly friendliness of a little dog." (Lessing, 1). Therefore, we see the Ego in Marie's consciousness. She has her own "happiness principle" that she wants to follow. In order to achieve this goal of happiness, she currently follows the "reality principle", that is, running from the countryside to the city and finding a decent job. Everything in the external world, that is, the big city, stimulates and seduces the senses and psychology of this inexperienced girl, seemingly accelerating her psychological development and maturation. She increasingly feels that she has broadened her horizons and becomes more and more obsessed with the prosperity of the big city, so she wants to stay. "The three went into the street. Not far away shone a large white building with film stars kissing between thin borders of coloured shining lights.
Streams of smart people went up the noble marble steps where splendid men in uniform welcomed them. Jansen, watching Marie's face, was able to see it like that." (Lessing, 3). Finally, Marie learns the truth from Jansen and firmly chooses to stay in the big city. This is actually the result of her adjustment of the contradiction between the "Id" and the "Superego". Living in a busy big city like Lilla is something she aspires to, but Jansen suggests a simple and original life back in the countryside. In the end, Marie stays and chooses to realize her dream in her own way.

Jansen

In the short story, Jansen embodies the "Superego". At the beginning of the short story, Jansen's background is explained: "For a week he had been with rich friends, in a vacuum of wealth, politely seeing the town through their eyes. Now, for six hours, he was free to let the dry and nervous air of Johannesburg strike him direct." From the following text, we can also learn that Jansen is tired of the life of luxury in the city. He has already seen through the filth, hypocrisy, and deceit of city life and wants to return to a purer and simpler life. Therefore, he is attracted to the simple girl Marie at first glance. Under the influence of the conscious Superego, the Ego feels ashamed when it succumbs to temptation. When Jansen unknowingly returns to the apartment with them and realizes what is going to happen next, he suddenly feels very helpless and angry.
"Jansen adjusted himself on the juicy upholstery of a big chair. He was annoyed to find himself here. What for? What was the good of it? He looked at himself in the glass over a sideboard. He saw a middle-aged gentleman, with a worn indulgent face, dressed in a grey suit and sitting uncomfortably in a very ugly chair." (Lessing, 4). The Superego is a symbol of morality and norms and a defender of traditional social values. As a person who has experienced everything in the city, Jansen wants to escape it and protect the girl from the same occupation and life as her sister Lilla. So Jansen keeps persuading Marie to go back to the countryside and even buys a ticket to put her on the train. At this moment it is Jansen's conscious Superego, which ignores the gains and losses of reality and acts in accordance with "moral principles", that is at work. Jansen just wants to follow his conscience and morality and send this simple girl back to the countryside. In this way, it might be possible for her to keep her innocence without being invaded by the impetuous big city. Otherwise, she will only become a slave to desire, like her sister Lilla, and become a prostitute for the sake of material life. However, in the end, Marie returns to the city, and Jansen fails to achieve his wish. No matter how ethical his intent, Marie still chooses to pursue the Ego, because the Superego cannot completely suppress the impulses of the Id and Ego. Freud believed that this perfect personality state was only an ideal and could not be fully realized. The Id and Ego will strive to break away from the bondage of the Superego. So, in some cases, the Superego must yield to the demands of instinct.
CONCLUSION

Doris Lessing created three characters with different personalities in her short story A Road to the Big City, with delicate strokes and concise language. By analyzing the correspondence between Jansen, Lilla and Marie and the Superego, Id and Ego in Freud's theory of personality structure, this paper argues that, in order to live a vain life, Lilla did not hesitate to draw her sister into prostitution; she had no shame or moral bottom line, was full of desire and instinct, and was the representative of the Id. Marie pursued the principle of reality: in order to live as brightly as her sister, she simply adhered to the principle she thought was right, namely that listening to her sister could make her as successful as her sister, with love and money; she was the embodiment of the Ego. Jansen became the moral authority among the three: he wanted to follow his conscience and morality to save Marie from the city, symbolizing the Superego. In this short story, Freud's theory of personality structure is vividly displayed and completely illustrated.

By analyzing this short story, we find that in that society, traditional values were disintegrating. People in the countryside were tired of their existing lives and yearned for the big cities. What people valued was no longer a simple and peaceful life, but a rich material life and enjoyment. Rich men spent money like water, while women even became prostitutes in order to satisfy their desire for enjoyment. Having discovered the dirty and restless side of the city, the middle-aged Jansen, who still has a conscience and traditional values but is not compatible with the young people of the city, has to return to his hometown and seek the comfort and tranquility that traditional life can provide.
provoking. Most scholars tend to analyze the female character Marie in this work, with almost no comparative analysis of the three characters. Therefore, this article creatively uses Freud's personality theory to analyze the psychology of the three characters, exploring their choices and the reasons for them under the disintegration of traditional social values.

IJELS-2023, 8(3), (ISSN: 2456-7620) (Int. J of Eng. Lit. and Soc. Sci.) https://dx.doi.org/10.22161/ijels.83.5

Freud's first theory was called the "topographical model", and it divided the mind into two areas: a Conscious/Pre-conscious area that contains all the thoughts and feelings of which we are

III. CHARACTER ANALYSIS

The psychological punishment and reward for the Superego are pride and guilt or inferiority. The Ego, when it has done something moral, or conceived a moral thought, is pleased with pride; and when the Ego gives in to temptation, it feels ashamed. The main function of the Superego is to control and regulate the impulses in the Id that, if lost, would
High-resolution surface water dynamics in Earth’s small and medium-sized reservoirs Small and medium-sized reservoirs play an important role in water systems that need to cope with climate variability and various other man-made and natural challenges. Although reservoirs and dams are criticized for their negative social and environmental impacts by reducing natural flow variability and obstructing river connections, they are also recognized as important for social and economic development and climate change adaptation. Multiple studies map large dams and analyze the dynamics of water stored in the reservoirs behind these dams, but very few studies focus on small and medium-sized reservoirs on a global scale. In this research, we use multi-annual multi-sensor satellite data, combined with cloud analytics, to monitor the state of small (10–100 ha) to medium-sized (> 100 ha, excluding 479 large ones) artificial water reservoirs globally for the first time. These reservoirs are of crucial importance to the well-being of many societies, but regular monitoring records of their water dynamics are mostly missing. We combine the results of multiple studies to identify 71,208 small to medium-sized reservoirs, followed by reconstructing surface water area changes from satellite data using a novel method introduced in this study. The dataset is validated using 768 daily in-situ water level and storage measurements (r2 > 0.7 for 67% of the reservoirs used for the validation) demonstrating that the surface water area dynamics can be used as a proxy for water storage dynamics in many cases. Our analysis shows that for small reservoirs, the inter-annual and intra-annual variability is much higher than for medium-sized reservoirs worldwide. This implies that the communities reliant on small reservoirs are more vulnerable to climate extremes, both short-term (within seasons) and longer-term (across seasons). 
Our findings show that the long-term inter-annual and intra-annual changes in these reservoirs are not equally distributed geographically. Through several cases, we demonstrate that this technology can help monitor water scarcity conditions and emerging food insecurity, and facilitate transboundary cooperation. It has the potential to provide operational information on conditions in ungauged or upstream riparian countries that do not share such data with neighboring countries. This may help to create a more level playing field in water resource information globally.

Results

Mapping and global statistics. We have for the first time established and applied analytics to large amounts of satellite data, to monitor a total of 71,208 small to medium-sized reservoirs (sizes varying from 0.01 to 100 km²), with a revisit frequency of at least one week (though the actual observation frequency varies depending on satellite image availability and quality). For reference, we consider a size of 10 ha to be small, similar to the minimum size of lakes in the HydroLAKES database [31]. We use a combination of remote sensing image processing algorithms on multi-petabyte satellite datasets and cloud analytics through the GEE platform. Recent work on mapping dams and reservoirs has so far focused on the identification of dams and reservoirs [30,31]. Studies focusing on higher-resolution surface water dynamics of reservoirs at a global scale have mainly examined smaller numbers of reservoirs [25,26]. In our study, we combine data from multiple Landsat and Sentinel satellites to ensure revisit times are short enough to assess the status of reservoirs. We collate a set of reservoirs in the form of geospatial polygons from several dam and waterbody datasets, and per reservoir establish image-to-image surface water area estimates over non-cloudy areas, followed by a filling of remaining occluded pixels using the probability of surface water occurrence [18].
The entire method is described in the Methods section. We perform validation for four countries (Spain, India, South Africa and the USA) and 768 reservoirs in total, resulting in a high goodness of fit between satellite-derived surface water area dynamics and daily in-situ water level and storage measurements, with r² values larger than 0.7 for 67% of the reservoirs used in the validation study, indicating that surface water area can be used as a proxy for storage in many situations when analyzing reservoir storage dynamics. Besides the global statistics on the surface water area of reservoirs over the last 36 years (1985-2021), we also studied their intra-annual and inter-annual variability. Figure 1 shows how the surface water area of small and medium-sized reservoirs varies at inter-annual (year-to-year trend) and intra-annual (seasonal) scales.

Figure 1. Overview of the reservoirs included in the database and their surface water area dynamics. The mean-normalized intra-annual (seasonal) (a) and inter-annual (trend) (b) variability of the surface water area of reservoirs, computed as the standard deviation of the trend/seasonal components of the surface water area time series divided by its mean area. Seasonal/trend decomposition of the surface water area time series is performed using the statsmodels Python library [32]. The map is generated using Unfolded Studio [33]. Map: https://studio.unfolded.ai/public/189a4711-8fd7-4884-ac49-d97e9f93796e.

Note that 479 reservoirs with a mean area larger than 100 km² are also depicted in the maps but were excluded when computing trend/seasonal variability due to lower accuracy. We did not analyze the origin of inter-annual trend variability, which could be caused by reservoirs being filled during the study period, decommissioned, or large climate variability.
Using the established dataset, we also investigated whether the seasonal/trend variability of surface water area differs significantly between geographical regions. We did not find any significant spatial relations, implying that the observed relationship between variability and size of surface water bodies, when investigated over a large enough set of samples, applies globally, regardless of climate region or landscape. In the remainder of this paper, we investigate three possible use cases for our reservoir surface water area dynamics database, assuming sufficient satellite data is available in near real-time. Claims regarding water scarcity. Climate change increases the frequency and severity of extreme weather events, such as floods and droughts. But politicians may use this fact to hide some of the real culprits behind water scarcity, such as increasing water withdrawals and poor water resources management. Objective and timely information on surface water resources can support or counter claims regarding water scarcity and/or its principal drivers. We demonstrate this by analyzing surface water dynamics in Turkey in the years leading up to January 2021. In mid-January 2021, The Guardian reported that water availability was critically low across Turkey as it faced "the most severe drought in a decade" 33 . Istanbul in particular was reported to be hit hardest, with reservoir levels receding such that curtailments were needed. Focusing on Istanbul first, Fig. 2a,b show the surface area time series for the largest reservoirs around Istanbul. The results confirm that these reservoirs were indeed at a low level, but also reveal that more dramatic drought conditions have occurred in the past, particularly a two-year drought in 2007-2008 and another in 2014.
It should be noted that Istanbul's population has grown dramatically over the past decades, making this region severely water-stressed due to rapidly increasing demand and, therefore, highly vulnerable to the declining rainfall across the Mediterranean as a result of climate change 34 . To put the findings reported in The Guardian article in context, we also analyzed the accumulated surface water of all reservoirs in our database over the entire country. Figure 2c,d show the behavior of the accumulated surface water area (c) in time, with the locations of the reservoirs (d) considered. Our analysis for Turkey only includes reservoirs with an average surface water size larger than 10 hectares and smaller than 50 km². The graph does not reveal the drought as clearly as the results for Istanbul do. It does, however, demonstrate a clear upward trend in long-term surface water. After further inspection, we found that this results from the development of entirely new surface water bodies. We visually inspected several locations where new reservoirs were constructed using the Aqua Monitor tool 17 and confirmed that their surface water area time series indeed increased significantly from almost zero. The significant change in surface water over the year 2009 is partly due to the refilling of existing water bodies after the drought of 2008 and partly due to the filling of entirely new impoundments, such as Manyas, Boyabat, Çekerek, and a number of others. Although surface water area estimates do not provide the means to monitor per-capita available water resources, the above evidence demonstrates that surface water, and volumetric water resources availability (because of the interrelation between surface area and volume), was much higher than during the 2008 drought. This was partly due to the construction of new dams and resulting impoundments over a short amount of time.
The above example demonstrates that surface water monitoring may assist in answering attribution questions around water scarcity. It also indicates that essential contextual information is needed to understand the attribution problem, such as population growth numbers and the translation of available surface water resources into relative units such as per-capita available water resources. Also, the ability to distinguish water resources from new impoundments will assist in understanding the development of surface water availability in a given country against the challenges imposed by drivers of scarcity, which is essential for monitoring progress on Sustainable Development Goal 6. It is then vital to convey the complexity of these compounding effects on water scarcity to users of the information in a digestible manner. It is also important to note that we focus only on surface water resources, without considering other sources such as groundwater. Drought early warning, response, and food insecurity. Another logical use case for near real-time monitoring of reservoirs is drought early warning and water and food insecurity early warning. Suppose multiple reservoirs in multiple basins or countries hold very low surface water amounts compared to the normal situation. Detecting this early may provide a warning signal for degrading food production and thus potentially upcoming food insecurity. If this situation is observed at the onset of the dry season in an area that strongly relies on local food production, it may imply food shortages in the forthcoming season. Early observations may then be used to organize and implement water use limitations, redistribution towards priority uses (such as drinking water supply), government food supply, and early aid mobilization by the international disaster relief and development communities to relieve water and food insecurity.
Here we use recent droughts in South Africa to demonstrate this: the well-documented "Day-Zero" drought impacting Cape Town in 2017 36,37 , and the earlier but much larger-scale drought impacting the entire region in 2015/2016, which demonstrates the importance of large-scale monitoring for water rationing decisions or pre-allocation of food resources. During the severe drought that affected Cape Town in 2017, concerns were so severe that plans were made to shut down all but essential services 38 . But while this drought was very well captured in storage time series and reported on in the media, the large-scale drought of 2015/2016, which affected water resources and food availability in the entire region, including the Limpopo, Incomati, Umbeluzi and Zambezi basins, has never been adequately described in terms of storage anomalies. Most studies have focused on investigating the climate phenomena leading to the drought 39 , rainfall and runoff anomalies, or the response of institutions to the drought 40 . However, surface water storage has not been monitored at a large scale, although this variable may serve as a good indicator of the state of water resources 41 , not least because it integrates the history of rainfall, runoff, and water use over time, including possible multi-annual storage effects. We describe the surface water conditions in a way similar to the Standardized Storage Index (SSI) 42 . The SSI shows how much the current storage deviates from normal conditions in the given month, measured in standard deviations. We introduce a Standardized Area Index (SAI) in the same way as the SSI, looking at the variability of the surface water area of reservoirs over larger areas. This allows for an equivalent comparison between all reservoirs in a given area, regardless of their size, any seasonal control that may be applied, or the storage-to-runoff ratio as a measure of multi-year storage capacity.
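The SAI described above, as a monthly standardized anomaly, can be sketched as follows: for each calendar month, the deviation of the observed surface area from that month's long-term mean, expressed in standard deviations. The function name and the synthetic drought scenario are illustrative assumptions.

```python
# Minimal sketch of a Standardized Area Index: per calendar month,
# (area - monthly mean) / monthly standard deviation.
import numpy as np

def standardized_area_index(area, month):
    """area: 1-D array of surface areas; month: calendar month (1-12) per observation."""
    area = np.asarray(area, dtype=float)
    month = np.asarray(month)
    sai = np.empty_like(area)
    for m in range(1, 13):
        sel = month == m
        mu, sigma = area[sel].mean(), area[sel].std()
        sai[sel] = (area[sel] - mu) / sigma
    return sai

# Example: 10 years of monthly areas with a sharp drought in the final year
months = np.tile(np.arange(1, 13), (10, 1))        # (years, months)
area = 8.0 + np.sin(2 * np.pi * months / 12)       # normal seasonal cycle
area[-1, :] -= 2.5                                 # drought year: far below normal
sai = standardized_area_index(area.ravel(), months.ravel())
print(sai.reshape(10, 12)[-1].round(2))            # strongly negative in drought year
```

Aggregating such standardized anomalies over all reservoirs in a basin or country gives the basin-wide curves of the kind shown in Fig. 3.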
Figure 3 shows the SAI applied to surface water area time series (instead of storage) for, respectively: Theewaterskloof Dam, the largest dam providing water to Cape Town; the six largest reservoirs (by surface area) around Cape Town in our database; and all water surfaces together of impoundments larger than 1 km² across the whole of South Africa. The results show that by November 2017, the storage anomaly in Theewaterskloof Dam was more than three standard deviations below normal expected conditions, indicating that this drought was the worst recorded in the last thirty-six years. This also holds for the entire surroundings of Cape Town. Interestingly, for South Africa on average, this same period was quite normal (orange line). For the whole country, the earlier 2016 drought comes up very clearly. To visualize the 2016 drought geospatially, Fig. 4 shows how the drought manifested itself over reservoirs in the entire region, including the countries surrounding South Africa, notably Eswatini, Lesotho, Namibia, Botswana, and Zimbabwe. We show the conditions at the beginning of the dry season of 2016 (April), a moment at which the information could be used as an early warning signal. It demonstrates how severe conditions such as "Day-Zero" may be monitored at a much broader scale and at an earlier time, offering clear early warning signals for water and food shortages and possibilities for pre-allocation of drought response resources at the national but possibly also at the transboundary level. Understanding the effects of drought at the transboundary basin level, where drought also affects the other riparian states, may help foster cooperation instead of triggering conflicts. Transboundary water conflicts and cooperation. Monitoring water resources use in small and medium-sized reservoirs can be a considerable contributor to conflict prevention and mitigation in internationally shared basins.
A significant share of all conflicts between riparian states emerges in relation to water allocation and/or the development of infrastructure for irrigation and hydropower generation 43 , especially if states favor unilateral strategies for doing so. Very often, such efforts spark disagreement or full-fledged conflict as other riparian states become concerned about the (potential) negative environmental and socioeconomic effects of such projects on their own water resources use opportunities and about a potential violation of the principle of no significant harm, which is considered a cornerstone of cooperation over shared water resources 43,44 . As states accord strategic or even security relevance to water resources or consider dams as emblematic symbols of nation-building 45 , dams and their reservoirs carry a political conflict potential that often exceeds their actual water resources use dimension. De-politicizing or de-securitizing the dams discourse and instead engaging in creating a joint understanding of water resources use and possible change across the entire basin can thus be an important factor in mitigating conflict and fostering cooperation. The countries in the Tigris-Euphrates basin (Turkey, Syria, Iraq, and Iran) have unilaterally developed irrigation and hydropower schemes over the last 50 years, intensifying water use in a water-scarce basin. This has led to an intensification of tensions over shared water resources in a region that also suffers greatly from several prolonged violent conflicts beyond the water sector. The potential of monitoring basin-wide water resources may support future steps towards cooperation, as it levels the information playing field between all riparian states. In addition to the protracted dispute between Turkey and Iraq, disputes also exist between Iran and Iraq.
Iran is storing water from tributaries of the Tigris flowing into Iraq (such as the Sirwan and the Little Zab rivers, together accounting for about 25% of the Tigris' annual flow), and diverting water eastwards to ease the severe water scarcity large parts of the country are facing, along with the socioeconomic as well as political implications that come with it 46 . To do so, Iran has built more than 600 dams across the entire country in recent decades, some of them on rivers flowing to Iraq. This is done without sharing data with neighboring Iraq, not least because cooperation in general, and data and information exchange in particular, between the two states is limited 46 . Individual water crises in two riparian states are thus aggravated by unilateral measures to alleviate such crises and have led to tensions between the two countries. Access to data on reservoir water dynamics, and thus also reservoir operation, could provide a basis for both states to engage in cooperative approaches to using the scarce water resources they share in more efficient ways. While political willingness to share data remains low in the region, open-access data that does not formally originate from one country (and cannot therefore easily be rejected or questioned by the other country) could, albeit most likely not welcomed with open arms, pave the way towards more leveled exchange and a shift from accusations based on assumption to negotiations based on neutral and externally provided facts. Figure 5 shows the transboundary basins of the Little Zab and Sirwan rivers as well as the main reservoirs located in this area. While our database contains surface water dynamics of the two large reservoirs on the Iraqi side (Dukan and Darbandikhan Lakes), a number of smaller reservoirs constructed upstream during 2000-2020 (indicated in red) were mapped using the Aqua Monitor algorithm 49 .
Discussion The advent of cloud computing resources and decentralized data storage makes it possible to establish worldwide monitoring services. This provides new scientific insights into how water bodies behave in time, as shown by several previous authors. The implications for science and society may be vast. Such monitoring capabilities, especially when extended from surface areas to water volumes and complemented by socio-economic data, will also support the monitoring of water-related global targets, such as those related to the Sustainable Development Goals. In this paper, we showed that small water bodies (< 100 ha) in particular show a large inter- and intra-annual variability in available water resources, with implications for the security of food production, especially for societies that rely on smaller water bodies. Monitoring the trends in reservoir numbers (i.e., new reservoirs appearing), their available resources, how variable these resources are over time, and possible changes in both may provide the baseline for such monitoring. Furthermore, the behavior of new and existing reservoirs may open debate on their impact on society and the environment and how such reservoirs could be better managed in order to protect both. Moreover, in transboundary basins, such information can also support conflict mitigation and better water management across riparian states. Ideally, it may even result in more efficient multipurpose use of reservoirs in multi-reservoir basin systems. To truly understand impacts and trigger decision-making to improve water management, more insights are required: for instance, which ecosystems may be threatened by reservoirs, which communities are served by which water body, and how reliant such communities are on surface water alone. Such information remains highly localized and will require local data collection efforts in conjunction with satellite data analyses.
Combining global water resources observations with more localized socio-economic details is key to understanding the exact impacts on communities. It will make our near real-time estimates much more helpful for many possible end-users. The case studies provide convincing examples of how global-scale operational reservoir monitoring at detailed spatial (< 100 m) and temporal (weekly) scales has two important implications. First, any stakeholder (not only the one operating or controlling the water body) can monitor these reservoirs in real-time. This may result in a level playing field on water information during politically and socially sensitive negotiations about water entitlements and transboundary agreements, and a better basis for law enforcement, provided that our observations are formally recognized. Second, we foresee that near real-time observations can be a significant contributor to assisting water management in light of current and future water crises, helping to reduce the vulnerability of people and societies to scarcity 41,50 . Real-time observations, provided they are timely and skillful enough, may inform, e.g., improved real-time water accounting, improved dam operations and curtailing decisions, or early mobilization of humanitarian aid. They will form an essential input for seasonal water resources forecasting, given that the present-day state of water resources provides skill for such forecasts in the first weeks to months ahead 41,51,52 .
The prospects are that governments at local as well as national levels, international organizations, humanitarian aid agencies and NGOs can be provided with better and more localized information on water issues (too much or too little) and act upon it earlier. To give a few examples: the reinsurance industry may tailor its payouts and premiums based upon monitored water shortages, and energy utilities can monitor hydropower potential for the coming months based on (upstream) reservoir states and potentially adjust power production based on anticipated rainfall, water availability, and changing energy demands in a shared regional market. Limitations and future developments. Our current analysis is limited to the impoundments' surface area alone. Our dataset can be used to indicate storage variability or the percentage of reservoir filling, provided that the reservoir banks are not so steep that water level changes barely alter the surface water area. For many applications, however, volumetric time series are a must. For instance, the Food and Agricultural Organization evaluates water accounting for many countries, requiring a month-to-month understanding of cubic-meter storage of resources (besides the month-to-month fluxes). Surface area variations alone are not sufficient. Moreover, our surface area dynamics dataset will not offer enough information where reservoir banks are steep, resulting in a small variability of the water surface area. The logical next step would be to combine our dataset with altimetry for both cases. Cooley et al. 27 already show the vast number of possible observations that can be made with the ICESat-2 satellite for both small and large reservoirs. Busker et al. 25 computed surface volume time series for 137 large water bodies, using a combination of monthly surface area estimates 18 and the DAHITI water level archive 53 .
The new SWOT mission 54 will offer an opportunity to establish storage-area or storage-depth relationships, which can then be used for real-time monitoring in an operational setting. In addition, large additional value may lie in adding dam properties and installations to estimate live storage, potential hydropower production, and the likelihood of spills and spillage amounts. Further, the coverage provided in this paper is nowhere near complete, and limited curation has been done. In particular, many small reservoirs are missing, the maximum extent of reservoirs is not always correct in existing vector maps, and the prior water occurrence probability, used to fill gaps in unobserved parts of the water body for cloudy images, can be significantly improved. One possible extension of our dataset could be to attribute large inter-annual changes of reservoirs by analyzing surface water area time series, identifying newly constructed reservoirs, periods of drought, or decommissioned reservoirs. Further extension of our database, curation of the reservoirs' existence, extent, and classification into natural versus man-made remain for future work. While our algorithm demonstrates excellent results for arid/semi-arid environments, several challenges remain open. Harmonized satellite imagery from the Landsat and Sentinel optical missions results in high-frequency observations of reservoirs in these areas but provides limited results in regions with significant cloud cover. A solution would be to extend the monitoring with free data from the Copernicus Sentinel-1 SAR mission satellites. A simple spectral index approach, even with dynamic thresholding, makes it challenging to discriminate water from snow and ice or hill shadows; including more spectral information and/or auxiliary datasets (such as height above the nearest drainage) is expected to improve the algorithm's accuracy in these cases.
Furthermore, multi-class Otsu thresholding and/or more advanced machine learning methods, such as deep neural networks, may further improve the algorithm's applicability. Methods Our method to derive water dynamics of reservoirs builds upon previous studies focusing on the mapping of dams and water bodies 30,31,[55][56][57] . We derive surface water area dynamics of reservoirs by analyzing freely available medium-resolution satellite images acquired over the last 35 years by NASA's Landsat and ESA's Copernicus Sentinel missions. Our water detection algorithm was applied individually to all 71,208 water bodies and to every satellite image intersecting with the given water body. The method is implemented using the Google Earth Engine platform to process satellite imagery, which suits the need to process a multi-petabyte satellite dataset well. Still, the overall generation of the dataset took about six months of run time. One of the first challenges we had to overcome was that no harmonized global water reservoirs dataset exists today that maps small-, medium-, and large-sized reservoirs alike. We combined vector maps of water bodies from multiple vector datasets and attributed them as artificial water reservoirs if they intersect with dams (or are located close to dams) or were already attributed as reservoirs in the original datasets. The dams used here were collected from multiple existing datasets 29,30,54,55 . This harmonized reservoir vector dataset was then used to derive surface water dynamics from satellite images. Accurate detection of surface water from optical satellite imagery requires solving several challenges. Firstly, the water/land boundary can be fully or partially occluded by clouds or by shadows from clouds or hills. Secondly, spectral properties of the land and water surface near the land/water boundary may vary significantly.
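The reservoir-attribution step described above (flag a water body as an artificial reservoir when it intersects with, or lies close to, a known dam) can be illustrated with a simplified proximity test. Real processing would use polygon geometries and a GIS library; here, as an assumption for brevity, water bodies and dams are reduced to point coordinates and the buffer distance is an illustrative choice.

```python
# Simplified sketch of attributing water bodies as reservoirs by proximity
# to known dams. Coordinates and the buffer distance are illustrative.
import numpy as np

def attribute_reservoirs(waterbody_xy, dam_xy, buffer_km=1.0):
    """Return one boolean per water body: True if any dam lies within buffer_km."""
    wb = np.asarray(waterbody_xy, dtype=float)[:, None, :]   # shape (n, 1, 2)
    dams = np.asarray(dam_xy, dtype=float)[None, :, :]       # shape (1, m, 2)
    dist = np.linalg.norm(wb - dams, axis=2)                 # (n, m) pairwise distances
    return dist.min(axis=1) <= buffer_km

waterbodies = [(0.0, 0.0), (10.0, 10.0), (3.0, 4.0)]  # candidate water bodies
dams = [(0.5, 0.0), (3.0, 4.5)]                       # known dam locations
flags = attribute_reservoirs(waterbodies, dams, buffer_km=1.0)
print(flags)  # water bodies 0 and 2 are near dams
```

In the actual pipeline this join is performed on geospatial polygons from the cited dam and waterbody datasets rather than on centroids.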
Also, multiple effects can be present in larger reservoirs, resulting in additional noise, such as water slope variability (due to wind or water flow) or the presence of vegetation or other masses floating on the water surface (ice, aquaculture). These challenges make naive water detection methods less suitable for accurately estimating the surface water area of water bodies. In the sections below, we further describe how we tackled these challenges. Numerous algorithms and datasets have been developed over the last decade focusing on the accurate estimation of surface water dynamics from optical satellite imagery 18,[58][59][60][61][62][63] . A key requirement is the ability to classify water where it is occluded by clouds 24,64,65 . To detect the water mask in every satellite image that intersects with a given reservoir geometry, we first discriminate surface water using the NDWI spectral index, applying a local thresholding method based on the Canny edge detector and a binary version of the Otsu thresholding algorithm 61,66 . The water detection step is followed by a gap-filling step, eliminating false negatives (i.e., water pixels detected as non-water). We did not use water occurrence to remove false-positive pixels (i.e., non-water pixels classified as water) due to the low accuracy of the water occurrence dataset. Instead, we only excluded pixels detected as water where NDWI values are less than -0.15, which corresponds to land in most cases. During the gap-filling step, we combine the detected water masks with the water occurrence dataset 18 to determine which areas belong to false negatives and combine them with the detected water mask to obtain the gap-filled water mask. Finally, the resulting surface water area time series are post-processed with temporal (quantile-based) outlier filtering to remove the remaining errors.
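The core of the detection step above (NDWI followed by a binary Otsu threshold) can be sketched as follows. The full method restricts the Otsu sampling to a buffer around Canny-detected water/land edges; that refinement is omitted here, the scene is synthetic, and the band reflectance values are illustrative assumptions.

```python
# Simplified sketch of the detection core: NDWI from green/NIR bands,
# then a binary Otsu threshold on the NDWI histogram.
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index; small epsilon avoids division by zero."""
    return (green - nir) / (green + nir + 1e-9)

def otsu_threshold(values, bins=256):
    """Binary Otsu: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist / hist.sum()
    w0 = np.cumsum(w)               # class-0 weight at each candidate split
    w1 = 1.0 - w0                   # class-1 weight
    mu = np.cumsum(w * centers)     # class-0 cumulative mean mass
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_total * w0 - mu) ** 2 / (w0 * w1)
    return centers[np.nanargmax(between)]

# Synthetic scene: water pixels are bright in green and dark in NIR
rng = np.random.default_rng(0)
is_water = rng.random((64, 64)) < 0.4
green = np.where(is_water, 0.30, 0.10) + rng.normal(0, 0.01, (64, 64))
nir = np.where(is_water, 0.05, 0.30) + rng.normal(0, 0.01, (64, 64))
index = ndwi(green, nir)
threshold = otsu_threshold(index.ravel())
water_mask = index > threshold
print(f"water fraction: {water_mask.mean():.2f}")
```

The subsequent gap-filling and the NDWI < -0.15 exclusion described above would then operate on `water_mask`.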
Our algorithm does not explicitly apply cloud masking, for performance reasons; instead, we filter out bright images fully covered by clouds using the global cloud frequency dataset 67 . For every satellite image overlapping with a reservoir, we compute the top-of-the-atmosphere (TOA) reflectance value corresponding to the 85% quantile over the maximum reservoir area (regional reducer). This metric is then used to indicate the whiteness of every image, with high values corresponding to images fully covered by clouds. Combined with the average annual cloud brightness observed over the reservoir area, we filter out the cloudiest images. Cloud pixels present in the remaining images are corrected during the post-processing step. The outline of this multi-step algorithm is shown in Fig. 6. Validation. To validate the algorithm's performance, we compare the established surface area time series against in-situ measurements of water levels or storage with a daily or higher observation frequency for 768 reservoirs in Spain, India, South Africa, and the United States. Figure 7 shows an overview of the validation locations used and the distribution of correlation coefficients computed per validated reservoir. The figure also shows an example of the relationship between our surface area estimates and the in-situ storage time series for the Theewaterskloof Dam in South Africa. Our time series correlate well with in-situ observations, with r² values higher than 0.7 for 67% of the reservoirs used in the validation. Performance degrades for reservoirs where surface water area variability is small (temperate climate zone or steep reservoir banks) or where in-situ measurements are of lower quality. An example of such a case is also provided for the Prompton Reservoir (United States).
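The image "whiteness" screen described above (the 85% quantile of TOA reflectance over the reservoir area, with high values flagging fully cloud-covered scenes) can be sketched as follows. The cutoff value and the synthetic reflectance ranges are illustrative assumptions; the paper combines this metric with the average annual cloud brightness rather than a fixed threshold.

```python
# Sketch of the whiteness screen: per image, the 85th percentile of TOA
# reflectance over the reservoir area; very bright images are dropped.
import numpy as np

def whiteness(toa_reflectance):
    """85% quantile of TOA reflectance over the reservoir area."""
    return np.quantile(toa_reflectance, 0.85)

rng = np.random.default_rng(1)
clear_image = rng.uniform(0.02, 0.25, (32, 32))   # dark water/land pixels
cloudy_image = rng.uniform(0.55, 0.95, (32, 32))  # bright cloud deck

CUTOFF = 0.5  # assumed threshold; in practice combined with cloud climatology
for name, img in [("clear", clear_image), ("cloudy", cloudy_image)]:
    w = whiteness(img)
    print(f"{name}: whiteness={w:.2f} -> {'drop' if w > CUTOFF else 'keep'}")
```

A quantile (rather than a mean) keeps the screen robust to a few bright pixels in an otherwise clear scene.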
To monitor water storage changes in such reservoirs, alternative monitoring methods are needed, such as altimetry (where storage changes mainly with surface elevation) or synthetic aperture radar (SAR) satellite imagery (where hardly any cloud-free optical imagery is available). Detailed results of the validation study and the datasets used during the validation are included as supplementary materials. The method used during the validation includes a temporal interpolation of daily in-situ measurements for every satellite-derived measurement point. The next step involves the removal of outliers using thresholding of the 2-D kernel density estimation applied to the scatter plot 69 . The final step computes R² and RMSE after applying a non-parametric regression using the LOWESS algorithm 70 from the statsmodels Python package 32 . Figure 6. Method of surface water area detection for reservoirs. The algorithm to compute surface water consists of the following steps: (1) select the satellite images least cloudy over the reservoir area; (2) compute a spectral water index (NDWI here, but this could be any spectral index suitable for surface water detection); (3) apply a Canny edge filter to detect water/land edges (additional steps can include edge suppression based on the spectral properties around edges); (4) define a sampling region for pixels surrounding the water/land edges; (5) sample spectral index values within the buffer computed during step 4 and compute the optimal threshold using the Otsu method; (6) compute the surface water area; (7) select surface water occurrence; (8) fill gaps (false negatives) in the resulting water mask, remove incorrectly detected water (false positives) by sampling water occurrence along water edges, and compute the final filled surface water area mask by clipping water occurrence at a given probability and combining it with the detected water mask.
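The validation steps above can be sketched as follows: daily in-situ storage is interpolated to satellite observation dates, a LOWESS curve captures the (possibly nonlinear) area-storage relation, and R² is computed on the fit. The 2-D kernel-density outlier screen is omitted for brevity, and the data are synthetic.

```python
# Sketch of the validation pipeline: interpolate in-situ storage to satellite
# dates, fit LOWESS (statsmodels), compute R^2 of the fitted relation.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(7)
days = np.arange(0, 3650)                           # ~10 years, daily in-situ record
storage = 50 + 30 * np.sin(2 * np.pi * days / 365)  # in-situ storage (hm^3)

sat_days = np.sort(rng.choice(days, 300, replace=False))  # satellite dates
storage_at_sat = np.interp(sat_days, days, storage)       # temporal interpolation
# Satellite-derived area: roughly linear in storage, plus observation noise
area = 2.0 + 0.05 * storage_at_sat + rng.normal(0, 0.1, sat_days.size)

# LOWESS of storage against area, evaluated at the observed areas
fit = lowess(storage_at_sat, area, frac=0.3, return_sorted=False)
ss_res = np.sum((storage_at_sat - fit) ** 2)
ss_tot = np.sum((storage_at_sat - storage_at_sat.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"r2 = {r2:.3f}")
```

A LOWESS fit is used rather than a straight line because, for real bathymetries, the area-storage relation is generally nonlinear.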
The figure was generated using the Google Earth Engine Code Editor tool 14 . Code: https://code.earthengine.google.com/54602c8309f44cf83e42cf93a854dc49.
Prevalence of Intestinal Parasites Among Preschool Children and Maternal KAP on Prevention and Control in Senbete and Bete Towns, North Shoa, Ethiopia In developing countries, intestinal parasites such as protozoa and helminths are highly prevalent in preschool children. There is also poor understanding of mothers' knowledge, attitudes, and practices towards parasitic infections. Therefore, this study was designed to assess the prevalence of intestinal parasites and maternal knowledge, attitude and practice on their prevention and control. A cross-sectional study was conducted on preschool children in Senbete and Bete towns. Stool specimens were collected and examined for intestinal parasites using the Kato-Katz and formol-ether concentration techniques. Data on mothers' knowledge, attitude, and practice were collected using a pre-tested structured questionnaire. Data were analysed using SPSS-20, and P values less than 0.05 were considered statistically significant. Among 214 preschool children, the overall prevalence of intestinal parasites was 52.3%. The predominant parasite was Hymenolepis nana (23.8%), followed by Giardia lamblia (19.6%). Among the 214 interviewed mothers, 129 (60.3%) had knowledge of the prevention and control of intestinal parasites, and 120 (56.1%) had a positive attitude towards it. Moreover, 95 (44.4%) of the mothers used a toilet or container to dispose of their children's faeces, and 186 (86.9%) gave drugs to their children. A high prevalence of intestinal parasites was found. Maternal education level, open-field defecation and playing with soil were significantly associated with intestinal parasitic infections. Therefore, a health education program to improve maternal knowledge, attitude and practice should be implemented. Background Intestinal parasites are a major public health problem in several developing countries.
According to the World Health Organization (WHO), over 1.5 billion people are infected with one or more intestinal parasites; 700 million people are infected with hookworm and 807 million with ascariasis [1]. Intestinal parasites are most prevalent in developing countries, particularly in sub-Saharan Africa [2]. In Ethiopia there is a high burden of intestinal parasites: the overall national prevalence of any helminth infection was 29.8%, with a variable degree of prevalence among regions [3]. Intestinal parasite infections are common among preschool children, with causes such as playing with soil, sucking fingers and defecation in open fields. Maternal awareness of the prevention and control of intestinal parasites has its own impact on their prevalence. To reduce the impact of intestinal parasites, increasing access to safe water, sanitation and health education is necessary [2]. WHO also recommends periodic preventive chemotherapy, such as albendazole or mebendazole, as a public health intervention [1]. Globally in 2013, more than 266 million preschool-aged and 609 million school-aged children in 106 countries were estimated to be in need of preventive chemotherapy for soil-transmitted helminths. In Africa, more than 13.8 million preschool-aged children in need of treatment were treated [1]. In Ethiopia the main strategies are mass drug administration, case detection and transmission control. However, information on the prevalence and distribution of intestinal parasites is incomplete and not updated periodically, and there are not enough studies throughout the country. Therefore this study was designed to assess the prevalence of intestinal parasites and maternal knowledge, attitude and practice on their prevention and control. Study Design and Period A community-based cross-sectional study was conducted in July 2018.
Study Area

Senbete and Bete towns are found in Jile Timuga woreda, North Shoa zone, Amhara region, Ethiopia. The annual average temperature in Senbete and Bete towns ranges from 24 to 30°C, with annual rainfall of approximately 500-700 mm. The area lies at an altitude of 1,000 to 1,450 m. According to the Jile Timuga woreda health office, the total populations of Senbete and Bete towns are 7,047 and 2,105, respectively.

Eligibility Criteria

All mothers with children aged 1-5 years who had lived in Senbete or Bete town for at least one year and were willing to participate were included. Mothers whose children had received standard intestinal parasite treatment in the previous month, and children with serious illness, were excluded.

Data Collection

Socio-demographic data and mothers' knowledge, attitudes, and practices were collected with a structured questionnaire administered by trained health workers.

Sample Collection, Handling and Transportation

Mothers were instructed to collect about 2 g of fresh stool from their own preschool child using a clean, dry, well-labelled specimen cup. Samples were then transported to the Bete town health centre laboratory, where a portion of each sample was processed by the Kato-Katz method using a template delivering a plug of 41.7 mg of stool, as described by Nyantekyi [4]. The remaining sample was preserved in a test tube containing 10% formalin. All preserved samples and Kato-Katz slides were transported to the Ethiopian Public Health Institute (EPHI) parasitology laboratory and examined using the formol-ether concentration technique.

Quality Control

Standard operating procedures (SOPs) were followed at each step. Microscopic reading was done by senior laboratory technologists. Data quality was assured by training the data collectors in advance on the objectives of the study and the data collection procedures, and by daily checking.

Statistical Analyses of Data

The data were analysed using SPSS version 20.
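The Kato-Katz step described above can be made concrete with a short sketch. This is not the authors' code; it assumes the standard WHO convention that a 41.7 mg template implies a ×24 multiplication factor to obtain eggs per gram (EPG) of stool, and the WHO intensity thresholds for S. mansoni (light <100 EPG, moderate 100-399 EPG, heavy ≥400 EPG):

```python
# Illustrative sketch (not the study's code) of the Kato-Katz egg-count
# conversion and WHO intensity grading for S. mansoni.

KATO_KATZ_FACTOR = 24  # 1000 mg / 41.7 mg ≈ 24

def eggs_per_gram(slide_egg_count: int) -> int:
    """Convert a Kato-Katz slide egg count to eggs per gram of stool."""
    return slide_egg_count * KATO_KATZ_FACTOR

def s_mansoni_intensity(epg: int) -> str:
    """Classify S. mansoni infection intensity using WHO EPG thresholds."""
    if epg < 100:
        return "light"
    elif epg < 400:
        return "moderate"
    return "heavy"

for count in (2, 8, 20):
    epg = eggs_per_gram(count)
    print(f"{count} eggs/slide -> {epg} EPG ({s_mansoni_intensity(epg)})")
```

This conversion explains why most of the S. mansoni cases reported below fall in the light-intensity band: slide counts under about 4 eggs stay below 100 EPG.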
Frequency and cross-tabulation were used to summarize descriptive statistics. Associations between variables were assessed using odds ratios (OR), 95% confidence intervals (CI), and P values.

Ethical Consideration

Ethical approval was obtained from the Research and Review Committee of Addis Ababa University, and permission was obtained from Jile Timuga woreda. Informed consent was obtained from each child's mother. Children with intestinal parasitic infections were treated with the appropriate drug and dose for each parasite detected.

Operational Definitions

Attitude: assessment of mothers' opinions and thoughts about intestinal parasite prevention and control.
1. Positive attitude: mothers whose total score was below the mean (<9.2).
2. Negative attitude: mothers whose total score was above the mean (>9.2).
The attitude questions were prepared using Likert scales, scored 1 for strongly agree, 2 for agree, 3 for disagree, and 4 for strongly disagree. If all responses are "strongly agree" the total score is 5, and if all are "strongly disagree" it is 20, so the possible range is 5 to 20. The mean score among respondents was 9.2.

Knowledge: assessment of mothers' understanding of intestinal parasite prevention and control. The following definitions were used to score the level of understanding; the scoring method was adapted from Abera H. and Tebeje B., 2009 [5].
1. Knowledgeable: scoring 80-100% on the knowledge-measuring questions (more than 7 questions answered correctly).
2. Fairly knowledgeable: scoring 50-79% (5-7 questions answered correctly).
3. Non-knowledgeable: scoring below 50% (fewer than 5 questions answered correctly).
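The attitude and knowledge classifications above can be sketched in a few lines. This is an illustrative reconstruction, not the study's code; it assumes five Likert attitude items scored 1 (strongly agree) to 4 (strongly disagree), giving the stated 5-20 total range, with the sample mean (9.2) as the cut-off, and knowledge graded by the number of correct answers:

```python
# Minimal sketch of the attitude and knowledge scoring described above.
# Assumption: five attitude items scored 1-4, so totals range 5-20.

ATTITUDE_MEAN_CUTOFF = 9.2  # sample mean reported in the study

def attitude_class(item_scores):
    """Lower totals mean stronger agreement, hence a positive attitude."""
    total = sum(item_scores)
    return "positive" if total < ATTITUDE_MEAN_CUTOFF else "negative"

def knowledge_class(correct_answers):
    """Thresholds adapted from Abera H. and Tebeje B. (2009) [5]."""
    if correct_answers > 7:
        return "knowledgeable"         # 80-100%
    elif correct_answers >= 5:
        return "fairly knowledgeable"  # 50-79%
    return "non-knowledgeable"         # <50%

print(attitude_class([1, 2, 1, 2, 1]))  # total 7, below 9.2 -> positive
print(knowledge_class(6))
```

Because item totals are integers, no respondent can score exactly the 9.2 cut-off, so the two attitude categories partition the sample cleanly.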
Practice: assessment of mothers' practices in the prevention and control of intestinal parasites.

Maternal and Children Socio-Demographic Status

A total of 214 mothers whose children were able to produce a stool sample were included in this study. The mean age of the mothers was 27.5 (SD 5.5) years. Almost all (199; 93%) of the study participants were married, and more than three-fourths (79%) of the mothers had not attained formal education. The majority of study participants (74.3%) had 4-6 family members. A total of 214 children were enrolled, of whom 104 (48.6%) were male and 110 (51.4%) were female; the mean age of the children was 3.4 (SD 1.1) years (Table 1).

Prevalence of Intestinal Parasites Detected by Kato-Katz and Intensity of Infections

Using the Kato-Katz method, three helminths were detected. The most frequently identified parasite was H. nana (14.5%), followed by S. mansoni (4.2%) and A. lumbricoides (1.4%). Of the 9 children in whom S. mansoni was detected, 7 had light infections and two had moderate infections (Tables 2 and 3).

Attitude of Mothers Towards Prevention and Control of Intestinal Parasites

Among the 214 mothers, 120 (56.1%) had a positive attitude and 94 (43.9%) had a negative attitude towards the prevention and control of intestinal parasites. Almost half of the mothers (104; 48.6%) strongly agreed that lack of hygiene is the cause of infection with intestinal parasites, and 95 (44.4%) strongly agreed that washing hands with soap prevents intestinal parasite infection (Table 6).

Mothers' Practice on the Prevention and Control of Intestinal Parasites

Half of the mothers (52.3%) had children infected with intestinal parasites at least once in their life. Ninety-five (44.4%) of the mothers used a toilet or container to dispose of their children's faeces, and 186 (86.9%) gave drugs to their children to prevent intestinal parasites (Table 7).
Factors Associated with the Overall Prevalence of Intestinal Parasites

Only mothers' educational status (P = 0.01) was associated with intestinal parasitic infections. Mothers who could read and write were less likely to have a child infected with parasitic infections than mothers who could not read and write (OR = 0.25, 95% CI: 0.08-0.72) (Table 8).

Factors Associated with Intestinal Helminth Infections

In bivariate analysis, educational status, knowing the meaning of intestinal parasites, using a toilet or container for the child's defecation, washing fruit before consumption, cutting the child's nails, and using chemically treated, boiled, or tap water were associated with intestinal helminths (P < 0.2). In multivariate logistic regression, maternal education level and using a toilet or container for the child's defecation were significantly associated with intestinal helminths (Table 9).

Factors Associated with Intestinal Protozoan Infections

Washing fruit before eating, the child's habit of playing with soil, and cutting nails when they grow were associated with intestinal protozoa in bivariate analysis (P < 0.2). In multivariate logistic regression, children who had the habit of playing with soil had twice the odds of being infected with intestinal protozoa compared with those without such habits (OR = 2.01, 95% CI: 1.04-3.8) (Table 10).

Discussion

The findings of the present study show that the overall prevalence of intestinal parasitic infection among preschool children was 52.3%, with Hymenolepis nana the most prevalent helminth and Giardia lamblia the most prevalent protozoan parasite. This finding is higher than in studies done in the Gamo area, South Ethiopia (29.4%) [6], Arbaminch town, Southern Ethiopia (27.9%) [7], and Kenya (25.6%) [8]. However, our report is lower than a study done in Shesha Kekele, Wondo Genet, Southern Ethiopia (85.1%) [4].
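The bivariate odds ratios and 95% confidence intervals reported in the association analyses above can be computed from a 2×2 table. The sketch below uses the standard Wald interval on the log odds ratio; the counts are illustrative only, not the study's raw data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
        a = exposed & infected,    b = exposed & not infected
        c = unexposed & infected,  d = unexposed & not infected
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Illustrative counts only (not taken from Tables 8-10):
or_, lo, hi = odds_ratio_ci(40, 30, 25, 45)
print(f"OR = {or_:.2f}, 95% CI: {lo:.2f}-{hi:.2f}")
```

An association is read as statistically significant at the 5% level when the 95% CI excludes 1, which is how results such as OR = 0.25 (95% CI: 0.08-0.72) are interpreted above.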
This study showed that maternal education level, use of open fields for the child's defecation, and playing with soil were significantly associated with intestinal parasites. Variations in prevalence rates of intestinal parasites across Ethiopian communities could be related to several factors, including the educational level of the study population, personal and environmental hygiene, and social habits such as the use of toilets for children. In addition, ecological factors such as temperature, relative humidity, and rainfall could be responsible for the observed differences in prevalence between communities.

Among the 214 preschool children, 90 (42.1%) were infected with one intestinal parasite and 18 (8.4%) with two. A study done in Shesha Kebele, Wondo Genet, Southern Ethiopia similarly reported that 34.5%, 33.3%, and 23.2% of children had single, double, and multiple parasitic infections, respectively [4]. The prevalence of single infections among preschool children was higher (83.9%) among highland and lowland dwellers in the Gamo area, South Ethiopia; this difference might be due to small sample size [6]. The present finding is, however, in agreement with a study done in Senegal [9].

This study also revealed that among protozoan parasites, Giardia lamblia (19.2%) was the most frequently observed, followed by Entamoeba histolytica/E. dispar (8.4%). Other studies reported prevalences for these two parasites of 4.2% and 12.9%, respectively, in Arbaminch [7], and 10.6% and 11.4%, respectively, in the Gamo area [6]. The prevalence of Giardia lamblia is, however, lower than that reported among Mexican rural school children [11].

In this study, the predominant helminthic intestinal parasites were Hymenolepis nana (21.4%) and Ascaris lumbricoides (5.1%). A study done in Gondar, Northwest Ethiopia reported 13.8% Hymenolepis nana and 5.9% Ascaris lumbricoides [10], and studies done in Mexico and Egypt also found H. nana to be predominant [11,12]. The observed differences might stem from differences in sample size, study population, and diagnostic methods, as well as socio-demographic factors, climate, and geographic differences.

In this study, 73.4% of mothers had received training on the prevention and control of intestinal parasites, and 60.3% of mothers had knowledge of prevention and control. This finding is comparable to a previous study done in Shesha Kekele, Wondo Genet, Southern Ethiopia [4]. In contrast to a previous study conducted in rural Malaysia, the present study found higher knowledge among participants [13]; this difference could be due to the study population, and to the previous study's focus on soil-transmitted helminths only.

Half of the mothers responded that their child had been infected by an intestinal parasite at least once in his or her lifetime. However, 44% of the mothers responded that they use a toilet or a container to dispose of their children's faeces, and 86.9% gave drugs to their children to prevent intestinal parasites. Toilet use is protective against intestinal parasites, and the government's deworming program also contributes to these responses.

Apart from maternal education level, which was an important predictor of intestinal parasitic infections in children, no significant association was found between intestinal parasitic infections and the socio-demographic status of the participating mothers or children. A study done in Mexican rural areas likewise indicated that less educated mothers had a higher risk of intestinal parasites [11]. The current study also showed an association between open defecation and increased risk of helminth infections: families who practiced open defecation were twice as vulnerable to intestinal helminths (OR = 2.01, 95% CI: 1.02-3.34). This is supported by the study done in Mexican rural areas [11].
According to this study, properly functioning and clean toilets reduced helminth infections, while children who had the habit of playing with soil were at increased risk of infection by protozoan parasites.

Conclusion and Recommendations

According to this study, intestinal parasitic infections are a common health problem among preschool children. Maternal educational level, use of a toilet or container for child defecation, and the habit of playing with soil were closely associated with the prevalence of intestinal parasitic infections: the former two protect children from infection, while the latter predisposes them to it. Therefore, long-term control measures, including health education and mass treatment, should be implemented to reduce intestinal parasitic infections among preschool children.
v3-fos-license
2020-11-26T09:06:49.572Z
2020-11-24T00:00:00.000
229478693
{ "extfieldsofstudy": [ "Psychology" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.europeanproceedings.com/files/data/article/10044/12459/article_10044_12459_pdf_100.pdf", "pdf_hash": "20a530689605587e13a54ef8d840177be4cbb166", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1646", "s2fieldsofstudy": [ "Business" ], "sha1": "342c4fcc2ff7b362bff7fecfa420d4f959f8376c", "year": 2020 }
pes2o/s2orc
TRAINING 'CHO'S' CHIEF HAPPINESS OFFICERS: A HIGHER EDUCATION COURSE DESIGN CHALLENGE

Worldwide, organizational happiness presents itself as the future of human resources management and internal marketing practices, but there is scarce higher education (HE) training in this merging field. So, what are the ideal key components, scientific fields and respective modules of a Post-Graduation Program on "Organizational Happiness Management", and what is the full CHO's skills profile? The aim of this research is to debate the future challenges of HE in creating global, integrated and consolidated curriculum offers to train professionals for a new emerging profession, the Happiness Managers, and to present an HE Post-Graduation Program in Organizational Happiness Management. Research methods included literature review, essayistic view, benchmarking practices and futures research methods, since we are projecting the needs of HE and the labour market for the future, and designing the training for an emerging profession. We present a Post-Graduation Program which unites the fields of Human Resources Management, Organizational Psychology, Internal Marketing and others, focusing on workplace happiness. The Organizational Happiness Management Post-Graduation Program comprises several specialized modules designed to include every aspect in which a CHO should be trained, including: outdoor learning, lifelong learning, positive and flourishing organizations, organizational behaviour, sociology, internal marketing, leadership and followership, flow, ergonomy, emotional intelligence and mindfulness, workplace and office design, and employer branding.

Introduction

Happiness in the form of pleasant moods and emotions, well-being, and positive attitudes has been attracting increasing attention throughout psychology research, and the interest extends to workplace experiences (Fisher, 2010).
Feeling happy is fundamental to human experience, and most people are at least mildly happy much of the time (Diener & Diener, 1996). For many years this interest was put aside and not well received by the scientific community, but attention to happiness and other positive states has now been legitimized, with strong help from the rebirth of positive psychology in the past decade (Seligman & Csikszentmihalyi, 2000). Furthermore, the challenges of the Bologna Declaration in Europe ask of HE an innovative, integrated, inclusive and sustainable future, and this attitude should also shape curriculum design. The overall aim of this paper is to present a Post-Graduation Program to properly train Happiness Managers (HM) and CHO's (Chief Happiness Officers).

Overall, the approach to organizational happiness (OH) as the future of human resources management and internal marketing practices in organizations is trending worldwide. In the last decade, there has been an explosion of new constructs involving employee happiness and well-being in companies (Fisher, 2010), and more and more studies are proving that valuing human resources is fundamental to the success of organizations, becoming a distinctive factor in increasing their competitiveness (Ribeiro, 2019). OH belongs to a larger family of happiness-related constructs and shares some common causes and consequences (Fisher, 2010). According to the author, we can frame within OH several overlapping constructs, such as job engagement, organizational commitment, positive emotions, flow, vigor, intrinsic motivation and many others, as shown in Table 01.
We can add contributions from recent studies on workplace meditation and mindfulness (Araújo et al., 2018) and on positive mental training (Ross, 2015), as well as traditional and very recent ideas on emotional intelligence, optimism, positive emotions at work, the PERMA model and flow, from positive psychology (Seligman, 2000, 2006; Seligman & Csikszentmihalyi, 2000). Nevertheless, some researchers have pointed out that it is important not to forget quality of working life (QWL) (Fernandes, 1996) and some very basic analyses, for example of body temperature, light, ergonomy, workplace conditions and office design, as well as, in some organizations, basic human rights at work that in some cases might not be fulfilled.

Furthermore, human beings overall have evolved in their needs in the 21st century. For example, the new and revised Maslow pyramid of needs suggests seven levels of motivation and includes new levels such as self-transcendence (Koltko-Rivera, 2006), aesthetic needs, and purpose and meaning in our lives, and therefore we also explore these constructs in our work. According to the author's review, Maslow created the pyramid in 1943 and 1954 and amended his model in 1969, placing self-transcendence as a motivational step beyond self-actualization; yet many rich and complex constructs like self-transcendence have been put aside, perhaps because they represent too much out-of-the-box thinking. With artificial intelligence and robots occupying basic jobs, it is time to view work not as the torture it was once seen as, but as the possibility to transcend, to be happy and fulfilled (Araújo & Fernandes, 2016), touching, for example, on one of the latest somewhat mediatic subjects from Eastern countries, ikigai, which has now earned the attention of scientific communities worldwide after several studies on overall happiness (García & Miralles, 2017).
So, organizations are truly rethinking themselves, creating environments (physical and psychological) of happiness and well-being at work, places where people can really be themselves and grow and have meaningful work and life experiences, even becoming learning and healing organizations (Inayatullah, 2002). We finally have the opportunity, as human beings at this stage of our evolution, to flourish and to seek meaningful work, and so organizations are growing too and rewiring themselves to become happiness-focused. Hence a new profession arises: the Happiness Managers (HM) and the CHO's (Chief Happiness Officers) (sometimes used as synonyms; other times, in big corporations with a dedicated department, used in a hierarchical logic). Although there is a small component of media publicity in the name of these CHO's, human resources has always taken a favourable view of promoting well-being at work, which has been very well studied over the last decades. Nevertheless, the recent evolution of employer branding approaches has also led organizations to invest more in organizational happiness and to prepare for talent attraction and retention (Davies, 2008; Edwards Martin, 2010).

The HM/CHO is much more than a profession; it is a talent-oriented job, merging several fields and acting not only as a worker and a liaison between departments, but as a model himself/herself, a person who acts as a coach, with highly developed self-leadership and self-knowledge. Internal marketing provides a somewhat fresh view on the subject, giving a new perspective on the happy-productive worker hypothesis (Sgroi, 2015) as a way to attract and retain talent and to provide employer branding, enabling organizations to more easily build a brand as employers and to attract high-quality, motivated talent.
Facing this, new generations of youngsters entering HE are now graduating from courses that might not be necessary in 20 years. This situation calls for changes in HE, which must adjust and evolve quickly, since HE may not be equipped to give quick responses to these urgent needs of the market. Usually HE, and the Academia overall, takes time to change, and in some European countries (namely Portugal) there are several bureaucratic hurdles before new course offers for undergraduates or even master's degrees can be created. So HE does not always attend to the challenges of the labour market.

Curriculum design thus becomes a major concern for HE institutions that seek innovation, promoting interdisciplinarity, re-imagining curricula and cross-designing courses. Furthermore, the teacher profile required is also very specific, since many of the new skills for the future are developmental, intrinsic and transferable core skills such as critical thinking, empathy, negotiation and adaptability, so teachers should develop a specific profile (Araújo & Fernandes, 2017). Moreover, technology is conquering its own space in new innovative HE curricula, with microlearning, mobile learning and podcast learning, for example. New learning practices are also emerging, such as outdoor and lifelong learning, project-based learning, game-based learning and other innovative action-learning approaches. We believe that the modern university or college, the highly evolved HE, adjusts and adapts itself to anticipate the needs of the market; but for that to happen, it is important to anticipate the new professions which are arising every day.
In this paper we propose a series of innovative practices for engaging students in this cross-designed curriculum training CHO's, and we finish by proposing a post-graduation program in Organizational Happiness Management, comprised of several specialized modules carefully designed to include every aspect a CHO should be aware of, ending with seminars in partnership with companies and totalling approximately 130/140 hours (note that under Portuguese law a post-graduation program must have at least 120 hours). This number of hours can be adjusted to the needs of each Higher Education institution, because the paper intends only to be a structured guideline.

This post-graduation program will include modules centred on workers' happiness and on organizational benefits. It is crucial to approach organizational psychology and sociology, human resources, management, organizational behavior, job satisfaction, engagement, and organizational commitment. We also added the contributions of positive psychology, namely optimism and emotional intelligence, meditation and mindfulness at work, mental training, positive emotions at work, flow and many other contributions. Quality of Working Life (QWL), especially ergonomics and workplace/office design for well-being, is also a crucial dimension to explore. Organizational dimensions will include internal marketing and employer branding approaches, talent attraction and retention, employee value proposition (EVP), leadership and followership, and positive and flourishing organizations. To develop this professional profile, outdoor and lifelong learning, as well as technology-based learning strategies, will be used.

https://doi.org/10.15405/epiceepsy.20111.31 Corresponding Author: Patrícia Araújo. Selection and peer-review under responsibility of the Organizing Committee of the conference.
We hope this paper compels the diverse multidisciplinary fields relating to OH to think with us about the future of HE, proposing to train CHO's even before the market is truly and deeply aware of the need for them.

Problem Statement

Worldwide, organizational happiness presents itself as the future of human resources management and internal marketing practices, but there is scarce higher education (HE) training in this merging field. So, it is urgent to propose integrated, multidisciplinary courses, certified by HE institutions, in order to give a proper and scientific answer to market needs, instead of leaving OH training to 'soft' and expensive training courses with no scientific background.

Research Questions

What are the ideal key components, scientific fields and respective modules of a Post-Graduation Program on "Organizational Happiness Management" which explores, trains and details the full HM/CHO skills profile?

Purpose of the Study

The overall aim of this paper is to present a post-graduation program while debating the future challenges of higher education (HE) in creating global, integrated and consolidated curriculum offers to train professionals for a new emerging profession, the Happiness Managers (HM) and the CHO's (Chief Happiness Officers).

Research Methods

Research methods included literature review, essayistic view, benchmarking practices and futures research methods, since we are projecting the needs of HE and the labour market of the future, and designing the training for an emerging profession. Futures studies is the systematic study of possible, probable and preferable futures (Inayatullah, 2007); it is vision-oriented, and the intention is to move out of the present and create the possibility for new futures in a transformative way, although the present research is not a rigidly defined type of futures study (Inayatullah, 2007, p. 18).

Findings

The Organizational Happiness Management Post-Graduation Program comprises several specialized modules designed to include every aspect in which a Happiness Manager or a CHO should be trained, which are now presented in detail in Table 2 and Table 3.

Presentation

This Post-Graduation Program intends to train 'Organizational Happiness Managers', an emerging professional area that is beginning to be rapidly requested by organizations. In a constantly changing market, talent attraction and retention has been gaining priority in organizations worldwide. Thus, talent management is increasingly becoming a multidisciplinary and interdepartmental area, mainly involving Internal Marketing and Human Resources. Internal Marketing is a marketing area that addresses the company's employee as the first customer (the internal customer) and the job offered by the organization (the full package) as the first product that the organization is 'selling'. In this sense, more and more human resources departments are concerned with the need for strong employer branding in order to attract and retain the talents that most contribute to their success. Furthermore, the recent construct of organizational happiness has brought together several concepts investigated for decades by organizational behavior research; in conjunction with an internal marketing and management approach, it is now possible to create internal marketing plans and organizational development strategies focused on organizational happiness, in order to realize the success of the organization and, simultaneously, the success and individual happiness of each employee.
Objectives/Competences

(a) Understand the evolution of the organizational happiness construct, human resources management and internal marketing practices, framed in organizational psychology and sociology approaches; (b) Know basic notions of the internal marketing plan, management of current and future talent, employer branding and other recent organizational perspectives such as employee value proposition, leadership and followership, and positive and flourishing organizations; (c) Recognize the importance of happiness at work for the employee, exploring organizational behavior, job satisfaction, engagement, and organizational commitment; (d) Understand the contributions of positive psychology to the promotion of well-being at work, approaching optimism and emotional intelligence, meditation and mindfulness at work, mental training, positive emotions at work and flow, self-transcendence, aesthetic needs and ikigai; (e) Outline strategies to promote quality of life at work, exploring basic notions of ergonomy and workplace/office design for well-being; (f) Understand how to promote outdoor and lifelong learning, as well as technology-based learning strategies, in on-the-job training; (g) Design, implement and evaluate internal marketing plans and human resources management initiatives that promote organizational happiness.

6. The happiness manager as coach: introduction to coaching
6.1. Various definitions and frameworks of coaching
6.2. Benefits, recipients and roles: the coach and the coachee
6.3. Coach skills
6.4. Coaching models
6.5. The "secret" of coaching: the art of asking powerful questions
6.6. Coaching at work models: FAST, PAW and CRA

(OBS: Although this module appears last, it will run throughout the course, with invited organizations, non-profit and profit-oriented, which will share their market experience and organizational happiness guidance.)

TOTAL NUMBER OF HOURS: 136

Conclusion

It is becoming increasingly common to see simple training programs of 10 or 15 hours providing certification in this complex future profession, which is a danger to organizations and to universities, since that kind of simplistic, fast-selling, easy-to-get, expensive diploma is not capable of preparing eclectic and competent professionals as CHO's. Nevertheless, these easy, expensive, quick diplomas are a type of response to otherwise complicated procedures to become a professional in some fields: in Portugal in particular, the Portuguese Order of Psychologists has increasingly difficult parameters for obtaining the speciality of Occupational Health Psychology, which is the disciplinary field and higher education training that, in our opinion, should primarily occupy the CHO vacancies of the future labour market.

As the COVID-19 crisis and the world pandemic have shown us during 2020, it is urgent to adapt and change constantly, both for organizations in general, in an economic perspective, and from a Higher Education perspective, since colleges and universities are responsible for training the professionals of the future labour market. With this brief pragmatic paper, we wanted to openly share our experience in this innovative HE curriculum design, to promote the debate about HM and CHO's within the scientific community, and at the same time to give organizations tools to train and develop their HM and CHO's. What will happen to the HM and CHO's in future organizations? What are the consequences and impacts of organizations starting to hire HM in the future?
These are some questions that other research is addressing (Araújo, 2020), and future research should address these impacts. Overall, however, indicators suggest that humanity is moving towards healthier and more humane workplaces and deep, meaningful work.
Technology as Capital: Challenging the Illusion of the Green Machine

ABSTRACT

Debates on technologies for harnessing renewable energies tend to generate a polarized arena in which critical voices are automatically denounced as defenders of fossil energy. This has created a difficult situation for activists and scholars voicing concerns about the comparatively low power density, low net energy, and environmental justice concerns of such technologies. In this paper, we highlight the contradictions and ambivalences underlying Promethean arguments for solar power, which are currently dividing the political left. Through a critical reading of three proponents of classical Marxism, we address the structural coherence and paradoxes of the discourses within which the faith in such "green" technology is mobilized. We illustrate how Promethean visions of solar power tend to suffer from a pervasive ontological separation of human ingenuity and global social metabolism. This raises important questions about the ambiguous and theoretically underdeveloped role of technology in historical materialism, and about how capital, once converted into the material form of technology, becomes exempt from political critique. Rather than accepting such an immaterial and ultimately depoliticized position on technology, we argue that Marxist scholarship should concede that Promethean approaches must be abandoned if we are to effectively address climate change and other challenges of the Anthropocene.
Introduction

Despite decades of unrealized technological promises, both neoliberal and ecosocialist responses to the Anthropocene rely on techno-utopian visions based on "green" technologies (Fremaux 2019; Hamilton 2016). Critical scholars have warned about the contradictory and often destructive conditions inherent to the realization of such visions (Dunlap 2021). Disagreements on green technology are particularly salient in questions concerning the transition away from fossil fuels toward advanced renewable energy technologies with the intention to sustain industrial levels of production (Capellán-Pérez, de Castro, and González 2019; Jacobson et al. 2017). Recent critical scholarship on a transition to renewable energy sources has revealed unsustainable and unjust practices pertaining to mineral requirements and extraction (Mejia-Muñoz and Babidge 2023), land requirements (Capellán-Pérez, de Castro, and Arto 2017), net energy returns (King and van den Bergh 2018), carbon emissions (Wagner et al. 2022), biodiversity (Sonter et al. 2020), and labor conditions (Davidson 2023). An interdisciplinary body of literature focuses on understanding how the industrial development of advanced renewables is constituted by and constitutive of social relations of production (Huber and McCarthy 2017), with often problematic implications for social emancipation (Stock 2021), environmental justice (Avila 2018), ecological sustainability (Gellert and Ciccantell 2020), gender equality (Stock et al. 2023), as well as efforts to resist corporate power (Franquesa 2022) and decolonize the harnessing of energy (Zografos 2022). Other scholars are attempting to mitigate the consequences of this reality by appealing to technological progress or commodity chain reform as a means of solving the globally unsustainable and environmentally unjust relations implicated in renewable energy development (Riofrancos 2022).
While the physical and economic feasibility of a full transition to renewable energy is being contested by its proponents and detractors, we instead address the structural coherence and paradoxes of the larger discursive fields within which the faith in such a transition is mobilized. The purpose is to critique Promethean ontologies of technology that are inhibiting nuanced discussions on the role of advanced technology in achieving socially just and ecologically sustainable human-environmental relations. We focus on solar power as an example of an advanced renewable energy technology that is similar to other renewables (such as wind power and biofuels) in terms of its biophysical profile, pervasively characterized by low efficiencies (Smil 2015). While the critical literature on renewables has revealed important social and ecological dimensions of power at divergent sites of deployment, many still overlook the globally asymmetric exchange of resources necessary for such "green" technologies to be feasible.

Our first aim is to trace the ambiguous role of solar power within a landscape of intersecting discourses and identities in which technology itself is a contested category. The idea of solar power - the direct human harnessing of the sun's energy - has gained a symbolic and discursive significance far beyond its practical implications (Roos 2023). As it is emblematic both of mainstream neoliberal climate change policy and modern proponents of classical Marxism, it raises crucial questions about the role of advanced technology in the worldview of late modernity. While scholarship on technology has attempted to go beyond the optimist-pessimist binary (Tiles and Obderdiek 2014), the continued influence of Prometheanism suggests that fundamental disagreements on technology remain unresolved and highly relevant for the increasingly felt contradictions of "green" technology.
Our second aim is to better understand the ontological position underlying Promethean conceptions of technology. This includes an analysis of the assumptions that are reproduced chiefly by some proponents of modern technological progress, but which could be found also among some detractors focusing exclusively on sites of deployment (for a critique, see Roos 2023, 2024).

The first among these assumptions is "technological immaterialism," through which technology is understood as primarily ideas, blueprints, or designs. This assumption is commonly reproduced in instrumental notions of technology and supposedly materialist perspectives, including critical theory and science and technology studies (Roos 2021). Such an assumption contributes to the problematic understanding of technological progress as separated from its wider environmental and social consequences and prerequisites. From an ecosocialist point of view, to oppose technological immaterialism is to acknowledge technologies as inexorably dependent on a social metabolism exchanging matter-energy with its environment (Hornborg 1998; Roos 2021).

Second, the assumption of "technological neutralism" implies that technology is nothing more than a means to an end (Ruuska and Heikkurinen 2021). That is to say, technologies are viewed as neither constitutive of nor constituted by social relations. John Dewey (2008, 354-355) exemplifies technological neutralism when he asserts that "there is no problem of why or how the plow fits, or applies to, the garden, or the watch-spring to time-keeping. They were made for those respective purposes; the question is how well they do their work, and how they can be reshaped to do it better." Applied to the case of solar power, arguments based on technological neutralism assume that it is unimportant, irrelevant, or beside the point to understand how solar power necessitates or facilitates (global) social relations of power.
To oppose technological neutralism is to consider technological artifacts as socially and politically contingent. Langdon Winner (1980) has described the two basic ways that technological artifacts may be considered political. We may call these "weak" and "strong" politics respectively. On the one hand, technologies are linked with the political intentions of the owners or designers, or otherwise compatible with specific political relations (Winner 1980, 130). For instance, technologies such as solar PV technology, computers, or space rockets are political in the sense that they can be employed in different contexts (e.g. research, commerce, war, or capital accumulation). On the other hand, Winner identifies so-called "inherently political technologies" that "appear to require, or to be strongly compatible with, particular kinds of political relationships" (Winner 1980, 123). In this understanding, solar PV technology, computers, or space rockets are political since they are inextricably contingent upon a social relation of power. This categorization allows us to understand that technologies may not simply be employed to further capitalist accumulation, but that they may be expressions of global, capitalist relations of production. This exemplifies a "strong" or "inherent" politics of technology. The proliferation of solar PV panels that followed the transfer of production processes from the U.S. and Europe to China illustrates the "strong" politics of such technologies (Bonds and Downey 2012; Roos 2022).
Further on, we show how the assumptions of "technological immaterialism," "technological neutralism," and the "weak" version of technological politics feature heavily among Marxist Prometheanists, and how these assumptions operate to discursively and ideologically sequester modern technologies such as solar power from global asymmetries in resource exchange, where technology deployment in core regions occurs at the expense of "green dispossession" and "green sacrificial zones" in peripheral regions of the world (Brock, Sovacool, and Hook 2021; Chen 2013). We argue that immaterial and neutral conceptions of technology are the discursive foundation for "machine fetishism," whereby "productivity" is thought to exist independently from an asymmetric transfer of resources upheld through global terms of trade (Hornborg 1992).

We focus on positions on solar power - and technology more broadly - among three Marxist Prometheans as an "outlier" case for theorizing the roles of green technology and energy transition in Marxist scholarship (George and Bennet 2005). We interpret the three selected authors' works as "outliers" in the sense that they take deviant and sometimes extreme positions on technological optimism and degrowth compared to other ecosocialists. The strength of the "outlier" case is that it can identify foundational assumptions that could yield inferences concerning a wider group. We believe that a critical analysis of Marxist Prometheanism may help to illuminate some of the contradictions and problematic assumptions vitiating ecosocialist scholarship on "green" technology and energy transition more broadly.
Our analysis focuses on arguments among three Marxist scholars commonly understood as heralds of leftist ecomodernism, namely David Schwartzman, Leigh Phillips, and Aaron Bastani (Dale 2019; Saito 2022; Trainer 2019). The selection of arguments has in part been determined by their public visibility and impact, in part by the frequently explicit contrasts between them. All the selected authors firmly oppose degrowth positions arguing for "convivial technologies" and a socially controlled reduction of matter-energy throughput attentive to a fair distribution of resources (Kerschner et al. 2018). While each of these scholars and their arguments have already received much attention and critique (Dale 2019; Featherstone 2020; Mueller 2020), few have attempted to unpack what we here identify as a shared ontology of technology that risks serving as an unwitting endorsement of capitalist relations of production. We problematize this puzzling inability of Marxist Prometheanism to apply the same critical analysis that is applied to the process of capital accumulation, to its most prominent material expression - technology.
In the following sections, we first provide a critique of Promethean arguments for solar power, such as it is espoused by ecomodernists. We show how such arguments are based on an ontology of technology which precludes an understanding of solar power as intertwined with globally asymmetric flows of embodied land and labor. We then demonstrate the influence of such Promethean myopia, or "machine fetishism," among three Marxist scholars. We show how the failure to account for the biophysical prerequisites of solar power reveals an inclination to force social theory to fit ideological prescriptions. Through closer analysis of Aaron Bastani's Fully Automated Luxury Communism, we then identify the politically neutralizing and immaterialist ontology of technology which is at the root of Promethean arguments for solar power. Finally, we provide concluding remarks on the paradoxical challenge facing Marxist theory, namely, how to account for technology as capital.

A Critique of Promethean Arguments for Solar Power

The turn to fossil energy in nineteenth-century Britain gave rise to two interconnected and quintessentially modern conceptions that remain with us to this day. One is the notion that economic processes can be understood without any consideration of nature; the other is that engineering is a matter of harnessing natural forces and does not require any consideration of world society. In a nutshell, economics has nothing to do with nature while technology has nothing to do with society (cf. Mitcham 1994; Spash 2007). Both conceptions are continuously being challenged by critical perspectives (Bijker, Hughes, and Pinch 1989; Foster 2000; Hornborg 2023; O'Connor 1988; Roos 2021), but their hegemony remains intact in mainstream economics and engineering science.
Visions of solar power reproduce an image of technology that can be traced to the appearance of the steam engine, that is, as simply an innovative means of harnessing energy, with no detrimental implications in terms of the social distribution of resources, i.e. as politically neutral (Landes 1969). Historical research shows that the expansion of steam technology in Britain was inextricably linked to the colonial appropriation of land and labor embodied in cotton (e.g. Beckert 2014; Berg and Hudson 2023) and therefore better understood as implicating "strong" politics. Modern analyses of global commodity flows similarly indicate that core areas of technological expansion rely on net imports of biophysical resources that do not concern mainstream economists (cf. Dorninger et al. 2021). The negative ecological repercussions of fossil energy use have long inspired visions of new technologies for harnessing renewable energy, but these visions for abandoning fossil energy risk being as constrained by illusions of technological neutralism as the conceptions that accompanied the turn to fossil energy more than two centuries ago. In requiring capital, any advanced technology automatically implicates asymmetric flows of embodied land and labor. This applies to solar power as it does to steam.

Judging from recent literature and public debate throughout the world, a global majority of people now share the consensus that the combustion of fossil fuels is a major source of greenhouse gas emissions that contribute to global warming (e.g. Lynas et al. 2021). There is also extensive agreement that this impasse necessitates a transition to renewable energy technologies such as solar power. To advocate such a transition - as do, for instance, proposals for a so-called Green New Deal - is to assume that it is both economically and physically realistic (see Boyle et al.
2021). Over the past decade, several researchers have offered calculations suggesting that a global transition to renewable energy is indeed feasible (e.g. Delucchi and Jacobson 2011; Jacobson and Delucchi 2011).

Although advocacy for solar power at first glance may seem opposed to other interventions arguing instead for the expansion of nuclear power (e.g. Asafu-Adjaye et al. 2015), both kinds of proposals subscribe to an approach to technologies that tends to limit their system boundaries to the extent of their physical infrastructure. In both cases, however, it is reasonable to argue for complete Life Cycle Assessments that include not only the globally extracted materials, energy, and labor that are embodied in the infrastructure, but also all the resources mobilized in the economic processes that generated the money capital that is invested in it (Prieto and Hall 2013). From this perspective, calculations of "power density" (Smil 2015) - that is, the amount of energy that can be extracted per square meter - should include not just the spatial extent of the technological infrastructure but the total, global ecological footprint of each square meter of infrastructure (Hornborg, Cederlöf, and Roos 2019). In focusing on the output of the machinery, while largely disregarding inputs of resources from its global context, mainstream understandings of solar power technologies tend to share the myopic outlook of ecomodernist proponents of nuclear power. Both visions reproduce the "machine fetishism" (Hornborg 1992; 2001) of the Industrial Revolution, in which the steam engine was conceptually sequestered from the plantations - the labor and land - that made it economically feasible. While often providing important insights into local dimensions of power, frameworks focusing on the transformative potential of solar power and wind power limited to sites of deployment systematically overlook the necessary inputs of resources from the global context (Stephens 2019).
The essence of such "machine fetishism" is particularly conspicuous in the Ecomodernist Manifesto (Asafu-Adjaye et al. 2015). Its recipe for sustainability is to technologically intensify human activities - such as farming, energy extraction, forestry, and settlement - in the belief that this will "spare nature." It dismisses the notion of limits to growth as "functionally irrelevant" and suggests that humans using "next-generation solar, advanced nuclear fission, and nuclear fusion" (23) will have access to "unlimited energy." Urbanization and industrial agriculture are to be celebrated as the "decoupling" of humanity from nature (as if modern urban consumers did not have substantial ecological footprints beyond the city limits). If technological progress and current trends continue, the manifesto suggests, human impact on the environment may "peak and decline this century" (14).

Such an understanding of technological progress as a cornucopia is often referred to as "Promethean." It is easy to identify in nineteenth-century discourse on the Industrial Revolution, including classical Marxist texts (Benton 1989). The inclination to conceptually insulate the technical from the social - as in the Marxist distinction between "productive forces" and "relations of production" - produces a fetishized view of technology as magically generative of wealth, rather than a socionatural mechanism for redistributing human time and resources by displacing work and environmental loads. Building machines for harnessing nature's forces requires social mechanisms for appropriating resources, whether through slavery or the world market. The world market prices of cotton textiles, raw cotton, iron, coal, and slaves were thus requisite to the expansion of steam technology in late eighteenth-century Britain (Hornborg 2006). To conceptually excise technology from its global social context is misleading whether we want to account for the expansion of steam technology in eighteenth-century Britain or for the
feasibility of solar power 250 years later.

Several energy researchers have hesitated about the feasibility of a large-scale transition to renewable energy technologies. Some have argued that the problem is their low power density - that is, the amount of energy that can be harnessed per unit of land - compared to fossil fuels (MacKay 2013; Smil 2015). Others have focused on their comparatively low net energy or EROI - that is, Energy Return on Energy Invested (Hall and Klitgaard 2011; Prieto and Hall 2013). Both these limitations highlight the importance of redefining "efficiency" in terms of inputs and outputs of natural resources (land and energy, respectively) rather than monetary cost/benefit analysis. In the ongoing debates between proponents and detractors of solar power and other "green" technologies such as electric cars, many participants have explicitly rejected the approach of mainstream ecomodernism and economics in favor of a focus on asymmetric biophysical resource flows and the displacement of environmental impacts to poorer sectors of the world-system (Bonds and Downey 2012). To the extent that a shift to renewable energy is feasible in some areas, this argument goes, it will occur at the expense of other areas with lower wages and less rigorous environmental legislation. Paradoxically, such a shift of perspective toward the material aspects of energy technologies may help us understand the extent to which such technologies crucially also have a social aspect, as it would not make sense to rely on physically inefficient technologies if world market prices did not make it rational to do so.
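The two efficiency measures invoked in this debate can be stated as simple ratios (our own illustrative notation, not that of the authors cited above):

```latex
% Power density: power harnessed per unit of land occupied
\rho = \frac{P_{\text{out}}}{A_{\text{land}}}
\qquad \left[\mathrm{W/m^{2}}\right]

% Energy Return on Energy Invested (EROI): lifetime energy
% delivered relative to the energy invested in building,
% operating, and decommissioning the infrastructure
\mathrm{EROI} = \frac{E_{\text{delivered}}}{E_{\text{invested}}}
```

Both ratios are sensitive to where the system boundary is drawn: widening the denominators to include the globally appropriated land and energy embodied in the infrastructure, as the complete Life Cycle Assessments discussed above would require, lowers both figures relative to calculations confined to the site of deployment.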
Approaches to Solar Power among Three Promethean Marxists

Although some prominent Marxist theorists such as John Bellamy Foster (2000) have made great efforts to show that Marx's worldview was not as Promethean as many environmentalists and ecosocialists have argued, several Marxists have asserted that an endorsement of modern technological progress is quite aligned with Marx's own convictions (Löwy 2002; Rahim 2020). This conflict of perspectives raises important questions about technology's role in Marxist theory, particularly about the Marxist claim to offer a "materialist" understanding of society.

Exemplifying such ecomodernist Marxism, David Schwartzman (2008) has asserted that solar power is a necessary condition for the Marxian vision of communism, which includes a progressive dematerialization of technology through the expansion of information technology and a concentration of urban settlement that would leave more space for nature. His technological optimism is particularly evident in his prediction that humanity will eventually "expand outward in our solar system and even further into the galaxy" (Schwartzman 2008, 52-53).
Schwartzman's refusal to recognize the kinds of natural constraints on growth emphasized within ecological economics and the degrowth movement is duplicated and elaborated by Leigh Phillips (2015) in his provocatively titled book Austerity Ecology and the Collapse-Porn Addicts. Phillips shows that the political Left has historically sided with industry, technology, and modernity rather than with anti-consumerism and ecologically motivated restraint. His explicit goal is to revive pro-industrial and pro-growth sentiments within the Left. He devotes most of his book to an agitated critique of contemporary "anti-modernist" and "green" voices. Using similar arguments, he also disapproves of classical critics of modernity such as the Frankfurt School (Horkheimer, Adorno, Marcuse), Lewis Mumford, and E. F. Schumacher. Although many of these writers identify with the Left, Phillips' pervasive agenda is to highlight an "anti-modernist ideological overlap between contemporary green back-to-the-land ideology and volkisch agrarian mystique, resulting from common romanticist origins that were deeply antipathetic toward the Enlightenment" (243). Through guilt by association, Phillips insinuates that the above-mentioned writers have been tainted by recurrent "patterns of green xenophobia" and risk succumbing to "the lifeboat politics of limits to growth" (ibid.). In Phillips' account, green anti-modernism thus tends to shade into brown. Although he repeatedly affirms that this is not what he means, the overall message appears to be that to oppose growth puts you in the company of fascists. The underlying implication is that the endorsement of growth is morally superior to its antithesis. Phillips (2015) concedes that calls for more "immaterial" kinds of growth are a fantasy: "While we can steadily dematerialize production via technological innovation, and though knowledge itself is certainly immaterial, knowledge will always be linked to the material, both in its origins and its products"
(38). However, he sympathizes with the ecomodernists of the Breakthrough Institute when they "argue that it is precisely through economic growth that humanity will be able to afford and develop the new technologies and infrastructure" for solving problems like climate change and biodiversity loss (67). Like the ecomodernists, he favors nuclear power (197-199) over renewable energy technologies, which would produce considerable environmental damage, "whether from the steel production required for pylons, concrete for the bases and fiberglass … for the blades, or the heavy metal pollution from solar panel manufacture" (182). He is also concerned about the comparatively low EROI of photovoltaic solar power (194). Rather than advocate green anti-modernism, Phillips suggests that we must "accelerate our modernity" (186). "To deliver on the promise of social justice," he asserts, "we need a high-energy planet, not modesty, humility and simple living" (190). Finally, like Schwartzman, he endorses space exploration to ensure "the survival of our species beyond the life of our sun" (258). This will permit humanity to "spread throughout the galaxy so as to assure the continued existence of the species in the life-vitiating event of a local supernova" (261).
The conundrum posed by "green" technology is clearly highlighted by the voluminous debate around the documentary Planet of the Humans, produced by Michael Moore and Ozzie Zehner and directed by Jeff Gibbs. The film argues that ostensibly "sustainable" technologies such as wind and solar power rely on the same kinds of unsustainable practices that characterize the conventional fossil energy regime. Phillips (2020) responds by classifying the argument as right-wing, anti-progressive, and "anti-working class." The debate illustrates the role of technological transition as an existentially charged faith in salvation from the contradictions of industrial capitalism. Rather than conceding that the high-energy lifestyles promoted by modern civilization may be inextricably based on fossil energy (Love and Isenhour 2016), Promethean Marxists like Phillips reject such critical observations as morally and ideologically suspect. Admitting that the intermittency of renewable energy sources will require a firm backup, he advocates an expansion of nuclear power. He asserts that countries like France, Sweden, and Norway "have already completely or largely decarbonized their electricity grids" - as if the technological infrastructures for delivering nuclear and hydroelectric power did not require massive amounts of fossil energy (Diaz-Maurin and Giampietro 2013). Such "decarbonization," he asserts, "could secure a future for both the planet and the political left." This sentence illustrates how the discussion about technological systems tends to be framed in terms of morally and politically charged prescriptions rather than empirically grounded theoretical reasoning. Although as much as 90% of world energy use currently derives from fossil sources (Voosen 2018), it is politically incorrect to conclude that global decarbonization is incompatible with modern civilization. If such a stance were to be expressed by a climate denialist funded by the fossil industry lobby, the denunciation as
"anti-progressive" might indeed be appropriate, but neither we nor Planet of the Humans deny the disastrous reality of fossil-fuelled climate change. "At its worst," says Phillips, "Planet of the Humans even attacks industrial civilization and technology itself." It is revealing to find a Marxist considering this kind of subversion so repulsive. Phillips's Promethean misunderstanding of technology is nowhere more clearly exposed than in his narrative of human history as continuous efforts "to solve problems via new technologies and then [to use] class struggle to force elites to share the benefits of those technologies with everyone." This is definitely not how we understand the history of technology since the birth of the steam engine. But Prometheanism, even in its Marxist variant, hitches the cart before the horse by tailoring social theory to fit ideological prescription, rather than letting advocacy be informed by critical social theory.

A third Marxist keenly committed to Prometheanism is Aaron Bastani (2019), who also reminds us that Marx lyrically celebrated the progress of technology under capitalism.
Bastani predicts that renewable energy technologies - particularly solar - will "spell an end to energy scarcity altogether" (38). What we know for certain, he asserts, is that "solar is more than capable of meeting the world's expanding energy needs" (48). In Bastani's vision of "Fully Automated Luxury Communism," new technologies will make access to energy and resources limitless and free while replacing human work. He reaches this conclusion based on the premises that (1) technological progress is essentially immaterial - "amounting to nothing more than an upgraded rearrangement of previous information" (63) - and that (2) in modern capitalism information is the basis of value but paradoxically becoming less scarce and thus cheaper and cheaper (49). Bastani envisages how particularly poorer countries near the equator in Africa, Central America, and Asia will benefit from clean and inexpensive solar energy, as "nature's gifts become an economic blessing" (108). Like Schwartzman and Phillips, he also envisages limitless returns to space exploration, as resource scarcity will be permanently abolished through the mining of asteroids (117-137).
These three Marxist writers should represent a conundrum to ecosocialists who wish to reconcile Marxist theory and ecological sensibility. Although their Promethean recipes for human progress in the twenty-first century are frequently bizarre, it is difficult to deny that they are generally compatible with the approach to technology in classical Marxism. The fact that many modern Marxists would reject their suggestions raises the question of what this disagreement ultimately signifies: simply put, either our three Prometheans have completely misunderstood Marx, or the phenomenon of technology has been seriously misunderstood in Marxist theory. The debate clearly indicates that, at the very least, the concept of "technology" or "productive forces" occupies an ambiguous and theoretically underdeveloped role in historical materialism. In the next section, to further substantiate this claim, we take a closer look at Aaron Bastani's (2019) approach to technology.

The Ontology of Technology in Bastani's Fully Automated Luxury Communism

To unveil the underlying ontology of technology in Bastani's Fully Automated Luxury Communism, it is important to note that the book's premise is rooted in an understanding of historical transitions as exclusively progressive (31-39). Bastani's understanding of history can thus be clearly categorized as part of what economic historian Stefania Barca (2011) calls the "modern economic growth" narrative. This narrative is both teleological and immaterial in that it systematically ignores the adverse socio-ecological consequences of past historical transitions in its progressive interpretation of history. While Bastani repeatedly reminds the reader that progress is a political choice (9-12), the central premise of the book is that modern societies should embrace exponential technological progress. This normative position is symptomatic of a particular ontology of technology that refuses to take into account the material conditions under which such
technological progress is made possible.

At the heart of Bastani's vision lies the notion that technologies, or machines, are neutral products of the human mind (or engineering science) to be applied to fulfill certain ends. Bastani contends, for instance, that "[t]here is no necessary reason why [advanced technologies] should liberate us, or maintain our planet's ecosystems, any more than they should lead to ever-widening income inequality and widespread collapse" (242). As described above, technological neutralism is a conception of technology captured in the cliché "guns don't kill people, people kill people." From this perspective, technological artifacts themselves are not the issue, as it is how - or to what end - technologies are deployed that matters. In other words, technologies are ontologically sequestered from the contexts in which they arise and which they reproduce (Roos 2021). This ontology of technology underscores the famous opening to Marx's chapter 15 in Capital where, in response to John Stuart Mill, Marx contends that the role of machinery under capitalism is not to save labor, but that it may be employed for such a purpose under other social forms. In Bastani, this approach is omnipresent, as is evident from the fact that the global material prerequisites of the technological developments on which his vision depends are systematically ignored.

Both Bastani and Marx in Capital vol. 1 (notably in chapter 15) deal with how machines are employed by capitalists to maximize profits. In this sense, the machine is understood within the context of capital accumulation. However, neither Marx nor Bastani acknowledges technologies as physical artifacts that are dependent on inputs of specific quantities of non-renewable materials per unit of mass produced, and therefore contingent on global capitalist relations of production (Bunker 2007; Gutowski et al.
2009). For Bastani, technological development signifies something else entirely. "[A]s technology develops," he writes, "the value increasingly arises from the instructions for materials [i.e. immaterial information] as opposed to the materials themselves" (Bastani 2019, 63). This argument appears to be based on the hypothesis that the more advanced a technology is, the less socio-ecological impact it is associated with: a hypothesis that has been abundantly refuted (Gutowski et al. 2009).

In Marx's analysis in Capital, the material prerequisites of the machine are not dealt with in detail, since his main objective is to show how machines are used in the capitalist mode of production. The analysis starts with already existing machines, without a deeper analysis of their social, geographical, or geological prerequisites. It does not include an adequate account of how the machines employed by the British bourgeoisie during the Industrial Revolution were made possible by the historically developed world division of labor (i.e. colonialism). Although Marx's analyses are largely devoted to comprehending the societal prerequisites and consequences of mechanization, neither in Capital nor in Grundrisse do we find an understanding of industrial technologies as modes of globally appropriating and redistributing embodied labor time and other biophysical resources.
While Marx's Prometheanism has long been debated (see Benton 1989; Burkett 1999; Löwy 2002), Marx's inclination to consider the productive potential of machines as originating within the machines themselves appears incontrovertible (see Marx [1867] 1990, 494, 502; Marx [1939] 1993, 818-819). However, Marx also acknowledged (albeit in passing) how the development of "the technical foundation of large-scale industry" necessitated a strict division of labor in British manufacture (Marx [1867] 1990, 503-504). The difference between these two points made by Marx hinges on whether we should understand technological productivity as an innate property generated by the machines themselves or as a relational property generated by specific socio-ecological relations. Bastani's focus on the technological artifacts in themselves, in precluding a relational understanding of productive capacity, fails to realize how such artifacts are embodiments of the specific global political circumstances on which their productive potential relies.

This view resembles Feenberg's (1991, 14) take on technology as "not a thing in the ordinary sense of the term, but an 'ambivalent' process of development suspended between different possibilities." By such an account, technology is ontologically relative, and its actualized form depends on who controls the design. The ontological immateriality underlying such a view becomes apparent once we realize how technology is thereby assumed to exist before its social and material expression. The resulting technological neutralism, or "weak" politics at best, is strongly related to the Promethean Marxist understanding of technology as primarily employed to maximize profits under the capitalist mode of production, but as liberated to serve other purposes under communism.
Promethean Marxist visions based on a "weak" political understanding of technology tend to come into conflict with the "strong" understanding. Phillips reveals important aspects of the former view of technology when he explains that "[t]he long-standing promise of socialism was not that we'd have the same stuff as under capitalism but shared out equally; rather it was that through equality, we could release the forces of production from the fetters placed upon them by capitalism" (255). This quote illustrates how the "forces of production" (i.e. "technologies") are conceptually sequestered from the "relations of production." The essence of the technology referred to consists of immaterial ideas, rather than something belonging to the material world. This explains why Promethean Marxists proclaim that we must "permit ourselves to dream a little" (Phillips 2015, 258) and reclaim our "collective imagination" currently in crisis (Bastani 2019, 31). In contrast, a materialist and "strong" political understanding of technology would take into consideration under what socio-ecological relations a particular technology is currently actualized, to examine and assess these relations as necessary aspects of that technology.
In Bastani's vision, global energy production will be doubled in the coming decades. This will be done by massively scaling up the production of solar PV technology and lithium batteries for electricity generation and storage. Bastani, like Phillips (2015, 38), acknowledges that such a project would require unreasonable volumes of materials from the Earth's crust. However, as we have seen, rather than seriously fathoming these limitations, as Phillips does, Bastani sidelines them altogether by suggesting that the necessary materials could be extracted from extraterrestrial bodies such as asteroids (2019, 38-39). Bastani then proceeds to completely ignore the substantial material prerequisites of space travel itself. The remarkable consequence is that he manages to construe the creation and operation of solar technology as entirely independent of the social and ecological metabolism of the world economy. Leaping from one techno-fix to another, Bastani evades rigorous consideration of the material constraints facing proposals to double the world's energy use and shift away from fossil fuels through the massive installation of solar power.
While utopian visions can certainly be justified, they must be grounded in real material conditions and understood in relation to the socio-ecological realities of today. The resort to space travel, which recurs in Schwartzman's, Phillips', and Bastani's Marxist utopias, is not coincidental. To circumambulate the increasingly obvious distributive and ecological constraints of technological utopianism, the fantasies of Promethean Marxists are compelled to leave planet Earth. We instead propose a social and ecological politics aiming to care for the Earth we inhabit: one that is prepared to radically transform the money that today serves as the cultural vehicle of machine fetishism and unsustainable relations of production, and one that engages in the exercise of "realistic envisioning" to identify the means needed for a subversive transformation of fossil metabolism (Roos 2023, 171-192). Such politics would be aligned with the broader program of "degrowth communism," which offers a much-needed synthesis of socialist and ecological perspectives (Saito 2022).
Conclusion

In this paper, we have tried to show how modern visions of technological futures tend to suffer from a pervasive ontological separation of human ingenuity and global social metabolism. It is widely assumed that problems of global unsustainability can be alleviated through technological solutions that are conceptually excised from the world economy that has made such technologies physically feasible, in restricted areas of the world, to begin with. It is taken for granted, for instance, that fossil energy will be replaced by renewable energy sources, while the material affluence and mobility of modernity that fossil fuels have historically made possible (for people in the Global North) shall be retained and even universalized. Such assumptions hinge on interpretations of technological progress as essentially immaterial and politically neutral, and based on engineering knowledge, while the material asymmetries of the requisite global resource flows are ignored.

This failure to theorize the social asymmetries embodied in modern technology is particularly problematic in the context of Marxist analyses, whose most foundational justifications pertain precisely to questions of social justice. In both classical Marxist texts and those of modern adherents of Marxist theory, the distinction between "productive forces" (as "Nature's free gift to capital") and "relations of production" mirrors the mainstream separation of the material and the social, in which the former is to be understood as mere revelation of nature. The paradox confronting Marxist theory is how "capital", once converted into the material form of technology, becomes exempt from political critique. As we face climate change and other challenges of the Anthropocene, we shall have to concede that the existence of modern technology always implicates issues of globally skewed distribution, and thus cannot be politically neutral. In other words, technology is capital.
Shifting hotspots: Climate change projected to drive contractions and expansions of invasive plant abundance habitats

Preventing the spread of range-shifting invasive species is a top priority for mitigating the impacts of climate change. Invasive plants become abundant and cause negative impacts in only a fraction of their introduced ranges, yet projections of invasion risk are almost exclusively derived from models built using all non-native occurrences and neglect abundance information.

| INTRODUCTION

Invasive species, non-native species capable of reaching high abundances and causing ecological harm (Richardson et al., 2000), are among the most ubiquitous threats to managed landscapes and native ecosystems, causing widespread ecological and economic impacts that include losses to biodiversity, ecosystem function, and crop yields (Pimentel et al., 2000, 2005; Pyšek & Richardson, 2010; Vilà et al., 2011). Climate change is projected to exacerbate these impacts by facilitating the spread of invasive species (Allen & Bradley, 2016; Bradley et al., 2010; Hellmann et al., 2008). Species distribution models based on invasive species occurrences (i.e., an observation of the species at any level of abundance) have been used to predict invasion risk under current and future climate conditions (e.g., Allen & Bradley, 2016; Hulme, 2006; O'Donnell et al., 2012).
While risk of an invasive species occurrence can be useful for guiding management via early detection and rapid response (EDRR), the areas where species can occur are broader than the areas that support abundant populations (Beaury et al., 2023; Bradley, 2013). Hence species distribution models using all species occurrences can overestimate invasion risk (Bradley, 2013). For management, overestimating invasion risk leads to hundreds of 'high risk' taxa in any given area, many more than are feasible to monitor and manage within time and monetary constraints (Beaury et al., 2020; Kuebbing & Simberloff, 2015). With limited management resources, identifying areas where invasive species can occur at high abundance is critical for informing proactive natural resource management, as these are likely to be where ecological impacts are the greatest (Bradley et al., 2019; Parker et al., 1999; Pearse et al., 2019).

Invasive species that reach high abundance have a greater chance of maintaining their current distribution, have greater capacity to extend their ranges (Verberk, 2011), and have a greater potential to cause negative ecological and economic impacts (Bradley et al., 2019; Parker et al., 1999; Pearse et al., 2019). Larger populations also have greater evolutionary potential, and therefore may respond and adapt more rapidly to changing environmental conditions (Verberk, 2011). Despite the critical role of abundance in supporting existing and expanding invasions, species distribution models that incorporate abundance data remain rare in the literature, in part due to the lack of high-quality, georeferenced abundance records (Bradley et al., 2018; Johnston et al., 2015). Unfortunately, species distribution models based on all occurrences (hereafter occurrence-based models) often fail to accurately predict areas that can support abundant populations (O'Neill et al., 2021). Instead, species distribution models based on abundant occurrences (i.e., locations where populations of invasive
species achieve high local abundance) may serve as a better proxy for invasion risk (Beaury et al., 2023; O'Neill et al., 2021). To better prioritize monitoring and management decisions in a landscape with limited management resources, we need to leverage existing abundance data to understand the current and future distribution of habitats that can support abundant populations of invasive plants.

The more widespread or abundant a species is, the more expensive management actions like suppression and removal become (Latombe et al., 2022; Rejmánek & Pitcairn, 2002). If future changes result in more favourable habitat for species to establish or spread (Allen & Bradley, 2016; Bradley et al., 2009), these taxa likely pose expanded invasion risk and should be prioritized for proactive management (Westbrooks, 2004). However, climate change could induce species ranges to not just expand or persist, but also potentially contract (Allen & Bradley, 2016; Bezeng et al., 2017; Bradley et al., 2009). Highlighting contractions in invasion risk allows us to prioritize sites for restoration (Bradley et al., 2009). For a few taxa, distribution models that incorporate abundance have proved useful for refining geographic assessments of potential expansion and contraction of invasion risk (Beaury et al., 2023; Jarnevich et al., 2021). Yet, despite the management implications, how climate change may affect the distribution of abundant populations of invasive plants remains unknown for most species in the United States (U.S.).

Comparisons across models have also highlighted the complex relationship between the geographic distribution of abundant populations and environmental space (Catford et al., 2011, 2016; Ricciardi et al., 2021). Drivers of such species-specific variation are largely uncertain but may be associated with life history strategies, including plant growth form (Bonser & Geber, 2005; Rowe & Speck, 2005).
Plants of different growth forms vary in morphological and physiological adaptations, and hence are likely to vary in their sensitivity and response to changes in environmental conditions (Bonser & Geber, 2005; Rowe & Speck, 2005). As a result, the geographic distributions of abundant populations likely differ between plant growth forms, which in turn will affect the structure and biodiversity of invaded native communities (Guerin et al., 2019) as well as the type and effectiveness of different management strategies (Weidlich et al., 2020).

Here, we compiled occurrences of abundant populations for invasive plants across the eastern U.S. We used these data to predict areas that are climatically suitable for abundant populations (hereafter defined as a species' abundance habitat) under current and future climatic conditions. To support climate-informed management of invasive species we asked the following questions: (1) what areas are currently climatically suitable for abundant plant populations?
(2) how are hotspots of abundance habitat (where the abundance habitat for multiple species overlaps) projected to shift with climate change? and (3) do current and future projections of abundance hotspots differ based on plant growth form? We predict that future areas that are climatically suitable for abundant populations will shift northward, mirroring range shifts observed in previous distribution models (Allen & Bradley, 2016) as species track suitable climatic conditions. We also predict that abundance hotspots will differ between the major plant growth forms, with the greatest shifts in abundance hotspots observed in shorter-lived growth forms such as vines and herbs, which are often able to produce seeds or propagules within one growing season, compared to more long-lived growth forms like trees that typically require several seasons to reproduce and spread (Giorgis et al., 2016). Using results from species distribution models, we created management products, including state watch lists of species projected to maintain or expand abundance habitat under a +2°C climate change scenario. By compiling and standardizing plant abundance data and using the subset of abundant occurrences to model areas climatically suitable for abundant populations for a large number of plant taxa, our study highlights areas at higher risk of invasive species spread and potential impact, a much better proxy for invasion risk than occurrence-based distribution models alone.

| Data processing for candidate taxa

We compiled georeferenced records of plant species with abundance data (reported as percent cover) in the U.S.
from 14 data sources (Appendix S1). These data repositories represent contributions from hundreds of natural resource managers and include manager-reported observations (e.g., EDDMapS), standardized vegetation surveys (e.g., NPS, FIA), and state data repositories (e.g., CalFlora).

From these sources, we used the USDA PLANTS database (USDA, NRCS, 2022) to identify plant species that were introduced to the contiguous (lower 48 states) United States. For each species we retained occurrences that included measures of plant percent cover or average plant cover class (a range of percentage cover values), replacing average cover class values with the median percentage cover value within the reported range (e.g., 15%-20% cover was replaced with 17.5%). We removed cover values that fell outside of the 1%-100% range, locations outside of the contiguous U.S., and duplicate records across the pooled data sources. For most data sources, no additional information was available on the scale or methods used to collect plant cover data, and few species had sufficient coverage of abundance values (i.e., 0%-100% cover across a range of habitats) to support models of continuous abundance. For these reasons, we did not aim to predict continuous abundance, electing instead to predict areas climatically suitable for abundant populations, defining an abundant population as any recorded occurrence of a species with ≥5% plant percent cover (see reasoning below). This approach allowed us to include a large number of invasive plants with existing abundance data and provided an important refinement of existing hotspot analyses based on non-native species occurrences. While plant abundance data span the contiguous U.S., we focused on the eastern U.S.
due to biogeographic differences between the eastern and western regions, and hence likely differences in plant-climatic associations (Bailey, 2009; Omernik & Griffith, 2014). Because we wanted to include as many species as possible, we included any species with at least one abundant population east of 100°W (Seager et al., 2018), assuming this indicated the species could establish and become abundant within the eastern U.S.

To define areas where species can become abundant, previous studies have selected abundance points associated with percent cover thresholds near or below 10% cover for defining populations/occurrences as abundant (Bradley, 2016; Jarnevich et al., 2021; O'Neill et al., 2021), but recent analyses suggested little difference between suitability predicted from 5% and 10% cover thresholds (Beaury et al., 2023). Therefore, we selected points with ≥5% cover or average cover class to define a species as having established an abundant population in a given location (hereafter, abundance record). To increase the likelihood of robust model performance, we only fitted models to species with over 100 abundance records, or with 50 records after pre-processing (see below). This resulted in an initial set of 175 candidate taxa that had sufficient total abundance records and at least one record in the eastern U.S., with a total of 455,455 abundance records to use in species distribution models.

| Species background data

For each candidate species we predicted potential distributions of abundant populations under current and +2°C climate projections using the Software for Assisted Habitat Modelling library in the VisTrails (v.2.2.3) scientific workflow system (SAHM; Morisette et al., 2013). To reduce spatial bias in the abundance records and avoid pseudoreplication, we followed preprocessing steps outlined in Morisette et al.
(2013), thinning each species' abundance data by 4 km using the 'spThin' package (v.0.2.0; Aiello-Lammens et al., 2015) in R (v.4.1.2; R Core Team, 2021) to match the resolution of our predictors. Following methods outlined by Young et al. (2020) and Jarnevich et al. (2021), we used a target background approach (Phillips et al., 2009) to generate pseudo-absence data that mimic sampling biases in abundance location data (Appendix S1). The target background approach reduces the effect of spatially biased abundance records by drawing background points with the same sampling biases (Young et al., 2020). For each species we randomly selected up to 10,000 target background points from the full set of abundance records (e.g., Appendix S1), subset to the target species' growth form, from within a 99% kernel density estimate isopleth (an isopleth is a line representing a constant value, as in a contour line on a topographical map) around the focal species' point locations. The number of background points varied based on the size of the isopleth and the number of points found within it (sample sizes in Appendix S1).

Growth form data were assigned based on the USDA PLANTS database. For taxa with more than one growth form recorded (e.g., Subshrub/Vine), we chose the most representative growth form based on information from the primary literature on plant ecology.
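The cover-standardization and thresholding steps described in the data-processing section can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the column names (`species`, `lat`, `lon`, `cover`, `cover_low`, `cover_high`) are assumed for the example and are not the schema of the pooled data sources.

```python
import pandas as pd

def clean_cover_records(df: pd.DataFrame, threshold: float = 5.0) -> pd.DataFrame:
    """Standardize plant-cover records and keep 'abundance records'.

    Sketch of the described preprocessing: cover-class ranges are
    replaced by their midpoint (e.g., 15%-20% -> 17.5%), values outside
    the 1%-100% range are dropped, duplicates are removed, and records
    with >= `threshold` percent cover are retained as abundance records.
    """
    df = df.copy()
    # Replace a reported cover-class range (low, high) with its midpoint.
    has_range = df["cover_high"].notna()
    df.loc[has_range, "cover"] = (
        df.loc[has_range, "cover_low"] + df.loc[has_range, "cover_high"]
    ) / 2.0
    # Keep only plausible percent-cover values and unique records.
    df = df[(df["cover"] >= 1.0) & (df["cover"] <= 100.0)]
    df = df.drop_duplicates(subset=["species", "lat", "lon"])
    # An abundance record is any occurrence with >= threshold percent cover.
    return df[df["cover"] >= threshold]
```

A record reported as a 15%-20% cover class would survive as a single abundance record with cover 17.5, while a 3% occurrence would be kept as a mere occurrence but excluded from the abundance subset.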
When generating background points, we grouped growth forms likely to be searched for and recorded together, to reduce spatial biases associated with small groups. We combined vines with forbs/herbs to generate background points, assuming both would be found when searching understory communities. Likewise, we combined shrub, subshrub, and shrub/tree growth forms, assuming all three would be a focus of understory woody plant surveys. As a result, plants were grouped into one of four growth forms for targeted background sampling: tree, graminoid, vine/forb/herb, and shrub/subshrub/shrub-tree (Appendix S1). For 11 candidate species, small sample sizes and/or disjunct distributions of points prevented us from generating background points; for these species, we extended the spatial extent of the kernel density estimate isopleth (Calenge, 2006) to ensure background point generation.

| Environmental variables

We selected eight environmental predictor variables from a candidate set of 78 variables created by Engelstad et al.
(2022) that encompassed a suite of temperature and precipitation metrics known to influence the establishment and spread of invasive plant taxa. We based our environmental variable selection on the following criteria: (a) availability of future climate projections for the variable, and (b) importance for explaining the spatial distributions of 62 invasive plants on our candidate list that were also examined in recent models based on invasive species occurrence (Engelstad et al., 2022). For each variable, we downloaded +2°C future climate projections from TerraClimate (Abatzoglou et al., 2018) and used the 'terra' package (v.1.5-21) (Hijmans et al., 2022) in R to create our future environmental variable output rasters. TerraClimate integrates 23 CMIP5 global climate models to create future projections (see Qin et al., 2020). The future climate variables are built on the current climate interpolations, making them directly comparable. For each climate dataset (current and +2°C), all environmental variables were processed to the same extent (contiguous U.S.), spatial resolution (4 km²), and coordinate reference system (Albers Equal Area) using nearest neighbour resampling. To reduce collinearity among predictor variables, for each species we retained all environmental predictors with ≤|0.7| correlation (Dormann et al., 2013), using the maximum absolute value across Pearson, Spearman, and Kendall coefficients. When a pair of variables exceeded a 0.7 correlation coefficient, we retained only the variable with the higher variable importance in the model, based on the amount of deviance explained by a univariate generalized additive model produced in SAHM (Young et al., 2020).
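The collinearity filter can be sketched as a simple greedy procedure. This is an illustration under assumptions, not the SAHM implementation: the `importance` input stands in for the deviance explained by the univariate GAM, and the function name is hypothetical.

```python
import pandas as pd

def drop_collinear(predictors: pd.DataFrame, importance: dict, r_max: float = 0.7) -> list:
    """Greedy collinearity filter over candidate predictor columns.

    For every pair whose maximum absolute correlation across Pearson,
    Spearman, and Kendall coefficients exceeds `r_max`, keep only the
    variable with the higher importance score.
    """
    # Element-wise max of the three absolute correlation matrices.
    corr = pd.concat(
        [predictors.corr(method=m).abs() for m in ("pearson", "spearman", "kendall")]
    ).groupby(level=0).max()
    # Visit variables from most to least important; drop any variable
    # that is too correlated with an already-kept (more important) one.
    kept = []
    for var in sorted(predictors.columns, key=lambda v: -importance[v]):
        if all(corr.loc[var, k] <= r_max for k in kept):
            kept.append(var)
    return kept
```

Given two perfectly correlated predictors, the one with the higher importance survives; an uncorrelated third predictor is retained regardless of its importance rank.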
| Modelling climatically suitable abundance habitat

For each species and climate dataset, we predicted potential abundance habitat using five species distribution modelling algorithms: Boosted Regression Trees (BRT), Generalized Linear Models (GLM), Multivariate Adaptive Regression Splines (MARS), Maxent (v.3.4.4), and Random Forests (RF). To maximize the amount of data for model fitting for each candidate taxon, we used all abundance records in the contiguous U.S. All models were fit using the default parameters within SAHM outlined in Young et al. (2020). For each species, we randomly split abundance records into a training data set (70%) and a testing data set (30%). Models were internally evaluated on the training dataset using 10-fold cross validation (Young et al., 2020).

Despite the utility of spatial cross validation for overcoming potential modelling problems associated with spatial autocorrelation between training and testing datasets, we did not use spatial cross-validation splits in this study. This is because we have encountered issues with spatial splits when modelling invasive taxa with highly disjunct populations (e.g., species occurring primarily in the northeastern and northwestern U.S.)
when using SAHM for modelling. We checked for overfitting by examining differences in area under the receiver operating characteristic curve (AUC-ROC) values between the training and average cross-validation split datasets, using an a priori criterion of >±0.05 and visual inspection of response curve complexity. When overfitting was identified, we adjusted model-specific parameters (e.g., Maxent beta multiplier value, MARS penalty, BRT learning rate, etc.; see Appendix S2) to improve model fit. We excluded the output of individual model algorithms when the null model (no environmental predictor variables) was selected (n = 1). We evaluated the final model fit for each algorithm using the True Skill Statistic (TSS), AUC-ROC value, and Boyce Index (Hirzel et al., 2006). We checked model fit and selected the best-fit model for each algorithm prior to applying the models to our future climate variables. Of our 175 candidate species, we excluded 30 species that had fewer than 50 abundance records following pre-processing and spatial thinning, as we have encountered issues with model fit when modelling invasive taxa with fewer than 50 records post-thinning, and previous studies have suggested 50 data points as a minimum for species distribution models (Santini et al., 2021; Wisz et al., 2008). We also excluded one species (Anthoxanthum odoratum) because three of the five model algorithms were substantially overfit and unable to be optimized. As a result, we modelled the current and future projections of climatic suitability for abundant populations across the contiguous U.S. for 144 of our initial 175 candidate species. Details of climate variables retained, model optimization parameters, and model fit statistics for the final 144 taxa are reported in Appendices S2 and S3.
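The overfitting screen described above (a training-versus-cross-validation AUC-ROC gap larger than 0.05) amounts to a one-line check. The sketch below is a minimal illustration of that a priori criterion, not SAHM's actual implementation; the function name is hypothetical.

```python
def flag_overfit(train_auc: float, cv_aucs: list, max_gap: float = 0.05) -> bool:
    """Flag a fitted model as potentially overfit.

    Returns True when the training AUC-ROC differs from the mean
    cross-validation AUC-ROC by more than `max_gap` (0.05 in the study),
    signalling that model-specific parameters may need adjustment.
    """
    mean_cv = sum(cv_aucs) / len(cv_aucs)
    return abs(train_auc - mean_cv) > max_gap
```

In practice this numeric check would be paired with the visual inspection of response-curve complexity the authors also describe, since a small AUC gap does not by itself rule out overfitting.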
| Abundance hotspot analysis

To identify current and future hotspots where the abundance habitat for multiple species overlaps, we employed an ensemble approach. We opted for an ensemble approach over individual model outputs because recent work comparing models from several algorithms found that ensembles of carefully constructed models can outperform single algorithms (Valavi et al., 2022). In our study, we carefully constructed models by evaluating each individual model contributing to the ensemble and revising model algorithm parameters as needed; hence our methods closely resemble those of Valavi et al. (2022). For our ensemble approach, for each species (n = 144), we binned the five algorithms' continuous mapped outputs into binary maps of abundance habitat. We summed the maps of abundance habitat for each species to create a map of abundance hotspots across the eastern U.S. for both current and future climate conditions. The values of these hotspot maps ranged from 0 to 144, reflecting the total number of candidate taxa with abundance habitat projected for each map pixel. We also created aggregated hotspot maps for individual growth forms (Forb/Herb, Graminoid, Shrub, Tree, and Vine). While several of the modelled species have abundant populations in the western U.S., we limited our hotspot analysis to east of 100°W, because species with abundant populations only in the west were excluded from our initial species selection, creating an incomplete picture of invasion hotspots in the western U.S. We used these hotspot maps to generate watchlists for eastern U.S.
states, listing the species with predicted abundance habitat under current and future climate scenarios (Appendix S4). We then compared the area of abundance habitat under current and +2°C climate predictions and categorized the differences based on whether habitat is maintained (areas predicted as climatically suitable in both current and future climate conditions), increases (currently climatically unsuitable areas predicted to be climatically suitable in the future), or decreases (currently climatically suitable areas that are predicted to be climatically unsuitable in the future) given projected climate change. To further explore the differences in shifts of abundance habitat, we calculated the distance and direction of geographic shift based on the shift in the centroid (mean latitude and longitude of abundance habitat) between the current and future areas climatically suitable for abundant populations for each species. The direction of geographic shift was described as towards the northeast (bearing 0°-90°), southeast (bearing 90°-180°), southwest (bearing 180°-270°), or northwest (bearing 270°-360°). We used two-way analysis of variance (ANOVA) to test whether distances between current and future centroid locations differed significantly between plant growth forms, direction of geographic shift, or the interaction between the two. We also employed circular one-way ANOVA using the package 'circular' (v.0.4-95; Lund et al., 2017) to test whether plant growth forms differ in the direction of geographic shift.

| RESULTS

Across the 144 invasive plants modelled here, the areas climatically suitable for abundant populations (i.e., 'abundance habitat'; classified as suitable by ≥11/15 models) varied from 14,560 km² to 4,394,738 km² (mean 1,292,743 km², analogous to ~13% of the U.S.). For the remaining 134 species, under current climate conditions abundance habitat in the eastern U.S.
varied from 0 km² to 2,675,763 km² (mean 794,371 km²), and future eastern abundance habitat varied from 215 km² to 2,916,080 km² (mean 755,574 km²) (Appendix S5). One of these species (Brassica nigra) had future abundance habitat but no current abundance habitat projected for the eastern U.S. On average, abundance habitat in the eastern U.S. is projected to decrease slightly under a +2°C climate scenario (Appendix S5). However, our analysis reveals numerous invasion hotspots that are largely maintained.

Current hotspots of abundance habitat center around three locations in the eastern U.S.: the northeast coast of Florida and Georgia, the Great Lakes region, and the mid-Atlantic region of the U.S. (Figure 1a); habitat in each of these regions is predicted to be climatically suitable for abundant populations of at least 30 of the 144 modelled species. Future hotspots of abundance habitat show an overall shift northward, with +2°C hotspots concentrated along the eastern Georgia coastline, the upper mid-Atlantic region, and the lower New England area (Figure 1b). Given 2°C climate change projections, areas in the eastern U.S. are projected to become climatically suitable for abundant populations of an average of four new invasive species, with New England states becoming climatically suitable for abundant populations of up to 21 new invasive plant species (Figure 3a). On average, 18 invasive plants per 4 km² grid cell will maintain abundant populations in the eastern U.S., with up to 40 species projected to maintain abundance habitat in the northeast regions of the Great Lakes and New England (Figure 3b). In contrast, across the eastern U.S., conditions are projected to become climatically unsuitable for an average of five species with abundant populations, with regions such as the eastern Midwest projected to become climatically unsuitable for up to 22 invasive species due to climate change (Figure 3c).
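The hotspot maps underlying these results reduce to summing per-species binary habitat layers. The sketch below illustrates that stacking step under assumptions: the input is taken to be an array of 0/1 suitability votes per species and per ensemble member, with a pixel counted as abundance habitat when at least 11 of 15 votes agree, matching the threshold reported in the Results.

```python
import numpy as np

def hotspot_map(ensemble_stacks: np.ndarray, vote_threshold: int = 11) -> np.ndarray:
    """Sum per-species binary abundance-habitat maps into a hotspot map.

    `ensemble_stacks` has assumed shape (n_species, n_models, rows, cols)
    holding 0/1 suitability votes. A pixel is abundance habitat for a
    species when >= `vote_threshold` models agree; the hotspot value is
    the number of species with abundance habitat in that pixel.
    """
    votes = ensemble_stacks.sum(axis=1)   # models agreeing, per species
    habitat = votes >= vote_threshold     # binary abundance habitat map
    return habitat.sum(axis=0)            # species count per pixel
```

Summing the resulting integer raster over all 144 species yields pixel values between 0 and 144, exactly the range of the hotspot maps described in the Methods.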
The centers of abundance habitat for the 134 invasive plant species with abundance habitat east of 100°W longitude are projected to move between 17.5 and 1585.5 km (average 212.5 km; Figure 2, Appendix S5). For these 134 species, the centroids of abundance habitat are projected to show a significant directional geographic shift (Rayleigh t-statistic = 0.521, p < .001), shifting predominantly towards the Northeast (n = 65 species, 49%) or Northwest (n = 45 species, 34%) region of the U.S. In contrast, relatively few species show centroid shifts towards the Southeast (19 species, 14%) and Southwest (five species, 4%) (Table 1, Figure 2).

Actual evapotranspiration between April and October was the most frequently included predictor variable, appearing in 97% (139/144) of species models, and was also the most frequently included predictor variable for all directional shifts, particularly for Northeast range-shifting taxa. Precipitation seasonality was also important for species with abundance habitat shifting towards the Southeast and Southwest, while maximum summer temperature and minimum winter temperature were frequently included in models for species shifting towards the Northwest (Appendix S2).

Current and future abundance habitat projected by ≥11/15 models varied substantially across taxa and across the eastern U.S. region (Figure 3e,f). On average, 16% (range 0%-63%) of current abundance habitat will remain climatically suitable for abundant populations of our candidate taxa under future climatic conditions (Appendix S5). In contrast, an average of 3.9% of the eastern U.S. is reclassified from either unsuitable or unknown (masked) to suitable for abundant populations given 2°C warming, while 5.6% of the eastern U.S.
is reclassified from suitable to unsuitable under future conditions. The majority of species (81%, n = 109/134) are projected to maintain at least 1% of their current abundance habitat east of 100°W (range: 1%-63%, average: 19%), and 38 species are projected to maintain at least 25% of their current abundance habitat under the +2°C warming scenario.

FIGURE 2 Direction of change in the centroids of abundance habitat identified for 134 invasive species with future eastern United States (U.S.) distributions. Arrows are drawn from the current centroid location (black dots) to the future centroid location (grey dots) given predictions from a +2°C warming scenario. Centroid locations display the mean latitude and longitude value calculated from the latitude and longitude values for all pixels of abundance habitat identified for each species within each climate scenario. For this reason, some average centroid locations appear located outside of the bounds of the contiguous U.S. landmass.

TABLE 1 The number and proportion (percentage) of the 134 invasive plant species with climatically suitable abundance habitat east of 100°W longitude, categorized by growth form, that are projected to shift the centroid of their abundance habitat towards the Northwest (NW), Northeast (NE), Southeast (SE), and Southwest (SW) given a +2°C warming scenario. Mean and standard deviation (SD) of range shift distance (in kilometres) is based on change in centroid location between current and future abundance habitat for taxa within each growth form.
Growth form  NW (%)  NE (%)  SE (%)  SW (%)
Forb/herb    15 (27)  28 (50)  10 (18)  3 (5)

The interaction between the distance and direction of abundance habitat shifts was significantly different across plant growth forms (F(9,123) = 2.576, p = .009). The overall greatest directional shift in distance between current and future abundance habitat centroids was observed in vines, with an average shift of 651 km (SD = 624 km) towards the Northwest. This trend appears largely driven by one species (Dioscorea bulbifera; DIBU), which is projected to shift 1585.6 km towards the Northwest under our +2°C climate scenario (Appendix S5).

Across growth forms, we observed similar trends in the projected change in the area that is climatically suitable for abundant populations (Appendix S5). Graminoids had the largest mean area maintained as abundance habitat under both current and future conditions (768,347 km²; 19.8%), followed by forbs/herbs (716,011 km²; 18.4%). The mean overlap in current and future abundance habitat for trees, shrubs, and vines was less than 12%, meaning these growth forms maintain the least amount of current abundance habitat given projected climate change. Trees, in particular, have the lowest overlap between climate scenarios (285,948 km²; 7.4%). Approximately half (55%) of forb/herb species showed an overall decrease in abundance habitat, which was slightly lower than the relative proportion (66%-68%) of species observed across the other growth forms (Appendix S5).

| DISCUSSION

Invasive species movement and range expansion are a top threat to successful adaptation of ecological communities to climate change (Mainka & Howard, 2010; Peters et al., 2018; Walther et al., 2009). Identifying where invasive species have the greatest potential to reach high abundance, and hence the greatest impacts, is therefore a priority (O'Neill et al., 2021; Vander Zanden & Olden, 2008; Yokomizo et al., 2009).
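The directional statistics reported in the Results (mean resultant length and the Rayleigh test for directional concentration) were computed with the R 'circular' package; an equivalent pure-Python sketch, using the standard small-sample approximation for the Rayleigh p-value, is:

```python
import math

def rayleigh_test(bearings_deg):
    """Mean resultant length (R-bar) and Rayleigh test for directional
    concentration of centroid-shift bearings (0-360 degrees)."""
    n = len(bearings_deg)
    c = sum(math.cos(math.radians(b)) for b in bearings_deg) / n
    s = sum(math.sin(math.radians(b)) for b in bearings_deg) / n
    r_bar = math.hypot(c, s)  # 0 = uniform directions, 1 = identical directions
    z = n * r_bar ** 2
    # Small-sample approximation for the Rayleigh p-value; clamp to [0, 1]
    p = math.exp(-z) * (1 + (2 * z - z ** 2) / (4 * n))
    return r_bar, z, min(max(p, 0.0), 1.0)
```

Bearings spread uniformly around the compass give an R-bar near zero (no preferred direction), while a tight cluster of bearings gives an R-bar near one and a small p-value, matching the significant northward concentration reported above.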
Proportionally, the number of range-shifting taxa identified in our study (15%, n = 21/144 taxa) is similar to that (11%, n = 100/896 taxa) identified from occurrence-only data in a previous hotspot analysis conducted at a similar spatial scale (5 km × 5 km) by Allen and Bradley (2016). Our use of abundance rather than occurrence-only data allows us to focus on a smaller number of species of potentially high impact because abundance is correlated with ecological impact (Bradley et al., 2019). This may explain why the number of range-shifting abundant taxa identified by our study (up to 21 novel species with abundance habitat) is substantially smaller than the number of range-shifting taxa (up to 100 novel species) identified by Allen and Bradley (2016). Different modelling approaches could influence the differences between observed hotspots in this study versus Allen and Bradley (2016), who mainly identified invasion hotspots in northeastern U.S. states. However, O'Neill et al. (2021) found distinct hotspots for occurrence versus abundance habitat under current climate, suggesting that the different hotspots are not simply a modelling artefact.

FIGURE 4 The number of invasive plants per location likely to have habitat suitable for abundant populations (≥5% cover) given current climatic conditions (a-e) and +2°C warming climate scenario (f-j) for each of five major plant growth forms.

The taxa in our study represent those commonly reported as abundant in the eastern U.S.
by natural resource managers via online repositories. Leveraging information on abundance habitat will prevent both the overinvestment of management resources on areas or species unlikely to become abundant as well as the underinvestment on areas or species likely to increase in abundance and lead to the greatest future impact (Bradley et al., 2019; Pearse et al., 2019). For example, areas projected to gain abundance habitat could contain 'sleeper' populations of species that are currently limited by climate but could become invasive with climate change (Spear et al., 2021); these existing populations are priority targets for eradication. Previous studies that focus on non-native occurrence data alone cannot predict potential changes in the areas climatically suitable for abundant populations (Bradley, 2016; Jarnevich et al., 2021; O'Neill et al., 2021) and therefore would fail to identify sleeper populations. Similarly, abundance habitat provides a more targeted estimate of risk from range-shifting invasive species, information that could be used to build more proactive state regulations against the continued propagation of high-risk species as ornamentals (Beaury, Patrick, & Bradley, 2021).
Expanded efforts to collect and use abundance data as part of current invasive species monitoring would improve our ability to inform risk for management activities (Bradley et al., 2018). We consider species already well established within the U.S., yet there may be novel invaders previously excluded by climate that may be able to establish in southern regions. Currently, only 10% of land managers in eastern North America monitor for new invasive taxa (Beaury et al., 2020) due to lack of funding and personnel (Beaury et al., 2020; Kuebbing & Simberloff, 2015). Proactively managing novel range-shifting taxa via early detection and rapid response will require managers to split time and resources between both current and future invasive taxa. This task will be more feasible with a targeted list of likely range-shifting taxa.

Our analyses highlight substantial potential shifts in the distributional patterns of abundant invasive plant populations across the eastern U.S. with changing climate. Such changes lead to markedly different management strategies. For example, species with maintained abundance habitat in a given area are likely already being managed and these efforts will need to continue, although some aspects, such as the timing of management and efficacy of control measures, are likely to be affected by climate change (Bradley et al., 2010; Hellmann et al., 2008). Species with expanded abundance habitat in a given area will require either a new focus on monitoring for range-shifting invasions and/or a new focus on eradicating sleeper populations (Spear et al., 2021). Areas with predicted future contractions in non-native species abundances could be further screened for microhabitat features, such as soil type and topography, which may allow them to become candidate sites for restoration. This will likely require developing climate-informed restoration practices focused on warm-adapted, fast-developing, and functionally diverse native plants that can resist further invasion (Hess et al., 2019; Yannelli et al., 2020).
Variation in species abundances reflects variation in underlying population dynamics, which are driven by both demographic and environmental processes (Waldock et al., 2022). While not feasible without information on community biodiversity and structure, other analytical approaches, such as joint species distribution models and mechanistic distribution models, could improve future predictions for species or areas. These alternative modelling approaches could refine predictions of abundance habitat by accounting for traits and interspecific interactions, which may enable or prevent taxa from maintaining abundant populations despite potentially suitable environmental conditions (O'Reilly-Nugent et al., 2020). Similarly, other important predictors that can influence invasive plant distributions, such as forest cover or human landscape modifications, were unaccounted for in our study (Mod et al., 2016). Baer and Gray (2022) showed that biotic predictors improve the performance of species distribution models at finer spatial scales (1 km), and in the eastern U.S., the majority of management efforts occur at relatively small spatial scales, either within a single property or a network of properties within a single state (Beaury et al., 2020). Yet, future projections for biotic and many abiotic predictors at this scale are currently lacking. While many modelling improvements are possible, outputs from correlative climate models in our study serve as an important first step in assessing current and future invasion risks using existing abundance data. For example, by combining correlative mapping products with site-specific knowledge of field conditions and processes that affect invasion success, such as the magnitude and type of human activities, distance to roads, dispersal pathways, and soil characteristics (Catford et al., 2011), managers and researchers can tailor broad lists of range-shifting taxa to local or regional scales at which early detection and rapid response management
actions are undertaken. Indeed, 91% (123 of 134) of the species in our study are ornamental species that were deliberately introduced to the U.S. (Lehan et al., 2013). Online plant sales remain relatively unregulated (Beaury, Patrick, & Bradley, 2021; Humair et al., 2015), and regulations of invasive plants are inconsistent across state borders (Beaury, Fusco, et al., 2021; Lakoba et al., 2020).

Invasion success varies across plant growth forms (Ni et al., 2021), and differential invasion success of growth forms alters the composition and structure of invaded native communities (Guerin et al., 2019). Forbs/herbs represent the dominant growth form of abundant invasive plants in the eastern U.S., making up 42% of our dataset. Similarly, they are also the dominant growth form of established invasive plants in the U.S., contributing 51% (452/896) of species in Allen and Bradley's (2016) analysis of occurrence hotspots. Fast-growing growth forms, such as forbs/herbs and grasses, are associated with shifts in native communities away from woody growth forms, potentially because these invasives suppress native seedling regeneration (Guerin et al., 2019). This suggests that eradication and control of forb/herb species will remain a high priority for mitigating negative effects of these invasive species on native ecosystems across the U.S.
The other growth forms in our study make up similar proportions of the occurrence versus abundance species assemblages observed by Allen and Bradley (2016), with the notable exception of vines. Vines are proportionally rare when we focus on occurrences alone (3%, or 30/896 species) but are proportionately more common when we focus on abundance data (9%, or 12/144 species). Our analyses show vines are also projected to have the largest average climate-driven shift in abundance habitat centers, of roughly 300 km (Table 1). Vines often have functional traits important to invasion success and impact, including high relative growth rate or above-ground biomass, which in turn correlate with higher fecundity or competitive ability (Díaz & Cabido, 1997; Giorgis et al., 2016; Ni et al., 2021). For example, the air potato vine (D. bulbifera L.), which our study projected to have the greatest shift between the centers of current and future abundance habitat, can grow up to 25 cm per day, producing vines up to 51 m in length (Rayamajhi et al., 2016). In comparison to Allen and Bradley (2016), our results suggest that vines might have a proportionally higher risk of becoming abundant, and hence invasive, in new areas than other growth forms.

Given the caveats associated with correlative distribution models (Jarnevich et al., 2015), the spatial predictions of abundance habitat expansions or contractions reported here should be treated as hypotheses, particularly for species projected to show large reductions in abundance habitat and those with fewer abundance records. Previous work by Sofaer et al.
(2018) showed that, despite good model performance metrics, multi-taxa occurrence distribution models were highly variable and often failed to accurately predict future range expansions and contractions among taxa. This uncertainty also extended to metrics of changes in the magnitude and direction of abundance habitat, although models showed more accurate predictions of habitat that was always or never suitable for a species (Sofaer et al., 2018). The extent to which this uncertainty affects our results remains unknown; however, our conservative threshold approach for assessing climatically suitable abundance habitat (based on ≥11/15 model agreement), combined with our ensemble approach that aggregated a large number of species distribution models, may reduce some of these projection inaccuracies (Naimi et al., 2022).

| CONCLUSIONS

Spatial analyses of invasive plant range shifts can inform proactive management (Allen & Bradley, 2016; Bellard et al., 2013). However, with limited management resources and hundreds of invasive species potentially shifting into new areas, it is imperative that we find ways to identify and prioritize the range-shifting species likely to have the greatest impacts on native ecosystems. Using species distribution models, we show that current abundance hotspots in the eastern U.S. are projected to shift an average of 213 km, predominantly towards the northeast.
The final eight environmental variables included in our models were as follows: minimum winter temperature, mean diurnal temperature range, maximum summer temperature, precipitation seasonality, mean summer potential water deficit, mean evapotranspiration between April and October, isothermality, and mean annual precipitation (Appendix S2). These current climate variables are averaged over ~30 years of data spanning 1981-2018 and derived from BioClim and ClimateEngine (Appendix S2). Hence, our models focused on predicting areas with climatic suitability for abundant populations (abundance habitat), although other factors, such as forest cover and soil characteristics, may restrict distributions further (see Discussion).

Continuous suitability predictions from each algorithm were converted to binary maps using three thresholding measures: the first percentile (the threshold that classifies the 1% of training data with the lowest suitability predictions as unsuitable), the tenth percentile (the threshold that classifies the 10% of training data with the lowest suitability predictions as unsuitable), and the maximum of sensitivity plus specificity (Freeman & Moisen, 2008), where sensitivity is the true positive rate and specificity is the true negative rate. The resulting binary maps for each algorithm and threshold were summed to create an ensemble map with model agreement values ranging from 0 (no predicted climatic suitability for abundant populations) to 15 (all five model algorithms × three threshold measures predicted climatic suitability for abundant populations). Each ensemble was geographically reduced by a Multivariate Environmental Similarity Surface (MESS; Elith et al., 2010) to limit the effect of environmental extrapolation, whereby locations with environmental conditions outside the range of those found in the model training data are masked out. We used an additional threshold of ≥11 of 15 model agreement to identify the areas with the highest climatic potential for supporting abundant populations for each species (i.e., 'abundance habitat').
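The ensemble-agreement step can be sketched as follows. This toy illustration assumes each of the 15 algorithm-by-threshold binary maps is a flat list of 0/1 cell values; the function and variable names are hypothetical, and the MESS masking step is omitted.

```python
def abundance_habitat(binary_maps, agree_cutoff=11):
    """Sum the 0/1 predictions of all (algorithm x threshold) binary maps
    per grid cell, then flag cells meeting the agreement cutoff (>=11 of
    15 in the study) as climatically suitable 'abundance habitat'."""
    n_cells = len(binary_maps[0])
    agreement = [sum(m[i] for m in binary_maps) for i in range(n_cells)]
    return agreement, [score >= agree_cutoff for score in agreement]
```

With five algorithms and three thresholds, `binary_maps` holds 15 entries per species; an agreement score of 15 means every algorithm-threshold combination predicted climatic suitability for that cell.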
Silktree (Neyraudia reynaudiana (Kunth) Keng ex Hitchc.) had the smallest area of abundance habitat and tree of heaven (Ailanthus altissima [Mill.] Swingle) had the largest, which covered roughly 45% of the land area in the contiguous U.S. Given the inherent variance in species distribution model projections, and our additional, conservative ≥11/15 model agreement cutoff for classifying climatically suitable habitat, it is possible that, for species with few eastern U.S. records, the models predict no abundance habitat in the eastern U.S. at the 4 km × 4 km scale of our study. Indeed, for six species, all future abundance habitat was west of 100°W longitude, while for four species (Colocasia esculenta, Paederia foetida, Paulownia tomentosa, and Sansevieria hyacinthoidea), there were no projected areas of future abundance habitat in the contiguous U.S. (Appendix S5).

FIGURE 3 The number (N) of invasive plant species with habitat identified as climatically suitable for abundant populations (≥5% cover) in the eastern contiguous United States given (a) current climatic conditions, (b) a +2°C climate warming scenario, and (c) the difference between +2°C and current climatic conditions.

As a result, one third of species (51/134) will see an overall increase in abundance habitat in the eastern U.S., with the area identified as climatically suitable projected to increase between 215 and 786,463 km² (mean 164,922 km²). In contrast, 83 species (62%) are projected to experience a decline in abundance habitat, with the overall area of abundance habitat decreasing by an average of 163,973 km² (reductions range from 1,752 to 577,118 km²) (Appendix S5). Across the 134 species, an average of 35% of land in the eastern U.S.
was masked due to climate dissimilarity, meaning that predicting climatic suitability under future conditions would require extrapolating beyond the environmental space covered by the model training data for a species. For individual states, Missouri is projected to become climatically suitable for abundant populations of the most novel plants (n = 86). Distances between current and future centroid locations also did not significantly differ between growth forms (F(4,123) = 1.216, p = .308), despite average shifts in abundance habitat varying from 170 km (SD = 146 km) in graminoids to 303 km (SD = 425 km) in vines (Table 1).

Our study uses occurrences of abundant populations in species distribution models to refine spatial projections of invasion risk and proactively identify potential shifts in abundance habitat with climate change. Under current environmental conditions, we identified three regional abundance hotspots: (1) the northeast region of Florida and Georgia, (2) the Great Lakes region, and (3) the mid-Atlantic region of the eastern U.S. These areas could support abundant populations of up to 40 different invasive plant species, with the centroids of these hotspots projected to shift by hundreds of kilometres with climate change. By modelling abundance habitat under current and future climate, our study provides targeted species lists that can be used by managers to focus limited resources for early detection and rapid response on areas where invasion risk is greatest, given the narrower list of range-shifting invasive plants (an average of four per 4 km² grid cell), a resource commonly requested by invasive species practitioners to inform prevention and management within jurisdictions ranging from protected areas to states. Additionally, the data from abundance habitat projections available in Appendices S4 and S6 of this study have been incorporated into county-based mapping tools by Early Detection and Distribution Mapping Systems (EDDMapS; Wallace & Bargeron, 2014) to facilitate access and utilization by practitioners. Given that observations of novel
establishment and spread of invasive taxa are typically reported by the general public, practitioners could use these species watchlists and interactive online maps to develop educational materials for high-risk taxa and facilitate public involvement in early detection and rapid response efforts.

Human activities will likely continue to facilitate long-distance dispersal of invasive plants, which may enhance the invasion success of these species, particularly long-lived, slow-growing taxa such as trees, by enabling them to better track suitable climatic conditions and realize larger portions of their abundance habitat. Introduced ornamental plants projected to remain or become abundant with climate change are prime candidates for state regulation.

Our results suggest that changes in climate suitability could facilitate the establishment of abundant populations of up to 21 new invasive plants, with forbs/herbs remaining the most common invasive plants in the eastern U.S. Our study provides the first comprehensive assessment of changing invasive plant risk for the eastern U.S. across a large number of abundant taxa. By identifying areas of high potential risk and impact, our abundance habitat maps can inform early detection and rapid response in areas where invasive plants are expanding, as well as identify candidate sites for restoration in areas where invasive plants are contracting.

ACKNOWLEDGEMENTS

Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government. Thank you to Brandon Hays for modelling assistance, Erin Jerome for assistance with ScholarWorks@UMass Amherst, and Toni Lyn Morelli for feedback on earlier versions of this manuscript. The U.S.
Geological Survey Science Analytics and Synthesis Program and Invasive Species Program supported development of the modelling backbone. AEE is supported by funding from the U.S. Geological Survey and the Northeast Climate Adaptation Science Center (NE CASC) through Grant No. G21AC10233-01. A National Science Foundation Graduate Research Internship Program award supported EMB.
An unusual and vital protein with guanylate cyclase and P4-ATPase domains in a pathogenic protist

Toxoplasma gondii harbors an alveolate-specific guanylate cyclase linked to P-type ATPase motifs, which is an essential actuator of cGMP-dependent gliding motility, egress, and invasion during acute infection.

Introduction

cGMP is regarded as a common intracellular second messenger, which relays endogenous and exogenous cues to downstream mediators (kinases, ion channels, etc.), and thereby regulates a range of cellular processes in prokaryotic and eukaryotic organisms (Lucas et al, 2000; Hall & Lee, 2018). The synthesis of cGMP from GTP is catalyzed by a guanylate cyclase (GC). Levels of cGMP are strictly counterbalanced by the phosphodiesterase enzyme (PDE), which degrades cGMP into GMP by hydrolyzing the 3′-phosphoester bond (Beavo, 1995). PKG (cGMP-dependent protein kinase), on the other hand, is a major mediator of cGMP signaling in most eukaryotic cells; it phosphorylates a repertoire of effector proteins to exert a consequent subcellular response. All known PKGs belong to the serine/threonine kinase family (Lucas et al, 2000). Much of our understanding of cGMP-induced transduction is derived from higher organisms, namely mammalian cells, which harbor four soluble GC subunits (α1, α2, β1, and β2) functioning as heterodimers, and seven membrane-bound GCs (GC-A to GC-G), occurring mostly as homodimers (Lucas et al, 2000; Potter, 2011). There are two variants of PKG (PKG I and PKG II) reported in mammals. The type I PKGs have two soluble, alternatively spliced isoforms (α and β) functioning as homodimers, whereas the type II PKGs are membrane-bound proteins (MacFarland, 1995; Pilz & Casteel, 2003), which form apparent monomers (De Jonge, 1981) as well as dimers (Vaandrager et al, 1997). The cGMP pathway in protozoans shows a marked divergence from mammalian cells (Linder et al, 1999; Gould & de Koning, 2011; Hopp et al, 2012).
The protozoan phylum Apicomplexa, comprising >6,000 endoparasitic, mostly intracellular, species of significant clinical importance (Adl et al, 2007), exhibits an even more intriguing design of cGMP signaling. Toxoplasma, Plasmodium, and Eimeria are some of the key apicomplexan parasites causing devastating diseases in humans and animals. These pathogens display a complex lifecycle in nature assuring their successful infection, reproduction, stage conversion, adaptive persistence, and inter-host transmission. The cGMP cascade has been shown to be one of the most central mechanisms coordinating the key steps of the parasitic lifecycle (Gould & de Koning, 2011; Govindasamy et al, 2016; Baker et al, 2017; Brown et al, 2017; Frénal et al, 2017). In particular, the motile parasitic stages, for example, sporozoites, merozoites, ookinetes, and tachyzoites, deploy cGMP signaling to enter or exit host cells (Baker et al, 2017; Frénal et al, 2017) or traverse tissues by activating secretion of micronemes (an apicomplexan-specific secretory organelle) (Brochet et al, 2014; Brown et al, 2016; Bullen et al, 2016). Micronemes secrete adhesive proteins required for parasite motility and the subsequent invasion and egress events (Brochet et al, 2014; Brown et al, 2016; Bullen et al, 2016; Frénal et al, 2017), which are regulated by PKG activity. The work of Gurnett et al (2002) demonstrated that Toxoplasma gondii and Eimeria tenella harbor a single PKG gene encoding two alternatively translated isoforms (soluble and membrane-bound). The physiological essentiality of PKG for the asexual reproduction of both parasites was first revealed by a chemical-genetic approach, whereas the functional importance of this protein for secretion of micronemes, motility, and invasion of T. gondii tachyzoites and E. tenella sporozoites was proven by Wiersma et al (2004). Successive works in T.
gondii have endorsed a critical requirement of TgPKG for its asexual reproduction by various methods (Lourido et al, 2012; Sidik et al, 2014; Brown et al, 2017). Likewise, PKG is also needed for the hepatic and erythrocytic development of Plasmodium species (Falae et al, 2010; Taylor et al, 2010; Baker et al, 2017). It was shown that PKG triggers the release of calcium from storage organelles in Plasmodium (Singh et al, 2010) and Toxoplasma (Brown et al, 2016). Calcium can in turn activate calcium-dependent protein kinases and exocytosis of micronemes (Billker et al, 2009; Lourido et al, 2012). The effect of cGMP signaling on calcium depends on inositol 1,4,5-trisphosphate (IP3), which is produced by phosphoinositide-phospholipase C, a downstream mediator of PKG (Brochet et al, 2014). Besides IP3, DAG is generated as a product of phosphoinositide-phospholipase C and converted to phosphatidic acid, which can also induce microneme secretion (Bullen et al, 2016). On the other hand, cAMP-dependent protein kinase acts as a repressor of PKG and Ca2+ signaling, thereby preventing microneme secretion as well as premature egress (Jia et al, 2017; Uboldi et al, 2018). Unlike the downstream signaling events, the onset of the cGMP cascade remains underappreciated in Apicomplexa, partly because of the complex structure of GCs, as described in Plasmodium (Linder et al, 1999; Baker, 2004). Two distinct GCs, PfGCα and PfGCβ (Carucci et al, 2000; Baker, 2004; Hopp et al, 2012), were identified in Plasmodium falciparum. Lately, Gao et al (2018) demonstrated the essential role of GCβ for the motility and transmission of Plasmodium yoelii. Herein, we aimed to characterize an unusual GC fused with a P4-ATPase domain in T. gondii and test the physiological importance of cGMP signaling for asexual reproduction of its acutely infectious tachyzoite stage.
Our findings, along with other independent studies published just recently (Brown & Sibley, 2018; Bisio et al, 2019; Yang et al, 2019), as discussed elsewhere, provide significant new insights into cGMP signaling during the lytic cycle of T. gondii.

T. gondii encodes an alveolate-specific GC linked to P-type ATPase

Our genome searches identified a single putative GC in the parasite database (ToxoDB) (Gajria et al, 2008), comprising multiple P-type ATPase motifs at its N terminus and two nucleotide cyclase domains (termed GC1 and GC2 based on the evidence herein) at the C terminus. Given the predicted multifunctionality of this protein, we named it TgATPaseP-GC. The entire gene is about 38.3 kb, consisting of 53 introns and 54 exons. The ORF encodes a remarkably large protein (4,367 aa, 477 kD), comprising P-type ATPase (270 kD) and nucleotide cyclase (207 kD) domains, and includes 22 transmembrane helices (TMHs) (Fig 1A). The first half of TgATPaseP-GC (1-2,480 aa) contains 10 α-helices and four conserved ATPase-like subdomains: (i) the region from Lys110 to His174 encodes a potential lipid-translocating ATPase; (ii) the residues from Leu207 to Gly496 are predicted to form a bifunctional E1-E2 ATPase binding both metal ions and ATP, and thus functioning like a cation-ATPase; (iii) the amino acids from Thr1647 to Ser1748 harbor yet another metal-cation transporter with an ATP-binding region; (iv) the region from Cys2029 to Asn2480 contains a haloacid dehalogenase-like hydrolase, or otherwise a second lipid-translocating ATPase (Fig 1A). The second half (2,481-4,367 aa) encodes a putative GC comprising GC1 and GC2 domains spanning Ser2942-Lys3150 and Thr4024-Glu4159, respectively (Fig 1A). Both GC1 and GC2 follow a transmembrane region, each with six helices. The question-marked helix (2,620-2,638 aa) antecedent to GC1 has a low probability (score, 752).
Excluding this helix from the envisaged model, however, results in a reversal of the GC1 and GC2 topology (facing outside the parasite), which is unlikely given the intracellular transduction of cGMP signaling via TgPKG. Moreover, our experiments suggest that the C terminus of TgATPase P -GC faces inwards (see Fig 2B). Phylogenetic analysis indicated an evident clustering of TgATPase P -GC with homologs from parasitic (Hammondia, Eimeria, and Plasmodium) and free-living (Tetrahymena, Paramecium, and Oxytricha) alveolates (Fig S1). In contrast, GCs from metazoan organisms (soluble and receptor-type) and plants formed their own distinct clusters. Quite intriguingly, the protist clade contained two groups, one each for apicomplexans and ciliates, implying a phylum-specific evolution of TgATPase P -GC orthologs.

The P-type ATPase domain of TgATPase P -GC resembles P4-ATPases

The N terminus of TgATPase P -GC, covering 10 TMHs and four conserved motifs, is comparable with P4-ATPases, a subfamily of P-type ATPases involved in the translocation of phospholipids across the membrane bilayer and in vesicle trafficking in the secretory pathways (Palmgren & Nissen, 2011; Andersen et al, 2016). The human genome contains 14 different genes for P4-ATPases clustered in five classes (1a, 1b, 2, 5, and 6), all of which have five functionally distinct domains (Andersen et al, 2016): the A (actuator), N (nucleotide binding), and P (phosphorylation) domains are cytoplasmic, whereas the T (transport) and S (class-specific support) domains are membrane-anchored. Besides, a regulatory (R) domain usually exists at the N terminus, the C terminus, or both ends (Palmgren & Nissen, 2011). In mammalian orthologs, the region between TMH1 and TMH6 constitutes a functional unit for lipid flipping, and the segment between TMH7 and TMH10 undertakes a supportive role.
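As simple bookkeeping, the amino-acid coordinates quoted above for TgATPase P -GC can be collected in a small data structure and sanity-checked. This is an illustrative sketch only; the coordinates are taken from the text, while the dictionary layout and region names are our own shorthand:

```python
# Domain coordinates of TgATPase_P-GC as quoted in the text (1-based, inclusive).
# This is illustrative bookkeeping, not an annotation pipeline.
PROTEIN_LEN = 4367  # aa

regions = {
    "ATPase_half": (1, 2480),
    "GC_half": (2481, 4367),
    "lipid_ATPase_1": (110, 174),        # Lys110-His174
    "E1_E2_ATPase": (207, 496),          # Leu207-Gly496
    "cation_transporter": (1647, 1748),  # Thr1647-Ser1748
    "HAD_hydrolase": (2029, 2480),       # Cys2029-Asn2480
    "GC1": (2942, 3150),                 # Ser2942-Lys3150
    "GC2": (4024, 4159),                 # Thr4024-Glu4159
}

def length(region):
    """Length in residues of an inclusive (start, end) region."""
    start, end = region
    return end - start + 1

# All regions must lie within the protein, and both cyclase domains in the GC half.
assert all(1 <= s <= e <= PROTEIN_LEN for s, e in regions.values())
gc_start, gc_end = regions["GC_half"]
assert gc_start <= regions["GC1"][0] and regions["GC2"][1] <= gc_end

print(length(regions["GC1"]), length(regions["GC2"]))  # 209 136
```

The two halves together cover the full 4,367-aa protein, consistent with the reported 270-kD and 207-kD portions summing to the 477-kD total.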
P-type ATPases have an intrinsic kinase activity, phosphorylating themselves at an aspartate residue located in the P domain during the catalytic cycle; the A domain subsequently dephosphorylates this residue when transport is terminated (Bublitz et al, 2011; Palmgren & Nissen, 2011). The phosphorylated Asp, located in the Asp-Lys-Thr-Gly (DKTG) sequence, is highly conserved in P-type ATPases, and the consensus region has been formulated as DKTG[T,S][L,I,V,M][T,I]. The A domain has Asp-Gly-Glu-Thr (DGET) as P4-ATPase-specific signature residues that facilitate dephosphorylation. Besides, two other conserved sequences, Thr-Gly-Asp-Asn (TGDN) and Gly-Asp-Gly-x-Asn-Asp (GDGxND), are located in the P domain; they bind Mg2+ and connect the ATP-binding region to the transmembrane segments (Axelsen & Palmgren, 1998). The T domain includes the ion-binding site, which has a conserved proline in the Pro-Glu-Gly-Leu (PEGL) sequence, usually located between TMH4 and TMH5 (Andersen et al, 2016).

Figure 1. (A) The primary and secondary topology of TgATPase P -GC as predicted using TMHMM, SMART, TMpred, Phobius, and NCBI domain search tools. The model was constructed by consensus across algorithms regarding the position of domains and transmembrane spans. The N terminus (1-2,480 aa) containing 10 α-helices resembles a P-type ATPase with at least four subdomains (color-coded). The C terminus (2,481-4,367 aa) harbors two potential nucleotide cyclase catalytic regions, termed GC1 and GC2, each following six transmembrane helices. The question-marked (?) helix was predicted only by Phobius (probability score, 752). The color-coded signs on the secondary structure show the position of highly conserved sequences in the ATPase and cyclase domains. The key residues involved in the base binding and catalysis of cyclases are also depicted in bold letters. (B, C) Tertiary structure of the GC1 and GC2 domains based on homology modeling. The ribbon diagrams of GC1 and GC2 suggest a functional activation by pseudo-heterodimerization similar to tmAC. The model shows an antiparallel arrangement of GC1 and GC2, where each domain harbors a seven-stranded β-sheet surrounded by three α-helices. The image in panel (C) illustrates a GC1-GC2 heterodimer interface bound to GTPαS. The residues of GC2 labeled with an asterisk (*) interact with the phosphate backbone of the nucleotide.

The alignment of the ATPase domains from TgATPase P -GC, PfGCα, and PfGCβ with five different members of human P4-ATPases revealed several conserved residues (Fig S2). For example, the second subdomain of TgATPase P -GC (defined as Ca2+-ATPase, yellow colored in Fig 1A) carries the DGET signature of the A domain, albeit with one altered residue in the region (ETSKLDGET instead of ETSNLDGET). PfGCα contains two amino acid mutations in the same region (ETSLLNGET) when compared with the human ATP8A1, which translocates phosphatidylserine as its main substrate (Lee et al, 2015). Another replacement (Ser to Thr781) was observed in both TgATPase P -GC and PfGCα at the IFTDKTGTIT motif, which harbors the consensus phosphorylated aspartate residue (D) in the P domain. The nucleotide-binding sequence KGAD in the N domain (third region, indicated as cation-ATPase), the most conserved signature among P-type ATPases, is preserved in TgATPase P -GC but substituted by a point mutation in PfGCα (Ala to Ser1739, KGSD). An additional mutation (D to E) was detected in the DKLQEQVPETL sequence located in the last ATPase subdomain of TgATPase P -GC (highlighted in blue in Fig 1A). Not least, the GDGxND signature is conserved in TgATPase P -GC but degenerated in PfGCα (Fig S2). Notably, most signature residues could not be identified in PfGCβ, signifying a degenerated ATPase domain. Taken together, our in silico analysis suggests that the N-terminal ATPase domain of TgATPase P -GC belongs to the P4-ATPase subfamily and is thus likely involved in lipid translocation.
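The signature motifs listed above lend themselves to a simple regular-expression scan, which is essentially how such P-type ATPase signatures are located in a query sequence. The sketch below is illustrative only: the toy sequence is synthetic, not the real TgATPase P -GC sequence, and the motif set is limited to the signatures named in the text:

```python
import re

# P-type ATPase signature motifs discussed in the text, as regex patterns.
SIGNATURES = {
    # Consensus around the phosphorylated aspartate (P domain)
    "DKTG_consensus": r"DKTG[TS][LIVM][TI]",
    # P4-ATPase-specific dephosphorylation signature (A domain)
    "DGET": r"DGET",
    # Mg2+-binding / ATP-linking motifs (P domain); x = any residue
    "TGDN": r"TGDN",
    "GDGxND": r"GDG.ND",
}

def scan(seq):
    """Return {motif_name: [1-based start positions]} for all signature hits."""
    return {
        name: [m.start() + 1 for m in re.finditer(pattern, seq)]
        for name, pattern in SIGNATURES.items()
    }

# Synthetic toy sequence carrying one copy of each signature (not a real protein).
toy = "MNETSKLDGETAAIFTDKTGTITGRLLTGDNKKGDGANDAPL"
print(scan(toy))
```

A real analysis would run such patterns against the full-length sequence retrieved from ToxoDB and cross-check hits against the alignment in Fig S2.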
GC1 and GC2 domains of TgATPase P -GC form a pseudo-heterodimeric GC

The arrangement and architecture of the GC1 and GC2 domains in TgATPase P -GC correspond to mammalian transmembrane adenylate cyclases (tmACs) of class III (Linder & Schultz, 2003). The latter are activated by G-proteins to produce cAMP after extracellular stimuli (e.g., hormones). The cyclase domains of tmACs, C1 and C2, form an antiparallel pseudo-heterodimer with one active and one degenerated site at the dimer interface (Linder & Schultz, 2003). Amino acids from both domains contribute to the binding site, and seven conserved residues have been identified to play essential roles in nucleotide binding and catalysis (Linder & Schultz, 2003; Sinha & Sprang, 2006; Steegborn, 2014). These include two aspartate residues, which bind two divalent metal cofactors (Mg2+, Mn2+) crucial for substrate placement and turnover. An arginine and an asparagine stabilize the transition state, while yet another arginine binds the terminal phosphate (Pγ) of the nucleotide. A lysine/aspartate pair underlies the selection of ATP over GTP as the substrate; by contrast, a glutamate/cysteine or glutamate/alanine pair defines the substrate specificity as GTP in GCs. In tmACs, nucleotide binding and transition-state stabilization are conferred by one domain, whereas the other domain interacts with the phosphates of the nucleotide directly or via the bound metal ions (Linder, 2005; Sinha & Sprang, 2006; Steegborn, 2014). The sequence alignment of the GC1 and GC2 domains from TgATPase P -GC to their orthologous GCs/ACs showed that GC1 contains a 74-residue-long loop insertion (3,033-3,107 aa), unlike other cyclases (Fig S3). A shorter insertion (~40 aa) was also found in PfGCα. The tertiary model structure (with the loop inserted between α3 and β4 of GC1 removed) shows that both domains consist of a seven-stranded β-sheet surrounded by three helices (Fig 1B and C).
The key functional amino acid residues, with some notable substitutions, could be identified as distributed across GC1 and GC2. In the GC1 domain, one of the two metal-binding (Me) aspartates is replaced by glutamate (E2991), whereas both are conserved in GC2 (D4029 and D4073) (Figs 1C and S3). The transition-state stabilizing (Tr) asparagine (N3144) and arginine (R3148) residues are located within the GC1 domain (Fig 1C); however, both are replaced by leucine (L4153) and methionine (M4157), respectively, in the GC2 domain (Fig S3). Another arginine (R4125) that is responsible for phosphate binding (Pγ) in tmACs is conserved in the GC2 domain (Fig 1C), whereas it is substituted by K3116 in GC1 (Fig S3). The cyclase specificity-defining residues (B) are glutamate/alanine (E2987/A3137) and cysteine/aspartate (C4069/D4146) pairs in GC1 and GC2, respectively (Figs 1C and S3). The E/A identity of the nucleotide-binding pair in GC1 is indicative of specificity towards GTP. Thus, we propose that GC1 and GC2 form a pseudo-heterodimer and function as a guanylate cyclase (Fig 1C). Similar to tmACs, one catalytically active and one degenerated site are allocated at the dimer interface to make TgATPase P -GC functional. However, the order of GC1 and GC2 is inverted in TgATPase P -GC, which means that, unlike in tmACs, the GC1 domain contributes the nucleotide- and transition-state-binding residues of the active site, whereas GC2 harbors the two aspartates crucial for metal ion binding. Although our multiple attempts to test the recombinant activity of GC1, GC2, and GC1 fused to GC2 (GC1+GC2) were futile (Fig S4), we were able to show the involvement of TgATPase P -GC in cGMP synthesis by mutagenesis studies in tachyzoites (discussed below, Fig 4).
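The substrate-specificity rule summarized above (a lysine/aspartate base-binding pair selecting ATP; a glutamate/cysteine or glutamate/alanine pair selecting GTP) can be written down as a tiny heuristic classifier. This is a sketch of the rule as stated in the text, not a validated predictor:

```python
# Heuristic nucleotide-specificity rule for class III cyclases, as summarized
# in the text. Residue pairs are one-letter codes for the base-binding pair.
def predicted_substrate(pair):
    """Classify a base-binding residue pair as ATP- or GTP-specific (heuristic)."""
    if pair == ("K", "D"):
        return "ATP"   # adenylate cyclase-like (tmAC)
    if pair in {("E", "C"), ("E", "A")}:
        return "GTP"   # guanylate cyclase-like
    return "unknown"

# The E/A pair of GC1 (E2987/A3137) points to GTP, i.e., guanylate cyclase activity;
# the canonical tmAC K/D pair points to ATP.
print(predicted_substrate(("E", "A")))  # GTP
print(predicted_substrate(("K", "D")))  # ATP
```

Note that the C/D pair found in the degenerated GC2 site fits neither canonical signature, consistent with one active and one degenerated site at the dimer interface.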
Overexpression and purification of recombinant GC1 and GC2 domains

With an objective to determine the functionality of TgATPase P -GC, we expressed the ORFs of GC1 (M2850-S3244), GC2 (M3934-Q4242), and GC1+GC2 (M2850-Q4242) in Escherichia coli (Fig S4A). Positive clones were verified by PCR screening and sequencing (Fig S4B). Overexpression of GC1 and GC2 as 6xHis-tagged proteins in the M15 strain resulted in inclusion bodies, which did not allow us to use native conditions to purify the proteins. We nevertheless purified them through a Ni-NTA column under denaturing conditions. Purified GC1 and GC2 exhibited the expected molecular weights of 47 and 38 kD, respectively (Fig S4C). Our attempts to purify the GC1+GC2 protein were futile, however. To test the catalytic activity of the purified GC1 and GC2 domains, we executed an in vitro GC assay. No functionality was detected for either GC1 or GC2, whether tested separately or together. Further optimization of the protein purification process yielded no detectable GC activity. The GC assay was also performed with bacterial lysates expressing the specified domains; however, no cGMP production was observed, as judged by high-performance liquid chromatography (Fig S4D).

Figure 2. TgATPase P -GC is a constitutively expressed protein located at the apical end in the plasma membrane of T. gondii. (A) Scheme for the genomic tagging of TgATPase P -GC with a 3′-end HA epitope. The SacI-linearized plasmid for 3′-insertional tagging (p3′IT-HXGPRT-TgATPase P -GC-COS-HA3′IT) was transfected into the parental (RHΔku80-hxgprt−) strain followed by drug selection. Intracellular parasites of the resulting transgenic strain (Pnative-TgATPase P -GC-HA3′IT-TgGra1-3′UTR) were subjected to staining by specific antibodies (24 h post-infection). Arrows indicate the location of the residual body. The host-cell and parasite nuclei were stained with DAPI. Scale bars represent 2 μm. (B) Immunofluorescence staining of extracellular parasites expressing TgATPase P -GC-HA3′IT. The α-HA immunostaining of free parasites was performed before or after membrane permeabilization, using PBS without additives or detergent-supplemented PBS with BSA, respectively. The appearance of the TgGap45 signal (located in the IMC) only after permeabilization confirms the functionality of the assay. TgSag1 is located in the plasma membrane and is thus visible under both conditions. Scale bars represent 2 μm. (C) Immunostaining of extracellular parasites encoding TgATPase P -GC-HA3′IT after drug-induced splitting of the IMC from the plasma membrane. Tachyzoites were incubated with α-toxin (20 nM, 2 h) before immunostaining with α-HA antibody in combination with primary antibodies recognizing the IMC (α-TgGap45) or plasmalemma (α-TgSag1), respectively. Scale bars represent 2 μm. (D, E) Immunoblots of tachyzoites expressing TgATPase P -GC-HA3′IT and of the parental strain (RHΔku80-hxgprt−, negative control). The protein samples prepared from extracellular parasites (10^7) were directly loaded onto the membrane blot, followed by staining with α-HA and α-TgGap45 antibodies. Samples in panel (E) were collected at different time periods during the lytic cycle and stained with α-HA and α-TgGap45 (loading control) antibodies. COS, crossover sequence; S.C., selection cassette.

Furthermore, we examined whether GC1 and GC2 can function as adenylate cyclases using a bacterial complementation assay, as described elsewhere (Karimova et al, 1998) (Fig S4E). GC1, GC2, and GC1+GC2 proteins were expressed in the BTH101 strain of E. coli, which is deficient in adenylate cyclase activity and thus unable to use maltose as a carbon source. The strain produces white colonies on MacConkey agar containing maltose, which would otherwise be red-colored upon induction of cAMP-dependent disaccharide catabolism. We observed that, unlike the positive control (adenylate cyclase from E.
coli), BTH101 strains expressing GC1, GC2, or GC1+GC2 produced only white colonies in each case (Fig S4E), which could be attributed either to inefficient expression or to a lack of adenylate cyclase activity, in accord with the presence of signature residues defining specificity for GTP in the indicated domains (Fig S3). Notwithstanding technical issues with our expression model or enzyme assay, it is plausible that the P4-ATPase domain is required for the functionality of the cyclase domains in TgATPase P -GC, as also suggested by two recent studies (Brown & Sibley, 2018; Bisio et al, 2019). In similar experiments conducted with the Plasmodium GCs, PfGCα and PfGCβ, GC activity could only be confirmed for PfGCβ but not for PfGCα (Carucci et al, 2000), which happens to be the nearest ortholog of TgATPase P -GC (refer to the phylogeny in Fig S1).

TgATPase P -GC is constitutively expressed in the plasma membrane at the apical pole

To gain insight into the endogenous expression and localization of the TgATPase P -GC protein, we performed epitope tagging of the gene in tachyzoites of T. gondii (Fig 2A). The parental strain was transfected with a plasmid construct allowing 3′-insertional tagging of TgATPase P -GC with an HA tag by single homologous crossover. The resulting transgenic strain (Pnative-TgATPase P -GC-HA3′IT-TgGra1-3′UTR) encoded HA-tagged TgATPase P -GC under the control of its native promoter. Notably, the fusion protein localized predominantly at the apex of intracellularly growing parasites, as judged by its co-staining with TgGap45, a marker of the inner membrane complex (IMC) (Gaskins et al, 2004) (Fig 2A, left). The apical location of TgATPase P -GC-HA3′IT was confirmed by its colocalization with the IMC sub-compartment protein 1 (TgISP1) (Beck et al, 2010) (Fig 2A, right).
Moreover, we noted a significant expression of TgATPase P -GC-HA3′IT outside the parasite periphery within the residual body (Fig 2A, marked with arrows), which has also been observed for several other proteins, such as Rhoptry Neck 4 (RON4) (Bradley et al, 2005). To assess the membrane location and the predicted C-terminal topology of the protein, we stained extracellular parasites with α-HA antibody before and after detergent permeabilization of the parasite membranes (Fig 2B). The HA staining and apical localization of TgATPase P -GC were detected only after permeabilization, indicating that the C terminus of TgATPase P -GC faces the parasite interior, as shown in the model (Fig 1A). We then treated extracellular parasites with α-toxin to separate the plasma membrane from the IMC and thereby distinguish the distribution of TgATPase P -GC-HA3′IT between the two entities. By staining tachyzoites with two markers, that is, TgGap45 for the IMC and TgSag1 for the plasma membrane, we could show an association of TgATPase P -GC-HA3′IT with the plasma membrane (Fig 2C). Generation of a transgenic line encoding TgATPase P -GC-HA3′IT also enabled us to evaluate its expression pattern by immunoblot analysis throughout the lytic cycle, which recapitulates the successive events of gliding motility, host-cell invasion, intracellular replication, and egress leading to host-cell lysis. TgATPase P -GC is a bulky protein (477 kD) with several transmembrane regions; hence, it was not possible to resolve it by gel electrophoresis and transfer it onto a nitrocellulose membrane for immunostaining. We nonetheless performed dot blot analysis by loading protein samples directly onto an immunoblot membrane (Fig 2D and E). Unlike the parental strain (negative control), which showed only a faint (background) α-HA staining, we observed a strong signal in the TgATPase P -GC-HA3′IT-expressing strain (Fig 2D).
Samples of the transgenic strain collected at various periods embracing the entire lytic cycle indicated a constitutive and steady expression of TgATPase P -GC-HA3′IT in tachyzoites (Fig 2E).

TgATPase P -GC is essential for parasite survival

Having established the expression profile and location, we next examined the physiological importance of TgATPase P -GC for tachyzoites. Our multiple efforts to knock out the TgATPase P -GC gene by double homologous recombination were unrewarding, suggesting its essentiality during the lytic cycle (lethal phenotype). We, therefore, used the strain expressing TgATPase P -GC-HA3′IT to monitor the effect of genetic disruption immediately after plasmid transfection (Fig 3). To achieve this, we executed a CRISPR/Cas9-directed cleavage in the TgATPase P -GC gene and then immunostained parasites at various periods to determine a time-elapsed loss of the HA signal (Fig 3A). Within a day of transfection, about 4% of vacuoles had lost the apical staining of TgATPase P -GC-HA3′IT (Fig 3B). The number of vacuoles without the HA signal remained constant until the first passage (P1, 24-40 h). However, parasite growth declined gradually during the second passage (P2, 72-88 h) and fully ceased by the third passage (P3, 120-136 h) (Fig 3B). The same assay also allowed us to quantify the replication rates of HA-negative parasites in relation to the HA-positive parasites by counting their numbers in intracellular vacuoles (Fig 3C). As expected, the fraction of small vacuoles comprising just one or two parasites was much higher among the non-expressing (HA-negative) parasites. Inversely, the progenitor strain expressing TgATPase P -GC-HA3′IT showed a predominantly higher percentage of bigger vacuoles with 16-64 parasites. By the third passage, we detected only single-parasite vacuoles in the mutant, demonstrating an essential role of TgATPase P -GC in asexual reproduction.
Genetic repression of TgATPase P -GC blights the lytic cycle

Although the indispensable nature of TgATPase P -GC for tachyzoites could be established, the above strategy did not yield a clonal mutant for in-depth biochemical and phenotypic analyses because of the eventually lethal phenotype. Hence, we engineered another parasite strain expressing TgATPase P -GC-HA3′IT, in which the native 3′UTR of the gene was flanked by two loxP sites (Fig 4A). Cre recombinase-mediated excision of the 3′UTR combined with a negative selection, as reported earlier (Brecht et al, 1999), permitted down-regulation of TgATPase P -GC-HA3′IT. Genomic screening using specific primers confirmed the successful generation of the mutant (Pnative-TgATPase P -GC-HA3′IT-3′UTRexcised), which yielded a 2.2-kb amplicon as opposed to 5.2 kb in the progenitor strain (Pnative-TgATPase P -GC-HA3′IT-3′UTRfloxed) (Fig 4B). Immunoblots of a clonal mutant showed an evident repression of the protein (Fig 4C). Densitometric analysis of TgATPase P -GC-HA3′IT revealed about a 65% reduction in the mutant compared with the progenitor strain. Knockdown was further endorsed by loss of HA staining in the immunofluorescence assay (IFA) (Fig 4D), where about 94% of vacuoles lost their signal and the rest (~6%) displayed only a faint or no HA signal (Fig S5). Next, we evaluated whether repression of TgATPase P -GC translated into a decline in cGMP synthesis by the parasite. Indeed, we measured a 60% reduction in the steady-state levels of cGMP in the mutant (Fig 4E), equating to the decay at the protein level (Fig 4C and D). We then measured the comparative fitness of the mutant, progenitor, and parental strains by plaque assays (Fig 4F). As anticipated, the mutant exhibited about 65% and 35% reductions in plaque area when compared with the parental and progenitor strains, respectively, which correlated rather well with the residual expression of TgATPase P -GC in the immunoblot as well as cGMP assays (Fig 4C-E).
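The strain comparisons above (densitometry, cGMP levels, plaque areas) all reduce to the same relative-reduction calculation against a reference strain. A minimal sketch follows; the raw signal values are hypothetical, and only the formula reflects the analysis described:

```python
# Relative-reduction bookkeeping of the kind used for the densitometry, cGMP,
# and plaque-area comparisons. Raw numbers below are hypothetical examples.
def percent_reduction(reference, sample):
    """Reduction of `sample` relative to `reference`, in percent."""
    return 100.0 * (reference - sample) / reference

# Hypothetical densitometry units for the progenitor vs the knockdown strain,
# chosen to reproduce the ~65% protein reduction reported in the text:
progenitor_signal = 1000.0
mutant_signal = 350.0
print(round(percent_reduction(progenitor_signal, mutant_signal)))  # 65
```

The same function applied to plaque areas (mutant vs parental, mutant vs progenitor) yields the 65% and 35% figures quoted above.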
The progenitor strain also showed ~30% impairment, corresponding to the reduction in its cGMP level compared with the parental strain, which is likely due to the epitope tagging and the introduction of loxP sites between the last gene exon and the 3′UTR (Fig 4A). These data, together with the above results, show that TgATPase P -GC functions as a GC and that its catalytic activity is necessary for the lytic cycle.

TgATPase P -GC regulates multiple events during the lytic cycle

The availability of an effective mutant encouraged us to study the importance of TgATPase P -GC for discrete steps of the lytic cycle, including invasion, cell division, egress, and gliding motility (Fig 5). The replication assay revealed a modestly higher fraction of smaller vacuoles with two parasites in early cultures (24 h) of the mutant compared with the control strains; the effect had subsided at a later stage (40 h), however (Fig 5A, left). The average parasite number in each vacuole was also scored to corroborate these data. Indeed, no significant difference was seen in the numbers of the TgATPase P -GC mutant with respect to the control strains in the late-stage culture, even though a slight delay was observed in the early culture (Fig 5A, right). Two recent studies depleting TgATPase P -GC using different methods (Brown & Sibley, 2018; Yang et al, 2019) also concluded that the protein is not required for parasite replication. We quantified about a 30% decline in the invasion efficiency of the mutant, down from 80% to 53% (Fig 5B). Hence, the minor replication defect at 24 h may be a consequence of poor host-cell invasion by the parasite. The effect of protein repression was more pronounced in the egress assay, where the mutant showed a 70% decline in natural egress when compared with the parental strain and a 40% defect in relation to the progenitor strain (40-48 h post-infection, Fig 5C), as also shown by others (Brown & Sibley, 2018; Bisio et al, 2019; Yang et al, 2019).
Notably, though, the egress defect was not apparent upon prolonged (64 h) culture. Such compensation at a later stage is probably caused by alternative (calcium-dependent protein kinase) signaling cascades (Lourido et al, 2012), a notion also reflected in the study of Yang et al (2019), where the TgATPase P -GC deficiency could be compensated by a Ca2+ ionophore. Because invasion and egress are mediated by gliding motility (Frénal et al, 2017), we tested our mutant for the latter phenotype. We determined that the average motile fraction was reduced by more than half in the mutant, and the trail lengths of moving parasites were remarkably shorter (~18 μm) than those of the control strain (~50 μm) (Fig 5D). Not least, as witnessed in the plaque assays (Fig 4F), we found a steady, albeit not significant, decline in the invasion and egress rates of the progenitor when compared with the parental strain, which further confirms a correlation across all phenotypic assays. The partial phenotype prompted us to pharmacologically inhibit the residual cGMP signaling via PKG in the TgATPase P -GC mutant. We used compound 2 (C2), which has been shown to block mainly TgPKG but also calcium-dependent protein kinase 1 (Donald et al, 2006). As rationalized, C2 treatment subdued the gliding motility of the mutant as well as of the progenitor strain (Fig 5E). The impact of C2 was accentuated in both strains, likely because of the cumulative effect of genetic repression and drug inhibition. Inhibition was stronger in the TgATPase P -GC mutant than in the progenitor strain, which can be attributed to a potentiated inhibition of the residual cGMP signaling in the knockdown strain.

Phosphodiesterase inhibitors can rescue the defective phenotypes of the TgATPase P -GC mutant

To further validate our findings, we deployed two inhibitors of cGMP-specific PDEs, namely zaprinast and BIPPO, which are known to inhibit the parasite enzymes along with human PDE5 and PDE9, respectively (Yuasa et al, 2005; Howard et al, 2015).
We reasoned that drug-mediated elevation of cGMP could mitigate the phenotypic defects caused by the deficiency of GC in the mutant. As shown (Fig S6A), both drugs led to a dramatic increase in the motile fraction and trail lengths of the progenitor and TgATPase P -GC-mutant strains. The latter parasites were as competent as the former after the drug exposure. A similar restoration of the phenotype in the mutant was detected in egress assays; the effect of BIPPO was much more pronounced than that of zaprinast, leading to egress of nearly all parasites (Fig S6B). In contrast to motility and egress, treatment with BIPPO and zaprinast resulted in a surprisingly divergent effect on the invasion rates of the two strains (Fig S6C). BIPPO exerted an opposite effect, i.e., a reduction in the invasion of the progenitor and the mutant. The impairment was stronger in the former strain; hence, we noted a reversal of the phenotype when compared with the control samples. A fairly similar effect was seen with zaprinast, although it was much less potent than BIPPO, as implied previously (Howard et al, 2015). These observations can be attributed to differential elevation of cGMP and possibly cAMP (above a certain threshold) caused by the PDE inhibitors, which inhibits host-cell invasion but promotes parasite motility and egress.

Genetic knockdown of TgPKG phenocopies the attenuation of TgATPase P -GC

To consolidate the aforesaid work on TgATPase P -GC, we applied the same genomic tagging, knockdown, and phenotyping approaches to TgPKG (Figs S7 and 6). Briefly, we generated a parasite strain expressing TgPKG with a C-terminal HA tag under the control of its endogenous regulatory elements.

Figure 5. The replication rates were analyzed 24 and 40 h post-infection by scoring the parasite numbers in a total of 500-600 vacuoles after staining with α-TgGap45 antibody (panel A, left) (n = 4 assays). The average parasite number per vacuole is also depicted (panel A, right). Invasion and egress rates were calculated by dual staining with α-TgGap45 and α-TgSag1 antibodies. In total, 1,000 parasites of each strain from four assays were examined to estimate the invasion efficiency. The natural egress of tachyzoites was measured after 40, 48, and 64 h by scoring 500-600 vacuoles of each strain (n = 3 assays). To estimate the gliding motility, fluorescent images stained with α-TgSag1 antibody were analyzed for the motile fraction (500 parasites of each strain), and 100-120 trail lengths per strain were measured (n = 3 assays). (E) Effect of the PKG inhibitor compound 2 (2 μM) on the motility of the TgATPase P -GC mutant and its progenitor strain (500 parasites of each strain, n = 3 assays). A total of 100 trails in the progenitor and 15 trails of the mutant (due to the severe defect) were measured. *P ≤ 0.05; **P ≤ 0.01; ***P ≤ 0.001; and ****P ≤ 0.0001.

cGMP signaling in Toxoplasma Günay-Esiyok et al.

Cre recombinase-mediated down-regulation of the TgPKG protein led to an analogous inhibition of parasite growth in plaque assays (Fig 6C). Yet again, comparable with the TgATPase P -GC mutant (Fig 5A), cell division of the TgPKG mutant was only moderately affected, as estimated by a smaller fraction of bigger vacuoles containing 32 or 64 parasites in early (24 h) and late (40 h) cultures (Fig 6D). We scored a noteworthy invasion defect in the TgPKG mutant (Fig 6E). In accord, the mutant exhibited defective egress at all tested time points (Fig 6F). The motile fraction dropped by almost 50% in the mutant, and the trail lengths were accordingly shorter (~24 μm) compared with the control strains (~55 μm) (Fig 6G). Not least, treatment with C2 further reduced the motile fraction and trail lengths of the mutant and its progenitor (Fig S8). The impact of C2 was somewhat stronger in the mutant, but neither of the two strains exhibited complete inhibition, again resonating with the TgATPase P -GC mutant (Fig 5E).
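The replication scoring described above (parasite numbers per vacuole across 500-600 vacuoles) boils down to a weighted mean over a vacuole-size distribution, where vacuoles hold 2^n parasites after n division cycles. A sketch with hypothetical counts (the scoring scheme follows the text; the numbers do not):

```python
# Mean parasites per vacuole from a scored size distribution.
# Counts are hypothetical; vacuole sizes follow the 2^n scheme (1, 2, 4, ... 64)
# used in the replication assays described in the text.
counts = {1: 10, 2: 40, 4: 120, 8: 180, 16: 110, 32: 30, 64: 10}  # size -> vacuoles

total_vacuoles = sum(counts.values())
mean_parasites = sum(size * n for size, n in counts.items()) / total_vacuoles
print(total_vacuoles, round(mean_parasites, 2))  # 500 10.74
```

Comparing such means between mutant and control strains at 24 and 40 h is what underlies the "average parasite number per vacuole" panels.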
Collectively, our results clearly show that individual repression of TgATPase P -GC and TgPKG using a common approach imposes nearly identical phenotypic defects on the lytic cycle.

Discussion

This study characterized an alveolate-specific protein, termed TgATPase P -GC herein, which represents a central piece of the cGMP signaling puzzle in T. gondii. Our research, in conjunction with three recently published independent studies (Brown & Sibley, 2018; Bisio et al, 2019; Yang et al, 2019), provides a comprehensive functional and structural insight into the initiation of cGMP signaling in T. gondii. The parasite encodes an unusual and multifunctional protein (TgATPase P -GC) primarily localized in the plasma membrane at the apical pole of the tachyzoite stage. Previous reports had concluded a surface localization of TgATPase P -GC by inference, which we reveal to be the plasma membrane as opposed to the IMC (Fig 2C). Interestingly, a confined localization of GCβ at a unique spot of the ookinete membrane was recently reported to be critical for the protein function in P. yoelii (Gao et al, 2018). Similar work in T. gondii (Brown & Sibley, 2018) demonstrated that deletion or mutation of the ATPase domain in TgATPase P -GC mislocalized the protein to the ER and cytosol, whereas deletion or mutation of the GC domains did not affect the apical localization. An impaired secretion of micronemes was observed in both cases, suggesting the importance of the ATPase domain for localization and function. Earlier, the same group reported that the long isoform of TgPKG (TgPKG I ), associated with the plasma membrane, is essential and sufficient for PKG-dependent events, whereas the shorter cytosolic isoform (TgPKG II ) is inadequate and dispensable (Brown et al, 2017). Our results illustrate that the C terminus of TgATPase P -GC faces the inside of the plasmalemma bilayer (Fig 2B), where it should be in spatial proximity to TgPKG I to allow efficient induction of cGMP signaling.
Moreover, we show that TgATPase P -GC is expressed throughout the lytic cycle of tachyzoites but is needed only for their entry into, or exit from, host cells, which implies a post-translational activation of cGMP signaling. This work, along with others (Brown & Sibley, 2018; Bisio et al, 2019; Yang et al, 2019), reveals that TgATPase P -GC is essential for a successful lytic cycle. Its knockdown by 3′UTR excision using the Cre/loxP method demonstrated a physiological role of cGMP in invasion and egress. In further work, we uncovered that TgPKG depletion phenocopies the TgATPase P -GC knockdown mutant, resonating with previous work (Brown et al, 2017). In addition, our in silico analysis offers valuable insights into the catalytic functioning of the GC domains in TgATPase P -GC. GC1 and GC2 dimerize to form only one pseudo-symmetric catalytic center, in contrast to the homodimer formation in mammalian pGCs (Linder & Schultz, 2003; Linder, 2005; Steegborn, 2014). GC1 and GC2 have probably evolved by gene duplication, causing degeneration of the unused second regulatory binding site, as reported in tmACs (Tesmer et al, 1997; Linder, 2005; Steegborn, 2014). The function of TgATPase P -GC as a guanylate cyclase aligns rather well with its predicted substrate specificity for GTP, although its contribution to cAMP synthesis (if any) remains to be tested. Our attempts to functionally complement an adenylate cyclase mutant of E. coli with the GC domains or to obtain catalytically active recombinant proteins from E. coli were not fruitful; nonetheless, we could demonstrate that repression of TgATPase P -GC leads to a comparable reduction in the cGMP level, indicating its function as a guanylate cyclase. Our predicted topology of TgATPase P -GC harboring 22 transmembrane helices differs from the work of Yang et al (2019), which suggests the occurrence of 19 helices, but agrees with the two other reports (Brown & Sibley, 2018; Bisio et al, 2019).
Unlike the C-terminal GC of TgATPase P -GC, the function of the N-terminal ATPase domain remains rather enigmatic. The latter resembles P4-ATPases, similar to other alveolate GCs (Linder et al, 1999; Carucci et al, 2000; Kenthirapalan et al, 2016; Baker et al, 2017). Lipid and cation homeostasis have been shown to influence the gliding motility and associated protein secretion, which in turn drive the egress and invasion events (Endo & Yagita, 1990; Rohloff et al, 2011; Brochet et al, 2014; Bullen et al, 2016; Frénal et al, 2017). There is little evidence, however, on how the lipid- and cation-dependent pathways intersect. It is thus tempting to propose a nodal role of TgATPase P -GC in the asymmetric distribution of phospholipids between the membrane leaflets and in cation flux (e.g., Ca 2+ , K + , and Na + ) across the plasma membrane. The recent studies (Brown & Sibley, 2018; Bisio et al, 2019) also suggest a regulatory role of the P4-ATPase domain in the functioning of the GC domain. Conversely, the GC domain of GCβ was found sufficient to produce cGMP independently of the ATPase domain in P. yoelii (Gao et al, 2018). This may be due to degenerated conserved sequences in the ATPase domain of PfGCβ, as shown in the sequence alignment (Fig S2). Similarly, expression of PfGCα and PfGCβ yielded functional protein only for PfGCβ, but not for PfGCα (Carucci et al, 2000). Indeed, TgATPase P -GC is more homologous to PfGCα (identity, 43%; E value, 3E −140 ), which may explain the differences between the PfGCβ and TgATPase P -GC mutants. Two additional components, CDC50.1 and UGO, were suggested to secure the functionality of TgATPase P -GC by interacting with the ATPase and GC domains, respectively (Bisio et al, 2019). It was already known that most of the mammalian P4-ATPases require CDC50 proteins as accessory subunits, which are transmembrane glycoproteins ensuring the formation of an active P4-ATPase complex (Coleman & Molday, 2011; Andersen et al, 2016).
A similar interaction between GCβ and the CDC50A protein was shown in P. yoelii (Gao et al, 2018). The CDC50.1 expressed in T. gondii was also found to bind the P4-ATPase domain of TgATPase P -GC to facilitate the recognition of phosphatidic acid and thereby regulate the activation of the GC domain. The second interacting partner, UGO, on the other hand was proposed to be essential for the activation of the GC domain after phosphatidic acid binding (Bisio et al, 2019). The study of Yang et al (2019) showed that the depletion of TgATPase P -GC impairs the production of phosphatidic acid, which is consistent with our postulated lipid-flipping function of the P4-ATPase domain. However, a systematic experimental analysis is still required to understand the function of the P4-ATPase and its intramolecular coordination with the GC domains in TgATPase P -GC. The topology, subcellular localization, modeled structure, and depicted multifunctionality of TgATPase P -GC strikingly differ from those of the particulate GCs of mammals. Other distinguishing features of mammalian pGCs, such as the extracellular ligand-binding and regulatory kinase-homology domains, are also absent in TgATPase P -GC, adding to the evolutionary specialization of cGMP signaling in T. gondii. Likewise, other PKG-independent effectors of cGMP, that is, cyclic nucleotide-gated ion channels as reported in mammalian cells (MacFarland, 1995; Lucas et al, 2000; Pilz & Casteel, 2003), could not be identified in the genome of T. gondii, suggesting a rather linear transduction of cGMP signaling through PKG. Notably, the topology of TgATPase P -GC is shared by members of another alveolate phylum, Ciliophora (e.g., Paramecium and Tetrahymena) (Linder et al, 1999), which exhibit an entirely different lifestyle. Moreover, a similar protein with two GC domains but lacking the ATPase-like region is present in Dictyostelium (a member of the amoebozoa) (Roelofs et al, 2001).
Such a conservation of cGMP signaling architecture in several alveolates with otherwise diverse lifestyles signifies a convoluted functional repurposing of signaling within the protozoan kingdom. Not least, the divergent origin and essential requirement of the cGMP cascade can be exploited to selectively inhibit the asexual reproduction of parasitic protists.

Materials and Methods

Expression of GC1 and GC2 of TgATPase P -GC in E. coli

Heterologous expression of the GC1 and GC2 domains was performed in the M15 and BTH101 strains of E. coli, for protein purification and functional complementation, respectively (see results below). The open reading frames of the GC1 (2,850-3,244 bp), GC2 (3,934-4,242 bp), and GC1+GC2 (2,850-4,242 bp) domains, starting with the upstream start codon (ATG), were amplified from tachyzoite mRNA (RHΔku80-hxgprt − ). The first-strand cDNA used for ORF-specific PCR was generated from total RNA by oligo-T primers using a commercial kit (Life Technologies). The ORFs were cloned into the pQE60 vector at the BglII restriction site, resulting in a C-terminal 6xHis-tag (primers in Table S1). To express and purify the indicated proteins, 5-ml cultures of the recombinant M15 strains (grown overnight at 37°C) were diluted to an OD 600 of 0.1 in 100 ml of Luria-Bertani (LB) medium containing 100 μg/ml ampicillin and 50 μg/ml kanamycin and incubated at 37°C until the OD 600 reached 0.4-0.6 (4-5 h). The cultures were then induced with 0.1 mM IPTG overnight at 25°C. The bacterial cell lysates were prepared under denaturing conditions, and proteins were purified on a Ni-NTA column according to the manufacturer's protocol (Novex by Life Technologies). Briefly, the cells were harvested (3,000g, 20 min, 4°C) and resuspended in 8 ml lysis buffer (6 M guanidine HCl, 20 mM NaH 2 PO 4 , and 500 mM NaCl, pH 7.8) by shaking for 10 min.
They were disrupted by probe sonication (5 pulses, 30 s each, with intermittent cooling in ice-cold water) and flash-frozen in liquid nitrogen followed by thawing at 37°C (3×). Intact cells were removed by pelleting at 3,000g for 15 min. The lysate-containing supernatant (cell-free extract) was loaded on a Ni-NTA column pretreated with binding buffer (8 M urea, 20 mM NaH 2 PO 4 , and 500 mM NaCl, pH 7.8), followed by two washes with 4 ml washing buffer (8 M urea, 20 mM NaH 2 PO 4 , and 500 mM NaCl) at pH 6.0 and pH 5.3, respectively. Proteins were eluted in 5 ml of elution buffer (20 mM NaH 2 PO 4 , 100 mM NaCl, and 10% glycerol, pH 7.8). The eluate was concentrated and dialyzed in centrifugal filters (30-kD cutoff, Amicon Ultra filters; Merck Millipore). Refolding of the purified proteins led to precipitation. The amount of urea was therefore gradually reduced to 0.32 M by adding 4× volume of a buffer containing 20 mM NaH 2 PO 4 , 100 mM NaCl, and 10% glycerol in successive centrifugation steps. The final protein preparation was flash-frozen in liquid nitrogen and stored at −80°C. The function of GC1, GC2, and GC1+GC2 as potential adenylate cyclase domains was tested in the E. coli BTH101 strain, which lacks cAMP signaling because of an enzymatic deficiency and is thus unable to use maltose as a carbon source (Karimova et al, 1998). The pQE60 constructs encoding the indicated ORF sequences were transformed into the BTH101 strain. The bacterial cultures were grown overnight in 5 ml of LB medium containing 100 μg/ml ampicillin and 100 μg/ml streptomycin at 37°C. Protein expression was induced by 200 μM IPTG (2 h, 30°C), followed by serial dilution plating on MacConkey agar (pH 7.5) supplemented with 1% maltose, 200 μM IPTG, and 100 μg/ml of each antibiotic. The strain harboring the pQE60 vector served as a negative control, whereas a plasmid expressing CyaA (the native bacterial adenylate cyclase) was included as a positive control.
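The stepwise urea removal described above is simple serial-dilution arithmetic: each step adds 4× volume of urea-free buffer, cutting the concentration 5-fold, so two steps take 8 M down to 0.32 M. A minimal sketch (illustrative only, not the authors' code):

```python
def urea_dilution_series(start_m, target_m, added_volumes=4.0):
    """Model the stepwise urea removal: each round adds `added_volumes`
    volumes of urea-free buffer, so the concentration falls by a factor
    of (1 + added_volumes) per centrifugation step."""
    factor = 1.0 + added_volumes
    conc, series = start_m, []
    # small tolerance so floating-point noise does not add a spurious step
    while conc > target_m * (1.0 + 1e-9):
        conc /= factor
        series.append(round(conc, 4))
    return series

# Protocol values: start at 8 M urea, stop at 0.32 M (two 5-fold steps).
steps = urea_dilution_series(8.0, 0.32)  # [1.6, 0.32]
```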
Agar plates were incubated at 30°C (~32 h) to examine the appearance of colonies.

GC assay

GC1 and GC2 (3 μg) purified from the M15 strain of E. coli were examined by GC assay. The enzymatic reaction (100 μl) was executed in 50 mM Hepes buffer (pH 7.5) containing 100 mM NaCl, 2 mM MnCl 2 , and 2 mM GTP (22°C, 650 rpm, 10 min). The assay was quenched by adding 200 μl of 0.1 M HCl, followed by high-performance liquid chromatography (HPLC) or cGMP-specific ELISA. GTP and cGMP (2 mM each) were used as standards for HPLC.

Successful epitope-tagging of the genes was verified by recombination-specific PCR and sequencing of the resulting amplicons. The stable drug-resistant transgenic parasites were subjected to limiting dilution in 96-well plates with confluent HFF cells to obtain clonal lines for downstream analyses. The eventual strains expressed TgATPase P -GC-HA 3'IT or TgPKG-HA 3'IT under the control of their native promoter and the TgGra1-3'UTR. Using a similar strategy, we generated additional transgenic strains, in which the TgGra1-3'UTR was replaced by the native 3'UTR of the TgATPase P -GC and TgPKG genes. Here, nearly 1 kb of 3'UTR beginning from the translation stop codon of the TgATPase P -GC and TgPKG genes was amplified from gDNA of the RHΔku80-hxgprt − strain and then cloned into the p3'IT-HXGPRT plasmid (harboring the 3'COS of the individual genes) at the EcoRI/SpeI sites, substituting for the TgGra1-3'UTR (primers in Table S1). The constructs were linearized and transfected into parasites, followed by drug selection, crossover-specific PCR screening, and limiting dilution to obtain the clonal transgenic strains, as described above.

CRISPR/Cas9-assisted genetic disruption of TgATPase P -GC

The mutagenesis of TgATPase P -GC was achieved using the CRISPR/Cas9 system, as reported previously for other genes (Sidik et al, 2014). To express the gene-specific sgRNA and Cas9, we used the pU6-sgRNA-Cas9 vector.
The oligonucleotide pair, designed to target the nucleotide region from 145 to 164, was cloned into the vector by the golden gate assembly method using the TgATPase P -GC-sgRNA-F1/R1 primers (Table S1). The assembly was initiated by mixing the pU6-sgRNA-Cas9 vector (45 ng), BsaI-HF enzyme (5 U; New England Biolabs), oligonucleotides (0.5 μM), and T4 ligase (5 U; ThermoFisher Scientific) in a total volume of 20 μl. The conditions were set as 37°C (2 min) for BsaI digestion and 20°C (5 min) for ligation, repeated for 30 cycles, followed by incubation at 37°C (10 min) before T4 ligase inactivation at 50°C (10 min) and BsaI inactivation at 80°C (10 min). The product was directly transformed into the XL1B strain of E. coli. Positive clones were verified by DNA sequencing, followed by transfection of 15 μg of the construct into the P native -TgATPase P -GC-HA 3'IT -TgGra1-3'UTR strain (stated as the progenitor strain in Fig 3A) to disrupt the gene. A Cas9-mediated cleavage at the TgATPase P -GC locus caused a loss of HA signal in transfected parasites, which was monitored by IFAs at various time points of culture (Fig 3A).

Cre-mediated knockdown of TgATPase P -GC and TgPKG by excision of 3'UTR

A knockdown of the genes of interest was performed by Cre recombinase-mediated excision of the loxP-flanked (floxed) 3'UTR in the HA-tagged strains. The progenitor strains (P native -TgATPase P -GC-HA 3'IT -3'UTR floxed or P native -TgPKG-HA 3'IT -3'UTR floxed ) were transfected with a plasmid (pSag1-Cre) expressing Cre recombinase, which recognizes the loxP sites and excises the floxed 3'UTR and HXGPRT selection cassette (Figs 4A and S7A). Tachyzoites transfected with the Cre-encoding vector were then negatively selected for the loss of HXGPRT expression using 6-thioxanthine (80 μg/ml) (Donald et al, 1996). The single clones with Cre-excised 3'UTR were screened by PCR using the indicated primers (Table S1) and validated by sequencing.
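As a quick sanity check on scheduling, the golden gate cycling programme described above can be tabulated to total its wall-clock time; the step names and data layout below are our own illustration, with only the durations taken from the protocol text:

```python
# Hypothetical tabulation of the golden gate cycling programme
# (step name, minutes per cycle, number of cycles).
PROGRAMME = [
    ("BsaI digestion, 37 C", 2, 30),
    ("T4 ligation, 20 C", 5, 30),
    ("final incubation, 37 C", 10, 1),
    ("T4 ligase inactivation, 50 C", 10, 1),
    ("BsaI inactivation, 80 C", 10, 1),
]

def total_minutes(programme):
    """Sum the run time of a thermocycler programme."""
    return sum(minutes * cycles for _name, minutes, cycles in programme)

runtime_min = total_minutes(PROGRAMME)  # 30*(2 + 5) + 10 + 10 + 10 = 240
```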
The expression level of the target proteins in the mutants was confirmed by immunofluorescence and immunoblot analysis, as described below. For each phenotyping assay, the mutant parasites were generated fresh by transfecting the Cre expression plasmid into the progenitor strain, followed by drug selection and isolation of clones by PCR screening. The mutants were not propagated beyond 2-3 wk to minimize any adaptation in culture.

Measurement of cGMP in tachyzoites

Confluent HFF monolayers were infected either with the parental (RHΔku80-hxgprt − , MOI: 1.3), progenitor (P native -TgATPase P -GC-HA 3'IT -3'UTR floxed , MOI: 1.5), or knockdown mutant (P native -TgATPase P -GC-HA 3'IT -3'UTR excised , MOI: 2) strain for 36-40 h. Infected cells containing mature parasite vacuoles were then washed twice with ice-cold PBS to eliminate free parasites, scraped into 2 ml colorless DMEM, and extruded through a 27G syringe (2×), followed by centrifugation (420g, 10 min, 4°C). The parasite pellets were resuspended in 100 μl of cold colorless DMEM for counting and cGMP extraction. The parasite suspension (5 × 10 6 parasites, 100 μl) was mixed with 200 μl of ice-cold 0.1 M HCl, incubated for 20 min at room temperature, and flash-frozen in liquid nitrogen until used. The samples were thawed and passed repeatedly through a pipette tip to disrupt the parasite membranes. Colorless DMEM and HFF cells, treated similarly, were used as negative controls. The samples were transferred onto centrifugal filters (0.22-μm, Corning Costar Spin-X, CLS8169; Sigma-Aldrich) to eliminate the membrane particulates (20,800g, 10 min, 4°C). The flow-through was filtered once again via 10-kD filter units (Amicon Ultra-0.5 ml filters; Millipore) to obtain pure samples (20,800g, 30 min, 4°C), which were then subjected to ELISA using the commercial Direct cGMP ELISA kit (ADI-900-014; Enzo Life Sciences) to measure the cGMP levels of parasites.
The acetylated (2-h) format of the assay was run for all samples, including the standards and controls, as described by the manufacturer. The absorbance was measured at 405 nm; the data were adjusted for the dilution factor (1:3) and analyzed using a microplate analysis tool (www.myassays.com).

To resolve the C-terminal topology of TgATPase P -GC, fresh extracellular parasites were stained with rabbit α-HA (1:3,000) and mouse α-TgSag1 (1:10,000) antibodies, or with rabbit α-TgGap45 (1:8,000) and mouse α-TgSag1 (1:10,000) antibodies, before and after permeabilization, as described elsewhere (Blume et al, 2009). 5 × 10 4 parasites were fixed on BSA-coated coverslips using 4% paraformaldehyde with 0.05% glutaraldehyde. Permeabilized cells were subjected to immunostaining as indicated above, except that all solutions were replaced with PBS for non-permeabilized staining (i.e., no detergent or BSA). To test the membrane location of TgATPase P -GC, the IMC was separated from the plasma membrane by treating extracellular parasites with α-toxin from Clostridium septicum (20 nM, 2 h) (List Biological Laboratories), followed by fixation on BSA-coated coverslips and antibody staining. In both cases, the standard immunostaining procedure was performed subsequently.

Lytic cycle assays

All assays were set up with fresh syringe-released parasites, essentially as reported earlier (Arroyo-Olarte et al, 2015). Parasitized cultures (MOI: 2; 40-44 h post-infection) were washed with standard culture medium, scraped, and extruded through a 27G syringe (2×). For plaque assays, HFF monolayers grown in six-well plates were infected with tachyzoites (150 parasites per well) and incubated for 7 d without perturbation. The cultures were fixed with ice-cold methanol (−80°C, 10 min) and stained with crystal violet solution (12.5 g dye in 125 ml ethanol mixed with 500 ml 1% ammonium oxalate) for 15 min, followed by washing with PBS.
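The dilution-factor correction noted above (100 μl suspension + 200 μl HCl, i.e., 1:3) amounts to the back-calculation sketched below; the function name and the example readout are hypothetical:

```python
def cgmp_per_1e6_parasites(measured_pmol_per_ml, dilution_factor=3.0,
                           suspension_ml=0.1, n_parasites=5e6):
    """Back-calculate cGMP per 1e6 parasites from the ELISA readout of the
    acid extract: 100 ul suspension + 200 ul HCl gives the 1:3 dilution."""
    conc_in_suspension = measured_pmol_per_ml * dilution_factor
    total_pmol = conc_in_suspension * suspension_ml
    return total_pmol / (n_parasites / 1e6)

# e.g. a hypothetical readout of 10 pmol/ml in the diluted extract:
value = cgmp_per_1e6_parasites(10.0)  # 10 * 3 * 0.1 ml / 5 = 0.6 pmol
```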
The plaque sizes were measured using the ImageJ software (NIH). To set up the replication assays, host cells grown on coverslips placed in 24-well plates were infected with 3 × 10 4 parasites before fixation, permeabilization, neutralization, blocking, and immunostaining with α-TgGap45 and Alexa594 antibodies, as explained for the IFA. Cell division was assessed by enumerating intracellular parasites within their vacuoles. To measure gliding motility, 4 × 10 5 parasites suspended in calcium-free HBSS with or without drugs (BIPPO, 55 μM; zaprinast, 500 μM; and compound 2, 2 μM) were incubated first to let them settle (15 min, room temperature) and then glide (15 min, 37°C) on BSA-coated (0.01%) coverslips. The samples were subjected to IFA using α-TgSag1 and Alexa488 antibodies, as mentioned above. Motile fractions were counted under the microscope, and trail lengths were quantified using the ImageJ software.

Immunoblot analysis

A standard Western blot was performed to determine the expression level of TgPKG-HA 3'IT , whereas dot blot analysis was undertaken for TgATPase P -GC-HA 3'IT because of its large size (477 kD). For the former assay, protein samples prepared from extracellular parasites (2 × 10 7 ) were separated by 8% SDS-PAGE (120 V), followed by semidry blotting onto a nitrocellulose membrane (85 mA/cm 2 , 3 h). The membrane was blocked with 5% skimmed milk solution prepared in 0.2% Tween 20/TBS (1 h with shaking at room temperature) and then stained with rabbit α-HA (1:1,000) and mouse α-TgRop2 (1:1,000) antibodies. For the dot blot, protein samples equivalent to 10 7 parasites were spotted directly onto a nitrocellulose membrane. The membrane was blocked in a solution containing 1% BSA and 0.05% Tween 20 in TBS for 1 h, followed by immunostaining with rabbit α-HA (1:1,000) and/or rabbit α-TgGap45 (1:3,000) antibodies diluted in the same buffer.
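The replication readout above (intracellular parasites enumerated per vacuole) is typically summarised as a distribution of vacuole sizes; a small illustrative tally (made-up counts, not data from this study):

```python
from collections import Counter

def vacuole_size_fractions(parasites_per_vacuole):
    """Fraction of vacuoles holding 1, 2, 4, ... parasites -- the usual
    summary when enumerating intracellular parasites per vacuole."""
    counts = Counter(parasites_per_vacuole)
    total = len(parasites_per_vacuole)
    return {size: counts[size] / total for size in sorted(counts)}

# Illustrative scores for 100 vacuoles (hypothetical numbers):
dist = vacuole_size_fractions([2] * 10 + [4] * 30 + [8] * 40 + [16] * 20)
```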
Proteins were visualized by Li-COR imaging after staining with IRDye 680RD and IRDye 800CW (1:15,000) antibodies. Densitometric analysis was performed using the ImageJ software, as reported elsewhere (https://imagej.nih.gov/ij/docs/examples/dot-blot).

Structure modeling

The membrane topology of TgATPase P -GC was assessed based on the data obtained from TMHMM (Sonnhammer et al, 1998), SMART (Letunic et al, 2014), Phobius (Käll et al, 2004), the NCBI conserved domain search (Marchler-Bauer & Bryant, 2004), and TMpred (Hofmann & Stoffel, 1993) (Fig 1A). To detect conserved residues of the active sites in TgATPase P -GC, the GC1 and GC2 regions were aligned with the cyclase domains of representative organisms using the Clustal Omega program (Sievers et al, 2011). Similarly, conserved motifs in the ATPase domain of TgATPase P -GC were obtained by alignment with members of the human P4-ATPases using the MAFFT online alignment server (v7) (Katoh & Standley, 2013). Conserved residues were color-coded by the Clustal Omega program. The conserved motifs and cyclase domains for the tertiary model were predicted using UniProt (https://www.uniprot.org/). The catalytic units of GC1 (aa 2,929-3,200, lacking the loop from aa 3,038 to 3,103) and GC2 (aa 3,195) were modeled by SWISS-MODEL (https://swissmodel.expasy.org/), based on a ligand-free tmAC as the structural template (Protein Data Bank ID: 1AZS). The Qualitative Model Energy Analysis and Global Model Quality Estimation scores were determined to be −3.44 and 0.59, respectively, reflecting the accuracy of the model. Subsequently, the ligand GTPαS was positioned into the model of the pseudo-heterodimer corresponding to the location of ATPαS in tmAC (Protein Data Bank ID: 1CJK).

Phylogenetic analysis

The open reading frame sequences of TgATPase P -GC orthologs were obtained from the NCBI database.
Briefly, the whole sequences of 30 proteins were aligned, and based on this alignment, a consensus tree was generated using the CLC Genomics Workbench v12.0 (QIAGEN Bioinformatics). The maximum likelihood method was used for clustering; bootstrap analysis was performed with 100 iterations; neighborhood joining was used for construction; and the JTT model was selected for amino acid substitutions. The eventual tree was visualized as a cladogram with Figtree v1.4.3 (http://tree.bio.ed.ac.uk/software/figtree/), followed by text annotation in Microsoft PowerPoint.

Data analyses and statistics

All experiments were performed at least three independent times, unless specified otherwise. Figures illustrating images or the construction of transgenic strains typically show a representative of three or more biological replicates. Graphs and statistical significance were generated using GraphPad Prism v6.0. The error bars in graphs signify means with SEM from multiple assays, as indicated in the figure legends. The P-values were calculated by t test (*P ≤ 0.05; **P ≤ 0.01; ***P ≤ 0.001; and ****P ≤ 0.0001).
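The significance thresholds listed above map onto the star notation used throughout the figures; a trivial helper (our own sketch, not the authors' code) makes the convention explicit:

```python
def significance_stars(p):
    """Map a P-value to the star notation used in the figure legends
    (*P <= 0.05; **P <= 0.01; ***P <= 0.001; ****P <= 0.0001)."""
    for stars, cutoff in (("****", 1e-4), ("***", 1e-3), ("**", 1e-2), ("*", 0.05)):
        if p <= cutoff:
            return stars
    return "ns"  # not significant
```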
Paternal transmission of migration knowledge in a long-distance bird migrant

While advances in biologging have revealed many spectacular animal migrations, it remains poorly understood how young animals learn to migrate. Even in social species, it is unclear how migratory skills are transmitted from one generation to another and what implications this may have. Here we show that in Caspian tern Hydroprogne caspia family groups, genetic and foster male parents carry the main responsibility for migrating with young. During migration, young birds stayed close to an adult at all times, with the bond dissipating on the wintering grounds. Solo-migrating adults migrated faster than did adults accompanying young. Four young that lost contact with their parent at an early stage of migration all died. During their first solo migration, subadult terns remained faithful to the routes they had taken with their parents as young. Our results provide evidence for cultural inheritance of migration knowledge in a long-distance bird migrant and show that sex-biased (allo)parental care en route shapes migration through social learning.

Animals often migrate in social groups, but little is known about the social learning of migration behaviours. Here, Byholm et al. analyse high-resolution tracking data from Caspian terns and reveal that juveniles' survival and learning of migration routes depend critically on following a parent.

The seasonal migrations of many species of mammals, fish, reptiles, and birds span vast geographical scales across the globe [1][2][3][4] . While migrating, animals often move in groups ranging in size from just a few individuals to tens of thousands 5 . Joining a group of conspecifics may provide multiple benefits, including improved navigation capacity, foraging opportunities, and predator detection 6,7 . In particular, the migrations performed by gregarious birds are a well-known phenomenon in nature which has long fascinated human observers 2 .
While in some bird species individuals may end up in flocks by coincidence, being funnelled together by weather and geography 8 , others travel in highly structured social groups, often consisting of family units within larger flocks 2,9 . In such species, social learning is an important way in which naïve young acquire migratory skills 10,11 , and conservation projects have successfully used learning behaviour to teach migration routes to inexperienced migrants by leading them with ultralight aircraft 11 . In social species, learning from experienced individuals is thus fundamental to the development of migratory performance, and often overrides innate, genetically inherited preferences for migratory tendency and orientation 10,11 . Due to the practical challenge of following specific individuals with field-based observation alone, there is limited understanding of how animals migrating together actually interact. Therefore, it remains largely unknown to what extent migratory skills are transmitted obliquely (from any experienced individual in the population to any naïve individual) and/or vertically (from parents to offspring) from one generation to the next 12 . In contrast to our understanding of migration in some mammals 4,13 , it is also not known whether male and female bird parents have distinct roles in this context, or what the cultural inheritance of migration entails, that is, whether social learning and/or teaching 14,15 affects whether young of long-distance bird migrants repeat, in consecutive years, the migrations they experienced when migrating with informed adults as young. To study how migratory knowledge is transferred between generations in a species which typically migrates solo or in family groups 16 , we used GPS devices to track 31 Caspian terns Hydroprogne caspia between the breeding grounds in northern Europe and the wintering grounds in Africa during 2016-2020.
We tracked adults and young from the same family groups to explore the importance of within-family interactions during the migration of these birds, and simultaneously gain insight into how extended parental care affects the survival of young birds during migration. Finally, to test whether this system fits the definition of cultural inheritance of migratory knowledge, i.e., that early-life experiences influence decisions later in life, we quantified the degree to which subadults making their first solo migrations used the same routes and stopover sites they had visited when accompanied by a parent.

Results

Interactions between family members on migration. Terns initiated autumn migration between July 15 and August 24 (Table 1, Supplementary Table 1). All young that survived to initiate migration migrated together with an adult bird upon leaving the breeding grounds. A total of 9 young terns migrated with their male parent (in 7 cases the male parent migrated with a single young, and in one case a male parent migrated with two young), one young migrated with a male foster parent, and one young migrated with a female parent. The observed male:female ratio among parents migrating with young (10:1) was strongly male-biased (χ 2 = 8.33, df = 1, p = 0.004). One young tern that was abandoned on the breeding islet (Young 2, Family group 3) and another (Young 2, Family group 5) that lost contact with its parent within 24 h of leaving the breeding area were both predated by white-tailed eagles Haliaeetus albicilla within days. Similarly, soon after initiating migration, two young (Young 1, Family group 2; Young 1, Family group 4) were predated by a northern goshawk Accipiter gentilis and a white-tailed eagle, respectively, while temporarily left unattended by their parents en route (Supplementary Table 1). Breeding partners never migrated together (Fig. 1, Supplementary Fig. 1) and initiated migration on average 14.7 ± 9.9 (mean ± s.d.)
days apart, with no significant difference between the sexes in terms of which sex initiated migration first (pairwise t-test, t = 0.89, df = 8, p = 0.40). Consequently, parent-young pairs and solo migrants from the same family groups did not differ in the onset of migration (Table 1). During migration, all surviving young migrated together with the parent until reaching the wintering grounds (Fig. 1, Supplementary Fig. 1, Supplementary Movie 1). Consequently, travel speed did not differ between adults and young of the same parent-young pair (pairwise t-test, t = −0.39, df = 4, p = 0.72), and at the population level, adults and young did not differ in their travel speeds (Table 1). However, solo-migrating parents migrated at a significantly higher travelling speed than did parent-young pairs (Table 1). Because young terns spent more time roosting (and less time foraging) than adults at stopover sites (GLMM, b = 0.05 ± 0.02, t = 2.70, p = 0.01), adults and young migrating together spent significantly less time foraging while present at stopover sites than solo adults (Table 1).

Breakup of the bond between parents and young. Adults and young migrating together remained close to one another (pairwise distance ΔD: 0.6 ± 2.0 km, mean ± s.d.; Supplementary Movie 1) until the migration event was disrupted due to mortality or tag failure, or the pair arrived successfully on the wintering grounds. Among the successful pairs, information on the breakup of the parental bond was available for 4 pairs from three different family groups. After arriving on the wintering grounds, the adult and young gradually started to spend more time apart (ΔD: 1.1 ± 3.7 km), and after spending 68.8 ± 17.5 days together at the destination, the bond eventually broke between late October and early December in all pairs (ΔD: 333.7 ± 661.0 km), as inferred from segmented regressions of inter-individual distances (Fig. 2).
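The breakup dates above were inferred from segmented regressions of inter-individual distance. A minimal two-segment least-squares sketch (a brute-force breakpoint search on synthetic data, not the authors' analysis) illustrates the idea:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b, sse)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx if sxx else 0.0
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

def breakpoint_day(days, dist_km, min_pts=3):
    """Brute-force two-segment regression: return the day that minimises
    the summed SSE of the two fits (the inferred bond-breakup point)."""
    best_sse, best_day = None, None
    for k in range(min_pts, len(days) - min_pts):
        sse = fit_line(days[:k], dist_km[:k])[2] + fit_line(days[k:], dist_km[k:])[2]
        if best_sse is None or sse < best_sse:
            best_sse, best_day = sse, days[k]
    return best_day

# Synthetic series: distance stays ~1 km for 60 d, then grows steeply.
days = list(range(80))
dist_km = [1.0] * 60 + [100.0 + 30.0 * j for j in range(20)]
bp = breakpoint_day(days, dist_km)  # breakpoint at day 60
```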
After separation, the young either remained at the same wintering location as the adult (n = 2) or continued migrating further south (n = 2; one of which was the young which had followed a foster parent) (Supplementary Fig. 1).

Fidelity to migration routes and stopover sites. Adult and subadult Caspian terns showed strong fidelity to autumn migration routes across years (Fig. 3), more often selecting the same stopover sites in multiple seasons instead of unique ones each year (chi-square test for equal proportions, χ 2 = 4.50, df = 1, p = 0.03). Of the 32 registered stops, 69% (n = 22) took place at stopover sites visited in both seasons. However, the 62% (n = 8/13) rate of stopover site re-use observed among adults alone did not deviate from 1:1 (χ 2 = 0.69, df = 1, p = 0.41), and consequently, the overall result is largely due to subadults re-using stopovers they visited on their first autumn migration (χ 2 = 4.26, df = 1, p = 0.04). Of the 19 stops recorded among young/subadults, 74% (n = 14) occurred at stopover sites used during both the first and second autumn migrations (Fig. 3). The lower re-use of stopover sites among adults migrating alone as compared to parent-young pairs is likely influenced by the larger distances between stopover sites observed among lone adults than among parent-young pairs (Table 1).

Discussion

Previous work on how sociality shapes the migration of naïve individuals has been conducted on species migrating in large social groups 4,11 . However, in such systems it is challenging to distinguish the role(s) group members have in transmitting migratory knowledge to naïve individuals. We show that in Caspian terns, a species typically migrating solo or in small groups 16 , breeding partners do not migrate together and that male (and foster male) parents carry most of the responsibility for guiding naïve young during their first outbound migration.
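The stopover re-use statistics reported above follow from a simple goodness-of-fit test against a 1:1 split; the sketch below reproduces the reported χ² values (4.50, 0.69, and 4.26) from the raw counts:

```python
def chi2_equal_proportions(reused, unique):
    """Chi-square goodness-of-fit statistic against a 1:1 split (df = 1)."""
    expected = (reused + unique) / 2.0
    return ((reused - expected) ** 2 + (unique - expected) ** 2) / expected

all_sites = chi2_equal_proportions(22, 32 - 22)   # 4.50 (all stops)
adults    = chi2_equal_proportions(8, 13 - 8)     # ~0.69 (adults alone)
subadults = chi2_equal_proportions(14, 19 - 14)   # ~4.26 (young/subadults)
```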
The bond between a parent and young only gradually breaks down upon arrival in the wintering area. The finding that adult males are largely responsible for accompanying naïve young migrants on their first outbound migration in a bird species providing bi-parental care deserves further attention. It is unknown how widespread such paternal effects on migratory behaviour are among birds 17 , but female offspring desertion and male-biased parental care are widespread in birds 18 , including terns 19 , and possibly originate in a sexual conflict where female parents gain in terms of residual fitness over time 20 . However, the fact that all the young terns which lost contact with their parent died from predation (Supplementary Table 1) does not lend direct support to this idea. Population studies which record lifetime reproductive success and individual survivorship are needed to reveal the ultimate explanation as to why female parents only occasionally migrate together with their young. Given that Caspian terns, like some other birds [21][22][23] , re-visit stopover sites along the same migration routes they followed in previous years (Fig. 3), the extension of parental care throughout the migratory period will likely have long-term impacts on the fitness of juvenile birds. In Caspian terns, the benefits of male parental care extend thousands of kilometres from the breeding grounds, suggesting social learning is a key factor shaping the migratory behaviour of young. The observed differences in migration metrics (Table 1) between parent-young pairs and solo-migrating adults from the same families also indicate that parental care during migration impacts the migratory behaviour of adult birds.

Table 1 Migration metrics and the results of GLMMs (Family ID specified as random intercept) as compared between pairs of parents and young and solo-migrating birds from the same family group.
Similar to earlier work showing that birds in flocks fly slower than solo birds 24, we found that solo-migrating adults travel faster than parent-young pairs, and travel greater distances between stopovers. Accompanying young appears to slow down parents during periods of active flight. Differences in migration speed and foraging behaviour during stopovers as observed between parent-young pairs and solo adults suggest migrating with young may come with a cost to the parent. According to the most accepted criteria defining teaching behaviour, i.e. that (1) the teacher modifies their behaviour in the presence of a naïve observer, (2) there is a cost incurred by doing so, and (3) the teacher's modified behaviour leads the observer to acquire the behaviour faster or more efficiently than it might have done otherwise 25,26, our results suggest that adult Caspian terns accompanying young on migration constitute an example of teaching behaviour 14,15. However, since there are no differences in terms of overall active travelling time, travel distance, or the timing of arrival at the winter quarters between solo adults and parent-young pairs (Table 1), cost disparities appear to reset at the end of the migratory journey. This suggests that any possible costs associated with accompanying young on migration may be of a relatively short-term nature, calling for longitudinal investigations of the lifetime fitness effects of deciding whether or not to accompany young. Our finding that young Caspian terns can join unrelated adult males on migration when leaving the breeding site demonstrates that migration routes, in addition to being vertically transmitted from the biological parents to offspring, are occasionally transmitted as a result of fostering behaviour 27.
To the extent that transgenerational social transmission of migration behaviour is adaptive, such behavioural plasticity can be expected to impact the array of migration routes within genetic lineages. In contrast to findings in some mammals 28, social learning and cultural inheritance do not, however, seem to have led to fixed group-level cultures in Caspian terns, given how much variation there is in routes (Supplementary Fig. 1). In several cases, when naïve young, after becoming independent, broke away from the parent at advanced stages of migration, the young continued to migrate to other (more distant) wintering sites (Figs. 1c and 2, Supplementary Fig. 1) using highly goal-oriented flight behaviour. Although genetically inherited programs 29 may contribute, the observed movements made by young without their parents strongly suggest they joined with other birds on these flights 11. In a species like the Caspian tern that aggregates at staging areas during migration 16, there would typically be experienced individuals present that naïve birds could join. That the bond between the parent and young gradually breaks apart over more than two months after arrival in the wintering quarters indicates that gaining survival skills and new contacts takes a long time after arriving on the wintering grounds. Taken together, the extended parental care and combination of vertical and oblique (from foster parents) pathways for migration knowledge transfer will have consequences for the migratory routes and stopovers used at the population level. Given that Caspian terns are consistent in their use of socially-transmitted migration routes, this system constitutes an example of cultural inheritance of migratory knowledge 12,30-33.
By allowing for rapid adjustments in migratory habits, the dynamic transgenerational social transfer of migratory knowledge we observed in Caspian terns may relieve them of the initial selection pressures on suboptimal timing, routes, and behaviours observed in other migratory bird species 34. Such behavioural plasticity may be particularly beneficial in highly variable environments or when resources are scattered in space and time 35,36. Similarly, such a developmental bias 33 can be expected to be of relevance for the process of species range expansion. The fact that Caspian terns have a disjunct, yet widespread global species range 16 may partially be explained by the manner in which migration knowledge is rapidly transmitted across generations. Thus, whether migratory species can persist in the face of global climate change and widespread habitat loss 37 can be expected to depend in part on how effectively knowledge of successful migratory routes and stopover sites is transmitted from experienced to naïve individuals. Considering the recent widespread declines of migratory birds 38, there is an urgent need to improve our general understanding of how social learning contributes to flexibility in migratory strategies, to ultimately be able to take appropriate management actions to conserve migratory species across the world.

Methods
Ethics statement. Permits to trap, take blood samples and deploy GPS-tracking devices on Caspian terns in Finland were issued by the Regional State Administrative Agency for Southern Finland (ESAVI/1068/04. 10

Field protocol. Caspian terns were caught at breeding sites along the Finnish west coast (62°14′N, 21°17′E) and the Swedish east coast (57°16′N, 16°37′E and 65°18′N, 22°23′E). Adults (n = 13) were caught at the nest during the last week of the incubation phase in late May using spring nets, and juveniles (n = 18) were caught by hand from the breeding islet just prior to fledging.
Solar-powered GPS-trackers (Ecotone GPS-GSM or OrniTrack GPS-GSM/GPRS trackers) were attached with a leg-loop harness using Teflon ribbon 39. Tags weighed 18-20 g, corresponding to c. 3% of the terns' body mass at the time of deployment (595 ± 74 g; mean ± s.d.). The amount and type of data the devices delivered varied (from 5 min to several hours between GPS-fixes) depending on device model and programming schedule as well as voltage level. Sex was determined from DNA extracted from blood samples using the salt extraction method 8.

Demarcation of tracking data. A total of 33 migration events (made up of 434,632 GPS-fixes) conducted by 27 (of the originally 31) terns that successfully initiated migration were studied in analyses of migration 40. The initiation of migration was defined from when the tracked bird left the breeding area in continuous directional flight southwards at a typical travel speed (35.3 ± 8.3 km/h) for a distance of 116.5 ± 79.3 km in one stretch without returning to the breeding area that season (Supplementary Table 1). For the purpose of this study, the time window considered was defined from when the first member of a family group departed the breeding islet for migration until the end of the calendar year (or until the individual died/the tracker failed). The tracking data was segmented into periods of roosting, foraging and directional flight behaviour using the relative position of the original GPS-fixes (hereafter 'fixes') and the speed calculated as the quotient of the linear distance and time between consecutive pairs of fixes. If the flight speed between ≥3 consecutive fixes was <10 km/h, fixes were assigned to the intermediary class "resting". Resting fixes located <30 km from each other and where the bird spent >24 h were aggregated into a minimum convex polygon; the resulting polygons were considered stopover sites.
Fixes located inside a polygon or <5 km from the polygon's outer boundary with a calculated flight speed of ≥2 km/h were classified as foraging behaviour at the stopover area, and otherwise classified as roosting behaviour. Of the remaining fixes (flight speed ≥10 km/h), those located >20 km from identified stopover sites were classified as migration flight behaviour. To be able to study when the parental bond broke up, tracking data of pairs of parental birds and young successfully reaching the wintering area were merged into one data set and sorted chronologically. Acknowledging that the internal clocks of the GPS-trackers are not likely to be in perfect synchrony, pairwise (parent-young) distances were calculated between fixes ≤20 min apart in time (ΔD) and used as a measure of the tightness of the bond between the adult and young. Even though this measure is therefore likely to overestimate real pairwise distances, the measurement error remains unchanged over time, enabling studies of how the tightness of the parental bond changes with advancing season. In analyses comparing differences in migration behaviour between parent-young pairs and adults migrating alone (cf. Table 1), only data covering the time window between departure from the breeding area and arrival in the wintering areas was considered. The deterioration of the bond between parents and young was determined using segmented regressions (excluding ΔD > 100 km for Family group 7 [Dec-18 - Dec-31] and Family group 8 [Dec-6 - Dec-31] to allow model convergence) where ΔD was set as the dependent and date as the explanatory variable. Because the average GPS sampling rate differed among tracking devices and measures of the proportion of time foraging were positively biased with GPS sampling rate (linear regression, F = 13.59, df = 19, p = 0.002), the residuals from the model between GPS sampling rate and the proportion of time spent foraging at roosts were used in model #4 (Table 1) instead of the raw data.
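The core of the speed-based segmentation described above can be sketched in a few lines. This is a simplified Python illustration of the <10 km/h, ≥3-consecutive-fixes "resting" rule only (the original processing was done in ArcMap and R; the function names, the haversine distance, and the toy track are ours, and the stopover-polygon and foraging/roosting steps are omitted):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in km."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def classify_fixes(fixes, rest_speed_kmh=10.0, min_run=3):
    """Label each fix 'resting' or 'flight' from speeds between consecutive fixes.

    fixes: list of (timestamp_hours, lat, lon). A fix is 'resting' when it
    belongs to a run of >= min_run consecutive fixes whose connecting
    speeds all stay below rest_speed_kmh; all other fixes are 'flight'.
    """
    labels = ["flight"] * len(fixes)
    # Speed between each consecutive pair of fixes (segment k joins fix k and k+1).
    speeds = []
    for (t1, la1, lo1), (t2, la2, lo2) in zip(fixes, fixes[1:]):
        dt = t2 - t1
        speeds.append(haversine_km(la1, lo1, la2, lo2) / dt if dt > 0 else 0.0)
    i = 0
    while i < len(speeds):
        if speeds[i] < rest_speed_kmh:
            j = i
            while j < len(speeds) and speeds[j] < rest_speed_kmh:
                j += 1
            if j - i + 1 >= min_run:  # run of slow segments touches j - i + 1 fixes
                for k in range(i, j + 1):
                    labels[k] = "resting"
            i = j
        else:
            i += 1
    return labels

# Toy track: two hours of directional flight (~33 km/h), four stationary
# fixes, then flight again (one fix per hour).
track = [(0, 60.0, 21.0), (1, 60.3, 21.0), (2, 60.6, 21.0),
         (3, 60.6, 21.0), (4, 60.6, 21.0), (5, 60.6, 21.0), (6, 60.9, 21.0)]
print(classify_fixes(track))
# ['flight', 'flight', 'resting', 'resting', 'resting', 'resting', 'flight']
```

In the full pipeline, runs of resting fixes would then be aggregated into minimum convex polygons (<30 km apart, >24 h) before the foraging/roosting and migration-flight classes are assigned.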
In Table 1 the degrees of freedom vary between models, e.g., because some birds failed to arrive at the wintering area or because the status of adult birds (migrating together with young or migrating alone) changed en route when chicks died. Repeat use of migration routes and stopover sites during the autumn migration event was inferred from 12 GPS-tracks (of which 4 are also part of the family tracking material [Supplementary Table 1] 40) delivered by trackers carried by adult and young/subadult Caspian terns in two consecutive seasons. For adults, tracking data spans the time window from when birds left the breeding islet until having spent ≥10 days in the wintering area. Tracking data for the first autumn migration of naïve migrants was delimited in the same way, but because they did not breed as subadults in their first year of return, the geographical location of the last night roost used before the initiation of autumn migration was used as the starting point of the migration event. When analysing re-use of stopover sites, minimum convex polygons used by the same bird in consecutive years <15 km apart were considered to represent the same stopover site.

Statistical analyses. Initial handling and sorting of GPS-tracking data as well as construction of maps was performed with ArcMap 10.3.1 and MS Excel 2019. All statistical analyses are two-sided and were conducted in R 4.0.3 41. Where statistical models involved multiple individuals from the same family group (fit by REML with a linear link in the lme4 package), family group ID was entered as a random intercept to account for the non-independence of response variables in GLMMs. The break-up of the bond between parents and young was inferred from segmented regressions using the segmented package in R. The animated movie in Supplementary Movie 1 was made in Google Earth Pro 7.3.3.7786.

Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
Detailed metadata on tracked birds that were part of family units and birds studied in analyses of repeat use of migration routes and stopover sites is available in Supplementary
Spontaneous Epiretinal Membrane Resolution and Angiotensin Receptor Blockers: Case Observation, Literature Review and Perspectives

Introduction: Epiretinal membrane (ERM) is a relatively common condition affecting the macula. When symptoms become apparent and compromise a patient's quality of vision, the only therapeutic approach available today is surgery with a vitrectomy and peeling of the ERM. Angiotensin receptor blockers (ARBs) and angiotensin-converting enzyme inhibitors (ACE-Is) reduce the effect of angiotensin II, limit the amount of fibrosis, and demonstrate consequences on fibrinogenesis in the human body. Case Description and Materials and Methods: A rare case of spontaneous ERM resolution with concomitant administration of ARB is reported. The patient was set on ARB treatment for migraines and arterial hypertension, and a posterior vitreous detachment was already present at the first diagnosis of ERM. The scientific literature addressing the systemic relationship between ARB, ACE-Is, and fibrosis in the past 25 years was searched in the PubMed, Medline, and EMBASE databases. Results: In total, 38 and 16 original articles have been selected for ARBs and ACE-Is, respectively, in regard to fibrosis modulation. Conclusion: ARBs and ACE-Is might have antifibrotic activity on ERM formation and resolution. Further clinical studies are necessary to explore this phenomenon.

Introduction
Fibrosis in different organs of the human body represents a growth, stiffening, and/or scarring of tissues, and it is characterized by excess deposition of extracellular matrix (ECM) components, including collagen [1]. Fibrosis is also involved in the development of epiretinal membranes (ERMs), which consist of fibrocellular proliferation over the internal limiting membrane (ILM) [2].
Angiotensin receptor blockers (ARBs) and angiotensin-converting enzyme inhibitors (ACE-Is), beyond their function as antihypertensive drugs [16,17], are known to reduce scar formation through modulation of the angiotensin and TGF-β1 pathways in the fibrotic tissue [18][19][20]. Their role in ERM formation has not yet been explored. Hereby, we report a rare case observation of spontaneous ERM resolution associated with the commencement of ARB treatment, with a review of the literature on ARB, ACE-I, and systemic fibrosis modulation, to finally delineate future perspectives.

Case Description and Materials and Methods
A 58-year-old woman was referred to the Department of Ophthalmology, Oslo University Hospital, Norway, for a surgical evaluation of ERM producing metamorphopsia and perceived vision loss in the left eye in February 2018. The ophthalmic history revealed posterior vitreous detachment (PVD) 2-3 years before, when she had received laser barrage treatment for a peripheral retinal tear at the 4 o'clock region of the left eye performed at a local eye clinic. The patient had been myopic since adolescence. Her best corrected visual acuity (BCVA) at the first observation was −0.2 logMAR with −4.00 sphere and −1.00 at 110° cylinder in the right eye, and −0.1 logMAR with −4.00 sphere and −0.75 at 70° cylinder in the left eye. At the slit-lamp examination, both eyes were within normality with clear lenses. The right eye showed a small atrophic peripapillary crescent compatible with moderate myopia and an inferotemporal area of pigment degeneration. In the left eye, fundoscopy also showed a small peripapillary atrophic crescent, an altered foveal reflex, and a peripheral laser barrage that had produced good retinochoroidal adhesion around the above-mentioned retinal tear. The first-observation OCT demonstrated ERM foveoschisis in the left eye (Figure 1).
The subsequent follow-ups showed spontaneous resolution of the ERM that started between the first and the second observation, and continued up to the last eye examination (6 observations) over a period of 4 years, 9 months, and 1 week (from February 2018 to December 2022). No pars plana vitrectomy was indicated due to the good visual function and spontaneous resolution of the ERM. The BCVA did not change over the years of observation despite the drastic anatomical improvement.

Her systemic history showed obesity and incipient diabetes mellitus that resolved after a gastric bypass in 2015 followed by a 70 kg weight loss. Other systemic complaints were sleep apnea (treated with C-PAP) and migraines. At the first observation by the vitreoretinal surgeon, she was on dietary supplements, spray estrogen, metoprolol 50 mg qd for migraine attack prevention, and high blood pressure treatment. Since the arterial hypertension and migraine were incompletely controlled between the first and the second observation at the Department of Ophthalmology (February 2019 and May 2019), the patient was started on an ARB (Candesartan), which she continued to take thereafter at a dosage of 24 mg qd.

The scientific literature addressing the systemic relationship between ARBs, ACE-Is, and fibrosis in the past 25 years (since 1997) was searched in the PubMed, Medline, and EMBASE databases. Inclusion criteria were studies linking ARBs and ACE-Is to protection from fibrosis development in patients with systemic diseases. Exclusion criteria were review studies, pilot studies, case series, case reports, photo essays, and studies written in languages other than English.

Results
Fibrosis development in a wide spectrum of systemic conditions has been investigated in the past 25 years. None of these studies are related to the eyes or eye disorders. Thirty-eight original articles were selected for ARBs and fibrosis modulation, and sixteen for ACE-Is and fibrosis modulation. Tables 1 and 2 summarize the studies included in the review for the ARBs and ACE-Is.

Table 1. Selected articles dealing with fibrosis modulation by ARBs.

Table 2. Selected articles dealing with fibrosis modulation by ACE-Is.

ARBs and Fibrosis
In the heart, ARBs have been shown to reduce the fibrogenic response in a myocardial-infarction-induced rat model [46], which has also been confirmed for Valsartan in another rat study [61]. Activation of the Ang AT(1) receptor was found to be an important factor in the development of pericardial thickening and collagen build-up in a pig model [62], the blockage of which could stop the development of pericardial fibrosis after heart surgery. In particular, an ARB (Candesartan) and an ACE-I (Temocapril) equally reduced ventricular fibrosis through different mechanisms in a hypertensive diastolic heart failure rat model [27]. Candesartan reduced the atrial fibrosis in a rat model through the suppression of connective tissue growth factor [63], while Losartan inhibited frizzled 8 and downregulated the WNT-5A pathway in an atrial fibrillation fibrosis reduction model in rats [42]. ARBs reduced fibrosis of the aortic valve in calcific aortic valve disease, likely by lowering inflammation and interleukin 6 [25].
ARBs and a neprilysin inhibitor (Valsartan and Sacubitril) prevented maladaptive cardiac fibrosis and dysfunction during pressure-overload-induced heart hypertrophy in a mouse model [64]. They also reduced fibrosis in isoproterenol-induced cardiac hypertrophy in a rat model [39]. ARBs (Valsartan) can improve cardiac fibrosis in diabetic nephropathy mice and achieve that by inhibiting miR-21 expression [40]. In the lungs, ARBs have been at least as effective as ACE-Is in reducing fibrosis development in radiation-induced lung fibrosis [29], both drug types being protective against radiation-induced pneumonitis and fibrosis by modulating TGF-β and alpha-actomyosin (αSMA) [65]. The use of an ARB (Olmesartan) demonstrated that both angiotensin 1 and 2 receptors are involved in fibrosis development in a mouse model of bleomycin-induced pulmonary fibrosis [43]. ARBs reduced lung fibrosis in a newborn rat model exposed to hyperoxia [66]. In particular, Losartan and calpain inhibition reduced pleural fibrosis in a mouse model [22]. ARBs and neprilysin inhibitors (Valsartan and Sacubitril) reduced fibrosis, pulmonary pressures, vascular remodeling, as well as right-ventricle hypertrophy in a rat model [67], while both ARBs and ACE-Is have been shown to possess a modulating effect in idiopathic pulmonary fibrosis [23]. ARBs have also shown efficacy in preventing radiation-induced fibrosis in the renal parenchyma of rats [68]. In a hypertensive rat model, a low dose of an ARB (Candesartan) reduced the fibroblast proliferation and TGF-β expression with a subsequent reduction in perivascular fibrosis [31]. An ARB (Losartan) reduced both the epithelial-mesenchymal transition and fibrosis development in a unilateral ureteral obstruction rat model [35]. This appeared to be active not only in unilateral ureteral obstruction but also in other renal diseases, therefore enhancing the beneficial effect of ARBs in kidney diseases.
The same ARB was also effective in suppressing inflammation and fibrosis in the pancreas of a rat model, similar to what had already been demonstrated in the heart, kidney, and liver [69]. ARBs have been shown to improve the state of renal tubulointerstitial fibrosis [47]. In particular, Fimasartan has been shown to be effective in reducing renal oxidative stress, inflammation, and fibrosis in a unilateral ureteral obstruction mouse model [28]. An ARB (Candesartan) reduced liver fibrosis by suppressing collagen I and TGF-β1 expression as well as reducing hepatic stellate cell activation and the lipid peroxidation of proteins [56], through a therapeutic effect on cholestasis-induced liver fibrosis in rats. In another rat non-alcoholic steatohepatitis model, similar effects of ARBs were demonstrated, in addition to a reduced production of aspartate aminotransferase [37,68]. The combination of ARBs and rifaximin achieved an additive effect against non-alcoholic-steatohepatitis-induced fibrosis in a rat model [30]. In a bile duct ligation rat model, the inhibitory effects of ARBs on hepatic fibrosis were found to be superior to those of ACE-Is [21]. Candesartan, at a regularly-used dose, was shown to be effective in reducing liver fibrosis in humans affected by chronic hepatitis C [48]. Short-term treatment with a hepatic-stellate-cell-selective drug carrier, mannose-6-phosphate-modified human serum albumin (losartan-M6PHSA), was also effective at reducing liver fibrosis [37]. An antifibrotic effect of an ARB (Telmisartan), which is an angiotensin 1 (AT1) receptor blocker and a PPARγ partial agonist, was demonstrated in both acute and chronic stages of a Schistosoma-mansoni-induced liver fibrosis mouse model [38]. Hypertensive patients with non-alcoholic fatty liver disease receiving ARBs had less liver fibrosis than their counterparts not on ARB therapy [70].
Both Telmisartan and Losartan reduced inflammation and oxidative stress in a thioacetamide mouse model of liver fibrosis [24]. Ex vivo and in vivo, it has been demonstrated that ARBs (Losartan) reduce liver fibrosis in a mouse model [20]. In skeletal muscle injury in mice, ARBs were shown to reduce the fibrosis response, ultimately improving the healing process [51]. These drugs also reduced the fibrotic response in mice with normal and dystrophic skeletal muscles [36]. An ARB (Candesartan) significantly reduced TGF-β1 expression and suppressed tumor cell proliferation and stromal fibrosis in a mouse gastric tumor model [52]. Skin scars in humans undergoing thyroid surgery showed less fibrosis in patients on ARBs or ACE-Is [44].

ACE-Is and Fibrosis
As a pharmacological class, ACE-Is are a group of drugs that can reduce the availability of angiotensin II in the body. They are primarily utilized for the treatment of arterial hypertension, congestive heart failure, diabetic nephropathy, and many other cardiovascular conditions secondary to hypertension [16]. The influence of ACE-Is on the process of fibrosis has also been demonstrated in many studies. In the heart, not only do ACE-Is inhibit the proliferation of cardiac fibroblasts at various levels, but they also hinder other mitogenic signals from estrogens [60]. The antifibrotic impact of ACE-Is on the heart is due to the suppression of N-acetyl-seryl-aspartyl-lysyl-proline (Ac-SDKP) hydrolysis, which results in a reduction of myocardial cell proliferation (most likely fibroblasts), inflammatory cell infiltration, TGF-β expression, Smad2 activation, and collagen production [58]. Transient ACE-I administration in hypertensive rats modulated cardiac fibroblast subpopulations and activation, resulting in reduced fibrosis and an overall reduced fibrogenic phenotype [53]. In particular, Captopril was able to reduce the scar area, fibroblast count, and capillary count in spontaneously hypertensive rats [71].
In the liver-bile duct-pancreas system, an ACE-I (Captopril) was shown to reduce TGF-β1 and collagen gene expression, delaying the progression of hepatic fibrosis in a rat model created by bile duct ligation [72]. Captopril was also able to suppress hepatic stellate cell activation via the NF-kappaB or Wnt3α/β-catenin pathways, thus reducing fibrosis development in the liver [54]. In male WBN/Kob rats, an ACE-I (Lisinopril) reduced the fibrosis characterizing chronic pancreatitis [41]. Specifically, Lisinopril inhibited TGF-β1 mRNA expression, preventing pancreatic stellate cell activation. In vitro, it was demonstrated that a combination of perindopril and interferon produces an antifibrotic effect on liver cells [55]. ACE-Is prevented the generation of proinflammatory cytokines in mouse models of colitis and colonic fibrosis, most likely through inhibiting the TGF-β signaling pathway, paving the way for an innovative inflammatory bowel disease treatment [34]. An ACE-I (Ramipril) was effective at reducing inflammation, oxidative stress, and fibrosis in carbon-tetrachloride-treated rat liver [45]. In the skin, the early administration of an ACE-I (Enalapril) reduced the fibrosis and scarring process in a rabbit ear dermal model [49], which was hypothesized to be driven by the downregulation of collagen production. This drug also inhibited the renal fibrosis induced by unilateral ureteral obstruction in rats, with a hypothesized mechanism driven by the inhibition of mast cell degranulation [57]. ACE-Is and ARBs (Ramipril and Losartan) reduced scar formation through hindering fibroblast proliferation, collagen, and TGF-β1 expression, and suppressed the phosphorylation of SMAD2/3 and TAK1, both in vitro and in vivo [18]. Similarly, ACE-Is have been shown to possess antifibrotic properties in scar formation in mice [26], affecting peptides that suppress the TGF-β1/Smad and TGF-β1/TAK1 pathways.
The inhibition of both Smad- and TAK1-mediated pathways by ACE-Is could thus lead to the development of new antifibrotic agents. ACE-Is reduced skeletal muscle fibrosis in the early phase after streptozotocin-induced diabetes in mice [59]. Ramipril reduced radiation-induced periprosthetic capsular fibrosis and contracture in breast surgery [50].

Discussion
We hereby relate previous findings on the role of ARBs and ACE-Is in preventing fibrosis development in various organs of the body to ERM, pointing out the mechanism by which these drugs act upon pathways of ERM development, especially the TGF-β pathway [19]. Nothing is known about the role of ARBs and ACE-Is, or of TGF-β modulation by these drugs, in ERM's pathogenesis. Figure 2 summarizes the different overlapping ARBs and ACE-Is used in fibrosis modulation in different organ systems, showing ARBs (Candesartan and Losartan) to be the most ubiquitously used drugs affecting fibrosis. ERMs are generated through a fibrosis mechanism involving various molecules, among which integrin β1, cathepsin B, epidermal growth factor receptor, protein-glutamine gamma-glutamyltransferase 2, pro-low-density lipoprotein receptor-related protein 1, and TGF-β have been described [73]. In particular, TGF-β has been known to be a versatile cytokine that belongs to the TGF superfamily, and it is considered a major fibrosis modulator [74]. Furthermore, several pathways have been shown to be involved in the interaction between ECM, ECM-related molecules, cells, cell receptors, and intra- or extracellular proteins that can, in the end, contribute to the development of ERMs [73]. The process of ERM development is driven by more than 50 genes, among them being Tumor Necrosis Factor (TNF), CCL2 (chemokine C-C motif ligand 2), Metastasis Associated Lung Adenocarcinoma Transcript 1 (MALAT1), TGF-β1, TGF-β2, Interleukin-6 (IL-6), IL-10, VEGF, and glial fibrillary acidic protein (GFAP) [75].
Since TGF-β is involved in other systems, particularly the immune system, direct targeting of TGF-β is unlikely to be therapeutically feasible [74]. It has been previously reported that ERMs can spontaneously resolve in cases of PVD occurrence, and this may happen when the ERM's adhesion to the posterior hyaloid membrane is stronger than its adhesion to the underlying ILM [76][77][78][79]. Since our patient was known to have an already complete PVD prior to the diagnosis of ERM, and the intake of an ARB (Candesartan) was the only evident discriminating factor that could have led to the spontaneous resolution of ERM, we hypothesize that the fibrosis constituting the ERM could have been affected and resolved through the molecular mechanism of ARBs (Figure 3).
In particular, we speculate that the TGF-β pathway could be the main molecular target among the different pro-inflammatory cytokine pathways that may be involved in the disease process, since it has been shown to be heavily inhibited by ARBs and ACE-Is [19]. These inhibitors, while acting upon the angiotensin receptor system (angiotensin 1 and 2 receptors (AT1R and AT2R, respectively)), influence or reduce TGF-β expression and fibrosis in different organs of both animals and humans through modulating the JAK-STAT/MAPK intracellular pathways, which, in turn, influences or reduces the expression of fibronectin, collagen, and TGF-β itself [80][81][82][83].

Conclusions
To our knowledge, this is the first report showing a possible correlation between ARBs and ERM resolution, supported by clinical observations and a review of the literature. In perspective, both ARBs and ACE-Is should be examined in further clinical studies to confirm their potential in the prevention and treatment of ERM.
Telocytes: New Connecting Devices in the Stromal Space of Organs

Telocytes (TCs) represent a new type of interstitial cell, discovered by Prof. Popescu and his collaborators from Bucharest in 2005 and initially described as Interstitial Cajal-Like Cells (ICLCs). In 2010, Prof. Popescu and Prof. Faussone-Pellegrini from Florence, based on their expertise in morphology, agreed that ICLCs were in fact a brand-new entity, and they renamed them telocytes. TCs are characterized by specific veil- or ribbon-like extensions called telopodes. Telopodes aid TCs in forming homo- or hetero-cellular contacts, thus assembling three-dimensional networks that organize the stromal and the parenchymal components of organs. TCs can transfer information to neighboring cells, ensuring short-distance communication, and remotely by releasing a wide variety of extracellular vesicles: exosomes, ectosomes, and multivesicular bodies. Here, we review the evolution of the interest in TCs in different organs, in normal and pathological conditions. The main focus is on the role of TCs in the gastrointestinal tract, urinary bladder, reproductive tract, and heart. This chapter sums up information about the possibilities that TCs behave as sensors/mediators in nervous activity, represent mesenchymal stem cell precursors in adulthood, and control and determine the differentiation/maturation of other cell types either during development or in postnatal life.
Introduction: the telocytes

Discovered 13 years ago, telocytes (TCs) still represent a subject of debate regarding their role. TCs were first described in 2005, in the interstitial space surrounding exocrine acini in the pancreas, and considered as cells closely resembling the interstitial cells of Cajal (ICCs) by Romanian scientists from Carol Davila University of Medicine and Pharmacy in Bucharest, Romania, who entitled their first publication Interstitial Cells of Cajal in Pancreas [1]. Subsequent publications of this team described these cells under the name of interstitial Cajal-like cells and characterized them with the aid of electron microscopy and immunocyto(histo)chemistry [2][3][4][5][6]. In 2010, professor Popescu, the Bucharest team leader, along with professor Faussone-Pellegrini, who is considered the international leading expert on ICCs (the gut pacemaker cells), agreed that TCs and ICCs can be regarded as completely different cell populations based on their ultrastructural peculiarities, and went on to give them the name "telocytes" [7]. In the following years, TCs have been described in numerous organs and appear to be omnipresent in the stromal space of humans and laboratory mammals, where they form networks [8][9][10] by making contacts with each other (homo-cellular contacts). This is possible due to their unique and exceptionally long (several tens to hundreds of μm) cell prolongations called telopodes [7]. As professor Popescu liked to say, "the shortest possible definition for telocytes is cells with telopodes" [11]. Telopodes are characterized by a succession of thin, filamentous regions (podomers) and dilated areas (podoms) with a bead-like appearance [12]. TCs were revealed to have a third dimension that was only recently observed by FIB-SEM tomography, telopodes being represented by veil- or ribbon-like extensions that compartmentalize the interstitial space [13]. Apart from the ability to form networks, TCs can also provide local
communication with different cells and deliver extracellular vesicles to establish distant communication [14][15][16][17][18]. A detailed representation of the TC contacts is illustrated in Figure 1. To find out the function(s) of these cells, many methods of investigation have been employed, ranging from classical microscopy, optical and electronic, to advanced genomics and proteomics techniques. Thus, differentiation of TCs from mesenchymal stem cells, adipocytes, fibroblasts, and endothelial cells was achieved [19][20][21][22].

The discovery and description of TCs have given rise to many controversies. From the publication of the first article to the present, the interest generated by TCs can be traced in the graph summarizing the number of articles published annually in PubMed (Figure 2). Though not yet described in speciality treatises as a distinct cellular type, this prospect does not seem far, the evidence being the increase in the number of articles published in the most prestigious journals such as Nature [23][24][25]. The discussion of these controversies is dealt with in detail by two recent reviews, in which there are enough arguments for and against the existence of TCs as a new cell type [26,27]. We believe that in the shortest time new biomarkers will be found to prove the existence of this particular type of cell, and from there it is just one step to the description of their possible functions.

Figure 1 caption: Artistic representation of a 3D view of the contacts of a telocyte. TCs are regarded as interconnection devices due to their homo- and hetero-cellular junctions, as well as to their proximity to structures like blood vessels, nerve fibers, and muscle fibers. Image courtesy of Iurie Roatesi. Reproduced with permission from Ref. [75].
Our attention in this chapter is focused on synthesizing the available information relating to the main categories of interest that emerge from the chart shown in Figure 3, namely the gastrointestinal tract, urinary bladder, reproductive tract, and heart.

The telocyte network

The ability of these cells to form 3-D stromal networks can be considered a discriminant element for their recognition under the light microscope [8], especially because no specific immunomarkers are available [28]. Indeed, the cell surface glycoprotein CD34, a marker shared with vascular endothelial cells, is currently considered one of the most suitable for the immunohistochemical identification of TCs, which are also referred to as CD34+ stromal cells/TCs by some authors [29,30]. Through extensive homo-cellular networks, TCs are believed to build the stromal scaffold whose continuity and adaptability guarantee the maintenance of the integrity of tissues/organs every time they are subjected to mechanical forces, such as distension and stretching. Moreover, TCs are universally considered key organizers of the connective tissue and may eventually contribute to the production and shaping of the extracellular matrix (ECM) in cooperation with fibroblasts. This has been observed in TCs located in the female genital tract, where these cells express both estrogen and progesterone receptors [4,31] whose activation is followed by significant changes in the TCs, which acquire fibroblast-like features and become capable of producing the ECM [8]. The homo-cellular TC contacts are also likely involved in the intercellular exchange of molecular or ionic signaling. Alongside the aforementioned roles, probably shared by all TCs, many other roles have been attributed to these cells [30]. Therefore, each of the TC subtypes is likely to play its own organ-/tissue-specific role [8].
Although TC homo-cellular contacts are commonly observed, a variety of cell-to-cell contacts between TCs and other cell types (referred to as hetero-cellular contacts) are also seen [8,[32][33][34][35]. They consist of minute junctions (point contacts, nanocontacts, and planar contacts) whose mean inter-membrane distance is 10-30 nm, but more often of variably extended simple appositions of the contiguous plasma membranes, which might act either as mechanical cell-to-cell attachments or as sites of intercellular communication [18]. Among these contacts are the so-called "stromal synapses" [36], a term used to describe those contacts occurring between TCs and several types of connective tissue cells such as mast cells, macrophages, myofibroblasts, and fibroblasts [8,18,35,37]. The networks built by these hetero-cellular contacts are named "mixed networks." Collectively, the existence of mixed networks in addition to the homo-cellular TC networks, the morphological and immunohistochemical differences reported for TCs among organs and tissues, the existence of TC subtypes, the interactions that TCs make with the ECM and, finally, the TCs' vicinity to nerve endings and vascular cells have substantiated the hypothesis that these cells may be part of integrated systems playing tissue-/organ-specific roles [8,30,32,33,35,38].
Specific roles of the telocyte network: hollow organs

A common role proposed for the 3-D TC scaffold in hollow organs is to follow organ distension and relaxation, avoiding anomalous deformation and controlling blood vessel closure or rheology. However, because of the anatomical complexity of such districts and the great variety of cell populations interacting with TCs therein, many other roles are conceivable, suggesting that these cells, as connecting devices in the stromal space, might take center stage in the integration of all the information coming from the vascular, nervous, and immune systems, as well as from tissue-resident stem cells.

The present overview of the literature focuses on the spatial organization and the morphological and histochemical peculiarities of TCs according to their location in different organs. This may help to point out the presumptive roles of the homo- and hetero-cellular TC networks. With this aim, the TC networks located in some representative hollow organs that have been more intensively studied, such as the gastrointestinal and reproductive tracts, the urinary bladder, and the heart, are considered.

Gastrointestinal tract

The gastrointestinal tract consists of different hollow organs which have some similar and some different shapes and functions. Every organ modifies its lumen caliber and thickness several times throughout the day, following food transit. Food intake might happen several times per day, with different types and quantities of food, and the transit varies according to the different regions, from the stomach to the colon. The cells of the lining epithelium do not change their shape, while microvillus height changes considerably. Under the mucosa, made up of the epithelium, the lamina propria, and the muscularis mucosae, there is the submucosa, which has a different morphological organization and function. Finally, the muscle coat is responsible for gastrointestinal contractility. Two motile activities, coordinated by the enteric nervous system and
the ICCs, are present: peristalsis, a constant ab-oral movement that does not importantly modify the lumen caliber, and the relaxation/contraction related to food arrival/mixing for digestion and absorption/transit, which promotes sustained changes in lumen caliber. Region-specific mechanical and functional interrelationships between all the components of this complex apparatus are at the basis of its correct and coordinated behavior.

Gastrointestinal telocyte network in healthy condition

In the human and mouse gastrointestinal tract, TCs form widespread networks in the mucosa, submucosa, muscle layers, at the myenteric plexus level, at the submucosal border of the muscularis mucosae, in the circular muscle layer, and around nerve strands, blood vessels, the fundi of gastric glands, and intestinal crypts [32,39]. Immunohistochemically, all the TCs residing in the different layers of the gastrointestinal tract wall can be identified as CD34+/PDGFRα+ interstitial cells [32]. In the lamina propria and submucosa, the 3D homo-cellular TC network has a structural role, forming the scaffolding that can direct the collagen fibers/bundles and define the spaces where the several elements of the connective tissue accommodate. Although it cannot be excluded that these TCs could eventually be recruited for ECM synthesis, the abovementioned structural function is likely the main one. However, the role attributed to the TCs lining the basal-lateral surface of the glandular crypts [32,39], where epithelial stem cells are located, is particularly intriguing, since these TCs have been proposed to influence the proliferation and differentiation of stem cells due to their ability to produce and secrete a variety of molecules [40], the close relationships they recurrently establish with the "stem cell niches" [34,40], and the expression on their surface of the functional receptor PDGFRα, whose activation is critical in mammalian organogenesis [39]. In this context, it is
worth mentioning that a very recent study demonstrated that the subepithelial plexus formed by PDGFRα+ TCs acts as a crucial source of Wnt proteins, which are essential to support intestinal crypt stem cell proliferation and epithelial renewal [23]. In the muscle coat, by both immunohistochemistry (PDGFRα and CD34 immunolabeling) and electron microscopy, the TC processes were observed to constitute 3D networks intermingling with those of the ICCs and to establish cell-to-cell contacts with them [32,34]. Interestingly, within the gut muscle layers the TC and ICC networks can be clearly distinguished based on their different immunophenotypes, as the TCs are CD34+/PDGFRα+ and negative for c-kit, and, vice versa, the ICCs are c-kit+ and negative for either CD34 or PDGFRα [32] (Figure 4A-I). This mixed TC/ICC meshwork and the areas of simple apposition occurring between TCs and smooth muscle cells have suggested that the intramuscular TCs might support the spreading of the slow waves generated by the ICCs, which are electrically coupled to the smooth muscle cells, thus contributing to the regulation of gastrointestinal motility [32,41]. In favor of this hypothesis, it has been recently reported that the "smooth muscle cells are electrically coupled to both ICCs and PDGFRα+ cells (i.e., the TCs) forming an integrated unit called the SIP syncytium" [42]. Another possible role attributed to the TCs located in the gut muscle coat is that they might eventually differentiate into ICCs. This hypothesis is mainly based on the existence of the ICC/TC mixed network, where the two interstitial cell types are often intercalated (Figure 4I) [32,34,37]. In support of this hypothesis, although apoptotic ICCs have been described in the colon of healthy human subjects of different ages, no decrease in the number of ICCs was observed in relation to aging and no ICC was ever seen undergoing mitosis [43], while mitotic TCs rich in rough endoplasmic reticulum can be detected in the
interstitial spaces usually occupied by the ICCs (personal unpublished observation). Taken together, these data suggest that TCs might represent a pool of ICC precursors responsible for the physiological replacement of aged ICCs. Furthermore, it has been demonstrated that in culture, stromal cells expressing CD34 (a typical marker of TCs) proliferate and progressively lose their CD34-positivity to acquire c-kit-positivity (a typical marker of the ICCs) [44]. Reasonably, in adulthood the TCs, wherever they are located, might be considered a pool of mesenchymal stromal cells and, in the gut, be important for ICC renewal [37].

Telocytes: New Connecting Devices in the Stromal Space of Organs. DOI: http://dx.doi.org/10.5772/intechopen.89383

Telocyte network in gastrointestinal diseases

Inflammatory bowel diseases (IBD), including Crohn's disease (CD) and ulcerative colitis (UC), are complex disorders in which chronic relapsing inflammation progressively evolves into extensive fibrosis of the intestinal wall [41,45]. Both CD and UC are characterized by abdominal pain and diarrhea, mainly as a result of the progressive fibrotic process that leads to a stiff intestine unable to properly carry out peristalsis and resorptive functions [45][46][47]. The evidence that intestinal dysmotility with a reduction in the number of ICCs is a feature of IBD and that the ICC and TC networks are intermingled in the gut neuromuscular compartment has prompted an investigation of the TC distribution either in the terminal ileum of CD patients or in the colon of UC patients [32,41,[48][49][50][51]. Interestingly, in both conditions, the gut wall fibrosis was strictly paralleled by a reduction in TCs [50,51]. In fact, in the CD intestinal wall, which is histopathologically characterized by discontinuous signs of inflammation and fibrosis (referred to as "skip lesions"), TCs were normally distributed in all layers of the healthy-looking areas from the mucosa to the subserosa,
while they were markedly reduced in the fibrotic areas displaying severe architectural derangement [50]. In particular, the network of TCs was discontinuous or even completely lost among smooth muscle bundles and around myenteric plexus ganglia [50]. As far as UC is concerned, TCs were investigated in tissue specimens from patients in both early and advanced phases of colonic wall fibrotic remodeling [51]. In the early phase, fibrosis is confined to the muscularis mucosae and submucosa, while in the advanced phase it extends to affect wide areas of the muscle layers and the myenteric plexus. Of note, TCs were significantly reduced in the muscularis mucosae and submucosa of both early and advanced fibrotic UC cases [51]. On the contrary, the intramuscular and myenteric plexus TC networks were severely compromised in the advanced but not in the early fibrotic UC cases [51]. Through double immunofluorescence, it was possible to further reveal that in both forms of IBD the losses of TCs and ICCs occurred in parallel in the muscle layers and around the myenteric ganglia [50,51]. Based on these findings, it has been proposed that the simultaneous reduction in TCs and ICCs might significantly account for intestinal dysmotility in IBD. Several assumptions have also been made concerning the possible causes and pathophysiologic implications of this TC impairment [41,50,51]. As reported in the failing human heart [41,52], the progressive alteration in ECM composition and the entrapment of TCs in such fibrotic ECM may provoke profound cell sufferance and eventually lead to cell death. Both the ECM accumulation/rearrangement and the parallel reduction of TCs may profoundly impair their hetero-cellular networks with immune cells, fibroblasts, smooth muscle cells, ICCs, blood vessels, and nerve endings, thus hampering the TCs' intercellular signaling functions [41]. Whether the loss of TCs might even precede the onset of fibrosis rather than being merely a consequence of tissue
fibrotic remodeling is difficult to demonstrate. In line with the proposed role of TCs as a guide for correct tissue shaping during organ morphogenesis, it cannot be excluded that the loss of TCs might contribute to the altered 3D ECM organization in the fibrotic intestinal wall [46]. For instance, it has been proposed that the disappearance of TCs might favor an uncontrolled activation of ECM-synthesizing fibroblasts and their transition to profibrotic α-smooth muscle actin (α-SMA)+ myofibroblasts [46]. Noteworthy, hetero-cellular contacts between TCs and fibroblasts/myofibroblasts have been described in different organs, suggesting that TCs could contribute to tissue homeostasis by controlling the synthetic activity of such partners through inhibitory signals [46]. In support of this hypothesis, in UC colonic specimens the loss of TCs was paralleled by an increase in the number of α-SMA+ myofibroblasts [51]. This last observation suggests that during pathologic processes a subset of TCs could undergo a phenotypic change, possibly contributing to the increase in the profibrotic myofibroblast population [41]. However, double immunolabeling for CD34 (as a TC marker) and α-SMA did not reveal the presence of CD34+/α-SMA+ transitioning stromal cells in the colonic wall of UC patients, which makes the aforementioned hypothesis unlikely [51]. Also, electron microscopy investigations of different pathological tissues (e.g., failing human heart, fibrotic skin) have clearly shown that fibrosis is accompanied by TC degenerative processes rather than by activation/transformation into myofibroblasts [52,53]. As with the findings in the CD and UC intestinal tissues, a loss of TCs has also been reported in the neuromuscular compartment of the fibrotic gastric wall of patients with systemic sclerosis, where it likely contributes to gastric dysmotility clinically manifesting as delayed gastric emptying or gastroparesis [54].
Besides IBD, recent evidence suggests that TCs might be a cell source of certain kinds of gastrointestinal stromal tumors (GIST) [55]. While ICC hyperplasia has been identified as a crucial pathogenic feature of KIT-mutant GIST, it has been proposed that TCs could represent the physiological counterpart of PDGFRα-mutant GIST and inflammatory fibroid polyps [55]. Indeed, a pathogenic relationship between TC hyperplasia and both inflammatory fibroid polyps and PDGFRα-mutant GIST has been suggested. Moreover, the term "telocytoma" was proposed for defining inflammatory fibroid polyps, since it conveys both the pathogenic (neoplastic) and histotypic ("telocytary") nature of this tumor [55].

Human urinary bladder

The urinary bladder is a complex organ which modifies its volume and shape several times during the day, following filling and micturition. Filling happens gradually over time, while micturition happens in a single, sustained emission of urine. The cells of the lining epithelium (urothelium) dramatically change their shape and height; under the urothelium, there are two regions, the upper lamina propria (ULP) and the deep lamina propria (DLP), with different morphological organization and function; the detrusor is the muscle coat responsible for organ relaxation and contraction. Region-specific mechanical and functional interrelationships between all the urinary bladder components are at the basis of its correct and coordinated behavior.
Bladder telocyte network in healthy condition

In the bladder, unlike other organs, a more complex picture emerges [33,35]. As in the gut, the bladder TCs form complex networks and make contacts either between themselves or with other cell types; however, depending on their location, they show different immunohistochemical properties and ultrastructural peculiarities. The main differences are detected between the TCs located in the sub-urothelial connective tissue (upper lamina propria, ULP) and those located in the submucosa (also referred to as the deep lamina propria) and detrusor. The TCs in the ULP are PDGFRα+ and CD34−, while those in the submucosa and detrusor are CD34+ and PDGFRα− (Figure 5A and B). Moreover, while the TCs immediately below the urothelium express only PDGFRα and form a homo-cellular network, the other sub-urothelial TCs are also α-SMA+ (Figure 5A) and, under the transmission electron microscope, show a larger body and cell processes possessing attachment plaques with the connective tissue, like the fibronexuses typical of myofibroblasts. Further, these TCs establish extended regions of simple apposition with the myofibroblasts, thus forming a 3D mixed network (Figure 5A). This mixed network, together with the homo-cellular networks that the TCs make in the remaining portions of the bladder, constitutes the scaffolding that guarantees organ integrity during distention and relaxation [33,35,37,56]. However, other specific roles have been proposed for the bladder TCs, particularly for those located in the ULP. The TCs lining the urothelium are quite peculiar because of their location and immunolabeling [33,38]. As already discussed, the PDGFRα+ TCs located immediately beneath the intestinal crypts appear to be cells engaged in controlling the proliferation and differentiation of the stem cells resident in the crypts [23]. Likewise, sub-urothelial TCs could play similar functions. Immediately below those TCs, there is the TC/myofibroblast network
and both cell types form gap junctions, express the Cx43 protein, are close to nerve varicosities, express the vanilloid, ATP, purine, and muscarinic receptors, and contain cGMP, the target molecule of NO [35]. These features support the hypothesis that these cells have a role as intermediaries in propagating locally generated chemical or electrical stimuli, as well as a role as targets of the paracrine activity of the urothelium and of nervous stimuli [35,38,57]. The importance of these roles is readily understood, since the ULP and the urothelium constitute a sensory system capable of perceiving mechanical and chemical stimuli and whose integrated responses control the efferent pathways on the detrusor and micturition.

Figure 5 caption: TC networks in the human urinary bladder. (A) PDGFRα (green) and α-SMA (red) immunolabeling. A monolayer of PDGFRα+ TCs lines the urothelium (U). In the remaining upper portion of the lamina propria (ULP) a mixed network made by PDGFRα+/α-SMA+ TCs (hybrid TCs) and α-SMA+ myofibroblasts is present. (B) CD34+ TCs form a homo-cellular network in the deep lamina propria (DLP) and detrusor (D). Scale bar: A, B = 100 μm.
Innovations in Cell Research and Therapy

Telocytes in bladder diseases

The micturition reflex is the result of a complex integration of involuntary and voluntary nervous mechanisms. Several pathological conditions of the lower urinary tract compromise this function, causing detrusor dysfunction. The most frequent are idiopathic detrusor overactivity, neurogenic detrusor overactivity (NDO), bladder pain syndrome/interstitial cystitis, and partial bladder outlet obstruction. All these diseases are characterized functionally by excessive sensitivity of the detrusor/bladder to filling [58], and histologically by intense inflammation, especially in the lamina propria [35,38,59,60]. Since it is well known that TCs and myofibroblasts produce cytokines and other molecules able to recruit immune cells and express receptors for the cytokines released by the immune cells, both cell types likely influence the intensity, quality, and duration of inflammation. Furthermore, in the presence of detrusor hyperactivity, both ULP-TCs and myofibroblasts showed an increase of Cx43 protein labeling, which was interpreted as an augmentation of the gap junctions, together with signs of cellular activation (clear nuclei and larger bodies) [35,38,59,61]. Additionally, the TCs expressing both PDGFRα and α-SMA were significantly increased in comparison with controls, suggesting a shift toward a myofibroblast phenotype [35,38]. All these cell changes were considered signs of adaptability because, despite the presence of inflammation, the 3D cell network was preserved [38,59]. However, this integrity does not necessarily mean adequate functionality of the sensory system made up of the urothelium and the ULP; in fact, the greater thickness of the ULP, due to the intense cell infiltrate and edema, forcing the net meshes to enlarge, could cause an increase in the distances among the cells, between them and the nerve endings, and between all of them and the urothelium, likely affecting the sensitivity
to volume changes and the capability of responding to the molecules released by the nerve terminals and by the urothelium. Moreover, because the ULP thickening was uneven along the organ [35,38,60], foci of hypersensitivity could alternate with less responsive areas, further compromising the correct integration of the afferent stimuli [57]. Finally, it was reported that the TCs forming the monolayer underlying the urothelium did not show any significant changes in hyperactive bladders. These data were explained as follows: the location of these TCs could spare them from the damage caused by the cell infiltrate. Further, if these TCs are engaged in the cell proliferation and differentiation of the overlying epithelium, the absence of signs of epithelial cell death in NDO might account for their sparing.

Reproductive system

The female reproductive system includes, besides the external sex organs, the internal sex organs: the ovaries, fallopian tubes, and uterus. Immature at birth, these organs continue to develop and reach maturity at puberty, when they can produce gametes and carry a fetus to full term. The integrity of the fallopian tubes is crucial for fertilization, which usually occurs in the outer third of the tubes. The traveling zygote will form the blastocyst that will be implanted in the uterine endometrium. To obtain and maintain a pregnancy, the integrity and functionality of these organs must be optimal.
Telocytes in uterus and fallopian tubes in healthy condition

Currently, TCs are found in the uterine tubes and uterus, including the endometrium, myometrium, and cervix, and also in the vagina [2, 3, 62-65]. Among the first locations in which TCs were described are the organs of the female genital apparatus: the uterus and the fallopian tubes [2,3,5,10,66]. Since the beginning, their characterization was based on conventional microscopy methods and techniques such as methylene blue staining and silver impregnation, in situ and in vitro [2,3], followed by the description of the "gold standard" for their identification with the aid of electron microscopy [5]. In parallel, various immunohistochemical markers have been used to improve TC characterization; these have varied over time, from vimentin, α-SMA, progesterone receptor, desmin, estrogen receptor, and S100 protein, before stabilizing on what we nowadays consider to best describe the phenotype of these cells: CD34 and PDGFRα [2,3,67]. A book chapter deals with the immunohistochemistry of TCs in the female genital organs [28]. However, it should be pointed out that TCs in the uterus and fallopian tubes express receptors for estrogen and progesterone [4,31]. Nowadays, the most suitable methods for TC identification are electron microscopy and double staining for CD34 and PDGFRα, PDGFRβ, or vimentin [68,69].
TCs can release exosomes (from multivesicular bodies), ectosomes (shed directly from the plasma membrane), and multivesicular cargos (multiple tightly packed endomembrane-derived vesicles) [75]. The three types of extracellular vesicles emitted by TCs are evidence of the involvement of these cells in intercellular distance communication. Shed vesicle number and diameter are not correlated with the reproductive state, while the quantity of TCs in the endometrium and the myometrium varies with it [17]. Moreover, it was demonstrated that the morphology of telopodes is correlated with the presence or absence of gestation [70].

All these morphological, immunohistochemical, and electrophysiological observations have led to several hypotheses on TC functions in the uterus and fallopian tubes. The existence of homo-cellular junctions suggests a presumptive function in controlling the shape of tissues which are subjected to dynamic changes, such as the pregnant uterus, which hypertrophies and expands as the fetus grows [17]. In support of this assumption stands the hypothesis that TCs contribute to smooth muscle growth in areas with high mechanical forces, due to the mechanical sensitivity of TCs [76]. A mechano-sensing function should also be considered, due to the presence of the catenins that make up the junctions [18,71]. Moreover, TCs express T-type calcium channels (CaV3.1 and CaV3.2), small-conductance calcium-activated potassium channels (SK3), and calcium-dependent hyperpolarization-activated chloride inward channels, the levels of expression being dependent on the physiological state, pregnant or non-pregnant [70,77,78]. This points to TCs' involvement in calcium signaling mechanisms with neighboring cells [79]. Extracellular matrix remodeling is also emphasized by some studies and can be considered applicable to the uterus [80,81]. The existence of estrogen receptor α (ERα) and progesterone receptor A (PR-A) on the surface of uterine TCs suggests their involvement as sensors for steroid
hormone levels. Although little is known about the existence of stem cells in the uterus, they certainly exist, and the secretome of TCs could influence the cellular microenvironment, controlling their proliferation and differentiation [15]. Also, TC secretome factors could participate in decidua formation [17]. Some studies pointed to the angiogenic properties of TCs due to vascular endothelial growth factor (VEGF) expression [15,82,83], while others indicated their anti-oxidative properties, because the specific morphology of TCs can be changed by modifying the redox balance of their environment or by aging, owing to their richness in SOD2 (mitochondrial superoxide dismutase) [21,84].

Recently, TCs were found to activate and "educate" peritoneal macrophages (pMACs) with the aid of telopodes, either by direct physical contact through hetero-cellular junctions or via TC-conditioned media through paracrine mechanisms [85]. This is suggestive of a role in immunosurveillance [85].
In the fallopian tubes, TCs have been described throughout the thickness of the wall, their density decreasing from the mucosa to the muscularis, from ∼18 to ∼7.5% [3]. TCs are also found in the fimbriae of the fallopian tubes [17,28]. A panel of antibodies was used to identify tubal TCs. The telopodes possess all the features described above, creating a 3D network and establishing contacts with different structures, such as blood vessels, nerves, and muscle fibers [86,87]. Both homo- and hetero-cellular junctions have been described, and a new hypothesis was advanced that telopodes contacting immune cells (plasma cells and lymphocytes) might stimulate antibody production [88,89]. As suggested by Cretoiu et al., tubal peristalsis might be influenced by TCs, which also express PR-A and ERα receptors [31]. The tubal movements seem to be amplified by estrogen and decelerated by progesterone [90]. Recently, additional markers were tested for TC identification, such as Podoplanin (D2-40) and Dog-1, but proved to be inappropriate [91].

Telocytes in uterine and tubal diseases

The pathogenesis of uterine leiomyomas, the most frequent benign tumors in women, might be determined, among other factors, by the loss of TCs [26]. Varga et al. proposed three hypotheses regarding the involvement of TCs: (i) loss of TCs as steroid sensors leads to an increased density of estrogen receptors on smooth muscle cells, followed by cell cycle disruption; (ii) considered as progenitor cells, the absence of TCs can favor the rise of new leiomyoma cells; and (iii) in the absence of the antioxidant protection conferred by TCs, leiomyoma cells grow in number due to local hypoxia that blocks their apoptosis [26]. A recent study shows, for the first time, that there is an interplay between telocytes and autonomic innervation in leiomyomata [92]. TCs decreased in number in the leiomyomatous myometrium, suggesting a role for these cells in the control of the microenvironment [92].
The integrity of the 3D network of TCs appears to be fundamental to the function of the fallopian tubes, which are regarded as a major organ in reproduction. Several studies showed that when the 3D organization and number of TCs are affected, perturbations occur in the local homeostasis, leading to angiogenesis and interstitial fibrosis [88,89]. Neo-angiogenesis plays a major role in the pathogenesis of endometriosis and adenomyosis, and even in tubal ectopic pregnancy [93-95]. TCs were shown to be involved in all these processes [96,97]. In pelvic endometriosis and tubal ectopic pregnancy, the decrease in the number of TCs is probably due to the overproduction of iNOS, COX-2, LPO, and estradiol [49,69,88]. Some other pathologies that might affect the 3D network of TCs have been described, such as Chlamydia infection, responsible for the activation of macrophages, or pelvic inflammatory disease [89,98,99]. In inflammation and ischemia, TCs were shown to be lost and to suffer major ultrastructural changes, a process followed by interstitial fibrotic remodeling [99]. Abd-Elhafeez et al. proposed a role for tubal TCs in the regulation of the epithelial function necessary for final gamete maturation, fertilization, and early embryo development [100].

Heart

Cardiac TCs are among the best described in the body [101] and are unique interstitial cells in the heart [101]. Intramyocardial TCs account for less than 1% of interstitial cells in the human heart [56]. Cardiac TCs have been identified in heart valves, the left and right atrium and ventricle, epicardium, myocardium, endocardium, sub-endocardium, and myocardial sleeves [101,102], in mouse, rat, porcine, and human hearts [56,103]. The whole ultrastructural anatomy of human cardiac TCs has been reconstructed by focused ion beam scanning electron microscopy (FIB-SEM) [104]. The electrophysiology of human cardiac atrial and ventricular TCs has also been reported [105].
Cardiac TCs are completely different from other types of myocardial interstitial cells, especially from cardiac fibroblasts [106]. The two cell types differ completely in immunophenotype: cardiac TCs are positive for CD34/PDGFR-ɑ, CD34/PDGFR-ß, or CD34/vimentin, while cardiac fibroblasts are only positive for PDGFR-ß and vimentin [106]. CD34/PDGFR-ɑ positive TCs account for one-third of the total cells in a TC-enriched rat cardiac interstitial cell population [106,107]. Some studies have also reported that cardiac TCs inconstantly express CD34/c-kit [56]. Besides, cardiac TCs are distinct from pericytes, since cardiac TCs are CD34 positive and weakly ɑ-SMA positive, while pericytes are CD34 negative and ɑ-SMA positive [106]. Moreover, cardiac TCs are CD34/PDGFR-ß positive, while pericytes are CD34 negative and PDGFR-ß positive [106]. Interestingly, cardiac TCs are positive for CD29 (a mesenchymal marker) but negative for CD45 (a hematopoietic marker), suggesting that cardiac TCs could be a source of cardiac mesenchymal cells [106]. Interestingly, the telomerase concentration in CD117 and CD34 positive cardiac TCs is significantly higher than that in cardiomyocytes, and is 2.5- and 1.5-times lower than that in bone mesenchymal stem cells and cardiac fibroblasts, respectively [108].
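The immunophenotype contrasts above amount to a small decision table. The sketch below is a deliberate simplification for clarity (real phenotyping relies on double immunostaining and ultrastructure, not a lookup), and the marker rules are distilled only from the profiles stated in the text:

```python
# Illustrative decision rules distilled from the marker profiles above.
# Simplified: TCs are CD34+ together with PDGFRa, PDGFRb, or vimentin;
# fibroblasts are PDGFRb+/vimentin+ but CD34-; pericytes are aSMA+ but CD34-.

def classify_interstitial_cell(positive_markers):
    """Return a cell-type label from a set of positive markers."""
    markers = set(positive_markers)
    if "CD34" in markers and markers & {"PDGFRa", "PDGFRb", "vimentin"}:
        return "telocyte"
    if "CD34" not in markers and "aSMA" in markers:
        return "pericyte"
    if "CD34" not in markers and {"PDGFRb", "vimentin"} <= markers:
        return "fibroblast"
    return "unclassified"
```

For example, a CD34+/PDGFRa+ profile maps to "telocyte", while PDGFRb+/vimentin+ without CD34 maps to "fibroblast", mirroring the double-staining logic described above.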
Cardiac TCs can form tight junctions with all other types of cells within the heart, including cardiomyocytes, vascular smooth muscle cells, endothelial cells, and pericytes. The functions of cardiac TCs are not fully known but are proposed as follows: (1) intercellular signaling; (2) mechanoreceptors/transducers; and (3) cardiac homeostasis and repair [101,109]. In the heart, telocytes participate in cardiac development and physiology, and in diverse cardiovascular diseases (Figure 7).

Cardiac telocytes in heart development and physiological growth

The involvement of cardiac TCs and cardiomyocytes during development has been investigated using myocardium from embryonic (E14, E17), newborn (P0, P6), and adult (2 months) CD1 mice by transmission electron microscopy and immunohistochemistry [110]. It was found that TCs were present from early embryonic to adult life in the mouse heart [110]. Besides, cardiac TCs demonstrated immature features in early embryonic hearts, while they exhibited a more differentiated phenotype in newborn hearts [110]. Cardiac TCs played a fundamental role during cardiac development by forming a correct three-dimensional myocardial architecture and nursing cardiomyocyte precursors [110]. Intriguingly, cardiac TCs were found negative for c-kit and CD34 during the embryonic stage [110]; however, CD34 was expressed in a few TCs in the newborn mouse heart, and in most TCs in adult hearts [110]. This suggests a phenotype switch of cardiac TCs during development.
Exercise can induce cardiac physiological growth, which is characterized by an increased size of cardiomyocytes and the formation of new cardiomyocytes [111,112]. Three double-immunostainings, CD34/PDGFR-ɑ, CD34/PDGFR-ß, and CD34/vimentin, have been used to determine the number of cardiac TCs in exercise-induced physiological cardiac growth [81]. The number of cardiac TCs was found to be significantly increased in the exercised heart [81]. The increased cardiac TCs in exercise might communicate with cardiomyocytes through direct contacts or telocyte-shed vesicles, balance angiogenesis, and maintain the normal 3D organization of the ECM [81]. This study suggested a potential role of cardiac TCs in exercise-induced cardiac growth [81].

Cardiac telocytes in cardiovascular diseases

Isolated atrial amyloidosis (IAA) is frequently found in long-standing atrial fibrillation patients [113]. By electron microscopy, telopodes are found surrounding the amyloid deposits, limiting their spreading into the interstitium [113]. This indicates that TCs might participate in amyloidogenesis by gathering masses of amyloid fibrils.

Systemic sclerosis is a complex connective tissue disease characterized by fibrosis of the skin and various internal organs [54]. TCs, defined by CD34-positivity/CD31-negativity, were examined in the fibrotic areas of systemic sclerosis myocardium and were found to be almost undetectable [54]. However, in control myocardium, numerous TCs were found in the interstitium surrounding cardiomyocytes [54]. This indicates that loss of cardiac TCs contributes to the myocardial fibrosis caused by systemic sclerosis.
The imbalance between cardiac TC apoptotic death and proliferation is responsible for the depletion of cardiac TCs in cardiac diseases leading to heart failure [52]. In the myocardium of human heart failure patients, the number of cardiac TCs and telopodes decreases over twofold. Additionally, the number of apoptotic cardiac TCs increases threefold in the diseased heart, while the percentage of proliferating cardiac TCs remains unchanged, suggesting that the decreased cardiac TC population in heart failure is mainly due to increased apoptosis [52]. Interestingly, the number of cardiac TCs and telopodes has been found to depend on the composition of the extracellular matrix, correlating negatively with mature fibrillar collagens and positively with degraded collagens [52].

The changes in cardiac TCs have been determined in an acute myocardial infarction rat model induced by isoproterenol (ISO) [114]. It was found that CD117/CD34 positive cardiac TCs were undetectable by immunohistochemical staining 1 day after ISO treatment. Interestingly, treatment with grape seed extract (GSE) could significantly increase cardiac TC numbers and enhance angiogenesis in the myocardium but not in other tissues; in fact, it was found to suppress angiogenesis in tumor tissues instead [114]. Thus, GSE was regarded to promote angiogenesis by modulating cardiac TCs, which subsequently stimulated endothelial cells [114].
Also, in a rat myocardial infarction model induced by coronary occlusion, cardiac TCs were reported undetectable in the infarction zone from 4 days to 4 weeks [115]. Simultaneous transplantation of cardiac TCs could significantly decrease infarct size and improve heart function 2 weeks after myocardial infarction [115]. Moreover, the protective effects of intramyocardial transplantation of cardiac TCs were also observed 14 weeks after myocardial infarction, as evidenced by improved heart function, decreased infarct size, increased angiogenesis, and decreased myocardial fibrosis [116].

Currently, no single specific immunophenotype for cardiac TCs has been identified [101]. For in-depth studies of cardiac TCs, a specific immunostaining marker is urgently needed. Most isolated cardiac TC preparations are either not pure enough (as cardiac fibroblasts grow much faster than cardiac TCs) or contain only subtypes of cardiac TCs. It would be beneficial to investigate the therapeutic effects of cardiac TCs or cardiac TC-derived exosomes. Moreover, the immunoregulatory effects of cardiac TCs should be thoroughly investigated. Finally, TCs specific to other organs or tissues are worth studying.

Conclusion and future outlook for telocyte studies

In conclusion, this review of the literature indicates that TCs, depending on their location, may display different immunohistochemical properties and ultrastructural peculiarities, and form complex networks making contacts either between themselves or with other cell types. Further, our current knowledge of TCs allows the following conclusions on the role(s) that these cells may play, some of which might be common to the different organs and some organ-/region-specific.
1. In the stromal space of all the organs taken into account, TCs appear as connecting cells. Reasonably, TCs, due to their homo- and hetero-cellular contacts, can be considered as connecting devices playing either common or region-specific roles. These contacts might be merely mechanical or sites of cross-talk between TCs and other cell types, establishing intercellular molecular exchanges. Spatial relationships also suggest an involvement of the TC network in the coordination of tissue homeostasis in response to local functional demands. The involvement in tissue homeostasis might be explained by the heterogeneity of TCs depending on their location.

2. TCs might be engaged in controlling the proliferation and differentiation of stem cells and, either in adulthood or during organ differentiation, wherever they are located, these cells might be considered as a pool of mesenchymal stromal cells. As an example, in the gastrointestinal tract and the urinary bladder, the subepithelial plexus formed by TCs likely supports stem cell proliferation and epithelial renewal, and the TCs located in the muscle coat might differentiate into ICCs and, undergoing phenotypic changes, become a cell source of gastrointestinal stromal tumors (GIST). A shift toward a myofibroblast phenotype has been proposed for the TCs located in the urinary bladder lamina propria. In the female genital tract, although there are no reported interactions with stem cells, a role for TCs in this direction cannot be overlooked, because neo-angiogenesis undoubtedly accompanies myometrial hypertrophy.
Moreover, in the heart, a special inter-relation exists between TCs and cardiac stem cells, based on the exchange of information via extracellular vesicles which shuttle miRNA, or by direct connections through typical and atypical junctions. The secretome of TCs might enhance the proliferation and differentiation of cardiac stem cells. Suggestions were also made regarding a possible role in the re-activation of dormant myocardial precursors during the repair of the adult heart, while in the embryo TCs act as inductors/regulators of differentiation during morphogenesis.

3. The TC scaffold located in all the hollow organs follows organ distension and relaxation, likely to avoid anomalous organ deformation and to control blood vessel closure or rheology. This is a mechanical role whose importance has been demonstrated in some gastrointestinal pathologies, where TC loss provokes severe architectural derangement and contributes to the altered 3D ECM organization in the fibrotic intestinal wall. In the uterus, TCs can function as a sensor for the mechanical stress exerted on the uterine wall, allowing uniform uterine growth during pregnancy, by mechanosensitive coordination due to the existence of different ionic channels which can be modulated by pharmacological interventions. Moreover, TCs can also be regarded as chemical sensors, as hypothesized for the human uterus and fallopian tube, where TCs might play an important role in the uterine contraction mechanism due to the presence of estrogen and progesterone receptors at their level.

4.
Because of the anatomical complexity of the hollow organs and the great variety of cell populations interacting with TCs, many different and organ-specific TC roles are conceivable, suggesting that these cells might take center stage in the integration of the overall interstitial information from the vascular, nervous, and immune systems, as well as from tissue-resident stem cells. In the gastrointestinal tract, a particular role is played by the intramuscular hetero-cellular TC network in supporting the spreading of the slow waves generated by the ICCs, which are electrically coupled to the smooth muscle cells, thus contributing to the regulation of gastrointestinal motility. In agreement with this hypothesis is the evidence that simultaneous reductions in TCs and ICCs account for the intestinal dysmotility characterizing IBD. In the urinary bladder, the sub-urothelial TCs likely play a role as intermediaries in propagating locally generated chemical or electrical stimuli, being the target of the paracrine activity of the urothelium and the nervous system. The importance of these roles is readily understood, since the ULP and the urothelium constitute a sensory system capable of perceiving mechanical and chemical stimuli, whose integrated responses control the efferent pathways on the detrusor and micturition, responses that are lost in urinary pathologies such as NDO. In the female genital tract, TCs seem to lack regular slow waves, indicating that they are not involved in triggering or supporting the peristalsis of these organs, but more detailed studies are necessary.

Figure 1. Artistic representation of a 3D view of the contacts of a telocyte. TCs are regarded as interconnection devices due to their homo- and hetero-cellular junctions, as well as to their proximity to structures like blood vessels, nerve fibers, and muscle fibers. Image courtesy of Iurie Roatesi. Reproduced with permission from Ref. [75].

Figure 2.
Trends of publications searched in PubMed with "Telocytes" as a key word between 2010 and 2019.

Figure 3. Histogram of publications on telocytes categorized by categories of interest.

Figure 4. TCs and ICCs in the human gastrointestinal tract. TCs and ICCs form intermingled networks in the muscularis propria of the human intestine. Representative images of human colon sections double immune-stained for: (A-C) CD34 (green) and PDGFRα (red), (D-F) CD34 (green) and c-kit (red), and (G-I) PDGFRα (green) and c-kit (red) are shown. Nuclei are counterstained with DAPI (blue). Merge images are shown in the right panels. All the TCs are CD34+/PDGFRα+ (A-C), while the ICCs are c-kit+ and negative for either CD34 (D-F) or PDGFRα (G-I). Scale bar: 50 μm.
Figure 6. Representative ultrathin section of human pregnant myometrium. Two-dimensional sequenced concatenation from 11 serial electron micrographs showing the 3D network of TCs (blue) interconnected by homo-cellular junctions (dotted circles). SMCs are shown in cross-section and were digitally colored brown. In their vicinity, numerous Tps (blue) establish a network and release extracellular organelles (exosomes and shedding vesicles [arrowheads]) digitally colored purple. One mast cell (green) is in the vicinity of this network. Some vesicles were captured at the moment of being shed from Tps (*). Cav = caveolae; coll = collagen; m = mitochondria; rER = rough endoplasmic reticulum; N = nucleus. Bar = 2 μm. Reproduced with permission from Ref. [10].

Figure 7. Cardiac telocytes act in tandem with different types of cells in the heart, and participate in cardiac development and physiology, and in diverse cardiovascular diseases. ECM, extracellular matrix; VSMCs, vascular smooth muscle cells.
v3-fos-license
2020-05-13T13:05:36.544Z
2020-05-13T00:00:00.000
218596699
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2020.00919/pdf", "pdf_hash": "af64cd0749619e3501ed4b776256c741ded1e7bd", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1659", "s2fieldsofstudy": [ "Biology" ], "sha1": "af64cd0749619e3501ed4b776256c741ded1e7bd", "year": 2020 }
pes2o/s2orc
Mobile Genetic Elements Harboring Antibiotic Resistance Determinants in Acinetobacter baumannii Isolates From Bolivia

Using a combination of short- and long-read DNA sequencing, we have investigated the location of antibiotic resistance genes and characterized mobile genetic elements (MGEs) in three clinical multi-drug resistant Acinetobacter baumannii isolates. The isolates, collected in Bolivia, clustered separately with three different international clonal lineages. We found a diverse array of transposons, plasmids and resistance islands related to different insertion sequence (IS) elements, located in both the chromosome and in plasmids, which conferred resistance to multiple antimicrobials, including carbapenems. Carbapenem resistance might be caused by a Tn2008 carrying the blaOXA-23 gene. Some plasmids were shared between the isolates. Larger plasmids were less conserved than smaller ones; they shared some homologous regions, while others were more diverse, suggesting that the large plasmids are more plastic than the smaller ones. The genetic basis of antimicrobial resistance in Bolivia has not been deeply studied until now, and the mobilome of these A. baumannii isolates, combined with their multi-drug resistant phenotype, mirrors the transfer and prevalence of MGEs contributing to the spread of antibiotic resistance worldwide and requires special attention. These findings could be useful for understanding the antimicrobial resistance genetics of A. baumannii in Bolivia and the difficulty in tackling these infections.

INTRODUCTION

Acinetobacter baumannii is a non-fermenting Gram-negative bacillus and the second most common species of this group, after Pseudomonas aeruginosa, causing bacterial infections (Gonzalez-Villoria and Valverde-Garduno, 2016). While A. baumannii has been isolated from the wider environment, such as water, soil, and animals, most studied isolates come from clinical samples, where A.
baumannii has become a serious health problem, particularly in the intensive care unit, where it can cause serious and prolonged outbreaks (Gonzalez-Villoria and Valverde-Garduno, 2016). A. baumannii is often multidrug resistant (Peleg et al., 2008; Gonzalez-Villoria and Valverde-Garduno, 2016), making antimicrobial therapy of A. baumannii infections difficult. In some cases, with the advent of resistance to last-line antibiotics such as colistin, there are few therapeutic options left (Higgins et al., 2010; Manchanda et al., 2010; Göttig et al., 2014; Cayô et al., 2016).

A. baumannii is known to have great genome plasticity, that is, the capacity to acquire and disseminate genes, especially those related to antimicrobial resistance, which are commonly associated with insertion sequence (IS) elements in transposons and plasmids; this dynamism in the genome of A. baumannii has contributed to the rapid evolution of drug resistance (Adams et al., 2010), as has been demonstrated for ISAba1 mobilizing antimicrobial resistance genes (Mugnier et al., 2009). These processes are achieved thanks to mobile genetic elements (MGEs) harboring resistance genes. The simplest MGEs are ISs, which can also form transposons (Tn), and there are more complex structures such as integrons, resistance islands (RIs), and plasmids. Antimicrobial resistance genes are often integrated into resistance cassettes related to translocation elements, causing cumulative resistance to multiple drugs (Roca et al., 2012). A diverse range of MGEs has been described in A. baumannii, for example transposons such as Tn2008, Tn2008B, Tn2006, Tn2009, or Tn2007, which represent different transposon configurations carrying the blaOXA-23 gene together with ISAba1 or ISAba4, and additional genes (Nigro and Hall, 2016).
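The family of blaOXA-23-carrying platforms named above can be summarized in a small lookup. This is an illustrative sketch; the IS element assignments follow the nomenclature of Nigro and Hall (2016), under which Tn2007 is the ISAba4-based platform and the other transposons listed are ISAba1-based:

```python
# Minimal summary of the blaOXA-23-carrying transposon platforms named in
# the text. IS assignments follow Nigro and Hall (2016): Tn2007 is built on
# ISAba4, the remaining platforms on ISAba1.

BLA_OXA23_PLATFORMS = {
    "Tn2006": "ISAba1",
    "Tn2007": "ISAba4",
    "Tn2008": "ISAba1",
    "Tn2008B": "ISAba1",  # variant of Tn2008
    "Tn2009": "ISAba1",
}

def platforms_using(is_element):
    """Return the transposon names built on the given IS element, sorted."""
    return sorted(tn for tn, ise in BLA_OXA23_PLATFORMS.items()
                  if ise == is_element)
```

Querying `platforms_using("ISAba1")` groups the ISAba1-based configurations together, which is the family that Tn2008, found in the isolates below, belongs to.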
Great variability in antimicrobial resistance platforms, including MGEs, has been recorded even within the same international clone (IC), illustrating their contribution to the evolution of drug resistance (Adams et al., 2010). Plasmids in Acinetobacter spp. are unique and unrelated to those from other genera, although they often share the same resistance determinants, such as strA, strB, tet(B), or sul2. In A. baumannii, a diverse array of plasmids has been found, ranging in size from 2 Kb to more than 150 Kb. The larger plasmids normally encode more than one resistance gene, but up to now little is known about these plasmids (Carattoli, 2013). The aim of this study was to characterize the MGEs, such as plasmids and RIs, of three different A. baumannii clinical isolates representing different clonal lineages.

Bacterial Isolates

Three A. baumannii isolates recovered from two hospitals in Cochabamba, Bolivia, in September 2015, January 2016, and October 2016 (Table 1), representing three different ICs (IC4, IC5, and IC7), were selected for this study. We previously reported their carbapenem resistance mechanisms and molecular epidemiology (Cerezales et al., 2019).

Antimicrobial Susceptibility Testing

In addition to previously reported carbapenem susceptibility testing results, in the present study we investigated the following antimicrobials by agar dilution: amikacin, azithromycin, chloramphenicol, trimethoprim-sulfamethoxazole, erythromycin, levofloxacin, minocycline, kanamycin, and tetracycline. MICs were interpreted using the European Committee on Antimicrobial Susceptibility Testing (EUCAST) breakpoints (http://www.eucast.org/clinical_breakpoints/).

MinION Long-Read Sequencing and Assembly

The Oxford Nanopore Technologies (Oxford, United Kingdom) MinION sequencer was used to obtain long reads to span repetitive elements and close genomes and plasmids. DNA extraction was performed using the Genomic-tip 100/G kit (Qiagen, Hilden, Germany).
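The breakpoint comparison behind the MIC interpretation mentioned under Antimicrobial Susceptibility Testing can be sketched as follows. The breakpoint values used here are placeholders chosen for illustration, not actual EUCAST breakpoints, which must be taken from the current EUCAST tables:

```python
# Sketch of MIC interpretation against susceptibility breakpoints.
# The numbers below are illustrative placeholders, NOT real EUCAST values.

BREAKPOINTS_MG_L = {
    # drug: (susceptible if MIC <= s_max, resistant if MIC > r_min)
    "amikacin": (8, 16),
    "colistin": (2, 2),  # when s_max == r_min there is no intermediate band
}

def interpret_mic(drug, mic):
    """Return 'S', 'I', or 'R' for a measured MIC in mg/L."""
    s_max, r_min = BREAKPOINTS_MG_L[drug]
    if mic <= s_max:
        return "S"
    if mic > r_min:
        return "R"
    return "I"
```

With real breakpoint tables substituted in, the same comparison yields the susceptible/resistant calls reported for the isolates in Table 1.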
Library preparation was carried out according to the manufacturer's instructions using a combination of the Native Barcoding Kit 1D and the Ligation Sequencing Kit 1D (EXP-NBD103 and SQK-LSK108; Oxford Nanopore Technologies, Oxford, United Kingdom). The tool Albacore (Oxford Nanopore Technologies, Oxford, United Kingdom) was used for demultiplexing the reads, which were then used to perform the Canu assembly (Koren et al., 2017). A hybrid assembly combining previous MiSeq short reads with MinION-generated long reads was performed using hybridSPAdes (Antipov et al., 2016).

FIGURE 3 | Plasmid pMC1.1 in isolate MC1. Arrows represent predicted ORFs and the direction of the arrow represents the direction of transcription. Resistance genes are shown by orange arrows and transposon-related genes, recombinases and insertion sequences are indicated by green arrows. Transfer protein encoding genes, conjugal transfer protein encoding genes, and genes involved in plasmid partition and replication are shown in pink. The mercury resistance operon genes are indicated by yellow arrows and the BREX type 1 system is shown in purple. Other genes are indicated by gray arrows. Hypothetical proteins are not shown.

Plasmid Annotation and Visualization

ORFfinder (NCBI) was used to predict the open reading frames (ORFs) of the plasmids. A second functional annotation of the genomes was performed using the online tool Rapid Annotations using Subsystems Technology (RAST) (Aziz et al., 2008). Subsequently, the tool SnapGene Viewer (GSL Biotech) was used to obtain a circular diagram of the plasmids. Graphic comparisons between similar plasmids, pMC1.1 and pA297-3, as well as pMC23.1 and pAC30c, were carried out with the tool Kablammo (Wintersinger and Wasmuth, 2015).

Conjugation Experiments

Broth mating conjugation experiments were performed to determine the location of antimicrobial resistance genes using the sodium azide-resistant Escherichia coli J53 and the rifampicin-resistant A.
baumannii BM4547 as recipient strains. Selection of E. coli J53 transconjugants was performed using sodium azide (200 mg/L) combined with either amikacin (30 mg/L), streptomycin (30 mg/L), kanamycin (30 mg/L), gentamicin (30 mg/L), or ticarcillin (100 mg/L), and selection for A. baumannii BM4547 was performed using rifampicin (60 mg/L) combined with gentamicin (30 mg/L) or ticarcillin (100 mg/L). Transconjugants were selected with the antimicrobials corresponding to the resistance genes encoded on the respective plasmids. Strain MC1 was resistant to rifampicin, therefore conjugation with A. baumannii BM4547 could not be performed. The transconjugants were tested by PCR for the blaTEM gene.

RESULTS AND DISCUSSION

MC1 and MC75 were previously shown to be carbapenem-resistant and carried the carbapenemase-encoding blaOXA-23-like gene (Cerezales et al., 2019). Further testing revealed that MC1 and MC75 were also resistant to amikacin, chloramphenicol, ciprofloxacin, gentamicin, and levofloxacin. MC23 was resistant to amikacin, chloramphenicol, ciprofloxacin, gentamicin, and levofloxacin but was susceptible to carbapenems. All three isolates were susceptible to colistin (Table 1). The blaOXA-23 encoding gene was located on the chromosome in a Tn2008 vehicle in the isolates MC1 and MC75 (Table 2). In A. baumannii, the blaOXA-23-like gene is associated with ISAba1, which contributes to its overexpression as well as its mobilization (Nigro and Hall, 2016). Tn2008 has previously been described in Bolivian A. baumannii isolates, and this mirrors the spread of this structure among different ICs leading to a carbapenem-resistant phenotype (Nigro and Hall, 2016; Sennati et al., 2016; Chen et al., 2017; Ewers et al., 2017; Cerezales et al., 2018).

Resistance Islands

In the isolate MC23, the gene strA was located on a resistance island in the chromosome (RI1.MC23) (accession number MK531542), together with other antimicrobial resistance genes such as sul2, floR, and strB.
Diverse IS elements were found, with the resistance island bracketed by two copies of a transposase from the IS4 family in reverse orientation (Figure 1). Two genes involved in conjugation were also present in this structure, suggesting a plasmid origin. In addition, a second chromosomal resistance island was also found in this isolate (RI2.MC23) (accession number MK531543), which carried a typical structure from class 2 integrons.

FIGURE 5 | Plasmids pMC1.2 and pMC23.2 in isolates MC1 and MC23, respectively. Arrows represent predicted ORFs and the direction of the arrow represents the direction of transcription. A red arrow is used for the replicon. The toxin-antitoxin system is indicated by violet arrows and blue arrows represent virulence genes. Other genes are shown in gray.

The gene encoding AphA6 was found on the chromosome of MC75, bracketed by two copies of ISAba125, forming a composite transposon known as TnaphA6 (Matos et al., 2019).

pMC1.1

Annotation of pMC1.1 (accession number MK531536), with 39% GC content, revealed many different IS elements such as IS1006, IS1007, IS1008, ISAcsp1, IS91 family elements, ISAha2, ISAba11, ISAba12, and IS17. This plasmid carried a mercury resistance operon, similar to an already described mercury resistance Tn in a 200 Kb plasmid (pA297-3) from an IC1 A. baumannii isolate, but it lacks the merP open reading frame. Different antimicrobial resistance determinants were also present: strA, strB, aac(3)-IIa, and aac(6′)-Ian, conferring resistance to aminoglycosides; sul2, conferring resistance to sulphonamides; and tet(B), conferring resistance to tetracycline. The region of the plasmid carrying strA, strB, and sul2 shared high homology with Tn6172, located in pA297-3 as well (Figure 3); however, in pMC1.1 the arsR, tetR, and tet(B) genes were also located within Tn6172, together with an ISCR2 transposable element (IS91 family). This ISCR element has been described in association with different antimicrobial resistance genes in A.
baumannii, especially with sul2, contributing to their mobilization thanks to a rolling-circle transposition mechanism (Toleman et al., 2006), and was similar to other plasmids from Argentina (Vilacoba et al., 2013) and to plasmids found in an ST25 isolate from Australia. However, the location of the tetR-tet(B) genes was different; they were located between glmM and arsR, suggesting a possible later insertion of these genes (Vilacoba et al., 2013). In addition, the same inverted repeats (IR) generated by the insertion of the transposon were also found in pMC1.1, which, together with the backbone similar to that of pA297-3 (Figure 4), suggests they share a common origin. The genes aac(3)-IIa and aac(6')-Ian were associated with IS6-family insertion sequences and bracketed by two copies of ISCR1 in inverted orientation. ISCR1 belongs to the IS91 family and has been described in association with class 1 integrons and antimicrobial resistance genes in diverse Gram-negative species such as Klebsiella pneumoniae, P. aeruginosa, and Citrobacter freundii (Toleman et al., 2006). Different transfer genes (tra) were also found in this plasmid, as well as genes involved in plasmid partition and replication (parB/repB and xerC) that are related to the segregational stability of plasmids. This plasmid also encoded a BREX type 1 (bacteriophage exclusion) system, which has been described to be involved in phage resistance (Goldfarb et al., 2015).

pMC1.2/pMC23.2

The 8.7 kb plasmids found in MC1 and MC23 (pMC1.2 and pMC23.2) were identical (accession number MK531537), with a GC content of 34.3% (Figure 5). This small plasmid has often been found in IC1 A. baumannii isolates (Lean and Yeo, 2017).
Annotation of this plasmid revealed ORFs encoding a RepB replicon (Rep-3 superfamily, GR2) (Bertini et al., 2010; Lean and Yeo, 2017); a toxin-antitoxin system (BrnT-BrnA), which is involved in vertical stability; a TonB-dependent receptor, related to the transmission of signals from the outside of the cell leading to transcriptional activation of target genes; a septicolysin gene, encoding a cytolytic enzyme active toward eukaryotic cells and involved in pathogenesis; as well as a sel1 gene, encoding a protein that has been described in diverse prokaryotic genera and has an important role in virulence.

pMC23.1

The largest plasmid in MC23 was the 67.5 kb pMC23.1 (accession number MK531538) (Figure 6). It belonged to GR6 according to its replicase, repAci6. Its GC content was 33.7%, and almost all of its putative protein-encoding genes were related to conjugative plasmid transfer in a tra locus; some of them are part of a type IV secretion system (T4SS). This T4SS is able to secrete or take up both proteins and DNA, and is possibly involved in natural competence, a feature of A. baumannii (Salto et al., 2018). Two toxin-encoding genes were present in the plasmid, relE and a zeta toxin, but no antitoxins were found, although they were present in a very similar plasmid (pAC30c) in an A. baumannii isolate belonging to ST195 (IC2) (Figure 7; Lean et al., 2016). In addition, the partition genes parA/parB were also encoded on pMC23.1. The backbones of pMC23.1 and pAC30c were very similar, with only a few differences. pMC23.1 lacked some hypothetical proteins present in pAC30c, as well as the region encoding tellurite resistance (the telA gene and IS66), while traD, a cupin-like protein (the cupins are a superfamily of enzymes including dioxygenases, decarboxylases, hydrolases, and isomerases), an HlyD protein, which exports proteins from the cytosol to the outside of the cell, and an ABC transporter were not present in pAC30c.
pMC23.3

A small 6 kb plasmid, pMC23.3 (accession number MK531539), with 39.2% GC content, was present in the isolate MC23 and was found to have 100% similarity with a previously described plasmid, pRAY, from an isolate in South Africa, encoding resistance to gentamicin, kanamycin, and tobramycin (aadB gene) together with the mobA and mobC genes, which are thought to encode mobilization proteins (Lean and Yeo, 2017). Many similar plasmids have been found in diverse A. baumannii isolates from different ICs and countries, suggesting a common origin and subsequent diversification in their evolution. Concurrent with other studies, no rep gene was found in the plasmid sequence, supporting the idea of a mechanism of replication relying on the host RNA polymerase (Lean and Yeo, 2017).

pMC75.1

Analysis of pMC75.1 (accession number MK531540), a large plasmid of 150 kb, revealed that it was very similar to pMC1.1 (sharing 80% of their sequences). It also carried a Tn6172, in which antimicrobial resistance genes such as sul2, strB, and strA are encoded, but it lacked tet(B) and arsR, which were present in pMC1.1 (Figure 8). The mer operon was also found in this plasmid, along with many genes encoding conjugative transfer proteins. The BREX type 1 system was also present. A stbA gene was found; the protein encoded by this gene plays a role in plasmid stability, as do parA/parB. Several IS elements were also present, i.e., ISAba1, ISAba125, ISAba14, ISAba42, IS1007, and ISAha2. However, this plasmid lacked the transposon carrying aac(3)-IIa and aac(6')-Ian.

pMC75.2

The 13.9 kb plasmid pMC75.2 (accession number MK531541) (Figure 9), with a GC content of 40.3%, carried the broad-spectrum β-lactamase gene blaTEM-1B and the aminoglycoside resistance gene aac(3)-IIa flanked on both sides by IS15DIV; a toxin-antitoxin system, brnT/brnA; a TonB-dependent receptor; a septicolysin gene; and mobA/mobS, which are involved in plasmid mobility.
FIGURE 8 | Plasmid pMC75.1 in isolate MC75. Arrows represent predicted ORFs and the direction of the arrow represents the direction of transcription. Resistance genes are shown by orange arrows, and transposon-related genes, recombinases, and insertion sequences are indicated by green arrows. Transfer protein encoding genes, conjugal transfer protein encoding genes, and genes involved in plasmid partition and replication are shown in pink. The mercury resistance operon genes are indicated by yellow arrows and the BREX type 1 system is shown in purple. Other genes are indicated by gray arrows. Hypothetical proteins are not shown.

Conjugation experiments revealed that pMC75.2 was transferable into A. baumannii BM4547, but it was unstable and was lost after several passages. The replicon of this plasmid belonged to the RepB (Rep_3) superfamily with 100% homology. This plasmid shares high homology with pMC1.2/pMC23.2 (the same RepB, toxin-antitoxin system, TonB-dependent receptor, and septicolysin); it seems that one of them has lost, or alternatively acquired, the integron carrying the antimicrobial resistance genes and the mobility genes. Recently, two plasmids similar to pMC75.1 and pMC75.2 were described in a Brazilian A. baumannii isolate representing the same ST (ST15). This illustrates that these plasmids can be very plastic, acquiring or losing genes, but can also be conserved within an ST (Matos et al., 2019). The two carbapenem-resistant isolates carried the blaOXA-23 gene in Tn2008, which has been previously described in diverse ICs (Nigro and Hall, 2016; Ewers et al., 2017), including IC7 isolates recovered from a hospital in the same city, Cochabamba (Sennati et al., 2016). Tn2008 contributes to the overexpression of the carbapenemase-encoding gene and to its mobilization.
In addition, all three isolates harbored the three aminoglycoside resistance genes aac(3)-IIa, strA, and strB, as well as sul2, conferring resistance to sulphonamides; MC1 carried tet(B), conferring resistance to tetracycline, as well. All the genes were found to be associated with IS elements, constituting transposons that lead to their mobilization and make genetic rearrangements more likely to happen. These genes were found both in the chromosome and on plasmids, demonstrating the plasticity of the A. baumannii genome and the mobility of these antimicrobial resistance determinants within MGEs such as transposons or plasmids.

FIGURE 9 | Plasmid pMC75.2 in isolate MC75. Arrows represent predicted ORFs and the direction of the arrow represents the direction of transcription. Resistance genes are shown by orange arrows and insertion sequences are indicated by green arrows. Genes involved in plasmid mobility are shown in pink. The toxin-antitoxin system is shown in violet. Blue represents virulence genes. Other genes are indicated by gray arrows. Hypothetical proteins are not shown. A red arrow is used for the replicon.

CONCLUSION

In summary, these data further confirm that A. baumannii has a great ability to acquire antimicrobial resistance determinants and become a threat in hospitals. These determinants are associated with different plasmids and many different IS elements, some of which are found in multiple genera. For these reasons, it is important to study the dynamics and resistomes of bacterial populations in order to understand the situation in each hospital or unit. The fact that some of these plasmids have been found in diverse A. baumannii clonal lineages mirrors the transfer and prevalence of these MGEs, contributing to the spread of antimicrobial resistance worldwide.

DATA AVAILABILITY STATEMENT

The datasets generated for this study can be found in GenBank under accession numbers MK531536, MK531538, MK531537, MK531539, MK531540, and MK531541.
AUTHOR CONTRIBUTIONS

MC, KX, JW, and PH contributed to the design of the experiments. MC, KX, and JW performed the experiments. MC, KX, JW, OK, HS, LG, and PH analyzed and interpreted the data. MC, KX, and PH wrote the manuscript. All authors contributed to critical manuscript revision, and read and approved the submitted version.

ACKNOWLEDGMENTS

We would like to thank Yvonne Pfeifer for providing the conjugation protocol and Rémy A. Bonnin for providing the A. baumannii BM4547 strain.
Cut-off value for exercise-induced bronchoconstriction based on the features of the airway obstruction

The current cut-off value for diagnosing exercise-induced bronchoconstriction (EIB) in adults, a percent fall in FEV1 (ΔFEV1) ≥ 10% after an exercise challenge test (ECT), has low specificity and weak supporting evidence. Therefore, this study aimed to identify the cut-off value for EIB that provides the highest diagnostic sensitivity and specificity. Participants who underwent the ECT between 2007 and 2018 were categorized according to ΔFEV1: definite EIB (ΔFEV1 ≥ 15%), borderline (10% ≤ ΔFEV1 < 15%), and normal (ΔFEV1 < 10%). Distinct characteristics of the definite EIB group were identified and explored in the borderline EIB group. A receiver operating characteristic curve was plotted to determine the optimal cut-off value. Of 128 patients, 60 were grouped as the definite EIB group, 23 as the borderline group, and 45 as the normal group. All participants were men, with a median age of 20 years (interquartile range [IQR] 19–23 years). The definite EIB group exhibited wheezing on auscultation (P < 0.001), ΔFEV1/FVC ≥ 10% (P < 0.001), and ΔFEF25–75% ≥ 25% (P < 0.001) compared with the other groups. Eight (8/23, 34.8%) patients in the borderline group had at least one of these features, but the trend was more similar to that of the normal group than to that of the definite EIB group. A cut-off value of ΔFEV1 ≥ 13.5% had a sensitivity of 98.5% and a specificity of 93.5% for EIB. Wheezing on auscultation, ΔFEV1/FVC ≥ 10%, and ΔFEF25–75% ≥ 25% after ECT may be useful for the diagnosis of EIB, particularly in individuals with a ΔFEV1 of 10–15%. For EIB, a higher cut-off value, possibly ΔFEV1 ≥ 13.5%, should be considered as the diagnostic criterion.

Introduction

Exercise-induced bronchoconstriction (EIB) is a transient narrowing of the lower airway during or after exercise [1][2][3].
Dyspnea and cough during physical activity are the classic symptoms of EIB; however, they have low sensitivity and specificity for predicting EIB [4][5][6]. EIB is diagnosed when lung function declines after an exercise challenge test (ECT). The difference between the lowest FEV1 value pre- and post-exercise, given as a percentage of the pre-exercise value obtained within 30 min after activity, is referred to as the percent fall in FEV1 (ΔFEV1) [4,7]. The American Thoracic Society (ATS) suggests a post-exercise ΔFEV1 ≥ 10% to detect EIB, based on the results of ΔFEV1 in normal healthy participants without a family history of asthma, atopy, or recent upper respiratory tract infection [4]. However, the supporting data come from studies including children [8,9] or a study involving both children and adults [10]. Although EIB is most commonly reported in schoolchildren, it also affects young adults, including athletes and military recruits [11,12]. In contrast to the recent guidelines, several other groups have suggested a ΔFEV1 ≥ 13% or even up to 15% for diagnosing EIB [13][14][15][16]. Furthermore, a positive challenge result of ΔFEV1 ≥ 20% is usually required in clinical trials to evaluate a drug for EIB [17]. These various criteria have resulted in a wide range of prevalence estimates for EIB and in over-diagnosis of EIB [8,18]. A lower cut-off value of ΔFEV1, such as that suggested in the current guidelines, will increase the diagnostic sensitivity for EIB but at the expense of accuracy. Patients may be considered to have EIB even when they are clinically unaffected and do not require therapy. A precise diagnosis of EIB is required to identify acceptable levels of physical activity throughout life and to reduce the potential impact of the disease on respiratory health.
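The ΔFEV1 definition above amounts to a one-line computation: the fall from the pre-exercise FEV1 to the lowest serial post-exercise value, as a percentage of the pre-exercise value. The following sketch is purely illustrative; the function name and numbers are ours, not from the study:

```python
def percent_fall_fev1(pre_fev1, post_fev1_values):
    """Percent fall in FEV1 (DeltaFEV1): the drop from the pre-exercise value
    to the lowest post-exercise value, expressed as a percentage of the
    pre-exercise value."""
    lowest = min(post_fev1_values)
    return (pre_fev1 - lowest) / pre_fev1 * 100.0

# Example: pre-exercise FEV1 of 4.0 L; serial post-exercise values (L)
# measured within 30 min after exercise.
delta = percent_fall_fev1(4.0, [3.9, 3.4, 3.5, 3.7])
print(round(delta, 1))  # 15.0
```

With the hypothetical values above, the lowest post-exercise FEV1 (3.4 L) gives a ΔFEV1 of 15%, which would fall in the "definite EIB" category used in this study.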
Therefore, in this study, we aimed to determine a cut-off value of ΔFEV1 with high diagnostic sensitivity and specificity for EIB by identifying and integrating the distinct features of airway obstruction.

Patients

This retrospective study was performed at Samsung Medical Center (a 1,997-bed tertiary referral hospital in Seoul, South Korea). Participants who underwent ECT due to current (< 1 month) dyspnea on exertion between 2007 and 2018 were included and divided into three groups according to the ΔFEV1 value after the ECT: the definite EIB (ΔFEV1 ≥ 15%), borderline EIB (10% ≤ ΔFEV1 < 15%), and normal (ΔFEV1 < 10%) groups. Indicators of airway obstruction were identified in the definite EIB group by comparison with the other two groups, and these features were further investigated in the borderline EIB group. Data were retrieved from electronic medical records, including clinical variables and laboratory test results. The institutional review board of Samsung Medical Center approved this study (IRB no. 2019-03-041-002) and waived the requirement for informed consent owing to its retrospective nature.

Exercise challenge test and measurements

Under the supervision of allergists, the ECT was performed according to the ATS standards [4], using a motor-driven treadmill with adjustable speed and grade in a dry, air-conditioned room at 20°C to 25°C (< 15% relative humidity) at the specialized center for allergy. On the day of the ECT, all patients were first assessed by the allergists for any respiratory symptoms before the challenge, and those with normal lung sounds on auscultation underwent the ECT. After the ECT, localized lung sounds were not considered wheezing, as they could also indicate central airway obstruction. The participants were instructed not to perform any rigorous physical activity or use short-acting β2-agonists for 24 hours before the ECT.
Spirometry was measured using a Vmax 22 instrument (SensorMedics, Yorba Linda, CA, USA) at baseline and after the ECT (5, 10, 15, and 30 min after exercise), according to ATS/European Respiratory Society standards [19]. Absolute values were obtained, with the percent predicted (%pred) values of forced vital capacity (FVC), FEV1, FEV1/FVC, and FEF25–75% calculated using data obtained from a representative Korean sample [20]. The best value with an appropriately performed flow-volume curve was chosen for the analysis. To assess bronchial hyperresponsiveness (BHR) independently, a methacholine provocation test was performed on a day other than the day of the ECT [21,22]. A positive test was defined as a concentration of methacholine less than 16 mg/mL that caused a 20% decrease in FEV1 (provocative concentration 20, PC20). PC20 levels between 4.0 and 16 mg/mL were considered borderline BHR, PC20 levels between 1.0 and 4.0 mg/mL were considered mild BHR, and PC20 levels below 1.0 mg/mL were considered moderate to severe BHR. Induced sputum, fraction of exhaled nitric oxide (FeNO), and skin prick tests were performed at the discretion of the attending allergist [23]. FeNO was measured using an NO analyzer (NIOX MINO; Aerocrine AB, Solna, Sweden) or NObreath (Bedfont Scientific, Maidstone, UK), according to the ATS guidelines.

Statistical analysis

Categorical variables are presented as numbers (percentages), and continuous variables are presented as medians (interquartile range [IQR]). Categorical variables were compared using the Pearson χ2 test or Fisher's exact test, and the Kruskal-Wallis test (the nonparametric equivalent of one-way analysis of variance [ANOVA]) was used to compare differences among the groups for continuous variables. P-values for pairwise group comparisons were obtained using a post-hoc Bonferroni test.
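The BHR grading from methacholine PC20 described above maps directly onto a small threshold classifier. The sketch below is only illustrative: the function name, the return labels, and the handling of exact boundary values are our assumptions rather than part of the study protocol:

```python
def grade_bhr(pc20_mg_ml):
    """Grade bronchial hyperresponsiveness from the methacholine PC20 (mg/mL),
    following the thresholds described in the Methods: a PC20 below 16 mg/mL
    is positive, with severity increasing as PC20 decreases."""
    if pc20_mg_ml >= 16.0:
        return "negative"            # no 20% FEV1 fall below 16 mg/mL
    if pc20_mg_ml >= 4.0:
        return "borderline BHR"      # 4.0 <= PC20 < 16 mg/mL
    if pc20_mg_ml >= 1.0:
        return "mild BHR"            # 1.0 <= PC20 < 4.0 mg/mL
    return "moderate to severe BHR"  # PC20 < 1.0 mg/mL

print(grade_bhr(8.0))   # borderline BHR
print(grade_bhr(0.5))   # moderate to severe BHR
```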
Receiver operating characteristic (ROC) curves were plotted to obtain the optimal cut-off value of ΔFEV1 for determining EIB, i.e., the value that yielded the maximal sum of sensitivity and specificity. Statistical significance was defined as a two-sided P-value of < 0.05. All statistical analyses were performed using Statistical Analysis System (SAS) software (version 9.4; SAS Institute, Inc., Cary, NC, USA) and R software (version 3.5.1; R Development Core Team, Vienna, Austria).

Baseline characteristics

Baseline characteristics of the 128 patients included in this study are shown in Table 1. The definite EIB group included 60 patients, the borderline EIB group 23, and the normal group 45. All patients were men, with a median age of 20 years (IQR: 19–23 years). Of the 128 patients, 90 (70.3%) were never-smokers, while 38 (29.7%) had a smoking history; 33 (25.8%) were current smokers and 5 (3.9%) were ex-smokers. Concurrent asthma was identified in 59 (98.3%) patients in the definite EIB group, compared with 16 (69.6%) in the borderline group and 16 (35.6%) in the normal group (P < 0.001). The definite EIB group had the lowest baseline FEV1 value of 92% (P = 0.038) and FEV1/FVC value of 81% (P < 0.001). The FVC values were not statistically different between the groups. FEF25–75% values differed between the groups, in both L/s and %pred values (P = 0.001 for FEF25–75%, L/s, and P < 0.001 for FEF25–75%, %pred, respectively). The definite EIB group had the lowest FEF25–75% value compared with the borderline EIB and normal groups. There were more patients with FEF25–75% < 80% or FEF25–75% < 60% in the definite EIB group than in the borderline or normal group, but without statistical significance (P = 0.178 for FEF25–75% < 80% and P = 0.311 for FEF25–75% < 60%, respectively). Positive methacholine provocation test results were common in the definite EIB group (81.7% for the definite EIB, 69.6% for the borderline EIB, and 26.7% for the normal group, P < 0.001).
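The ROC-based cut-off selection described in the statistical-analysis paragraph above (maximizing sensitivity plus specificity, i.e., Youden's J statistic) can be sketched in a few lines. The function name and the toy data below are invented for illustration; they are not the study's data or code:

```python
def optimal_cutoff(values, labels):
    """Pick the cut-off on `values` (e.g., percent fall in FEV1) that maximizes
    sensitivity + specificity (Youden's J), as in a ROC analysis.
    `labels` are 1 for disease (EIB) and 0 for normal; higher values
    are assumed to indicate disease."""
    positives = sum(labels)
    negatives = len(labels) - positives
    best_cut, best_j = None, -1.0
    for cut in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if y == 1 and v >= cut)
        tn = sum(1 for v, y in zip(values, labels) if y == 0 and v < cut)
        j = tp / positives + tn / negatives - 1.0  # Youden's J = sens + spec - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Toy data: percent falls in FEV1 and EIB status (hypothetical values)
falls  = [5.0, 8.0, 11.0, 12.0, 14.0, 16.0, 20.0, 9.0]
status = [0,   0,   0,    0,    1,    1,    1,    0]
cut, j = optimal_cutoff(falls, status)
print(cut, round(j, 2))  # 14.0 1.0
```

In practice such analyses are done with dedicated statistical software (the study used SAS and R), but the underlying selection rule is exactly this maximization over candidate thresholds.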
The definite EIB group had a higher proportion of moderate-to-severe BHR than the other groups (P < 0.001) (Fig 1).

PLOS ONE | Cut-off for exercise-induced bronchoconstriction

Auscultated wheezing, ΔFEV1/FVC ≥ 10%, and ΔFEF25–75% ≥ 25% after the ECT were distinct characteristics of the definite EIB group. Overall, the characteristics of the borderline EIB group were similar to those of the normal group (Fig 2).

Optimal cut-off value for EIB

The distinct variables of the definite EIB group were further investigated in the borderline EIB group. In eight patients (8/23, 34.8%) in the borderline EIB group, at least one of these variables was identified; all of them had a ΔFEF25–75% ≥ 25%, and six (75.0%) of them had either wheezing on auscultation or ΔFEV1/FVC ≥ 10% (Table 3).

Discussion

EIB occurs because of acute airway narrowing after exercise. In this study, the characteristics of airway obstruction were first identified in the definite EIB group (ΔFEV1 ≥ 15%), including wheezing on auscultation, ΔFEF25–75% ≥ 25%, and ΔFEV1/FVC ≥ 10%. These three characteristics were not identified in the normal group, which is in line with results from earlier research [24][25][26]. Of the participants in the borderline EIB group with at least one of these characteristics, 87.5% had ΔFEV1 ≥ 13.6%, and an estimated cut-off value of ΔFEV1 ≥ 13.5% showed high sensitivity (96.9%) and specificity (96.8%). Patients in more significant and urgent need of treatment for EIB can be identified using this suggested cut-off. With a cut-off value of ΔFEV1 ≥ 10%, the sensitivity was 100%, but the relatively low specificity would lead to a high false-positive rate. In light of the data from previous studies, ΔFEV1 ≥ 15% after the ECT leaves no doubt in diagnosing EIB, whereas ΔFEV1 < 10% is commonly considered normal, which can exclude EIB. However, there is a gray zone of ΔFEV1 between 10% and 15%.
Therefore, this group might be classified as either EIB or normal, depending on the arbitrary cut-off point used. We thoroughly evaluated the borderline EIB group in this study. Overall, the characteristics of the borderline EIB group were more similar to those of the normal group than to those of the definite EIB group (Fig 2). Among the patients in the borderline EIB group who showed at least one of the three characteristics of the definite EIB group, the ΔFEV1 value was more than 13.5% in most (7/8, 87.5%). Conversely, the distinct characteristics of the definite EIB group were not observed in most participants with ΔFEV1 < 13.6%, except for one patient (Patient No. 11 in Table 3). That patient had a ΔFEV1 of 12.7% and exhibited all three features of airway obstruction. Considering the symptoms after the ECT, wheezing on auscultation was notable in the definite EIB group, occurring in up to 80% of patients, while coughing showed no statistically significant difference. Previous studies have reported that symptoms such as coughing or wheezing during sports had a lower diagnostic value. In this study, experienced allergists confirmed wheezing through close examination before and after the ECT, whereas in other studies self-reported wheezing was used [4][5][6]. ΔFEF25–75% is another distinguishing trait of definite EIB. FEF25–75% assesses airway flow rates over a segment of the FVC and represents the initial changes associated with airflow obstruction in small airways [19,27]. Therefore, it is more sensitive than FEV1 for evaluating EIB [24]. Currently, there is no recommendation on the utility of the percent predicted value of FEF25–75%, so we instead measured the change in FEF25–75%. Several studies have suggested a cut-off value of FEF25–75% for evaluating small airway disease. Marseglia et al. suggested a cut-off of < 80% [28], while Manoharan et al. suggested a stricter cut-off of < 60% to define the presence of small airway disease [29].
In the present study, the borderline EIB group had a substantial decrease in FEF25–75%, even without symptoms. The definite EIB group had a higher proportion of ΔFEF25–75% ≥ 25% than the borderline EIB or normal group. These findings are in line with prior research, which showed that a decrease in FEF25–75% serves as an early signal of changes related to airflow obstruction in the small airways [30,31]. The baseline FEV1/FVC value was lowest in the definite EIB group (definite EIB 81% vs. borderline 85% vs. normal 86%, P < 0.001), and the difference before and after the ECT was greatest in the definite EIB group. More than 70% of the patients in the definite EIB group had a ΔFEV1/FVC > 10%, while this percentage was lower in the borderline EIB and normal groups (17.4% and 0%, respectively). The FEV1/FVC ratio has been used to express the degree of airway obstruction in children with asthma; however, its clinical implication in adults is unknown [32]. Atopic status is an important risk factor for the development of asthma and may contribute to the development of EIB. Atopic athletes are reported to have a higher risk of EIB than non-atopic athletes [33]. In a study by Koh et al., the degree of atopy was compared between EIB-positive and EIB-negative patients with asthma who underwent a methacholine challenge [34]. The atopy score and the skin reaction to house dust mites (Dermatophagoides pteronyssinus) were significantly increased in patients with asthma and EIB compared with those without EIB, and the degree of EIB significantly correlated with the atopy score in all participants. Regarding type 2 inflammation, FeNO and sputum eosinophilia were higher in the definite EIB group, although the difference was not statistically significant. FeNO and sputum eosinophilia were not useful in this population, but these results support the notion that type 2 inflammation is not prominent in mild EIB [35].
FeNO, a marker of type 2 inflammation in the bronchial mucosa, has a high predictive value for EIB in patients with asthma, but its relationship with this condition needs to be investigated further [36,37]. This study had several limitations. First, the ECT was performed only once; two tests may be required when using exercise to exclude a diagnosis of EIB [4]. However, this suggestion is based on a cut-off of ΔFEV1 ≥ 10%. Moreover, even when considering ΔFEV1 ≥ 10% as the cut-off, the reproducibility of EIB determined by two separate tests is high [10]. We also performed a methacholine provocation test on all participants. Indirect challenges are more specific in reflecting bronchial hyperresponsiveness, and direct challenges, such as methacholine, are not useful for detecting EIB because they have low sensitivity. However, the methacholine provocation test showed an excellent negative predictive value [38] and may have a supplementary role in excluding EIB, although this was not investigated in this study. Second, because this study was conducted at a single referral center with only young male patients, selection bias may restrict the generalizability of the major findings. All participants with dyspnea during or shortly after exercise were included in the study, regardless of whether they were athletes or had asthma; in this respect, the study reflects real-world practice. In conclusion, the characteristics of airway obstruction, such as wheezing on auscultation, ΔFEV1/FVC ≥ 10%, and ΔFEF25–75% ≥ 25% after the ECT, may be useful for the diagnosis of EIB, particularly in individuals with a ΔFEV1 of 10–15%. For EIB, a higher cut-off value, possibly ΔFEV1 ≥ 13.5%, should be considered as the diagnostic criterion.
A roadmap for the computation of persistent homology

Persistent homology (PH) is a method used in topological data analysis (TDA) to study qualitative features of data that persist across multiple scales. It is robust to perturbations of input data, independent of dimensions and coordinates, and provides a compact representation of the qualitative features of the input. The computation of PH is an open area with numerous important and fascinating challenges. The field of PH computation is evolving rapidly, and new algorithms and software implementations are being updated and released at a rapid pace. The purposes of our article are to (1) introduce theory and computational methods for PH to a broad range of computational scientists and (2) provide benchmarks of state-of-the-art implementations for the computation of PH. We give a friendly introduction to PH, navigate the pipeline for the computation of PH with an eye towards applications, and use a range of synthetic and real-world data sets to evaluate currently available open-source implementations for the computation of PH. Based on our benchmarking, we indicate which algorithms and implementations are best suited to different types of data sets. In an accompanying tutorial, we provide guidelines for the computation of PH. We make publicly available all scripts that we wrote for the tutorial, and we make available the processed version of the data sets used in the benchmarking.

Electronic Supplementary Material: The online version of this article (doi:10.1140/epjds/s13688-017-0109-5) contains supplementary material.

Introduction

In this tutorial, we give detailed guidelines for the computation of persistent homology and for several of the functionalities that are implemented by the libraries in Table 2 in the main manuscript. We first give some advice on how to install the various libraries, and we then give guidelines for how to compute PH for every step of the pipeline in Fig. 3 of the main paper.
We explain how to compute PH for networks with the weight rank clique filtration (WRCF); for point clouds with the VR, alpha, Čech, and witness complexes; and for image data sets with cubical complexes. We then give guidelines for visualizing the outputs of the computations and for computing the bottleneck and Wasserstein distances with Dionysus and Hera. In addition to the bottleneck and Wasserstein distances, there are also other tools (such as persistence landscapes and confidence sets) that are useful for statistical assessment of barcodes, but we do not discuss them here, as there are already comprehensive tutorials [4,5] for the packages that implement these methods. All MATLAB scripts written for this tutorial are available at https://github.com/n-otter/PH-roadmap/tree/master/matlab. In Fig. 1, we give instructions for how to navigate this tutorial. Many tears and much sweat and blood were spent learning about the different libraries, writing this tutorial, and the scripts. If you find this tutorial helpful, please acknowledge it.

Installation

In this section, we give guidelines on how to get and/or install the software packages.

Dionysus

The code for Dionysus is available at http://www.mrzv.org/software/dionysus/get-build-install.html, where one can also find information on dependencies and how to build the library. The library is written in C++, but it also supports python bindings (i.e., there is a python interface to some of the functionalities that are implemented in the library). Depending on the machine on which one is building Dionysus, the python bindings can create some issues. Additionally, from the perspective of performance, it is better to directly use the C++ code. If one wishes to build the library without the python bindings, one needs to delete the bindings directory and also to delete the (last) line add_subdirectory (bindings) in the file CMakeLists.txt.
If one seeks to compute the bottleneck or Wasserstein distances with the library (see Section 7), then before building the library one needs to amend a mistake in the bottleneck-distance.cpp script, as follows: in the subdirectory examples, one finds the file bottleneck-distance.cpp, in which one needs to uncomment the following line (which occurs towards the end of the file):

std::cout << "Distance: " << bottleneck_distance(dgm1, dgm2) << std::endl;

Now one can build the library as follows (from the directory in which the CMakeLists.txt file is):

$ mkdir build
$ cd build
$ cmake ..
$ make

DIPHA

The DIPHA library is available at https://github.com/DIPHA/dipha. One can build the library as follows (from the directory in which the CMakeLists.txt file is):

$ mkdir build
$ cd build
$ cmake ..
$ make

GUDHI

The GUDHI library is available at https://gforge.inria.fr/frs/?group_id=3865. Information about dependencies and how to build the library is available at http://gudhi.gforge.inria.fr/doc/latest/installation.html. One can build the library in a similar way as explained for DIPHA. We note that a python interface was released with the most recent version (at the time of this writing) of the GUDHI library; in this tutorial, we give instructions on how to use the C++ implementation, and we point readers who are familiar with python to the documentation available at http://gudhi.gforge.inria.fr/python/latest/.

JavaPlex

The JavaPlex library does not require installation or to be built, and all implementations that we listed in Table 2 in the main text can be found in the directory matlab-examples_x.y.z (where x.y.z stands for the version number). This directory, and the accompanying tutorial, can be downloaded at https://github.com/appliedtopology/javaplex/releases/. All of the scripts in matlab-examples_x.y.z are written in MATLAB.

Hera

The Hera library is available at https://bitbucket.org/grey_narn/hera.
The root folder includes two subfolders: geom_bottleneck contains the source code for computing the bottleneck distance, and geom_matching contains the source code for computing the Wasserstein distance. One can build the library by running the following commands in each of the subfolders geom_bottleneck/ and geom_matching/wasserstein:

$ mkdir build
$ cd build
$ cmake ..
$ make

jHoles

Ideally, the jHoles library should be available for download at http://cuda.unicam.it/jHoles. However, this website is often down, so the best way to obtain the library is to contact Matteo Rucco, who is the corresponding author of the companion paper [3]. The library does not require installation or building.

Perseus

A compiled version of the Perseus library is available at http://people.maths.ox.ac.uk/nanda/perseus/. Those wishing to build the library from source code can find the source code at the same website.

Ripser

The Ripser library is available at https://github.com/Ripser/ripser. One can build the library by running make in the folder that includes the Makefile. Note that Ripser supports several options that can be passed to make. See https://github.com/Ripser/ripser for more information.

Computation of PH for networks

In this section, we explain how to compute PH for undirected weighted networks. We represent the nodes of a network with N nodes using the natural numbers 1, . . . , N.

Sample network data

To create weighted networks, one can use the script fractal_weighted.m available at https://github.com/n-otter/PH-roadmap/tree/master/matlab/synthetic_data_sets_scripts. We recall that a fractal network is determined by three non-negative integers n, b and k (see the main paper for details).
The script fractal_weighted.m takes four parameters as input: (1) a natural number n (the total number of nodes of the graph is 2^n); (2) a natural number b (the number of nodes of the initial network); (3) a natural number k (the connection density parameter); and (4) a string that indicates how weights are associated to edges: this is either 'random' or 'linear' (see the main paper for details).

Example: The command

>> fractal_weighted(4,2,1,'random')

saves the files fractal_4_2_1_random.txt and fractal_4_2_1_random_edge_list.txt, where the first file is a text file storing the weighted adjacency matrix of a fractal network with 16 nodes and a random weight on every edge, while the second file is a text file storing the weighted edge list of the same network.

Adjacency matrix versus edge-list file

We assume that a network is given either as an adjacency matrix in a MAT-file or as a text file with a list of weighted edges. A typical entry on one line of such a file is a triple "i j w_ij", where i and j are the nodes incident to the edge and w_ij is the weight of the edge. We call such a file an "edge-list file". We provide the script adj_matrix_to_edge_list.m to obtain edge-list files from adjacency matrices. (Note that we also provide the script edgelist_to_point_cloud_dist_mat.m to obtain distance matrices from edge-list files, where the distances between nodes are computed using shortest paths.)

PH with the WRCF (with jHoles)

Using edge-list files, we compute PH by constructing the weight rank clique filtration (WRCF) with the library jHoles. Here we give instructions on how to compute PH with Version 3 of jHoles. One needs to run the following command in the terminal:

$ java -Xmx<value> -jar jHoles.jar input-file output-file1 output-file2

where -Xmx<value> is optional and can be used to set the maximum heap size of the garbage collector (we recommend doing this for networks with a large number of nodes or high density).
For example, -Xmx4g sets the maximum heap size to 4 gigabytes. The file input-file is the edge-list file of the network, output-file1 is a file in which information (e.g., number of edges, average degree, etc.) about the network is saved, and output-file2 is the file in which the intervals are saved.

Example: With the command

$ java -Xmx4g -jar jHoles.jar fractal_4_2_1_random_edge_list.txt fractal_info.txt \
fractal_intervals.txt

one computes PH with the WRCF for the fractal network with parameters (n, b, k) = (4, 2, 1) and a random weight on every edge. The persistence diagram is saved in the file fractal_intervals.txt, where representative cycles are also given for every interval, while the file fractal_info.txt stores information such as the average degree, density, or average clustering. Note that the backslash in the above command indicates that the command continues on the next line, and should therefore be omitted if one writes the whole command on the same line.

Networks as point clouds

One can construe a connected weighted network as a finite metric space and then compute PH by using one of the methods from Section 4. We now explain how to compute a distance matrix from an undirected weighted network using information about shortest paths between nodes. If two nodes i and j are connected by an edge with weight w, we set the distance between i and j to be 1/w. Otherwise, we define the distance between i and j to be the minimum of the lengths of all paths between them, where the length of a path is the sum of the inverses of the weights of the edges in the path. One can compute this distance matrix with the script shortest_paths.m. As input, it takes an edge-list file. (See Section 3.2 for how to obtain an edge-list file from a MAT-file that stores an adjacency matrix.) The output of the script is a text file in which each line gives the entries of a row of the distance matrix.
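The shortest-path construction just described can be sketched in a few lines. The following is an illustrative Python reimplementation of the idea (not the shortest_paths.m script itself), using the Floyd–Warshall algorithm with edge lengths 1/w; node labels 1, . . . , N follow the convention of the edge-list files.

```python
import math

def network_distance_matrix(n, edges):
    """Distance matrix for an undirected weighted network: nodes joined by
    an edge of weight w are at distance 1/w; other pairs are at the
    shortest-path distance, where each traversed edge of weight w
    contributes length 1/w. Nodes are labeled 1..n; edges is a list of
    (i, j, w_ij) triples, as in an edge-list file."""
    INF = math.inf
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j, w in edges:
        d[i - 1][j - 1] = d[j - 1][i - 1] = 1.0 / w
    # Floyd-Warshall relaxation over all intermediate nodes k
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```

For a disconnected network, unreachable pairs keep the value infinity, matching the extended-metric-space convention mentioned below.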
(Note that if a network is not connected, then one sets the distance between nodes in two distinct components of the network to be infinite, and one thereby obtains an extended metric space.) Additionally, using tools like multidimensional scaling, one can convert a distance matrix into a finite set of points in Euclidean space. This is handy if one wants to use a library that does not support distance matrices as an input type. We provide the script distmat_to_pointcloud.m to obtain a point cloud from a distance matrix using multidimensional scaling.

Computation of PH for point clouds

In this section, we explain how to compute PH for finite metric spaces.

Sample point-cloud data

We provide scripts to create point-cloud data. These are available at https://github.com/n-otter/PH-roadmap/tree/master/matlab/synthetic_data_sets_scripts. To create point clouds in R^3 and R^4, one can use the scripts klein_bottle_imm.m and klein_bottle_emb.m, respectively.

Example:

>> klein_bottle_imm(5)

samples 25 points uniformly at random from the image of the immersion of the Klein bottle in R^3 and saves the point cloud in the text file klein_bottle_pointcloud_25.txt, with each line storing the coordinates of one point, as well as in the MAT-file klein_bottle_25.mat.

Distance matrices versus point clouds

Given a finite set of points in Euclidean space, one can compute an associated distance matrix. To get a distance matrix from a point cloud, we provide the script pointcloud_to_distmat.m.

Example:

>> pointcloud_to_distmat('klein_bottle_pointcloud_25.txt')

computes the distance matrix for the 25 points sampled from the Klein bottle and saves it in the text file klein_bottle_pointcloud_25_distmat.txt. Conversely, a distance matrix can yield a finite set of points in Euclidean space by using a method such as multidimensional scaling. We implement such a conversion in the script distmat_to_pointcloud.m.
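The idea behind such a conversion is classical multidimensional scaling. The following Python sketch (an illustration of the method, not the MATLAB script distmat_to_pointcloud.m itself) double-centers the squared distance matrix and embeds via its top eigenvectors; for points that genuinely lie in Euclidean space, the pairwise distances are recovered exactly.

```python
import numpy as np

def distmat_to_pointcloud(D, dim):
    """Classical multidimensional scaling: embed an n-by-n distance
    matrix D as n points in R^dim."""
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]       # keep the largest eigenvalues
    L = np.sqrt(np.clip(vals[idx], 0.0, None))
    return vecs[:, idx] * L                  # coordinates of the n points
```

For a non-Euclidean distance matrix (e.g., shortest-path distances on a network), the negative eigenvalues are clipped, so the embedding is only approximate.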
The standard input for the construction of the VR complex is a distance matrix. The software packages that take a distance matrix as input are Perseus and DIPHA; the other packages instead take a set of points in Euclidean space as input. Note that Perseus can also take a set of points in Euclidean space as input, but the implementation does not allow one to set a bound on the upper dimension in the computation of the VR complex, so this implementation is impractical to use for most data sets. We next compute the VR complex with each library. With most of the libraries, one has to indicate the maximum filtration value for which one wants to compute the filtered simplicial complex. We set this value to the maximum distance between any two points. Note, however, that a smaller value often suffices. To compute the maximum distance given a text file input-file with the coordinates of a point on each line, one can type the following in MATLAB:

>> A=load(input-file);
>> D=pdist(A);
>> D=squareform(D);
>> M=max(max(D));

The maximum distance is then given by M. Similarly, if one is given a text file input-file that stores a distance matrix, then one can compute the maximum distance by typing

>> D=load(input-file);
>> M=max(max(D));

Dionysus

The library Dionysus takes a point cloud as input. We can use either the standard or the dual algorithm. For the former, we use the command

$ ./rips-pairwise -s max-dimension -m max-distance \
  -d output-file input-file

where the file ./rips-pairwise is in the dionysus/build/examples/rips directory, max-dimension is the maximum dimension for which we want to compute the simplicial complex, max-distance is the maximum parameter value for which we want to compute the complex, output-file is the file to which the intervals are written, and input-file is a text file with the coordinates of a point on each line.
To compute the intervals with the dual algorithm, we use the command

$ ./rips-pairwise-cohomology -s 3 -m 7.0513 -p 2 -d klein_bottle_25_output.txt \
  klein_bottle_pointcloud_25.txt

DIPHA

The DIPHA library reads and writes binary files. One can convert the text file that stores the distance matrix (see Section 4.2 for how to obtain a distance matrix from a point cloud) into a binary file of the right input type for DIPHA by using the file save_distance_matrix.m provided by the developers of DIPHA, which can be found in dipha-master/matlab. One then runs the dipha executable through MPI:

$ mpiexec -n N ./dipha [options] input-file output-file

where N is the number of processes, and the above options are again available.

GUDHI

The library GUDHI takes a point cloud as input. We run the command

$ ./rips_persistence -r max-distance -d max-dimension \
  -p prime -o output-file input-file

where max-distance, max-dimension, and prime are as above, output-file is a file to which the intervals are written, and input-file is a text file with the coordinates of a point on each line.

JavaPlex

The library JavaPlex takes a point cloud as input. Before using this library, one has to run the script load_javaplex.m, which is located in the JavaPlex directory matlab_examples. We wrote the script vietoris_rips_javaplex.m to compute PH with the VR complex in JavaPlex. This script takes four parameters as input: (1) the name of the text file with the point cloud; (2) the maximum dimension for which we want to compute the simplicial complex; (3) the maximum filtration step for which we want to compute the VR complex; and (4) the number of filtration steps for which we compute the VR complex. The script saves text files containing the barcode intervals, one file for each homological dimension. These are the files ending with i_right_format.txt, where i indicates the homological dimension.

Perseus

The library Perseus takes a distance matrix as input (see Section 4.2 for how to obtain a distance matrix from a point cloud).
One has to prepare the input file by adding two lines at the beginning of the file that stores the distance matrix. In these two lines, N is the number of points (and hence the number of rows (or columns) of the distance matrix), first-step is the value for the first filtration step, step-increment is the step size between any two filtration steps, steps is the total number of steps, and max-dimension is the maximum dimension for which we compute the complex. We can now compute PH with the command

$ ./perseus distmat input-file output-file

in the terminal, where input-file is the name of the input file and output-file is the name of the file in which the intervals will be saved. Perseus creates a series of files named output-file_i.txt for i ∈ {0, 1, . . . }, where output-file_i.txt contains the intervals for homological degree i.

Ripser

The library Ripser takes both a point cloud and a distance matrix as input, and it supports four different format types for the distance matrix. (See https://github.com/Ripser/ripser#description for more details.) One of the supported input types for the distance matrix is the format accepted by DIPHA (see Section 4.3.3). We run the command

$ ./ripser --format input-type --dim max-dimension [options] input-file

where input-type is a string that indicates the type of the input, max-dimension is the maximum dimension of persistent homology that is computed (note the difference with respect to the other libraries, for which one indicates the maximum dimension of the complex), and options includes --modulus p (with which one can choose the coefficient field F_p). Note that one has to enable this option at compilation (see Section 2.8). The output of the computation is written to the standard output.

Example:

$ ./ripser --format dipha --dim 2 klein_bottle_25.bin > klein_bottle_25_out.log

where klein_bottle_25.bin is the input file from Section 4.3.3 and the standard output is saved to the file klein_bottle_25_out.log.
Alpha

In this section, we explain how to compute PH with the alpha complex with Dionysus and GUDHI.

Dionysus

One can compute PH with the alpha complex for finite subsets of points in R^2 or R^3. For point clouds in R^2, one runs the command

$ ./alphashapes2d < input-file > output-file

where input-file is a text file with the coordinates of a point in R^2 on each line and output-file is the file to which the intervals in the persistence diagram are written. For point clouds in R^3, one runs the command

$ ./alphashapes3d-cohomology input-file output-file

where input-file and output-file are as above. (There is also a script ./alphashapes3d, but this script has a bug and does not compute.)

GUDHI

GUDHI supports both point clouds in R^2 and in R^3. To compute PH with the alpha complex, one can use the script ./alpha_complex_persistence, which is in the folder example/Persistent_cohomology. The script takes as input an OFF file, as described at http://www.geomview.org/docs/html/OFF.html. Namely, the first lines of the input file specify embedding-dimension (the dimension d of the Euclidean space) and V (the number of points); all other lines store the coordinates x_i1, . . . , x_id of the points. One then computes PH by running the following command in the terminal:

$ ./alpha_complex_persistence -p prime -o output-file input-file

where prime is a prime number p and indicates that one does computations over the coefficient field F_p.

Example: The first two lines of the file klein_bottle_25_input.txt are as follows:

OFF
3 25 0 0

Čech

One can compute PH with the Čech complex for a point cloud in Euclidean space using the implementation in Dionysus. One runs the command

$ ./cech-complex < input-file > output-file

where input-file is a text file whose first lines specify embedding-dimension (the dimension d of the Euclidean space) and max-dimension (the dimension up to which we compute the complex); all other lines store the coordinates x_i1, . . .
, x_id of the points.

Witness

One can compute the witness complex using JavaPlex. Recall that before using this library, one has to run the script load_javaplex.m, which is located in the JavaPlex directory matlab_examples. Given a point cloud S, the witness complex is a simplicial complex constructed on a subset L ⊆ S of so-called "landmark" points. As we explained in the main manuscript, there are several versions of the witness complex. The ones implemented in JavaPlex are the weak Delaunay complex, which is also just called the "witness complex", and parametrized witness complexes, which are also known as "lazy witness complexes". Given a point cloud L, one can compute the witness complex or the lazy witness complex using the scripts witness_javaPlex.m and lazy_witness_javaPlex.m. The script witness_javaPlex.m takes four parameters as input: (1) the name of the text file with the point cloud; (2) the maximum dimension for which we want to compute the simplicial complex; (3) the maximum filtration value for which we want to compute PH; and (4) the number of filtration steps for which we compute the complex. The script lazy_witness_javaPlex.m takes six parameters as input: (1) the name of the text file with the point cloud; (2) the maximum dimension for which we want to compute the simplicial complex; (3) the number of landmark points; (4) how the landmark points are selected (either 'random' or 'maxmin'); (5) the value for the parameter ν; and (6) the number of filtration steps for which we compute the complex. See the scripts for further details on the input parameters, and see the main manuscript and the JavaPlex tutorial [2] for further details on witness complexes.

Computation of PH for image data

In this section, we discuss how to compute PH for image data using cubical complexes. The packages DIPHA, Perseus, and GUDHI support the construction of filtered cubical complexes from grey-scale image data.
As an example of grey-scale image data, we use the data set "Nucleon" from the Volvis repository [1]. This is a 3-dimensional grey-scale image data set; one is given a 3-dimensional lattice of resolution 41 × 41 × 41, where each lattice point is labeled by an integer that represents the grey-scale value for the voxel anchored at that lattice point. The .raw data file from [1] is binary, and it stores 8 bits for each voxel. We read the .raw data file in MATLAB as follows:

>> fileID=fopen('nucleon.raw','r');
>> A=fread(fileID,41*41*41,'int8');
>> B=reshape(A,[41 41 41]);

so that B is a 3-dimensional array of size 41 × 41 × 41 that stores the grey-scale values. Note for this example that the cubical complex constructed in DIPHA and GUDHI has dimension 3 and size 531441, while the cubical complex constructed with Perseus has dimension 3 and size 571787. This is because DIPHA and GUDHI implement the optimized way to represent a cubical complex that was introduced in [7]. However, all three libraries implement the same algorithm for the computation of PH from cubical complexes. When interpreting the results of the computations with GUDHI and Perseus, one needs to take into account the rescaling of the grey values (see Section 5.2).

DIPHA

To save the array in a file that can be given as input to DIPHA, one can use the MATLAB script save_image_data.m provided by the developers of DIPHA, which can be found in dipha-master/matlab. One gives the array B as input, together with a name for the input file. One then proceeds in a similar way as for the computation of the VR complex (see Section 4.3).

Perseus

To compute PH with cubical complexes with Perseus, one needs to rescale the grey values so that all grey values are positive, because Perseus does not allow cells to have negative birth times. We wrote the script save_image_data_perseus.m to save the array in a file that can be given as input to Perseus. This script takes as input the array B and a name for the input file for Perseus.
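As an aside, the MATLAB snippet above that reads the .raw file has a direct numpy analogue; this is an illustrative sketch (not one of the tutorial's scripts), using Fortran (column-major) order so that the resulting array matches MATLAB's reshape convention.

```python
import numpy as np

def read_raw_volume(path, shape=(41, 41, 41), dtype=np.int8):
    """Read a binary .raw grey-scale volume (8 bits per voxel) into a
    3-D array, mirroring the MATLAB fread/reshape snippet above."""
    A = np.fromfile(path, dtype=dtype, count=int(np.prod(shape)))
    # order='F' matches MATLAB's column-major reshape
    return A.reshape(shape, order='F')
```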
$ ./perseus cubtop input-file output-file

where input-file is the text file prepared with the script save_image_data_perseus.m and output-file is the name of the text file to which the barcode intervals are written.

GUDHI

To compute PH with GUDHI, one can use the same input file as for Perseus. We run the command

$ ./Bitmap_cubical_complex input-file

and the output is then saved to a file with the name input-file_persistence.

Images as point clouds

As we discussed in Section 5.1 of the main manuscript, one can construe a collection of images as a metric space, and one can then apply the methods for computing PH for point clouds that we discussed in Section 4.

Barcodes and persistence diagrams

Once we have computed the intervals, we plot the barcodes and persistence diagrams. The format of the output files varies widely across the different packages. To address this issue and to interpret the results of the computations, we first need to change the format of the output files to a common format. In the unified format, each homological dimension has an accompanying text file in which we store the intervals. In this file, the entry in line i has the form "x_i y_i", where x_i is the left endpoint of the interval and y_i is the right endpoint. If the interval is infinite, we set y_i = −1.

• Dionysus: We provide the script dionysus_reformat_output.m to obtain the right format. This script takes two parameters as input, namely the name of the text file to which the output of the computations with Dionysus was stored, and a string of five letters indicating the type of file: "dcech" for the output of the PH computation with the Čech complex; "alpha" for the output of the PH computation with the alpha complex; "VR-st" for the output of the PH computation with a Vietoris–Rips complex and the standard algorithm; and "VR-co" for the output of the PH computation with a Vietoris–Rips complex and the dual algorithm.

• DIPHA: We provide the script dipha_reformat_output.m to obtain the right format.
The script takes as input the name of the binary file to which the output of the computations with DIPHA was stored.

• GUDHI: We provide the script gudhi_reformat_output.m to obtain the right format. The script takes as input the name of the text file to which the output of the computations with GUDHI is stored.

• JavaPlex: The three scripts vietoris_rips_javaplex.m, lazy_witness_javaPlex.m, and witness_javaPlex.m that we wrote already give this type of output.

• jHoles: We provide the script jholes_reformat_output.m to obtain the right format. The script takes as input the name of the text file to which the barcode intervals obtained with jHoles were stored.

• Perseus: The output is already in the right format.

• Ripser: The script ripser_reformat_output.m gives the right format.

We can then plot barcodes using the script plot_barcodes.m and plot persistence diagrams using the script plot_pdg.m. Both scripts take as input (1) the name of a text file storing the intervals for PH in a certain dimension and (2) the title for the plot.

Example:

>> plot_barcodes('klein_bottle_25_output_1.txt','Klein bottle alpha dim 1')

produces the plot in Fig. 2(a) and saves it as the .pdf file klein_bottle_25_output_1_barcodes.pdf. In the plots in Fig. 2 there are no infinite intervals, so we give an additional example to illustrate how infinite intervals are plotted with our scripts.

Statistical interpretation of barcodes

Once one has computed barcodes, one can interpret the results using available implementations of tools (such as the bottleneck distance, the Wasserstein distance, and persistence landscapes) that are useful for statistical assessment of barcodes. In this section, we give instructions for how to compute the bottleneck and Wasserstein distances with Dionysus and Hera. See the tutorials for the Persistence Landscapes Toolbox [4] and the TDA package [5] for instructions on how to use these packages.
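Both distance tools below read persistence diagrams from text files in the unified format of Section 6 (one "x_i y_i" pair per line, with y_i = -1 for infinite intervals). As a small illustration of that convention, here is a hypothetical Python helper (not one of the tutorial's scripts) that writes a list of intervals in this format:

```python
import math

def write_unified_intervals(intervals, out_file):
    """Write (birth, death) intervals in the unified format: one
    'x_i y_i' pair per line, with y_i = -1 for infinite intervals."""
    with open(out_file, 'w') as f:
        for x, y in intervals:
            y_out = -1 if math.isinf(y) else y
            f.write(f"{x} {y_out}\n")
```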
For ease of reference, we recall the definition of the Wasserstein distance from the main manuscript:

Definition 1. Let p ∈ [1, ∞]. The pth Wasserstein distance between X and Y is defined as

W_p(X, Y) = ( inf_{φ: X→Y} Σ_{x∈X} d(x, φ(x))^p )^{1/p}

for p ∈ [1, ∞) and as

W_∞(X, Y) = inf_{φ: X→Y} sup_{x∈X} d(x, φ(x))

for p = ∞, where d is a metric on R^2 and φ ranges over all bijections from X to Y.

Bottleneck distance

The bottleneck distance is the Wasserstein distance for p = ∞ and d = L^∞ (see Definition 1).

Dionysus

To compute the bottleneck distance between two barcodes, one can use the script bottleneck-distance.cpp (appropriately modified as explained in Section 2.1) in the Dionysus subdirectory examples. This script requires the right endpoints of infinite intervals to be denoted by inf; additionally, if there are intervals of length 0, the script will compute the wrong distance. To make sure that no intervals of length 0 are in the input files and that the intervals of infinite length are in the right format, one can use the script bottleneck_dionysus.m. This script takes as input two text files corresponding to two persistence diagrams in the unified format (see Section 6), with one interval per line, as follows:

>> bottleneck_dionysus('pdg1','pdg2')

and saves the persistence diagrams to two files called diagram1.txt and diagram2.txt in the current directory. Now one can compute the bottleneck distance as follows:

$ ./bottleneck-distance diagram1.txt diagram2.txt

Hera

To compute the bottleneck distance with Hera, one can use the script bottleneck_dist in the subdirectory geom_bottleneck/build/example. This script requires the right endpoints of infinite intervals to be denoted by -1; this corresponds to the convention in the unified format (see Section 6). One can compute the bottleneck distance as follows:

$ ./bottleneck_dist diagram1.txt diagram2.txt error

where diagram1.txt and diagram2.txt are two text files corresponding to two persistence diagrams in the unified format, and error is a nonnegative real number that is an optional input argument.
If error is nonzero, then instead of the exact distance, an approximation to the bottleneck distance with relative error error is computed. (See the explanation in [6].) This option can be useful when dealing with persistence diagrams that include many off-diagonal points, as it can speed up computations.

Wasserstein distance

With Dionysus, one can compute the Wasserstein distance for d = L^∞ and p = 2, and one can compute this distance for other values of p with a straightforward modification of the source code. With Hera, one can compute the Wasserstein distance for any choice of metric d = L^q with q ∈ [1, ∞], and for any p ∈ [1, ∞).

Dionysus

The script bottleneck-distance.cpp computes both the bottleneck distance and the Wasserstein distance for d = L^∞ and p = 2, so one can follow the instructions in Section 7.1.1 to compute the Wasserstein distance for these choices. If one wishes to compute the Wasserstein distance for other values of p, one has to modify the script bottleneck-distance.cpp as follows. Towards the end of the file, in the line

std::cout << "L2-Distance: " << wasserstein_distance(dgm1, dgm2, 2) << std::endl;

one can substitute the third input of the function wasserstein_distance with any number p ∈ [1, ∞).

Hera

With Hera, one can compute the approximate Wasserstein distance discussed in [6]. One can use the script wasserstein_dist in the subdirectory geom_matching/wasserstein/build. This script requires the right endpoints of infinite intervals to be denoted by -1; this corresponds to the convention in the unified format (see Section 6). One can compute the approximate Wasserstein distance as follows:

$ ./wasserstein_dist power error distance diagram1.txt diagram2.txt

where power is the value for p, error is the relative error, distance is the value for q (where d = L^q is the employed metric), and diagram1.txt and diagram2.txt are two text files corresponding to two persistence diagrams in the unified format.
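For intuition about Definition 1 on very small diagrams, the distances can be computed by brute force over all bijections. The following Python sketch is purely illustrative (it omits matching points to the diagonal, which the real implementations handle, and so requires equal-size diagrams); it is not a substitute for Dionysus or Hera.

```python
from itertools import permutations

def wasserstein_brute_force(X, Y, p=2.0, q=float('inf')):
    """pth Wasserstein distance between two small persistence diagrams
    X and Y (lists of (birth, death) pairs) per Definition 1, minimising
    over all bijections phi: X -> Y by brute force. The ground metric is
    d = L^q; p = inf gives the bottleneck distance. Diagonal matching is
    omitted, so X and Y must have the same number of points."""
    inf = float('inf')

    def d(a, b):
        dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
        return max(dx, dy) if q == inf else (dx ** q + dy ** q) ** (1.0 / q)

    assert len(X) == len(Y), "this sketch needs equal-size diagrams"
    if p == inf:  # bottleneck: min over bijections of the max cost
        return min(max(d(x, y) for x, y in zip(X, perm))
                   for perm in permutations(Y))
    return min(sum(d(x, y) ** p for x, y in zip(X, perm))
               for perm in permutations(Y)) ** (1.0 / p)
```

Since the number of bijections grows factorially, this is only feasible for a handful of points; the exact and approximate algorithms in Dionysus and Hera scale far better.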
Modal behavior of post low velocity impact flax/epoxy composite structures

Natural fibers are increasingly used in polymer composites with the intent of minimizing environmental impact. Bio-composite materials are increasingly being used in industrial transport structures, including aerospace and automotive. Natural fiber reinforced composites with performance equivalent to that of glass fiber composites have a higher amount of fiber, resulting in less pollution and much lighter weight, which reduces fuel consumption. They also offer the ability to design complex parts and structures with high mechanical properties. Barely visible impact damage (BVID) represents a serious threat to the efficiency of bio-composite materials. In this paper, modal analysis was used to investigate and evaluate the impact-induced damage of flax/epoxy composite plates. The vibratory behavior is an indicator used for the structural health monitoring of composite materials. Natural frequencies, damping loss factors and displacement patterns, named mode shapes, are studied in order to detect damage and anticipate perilous consequences through time.

Introduction

Lately, there has been major interest and strong technological development concerning biocomposites. The call for environmental friendliness has prompted the use of natural fibers as composite reinforcement. These materials are a good alternative to synthetic polymer composites, with less environmental impact and an important weight saving. They are increasingly used in different fields, namely transport, construction and sports [1]. Due to their ecological benefits and mechanical merits, the use of flax fibers has been significantly developed for various applications in recent years. Flax fibers are extracted from the skin of the stem of the flax plant. France is the world's largest grower and manufacturer of flax fibers, with 50 % of the global production.
Besides the environmental benefits, the interest in flax is explained by the availability of this plant, the length of the elementary fibers and their important mechanical properties compared to synthetic fibers such as glass fiber [2], [3]. These characteristics can be explained by the structural function and the shape of the flax fiber. Resistance to loading and bending is ensured by the fiber bundles distributed on the outside of the stem. Therefore, flax reinforced composite materials are considered as an alternative and are used in industrial transport (automotive, marine and aerospace) and in other sectors such as civil construction and sports. The flax stem is composed of three layers. The outer layer of bark protects the plant from external aggression, except for water and nutrients [4]. However, the fiber properties depend on several parameters (growth conditions, retting and fiber extraction). Controlling these parameters is therefore compulsory for the manufacturing of flax fiber-reinforced polymer composite materials. The quality of impregnation between fibers and matrix is a fundamental parameter for the choice of the resin. The use of flax fibers as reinforcement requires taking into consideration their sensitivity during the manufacturing process [5]. Epoxy is a matrix from the thermoset family. It contributes strength, stiffness, durability and chemical resistance to a composite. Epoxy resin is compatible with flax fibers: it can easily penetrate these fibers and impregnate them. Similarly to conventional composite materials, bio-based composite materials have low impact resistance because of their limited strength properties in all directions. In fact, natural fiber composites are reported to exhibit limited resistance to impact loading. Low-velocity impact is considered a crucial threat to flax fiber-based composite structures [6], [7]. Impact loading can occur during manufacturing, maintenance and operation, through the projection of foreign particles.
Impact-induced damage can be invisible or barely visible, yet it significantly reduces the residual mechanical properties and may lead to the failure of the structure. Therefore, identifying damage in composite structures at the earliest possible stage of initiation, to prevent its further propagation, is essential; even the smallest structural changes need to be detected. Structural changes can be a local change of mass, damping, stiffness or flexibility of a structure. Recently, vibration-based non-destructive testing (NDT) diagnostics have become increasingly attractive due to their reliability and contact-less inspection of composite materials. In this paper, an approach using structural vibrations and modal analysis has been developed to inspect and detect impact-induced damage in flax/epoxy plates. This approach is able to detect barely visible or invisible impact damage in composite materials reinforced with natural flax fibers. The proposed detection approach involves the use of an accelerometer to measure the acceleration applied to the system, and a laser Doppler vibrometer to measure the sample's velocity. By analyzing the frequency-response functions, the impact damage sustained by different stratifications of flax/epoxy composites is inspected. Furthermore, samples are impacted at different impact energy levels. A drop-weight impact test is used to impact these composite samples.

Material

FLAXPREG T-UD is a range of pre-impregnated material based on an epoxy resin system and unidirectional (UD) flax fiber reinforcement, developed and supplied by the LINEO company and called FlaxTape™. In this study, FLAXPREG T-UD (110 g/m²) is used for the manufacturing of flax/epoxy samples. The reference adopted is a FlaxTape™ with 110 g of flax per m² and 50 % epoxy by total weight.
Prepregs are stored at −18 °C in order to maintain the storage conditions during the time allowed for manufacturing parts and to slow down the fiber/matrix chemical reaction.
Manufacturing process
The manufacturing process used in this study is film stacking by thermocompression. The cure cycle applied to the flax/epoxy composite materials is the one recommended by the material manufacturer. The cure of the prepreg was monitored by a semi-automatic thermo-press machine with a 35 × 35 cm² mold. The heating rate used between the isothermal temperatures was 3 °C/min. When consolidation began, a pressure of 3 bar was applied to the mold and maintained during the cooling phase. Two different stacking sequences are considered in this study, presented in Table 1. In order to qualify and compare post-impact damage mechanisms in the different laminates, specimens were manufactured according to these sequences. S1 was chosen in order to highlight the delamination mechanism, essentially due to the variation in tensile modulus between the different layers [8]. S2 is a quasi-isotropic composite formed with 14 films of pre-impregnated flax/epoxy; this stacking sequence is mainly used for carbon composites employed on some aircraft.
Impact test
The impact tests were performed with an instrumented drop-weight impact testing machine, Instron Dynatup (see Fig. 1). The impactor has a hemispherical tip of 20 mm diameter, with an energy range from 0.6 to 40 J. The impact energy is set by changing the release height and the mass of the impactor. The tests are performed on specimens of 240 mm × 80 mm, at room temperature. Specimens were clamped circumferentially in a pneumatically actuated clamping fixture. The incident impact energy is given by E = mgh, where m is the mass of the impactor, g is the gravitational acceleration and h is the drop height. In order to study the vibratory behavior of impacted bio-composite samples, the same energy is applied to all samples from the same batch.
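The incident-energy relation can be written as a small helper. The 2 kg impactor mass below is an assumption for illustration only (the study does not report it); the 0.8–1 J targets match the energies used in the tests.

```python
# Drop-weight energy relation E = m * g * h.
# The impactor mass here is illustrative, not the one used in the study.
G = 9.81  # gravitational acceleration, m/s^2

def impact_energy(mass_kg, height_m):
    """Incident kinetic energy (J) of a free-falling impactor."""
    return mass_kg * G * height_m

def drop_height(mass_kg, energy_j):
    """Release height (m) needed to deliver a target impact energy."""
    return energy_j / (mass_kg * G)

# Release heights for the 0.8, 0.9 and 1 J impacts with an assumed 2 kg impactor
heights = [drop_height(2.0, e) for e in (0.8, 0.9, 1.0)]
```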
Test setup and measurement conditions for modal analysis
The test apparatus was similar to the device described by Wojtowicki et al. [9], in which the specimen center is drilled in order to be clamped to the shaker with bolts, see Fig. 2. With that device, the holes can disturb the beam behavior due to the induced stress concentration. Therefore, a modified bench allowing the sample to be fixed between two small aluminum clamps, without external mass addition, was designed for this study. The clamping torque was set to 13 N·m. A PCB accelerometer was fixed at the driven end of the specimen, on the shaker side, in order to measure the acceleration a(t) imposed on the sample. The vibration velocity v(t) of the free end of the beam, at point M2, was then measured by a vibrometer sensor head Polytec OFV-503 coupled to a controller unit Polytec OFV-500, via a mirror inclined at 45° above M2. A real-time signal analyzer (Pulse LabShop, B&K) gave the frequency response function between v(t) and a(t).
Modal analysis results
Fig. 3 provides the mean response spectra obtained by a random excitation for three specimens of the stratification S1 impacted at three different energies: 0.8, 0.9 and 1 J. A shift of the resonant frequencies in the frequency response function (FRF) curves is noticed between the different specimens, and this shift depends on the applied impact energy. However, the shape of the FRF curves is the same regardless of the applied impact energy. The shift observed in the FRF responses is due to the influence of impact energy on impact-induced damage of the flax/epoxy composite samples.
Modal damping ratio
For modal damping ratio estimation, the half-power bandwidth method has been used. It is applied to the FRF measured from vibration tests of the structure. This method has been shown to be sufficiently accurate for many practical cases in which the damping ratio is less than 0.1, which has been verified in our case.
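As a minimal, self-contained sketch of the half-power bandwidth estimate, checked on a synthetic single-mode FRF rather than the measured spectra (the 120 Hz mode with ζ = 0.02 is an assumed example, not a value from the study):

```python
import numpy as np

def half_power_damping(freqs, frf_mag):
    """Half-power (-3 dB) bandwidth estimate of the damping ratio of the
    dominant FRF peak: zeta ~= (f2 - f1) / (2 * fn), where f1 and f2 are
    the frequencies at which the magnitude drops to peak / sqrt(2)."""
    i_peak = int(np.argmax(frf_mag))
    fn = freqs[i_peak]
    cutoff = frf_mag[i_peak] / np.sqrt(2.0)
    # walk outwards from the peak to the half-power crossings
    i1 = i_peak
    while i1 > 0 and frf_mag[i1] > cutoff:
        i1 -= 1
    i2 = i_peak
    while i2 < len(frf_mag) - 1 and frf_mag[i2] > cutoff:
        i2 += 1
    return (freqs[i2] - freqs[i1]) / (2.0 * fn)

# Check on a synthetic single-mode FRF with a known damping ratio
fn_true, zeta_true = 120.0, 0.02
f = np.linspace(50.0, 200.0, 20001)
r = f / fn_true
H = 1.0 / np.sqrt((1.0 - r**2) ** 2 + (2.0 * zeta_true * r) ** 2)
zeta_est = half_power_damping(f, H)
```

On this synthetic mode the estimate recovers the imposed ratio to within a fraction of a percent, consistent with the method's stated accuracy for damping ratios below 0.1.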
Since the damping ratio is low, a normalized damping ratio has been calculated and presented. Fig. 4 provides the results of the normalized damping ratio (%) for the first three natural modes as a function of impact energy. As shown in Fig. 4, the normalized damping ratio (%) increases when the impact energy increases. In other terms, the energy stored in the oscillation is dissipated more when the impact energy is higher.
Modal stiffness
To see how the damage affects the structure, another modal parameter has been studied. Modal stiffness indicates the capacity of the structure to resist deformation. This parameter is obtained using the damping ratio values. For every modal test, the modal stiffness has been calculated and presented. Fig. 5 provides the results of the modal stiffness processing according to stratification and impact energy. As shown in this figure, the normalized modal stiffness for S1 decreases when the impact energy increases. Thus, when the impact energy is higher, the induced damage is more significant and the structure is correspondingly more deteriorated: the mechanical properties are considerably attenuated and the capacity of the structure to resist deformation is weakened.
Conclusions
Modal analysis techniques have been used for damage identification in composite-based structural systems. Several parameters were studied to show the correlation between the modal response and damage. Natural frequency, damping ratio and modal stiffness are indicators of the health integrity of a structure. The main goal of this study was to verify whether vibratory analysis could testify to the presence of an invisible or barely visible impact damage in a flax/epoxy composite structure, and this goal was reached. The offset of the frequency response curves confirms the influence of the impact energy on flax/epoxy composite samples: a decrease in natural frequencies is observed when the impact energy increases.
For the damping ratio, the results are even more conclusive: the damping ratio (%) increases when the impact energy increases, i.e. the higher the impact energy, the greater the dissipated energy. Modal stiffness, in turn, decreases when the impact energy increases; namely, the capacity of the flax/epoxy composite samples to resist deformation diminishes with the damage severity in the structure. This study emphasizes the influence of damage severity in terms of modal analysis parameters. However, the current results of the study remain insufficient for the localization of these impact-induced damages, which will be the purpose of future work.
Psychosocial and organizational barriers and facilitators of meningococcal vaccination (MenACWY) acceptance among adolescents and parents during the Covid-19 pandemic: a cross-sectional survey
Background
This study aimed to identify differences and similarities among adolescents and parents in various psychosocial factors influencing meningococcal ACWY (MenACWY) vaccination acceptance. In addition, the impact of the Covid-19 pandemic was assessed, as well as the resulting organizational adjustments.
Methods
We conducted a cross-sectional survey among adolescents who attended the appointment for the MenACWY vaccination in South Limburg between May and June 2020, and their parents. Independent t-tests and the χ² test were performed to explore differences in psychosocial and organisational factors between adolescents and parents.
Results
In total, 592 adolescents (20%) and 1197 parents (38%) filled out the questionnaire. Adolescents scored lower on anticipated negative affect towards MenACWY vaccination refusal [t (985.688) = − 9.32; ρ < 0.001], moral norm towards MenACWY vaccination acceptance [t (942.079) = − 10.38; ρ < 0.001] and knowledge about the MenACWY vaccination and meningococcal disease [t (1059.710) = − 11.24; ρ < 0.001]. Both adolescents and parents reported a social norm favouring acceptance of childhood vaccinations, but adolescents scored higher [t (1122.846) = 23.10; ρ < 0.001]. The Covid-19 pandemic barely influenced the decision to accept the MenACWY vaccination: only 6% of the participants indicated that Covid-19 influenced their decision. In addition, the individual vaccination appointment was rated very positively. Most adolescents (71.5%) and parents (80.6%) prefer future vaccinations to be offered individually rather than in mass vaccination sessions.
Conclusions
This study provides an indication of which psychosocial and organisational factors should be addressed in future MenACWY vaccination campaigns.
Individual vaccination appointments for adolescents should be considered, taking the costs and logistical barriers into account.
Introduction
Following an outbreak of meningococcal disease caused by serogroup W (MenW:cc11) between 2015 and 2018 [1,2], MenACWY vaccination was offered to just over one million 14- to 18-year-olds in 2018 and 2019 in the Netherlands. This resulted in 865,000 vaccinated adolescents (86%): 84% within the vaccination campaign and 2% outside the campaign [3,4]. Since 2020, adolescents aged 14 years have been offered the MenACWY vaccination in the standard Dutch National Immunization Programme (NIP) [3,4]. Despite the Covid-19 pandemic in 2020, the MenACWY vaccination was offered with organizational adjustments in accordance with national and international guidelines. In the Netherlands, vaccination campaigns for adolescents are normally organized group-wise at public venues, such as sport centres [5]. Because of the social distancing recommendation, an alternative organisation of the campaign was needed. In addition, children and parents were not allowed to come to the appointment if they had, for example, a mild cold or if someone in the family had a fever. These factors might have had an impact on people's willingness to accept a vaccination. A discussion has been initiated about whether this global experience will solve the problem of vaccine hesitancy and vaccine refusal [6]. On the one hand, people might want to avoid other disease outbreaks on top of Covid-19, and the topicality of this infectious disease threat might strengthen people's experienced need for vaccinations preventing other infectious diseases. On the other hand, Covid-19 might be a reason to refuse or delay vaccines because of the fear of getting infected through contacts with others during the vaccination process. The pandemic might thus disrupt ongoing health care programs and the delivery of important health services, including vaccination campaigns [7].
In addition to these Covid-19 related factors, vaccination acceptance is affected by many other aspects and remains a complex phenomenon. Whether people decide to refuse, accept or delay vaccinations involves multiple contextual, vaccine-specific, and psychosocial factors [5,8]. Contextual factors include socio-economic status, policies and geographic barriers. Vaccine-specific factors include mode of administration and vaccination schedule. Psychosocial factors include influences arising from personal perception or from the social environment, such as past experiences and perceived social norm [5,8]. Examining public attitudes about the vaccine and the targeted disease can contribute to achieving high coverage of a new vaccine [9]. Such studies help us to understand the motivations, facilitators and barriers that affect vaccine acceptance among different population groups [9]. The most common reasons for accepting or refusing the MenACWY vaccination have been indicated in previous studies. Common factors related to acceptance are accessibility, recommendations (from healthcare professionals), social responsibility, perceived risk, having enough knowledge and having a positive attitude [9][10][11][12][13]. Common reasons related to refusal include not receiving enough information, low perceived risk, and infrastructural barriers [12,14]. Both parents and adolescents are often inadequately informed about the importance of vaccination during adolescence, such as the MenACWY vaccination [14]. Nowadays, more and more online resources are used to acquire information about vaccinations [14]. Studies examining the differences between adolescent and parental decision-making have mostly used qualitative methods with small sample sizes. These studies conclude that many adolescents adjust their beliefs and values to those of their parents when considering a vaccination [14,15].
Studies suggest that the factors influencing these decisions were mostly the same among adolescents and parents, although some indicated less knowledge of meningococcal disease among adolescents [9,15,16]. To provide tailored information and inform future campaigns, we need to quantitatively examine whether beliefs and other related factors, such as organizational barriers, differ between adolescents and parents, and which factors influence the decision-making in both groups. In this study, we first identify differences and similarities in various psychosocial factors influencing vaccination acceptance (e.g. attitude, knowledge and barriers) among both adolescents (aged 14 years) and parents. Second, we examine the possible impact of the Covid-19 pandemic on vaccination attitude and acceptance in both groups. Third, we study how adolescents and their parents experienced the newly introduced organizational aspects of the MenACWY vaccination due to ruling Covid-19 regulations.
Keywords: Meningococcal disease, MenACWY vaccination, Vaccination acceptance, Vaccination campaign, adolescent, parents
Study design
We performed a cross-sectional questionnaire study among adolescents (born in 2006) who attended the appointment for the MenACWY vaccination between May and June 2020, and their parents. Because of the necessity of the MenACWY vaccination, public health services in the Netherlands ensured that vaccination continued in an appropriate and Covid-19 adapted way. As an alternative to a mass event, adolescents received an invitation for a specific time to receive the MenACWY vaccination, with five minutes between appointments. About 3000 adolescents and their parents living in South Limburg received an invitation to fill out the online questionnaire. Invitations with information about the study were sent by e-mail or text message. Participants were assured of their privacy and the confidential handling of their answers.
Completing the questionnaire was voluntary and based on informed consent. Parents needed to give informed consent for themselves and for their child. After the invitation, two reminders were sent. The study was approved by the medical ethical committee of Maastricht University Medical Centre in Maastricht, the Netherlands (METC 2020-2261). The questionnaires were based on a theoretical framework developed by Visser et al. [17]. We removed two determinants that mainly play a role among healthcare professionals and added a measurement of omission bias (Fig. 1). The questionnaires comprised items measuring psychosocial (including attitudinal), personal and organizational factors, as well as the influence of Covid-19 and satisfaction with the vaccination appointment. Respectively, seven and eight items in the questionnaires for adolescents and parents addressed the influence of the Covid-19 pandemic on their decision to receive the MenACWY vaccination, the importance of vaccinations due to the coronavirus, fear of infection, willingness to get other vaccinations and whether they talked about their decision with others. Additionally, participants were asked to evaluate their vaccination appointment for the MenACWY vaccination. This part of the questionnaire consisted of three multiple-choice questions and two open questions focusing on the benefits of individual and mass vaccination. Parents only received these questions if they indicated that they had been present at the vaccination appointment. The questionnaire for the parents included overall the same constructs as the questionnaire for the adolescents, but with more demographic variables, such as employment status. Factors were measured with 5-point Likert scales with end-points labelled 1 = totally disagree and 5 = totally agree, unless otherwise indicated.
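The internal-consistency check used to combine Likert items into composite scores, and the effect-size measure applied in the analysis, can be sketched as follows. The function names and toy score arrays are illustrative, not taken from the study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents x items matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - sum_item_var / total_var)

def hedges_g(a, b):
    """Bias-corrected standardized mean difference (Hedges' g)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    s_pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                       / (na + nb - 2))
    d = (a.mean() - b.mean()) / s_pooled
    j = 1.0 - 3.0 / (4.0 * (na + nb) - 9.0)  # small-sample bias correction
    return j * d

# Perfectly consistent items give alpha = 1; identical groups give g = 0
alpha = cronbach_alpha([[1, 1], [2, 2], [3, 3], [4, 4], [5, 5]])
g = hedges_g([2, 3, 4, 5], [2, 3, 4, 5])
```

In the study, composites were retained when α > 0.60, and |g| values between 0.2 and 0.5 were read as a small effect, 0.5 to 0.8 as moderate, and ≥ 0.8 as large.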
In case of sufficient internal consistency (Cronbach's alpha α > 0.60 or Pearson correlation r > 0.50), items were combined into one single concept.
Statistical analysis
Frequencies were computed for sociodemographic characteristics. The data consisted of a number of child-parent dyads, and therefore we performed multivariate analysis of variance (MANOVA) to check the independence of both study samples.
Fig. 1 Theoretical model of Visser [17] including the changes we made
The MANOVA results showed no effect of the dyads, and thus the data from parents and adolescents were treated as independent. T-tests and the χ² test (adequate knowledge) were used to explore statistical differences in factors between adolescents and parents. Significance was set at ρ < 0.05. Hedges' g effect sizes were calculated to determine the magnitude of the differences between adolescents and parents. Differences between 0.2 and < 0.5 were interpreted as a small effect, between 0.5 and < 0.8 as moderate, and ≥ 0.8 as large [18]. Questions about immunisation services were analysed using descriptive analyses. SPSS statistical software (IBM, Armonk, USA, version 26) was used to analyse the data.
Study population
The MenACWY vaccination campaign in South Limburg led to a vaccination coverage of 79%: 3999 out of 5072 adolescents accepted the vaccination in May and June 2020. We were able to approach 2943 adolescents and 3186 parents. In total, 592 adolescents (response rate 20%) and 1197 parents (response rate 38%) filled out the questionnaire. Of the adolescents who returned the questionnaire, 318 (53.7%) were girls. Most adolescents were of Dutch origin (96.3%), had a mother and father of Dutch origin (88.5% and 91.7%, respectively) and were enrolled in high-level education (69.6%). The majority of adolescents had received the previous DTP and MMR vaccinations at the age of 9 years (n = 558, 94.3%), 133 (22.5%) adolescents had received a travel vaccination, and 84% of the girls had received the HPV vaccination.
Only a small number of adolescents (9.5%) knew someone with meningitis. Of the parents, the majority were mothers (85.4%). Nearly half of the parents were between 36 and 45 years of age (49.3%) or between 46 and 55 years (44.9%). In most cases both parents were employed (97.6%) and of Dutch origin. Almost half of the mothers and fathers had a high educational level (48.1% and 47.0%, respectively), meaning at least higher professional education. The majority of the parents had accepted the vaccinations offered to their child in the NIP (98.8%), and 17.3% of the parents indicated that their child had received a travel vaccination. Almost one third of the parents (28.3%) knew someone who had suffered from meningitis. Table 1 provides an overview of the psychosocial, attitudinal and organizational factors measured in this study and their internal consistency. Independent t-tests showed no differences between adolescents and parents on attitude, decisional uncertainty, moral norm for others, omission bias, and general beliefs (Table 2). Parents as well as adolescents indicated a positive attitude towards the MenACWY vaccination, were satisfied with the decision to accept the vaccination, believed that vaccination is important to protect others, and reported positive beliefs about vaccination in general.
Perceived control, risk perception and outcome expectations
We established small differences on perceived control, risk perception and outcome expectations. Both parents and adolescents reported a positive score towards perceived control: both feel that it is their own choice to accept a vaccine (autonomy) and that they had enough information to make an appropriate decision (capacity). Adolescents scored slightly lower on both items: perceived capacity [t (975.676) = − 5.09; ρ < 0.001] and perceived autonomy [t (1036.151) = − 7.22; ρ < 0.001].
Even though adolescents scored somewhat higher on outcome expectations [t (1074.312) = 6.13; ρ < 0.001], both parents and adolescents believe that vaccination will lead to a positive outcome. In general, participants shared the belief that meningitis has high severity, but scored lower on susceptibility to the disease and side effects, meaning that they worry less about the spread of the disease and the side effects of the vaccine. Adolescents scored slightly lower on all three factors of risk perception (severity, susceptibility and side effects).
Anticipated negative affect, moral norm and knowledge
We assessed moderate differences on anticipated negative affect, moral norm for oneself, and knowledge. Adolescents scored lower on anticipated negative affect [t (985.688) = − 9.32; ρ < 0.001] and moral norm [t (942.079) = − 10.38; ρ < 0.001]. However, high scores in both groups indicated that they expected to experience regret if not accepting the meningococcal vaccination, and that they feel they are expected to accept a vaccination for themselves (adolescents) or for their child (parents).
Social norm
Both groups reported that the social norm towards accepting the MenACWY vaccination was positive. Adolescents reported more positive social norms compared to parents [t (1122.846) = 23.10; ρ < 0.001], which indicates that the feeling that important others, such as friends, are positive about vaccination in general is greater among adolescents.
Organizational factors
Both adolescents and parents were highly positive about the accessibility, time, and provider of the vaccination (Table 4). Of the adolescents, 12.3% thought the location was not easily accessible, 10.3% were not satisfied with the time of the appointment and 2.0% think that vaccinations should not be provided by the public health services.
Of the parents, 10.2% thought the location was not easily accessible, 12.9% were not satisfied with the time of the appointment and 2.9% think that vaccinations should not be provided by the public health services. The results also show differences between adolescents and parents regarding individual vaccination appointments (Table 5). Efficiency, choice of time, seeing friends, benefits for the provider and receiving emotional support were mentioned as advantages of mass vaccination; however, advantages of mass vaccination were mentioned less often.
Discussion
This study provides insights into differences in factors related to MenACWY vaccine acceptance between adolescents and their parents. Future interventions should target both adolescents and parents, in order to improve knowledge and risk perception. Moreover, this study suggested little influence of the Covid-19 pandemic on decision-making among MenACWY vaccine acceptors and their parents. Additionally, the results indicated that adolescents and parents prefer individual vaccination appointments over mass vaccinations because of less waiting time, less tension, and more personal attention.
Psychosocial factors
In this study, we reported multiple differences between adolescents and parents on psychosocial and attitudinal factors. Adolescents scored lower on risk perception and perceived control, and higher on outcome expectations. However, effect sizes indicated that these differences were small. Such small, and often meaningless, differences might occur due to the large sample size [19]. However, adolescents scored lower on moral norm for oneself and anticipated negative affect. This might be explained by the fact that adolescents worry less about the spread of meningococcal disease [11,16]: if someone does not feel susceptible to the disease that the vaccination prevents, refusing a vaccination might lead to less anticipated negative outcomes.
Because of the success of vaccination, adolescents have little experience with vaccine-preventable diseases, which influences their perceived risk [16]. Notwithstanding the differences between adolescents and parents on risk perception, the perceived susceptibility of meningococcal disease was low in both groups. In 2020, the incidence of meningococcal W disease in the Netherlands decreased significantly [20,21]. This decrease is thought to have been partly caused by the measures taken to control the spread of Covid-19, which also limited the spread of other infectious diseases. The declining incidence may lead to a decreased perceived risk of meningococcal disease and a decreased perceived necessity of vaccination [22]. It therefore remains important to communicate about and address the risks of meningococcal disease and the necessity of the MenACWY vaccination. We also assessed lower scores on knowledge about the MenACWY vaccination and meningococcal disease among adolescents: around half of the adolescents (54.1%) had adequate knowledge, compared with 79.0% of the parents. This is in line with other studies that have identified limited knowledge among adolescents and teenagers [9,16]. High levels of knowledge among parents are associated with socio-economic status, household income and high educational level [9]. In our study, almost half of the mothers (48.1%) and fathers (47.0%) were highly educated, and in most cases both parents were employed (97.6%), which might explain the number of parents having adequate knowledge. Most studies have reported parents as the decision-makers regarding vaccination [23,24]. Nevertheless, 40.1% of the parents in our study indicated that they made the decision together with their child, and 68% of adolescents indicated that they ask their parents for information about vaccinations.
Parent-child communication about vaccination might increase knowledge about the vaccine and the disease among adolescents [11,15], and make adolescents feel empowered about future health-related decisions [23]. Future interventions might focus on parent-child discussions and on increasing parents' confidence in discussing vaccinations with their child. Adolescents scored significantly higher on social norm. Previous studies have reported that adolescents are more sensitive to peer influence than adults when it comes to food intake [25]; it is not clear whether this also applies to vaccination acceptance. However, positive social norms toward MenACWY vaccination uptake were assessed in both groups. This is in line with a previous study indicating more positive social norms among acceptors compared to refusers or partial acceptors [26].
Covid-19 related factors
We reported almost no influence of Covid-19 on the decision to vaccinate against meningitis. Only 6% of the participants indicated that the pandemic influenced their decision. One explanation for these results might be the fact that the study was only able to include vaccine acceptors. Most of the participants indicated that they would also accept other vaccinations during the pandemic. Another explanation might be the effective communication about the importance of the MenACWY vaccination during the pandemic: the invitation contained additional information explaining the importance of the vaccination and the measures that were taken related to Covid-19. Moreover, attention was paid to the MenACWY vaccination campaign in the regional press [27]. The difference in vaccination rates between 2019 and 2020 in South Limburg gives some indication of the influence of Covid-19: in 2020, 79% of the 14-year-olds were vaccinated against meningitis, compared with 87% in the same age group in 2019.
Part of this difference of 8% may be caused by the influence of Covid-19. However, in 2019 the MenACWY vaccination was offered at several times during the year, so the adolescents in 2019 received more invitations or reminders for the vaccination, which probably explains part of the difference.
Organizational factors
The results of this study indicated positive scores on the organisational factors time, location and provider. In general, participants did not report any perceived organisational barriers. This can be explained by the fact that the study only included vaccine acceptors: vaccine acceptors generally perceive fewer practical barriers than vaccine refusers, and lower perceived organisational barriers are associated with higher vaccine acceptance [26,28]. In addition, because of the Covid-19 measures, the MenACWY vaccination was organized at Youth Health Care locations instead of public venues, and adolescents received an invitation for a specific time. Most participants were very positive about their appointment. Most preferred the individual appointment and want future vaccinations to be offered individually rather than in mass vaccination sessions. Adolescents and parents experienced less stress and tension during the individual appointment; in addition, less waiting time and more personal attention were mentioned as advantages. A report from the WHO indicated that mass vaccination campaigns may contribute to immunization stress-related responses (ISRR) [29]: a crowded waiting area, lack of privacy and negative communication might be environmental causes of ISRR during mass vaccination campaigns [29]. Since 2018, the Public Health Service of Groningen has implemented more individual consultations [30]. The Public Health Service South Limburg might consider individualizing the vaccination appointments for children aged 9 years and older. This means that the costs of individual appointments also need to be considered.
A cost-benefit analysis by the Public Health Service of Amsterdam showed that individual appointments, including calling refusers and organizing home visits, were 40% more expensive than mass vaccinations [personal communication by Public Health Service Amsterdam].
Limitations
Some limitations of this study need to be addressed. First, despite efforts to reach vaccine refusers, only vaccine acceptors were included; therefore, it was not possible to study differences between vaccine acceptors and refusers. Second, selection bias cannot be ruled out, as the response was lower in adolescents (20%) than in parents (38%). This might be due to the long questionnaire (20 min) [31]. Response rates are comparable to other studies, but employed parents (98%), higher-educated parents (48%) and mothers (85%) seem to have participated more. This might have led to an overestimation of knowledge scores, and the results need to be interpreted with caution for unemployed and low-educated parents and for fathers.
Conclusions
While increasing vaccination coverage remains challenging, this study provides insights into which psychosocial, attitudinal and organisational factors should be addressed in future MenACWY vaccination campaigns. The results indicate that both adolescents and parents should be targeted to improve knowledge and risk perception. Individual vaccination appointments for adolescents should be considered, taking the costs and logistical barriers into account. To further assess the psychosocial, attitudinal and organisational factors related to MenACWY vaccination decision-making, a study among vaccine refusers should be conducted.
Successful treatment with alectinib after crizotinib-induced hepatitis in ALK-rearranged advanced lung cancer patient: a case report Background Besides the clinical benefit of crizotinib in ALK-rearranged metastatic non-small cell lung cancer (NSCLC), concerns about its hepatotoxicity have arisen. It is not clear whether this is a drug-class side effect or whether the use of other selective ALK inhibitors is safe after this serious adverse event. While evidence from clinical trials is scarce, reports of treatment after crizotinib-induced hepatitis may add to clinical decision-making. Case presentation Herein, we report a case of acute hepatitis induced by crizotinib in a 32-year-old female diagnosed with metastatic NSCLC harboring an ALK rearrangement. After 60 days of crizotinib therapy, the patient presented with acute hepatitis, diagnosed after investigation of non-specific symptoms such as nausea and fatigue. Serum aspartate aminotransferase and alanine aminotransferase levels had increased from baseline to 3010 IU/L and 9145 IU/L, respectively. Total bilirubin increased up to 7.91 mg/dL, but she did not develop liver failure. After crizotinib discontinuation, hepatic function gradually recovered. Unfortunately, during the period without specific oncologic treatment, her disease showed unequivocal progression. She was therefore started on alectinib, with a great response, and no liver function alteration recurred. Conclusions This case suggests that alectinib, even though it belongs to the same drug class, could be used as an alternative agent when crizotinib is the etiology of liver damage, although more robust evidence is still awaited. Background Non-small-cell lung cancer (NSCLC) accounts for 80% of lung malignancies, the leading cause of cancer deaths worldwide. Unfortunately, the majority of cases are already unresectable or metastatic at diagnosis [1] and will require systemic therapy.
Adenocarcinoma is the most common NSCLC histologic subtype, and nowadays its treatment relies on the molecular signature, tailored by specific driver mutations [2]. Besides its clinical benefit, concerns about crizotinib hepatotoxicity have arisen. In the phase 3 trial PROFILE 1014, which supported the drug's approval in Brazil and many other countries for ALK-positive NSCLC patients, 14% of patients in the crizotinib arm developed grade 3 transaminase elevation [5]. Since then, case reports have been published describing crizotinib's potential for liver injury and its management. However, it is not clear whether this is a drug-class side effect or whether the use of other selective ALK inhibitors is safe after this severe toxicity. Herein, we report a real-world case of acute hepatitis induced by crizotinib in an ALK-rearranged NSCLC patient, in whom treatment was shifted to a second-generation ALK inhibitor after recovery from the event. A complete metabolic response was achieved, and no serious adverse events occurred. Case presentation A 32-year-old female non-smoker presented with a 3-month history of dyspnea and cough. She had no comorbidities and used no medications. Her computed tomography (CT) scans showed multiple bilateral nodules associated with signs of lymphangitis, enlarged bilateral mediastinal lymph nodes, and a thoracic vertebral bone lesion. She underwent a right inferior lobe segmentectomy, whose pathology report showed a lung carcinoma. No brain metastasis was identified by MRI, and PET-CT was not performed at initial staging. As the symptoms were worsening, treatment with carboplatin plus paclitaxel was initiated while complementary histopathologic and molecular investigations were performed. After two cycles, immunohistochemistry confirmed a pulmonary adenocarcinoma, and the ALK rearrangement (2p23q in more than 15% of the specimen) was detected by fluorescence in situ hybridization.
Moreover, PD-L1 expression was 40%, and EGFR, ROS-1, MET, RET, ERBB2, and BRAF were negative. Upon the diagnosis of ALK-positive stage IV pulmonary adenocarcinoma, treatment was promptly adjusted to crizotinib (the only TKI approved in Brazil for this scenario at that moment), 250 mg twice daily, on November 11th, 2018. After almost 60 days of therapy, despite improvement of her respiratory symptoms, the patient presented some non-specific complaints such as nausea and fatigue. Physical examination revealed no relevant findings, including no jaundice. By December 21st, her laboratory review showed severe liver dysfunction, shown in graph 1. Due to acute hepatitis, crizotinib therapy was halted. Viral involvement and other etiologies were investigated and excluded. Thereafter, liver function gradually recovered. On February 18th, 2019, blood tests did not show any significant alteration. Meanwhile, during the period without specific oncologic treatment, dyspnea and cough recurred, and the patient developed new-onset headache. Progression in the central nervous system (CNS), with multiple small lesions throughout the brain parenchyma, and in the lungs was detected on February 4th. She then started alectinib 600 mg orally twice daily. A few weeks later, she had completely recovered from her respiratory symptoms, and no liver function alteration recurred (see Fig. 1). The patient is still under full-dose alectinib therapy with excellent tolerability and no adverse effects. A PET-CT performed on May 8th did not show any metabolic activity, and brain MRI did not show any evidence of CNS involvement. Discussion and conclusions This report describes a successful case of treatment with alectinib after crizotinib-induced hepatitis. This serious adverse event may be ascribed to crizotinib due to the temporal relationship between the start of the drug and the transaminase elevation, and due to its resolution after the medication was interrupted.
So far, the mechanism of crizotinib liver toxicity is not clear, and specific risk factors or clinicopathologic predictors of crizotinib-induced liver injury have not yet been identified. Reported general risk factors for drug-induced hepatotoxicity include older age, female gender, HIV infection, HBV or HCV infection, pregnancy, excessive alcohol intake, smoking, and genetic variability [7,8]. According to its prescribing information, the hepatotoxicity generally occurs within the first 2 months of treatment, which is compatible with the case reported. Considerations about pharmacodynamic properties are important regarding drug side effects. Crizotinib is metabolized by the liver, with CYP3A playing a major role. Therefore, concomitant use of CYP3A inducers and inhibitors, which may alter crizotinib plasma concentrations, should be avoided [9]. However, our patient used no concomitant drugs. Alectinib, a second-generation TKI targeting ALK, is also associated with elevations of AST and ALT, as shown in clinical trials. Among the 405 patients enrolled in the intervention arms of the NP28761, NP28673, and ALEX studies, AST and ALT elevations greater than five times the upper limit of normal (ULN) occurred in 4.6% and 5.3%, respectively, and bilirubin levels more than three times the ULN occurred in 3.7%. In the majority of patients, these events occurred in the first 3 months of treatment. Ten patients discontinued alectinib due to grade 3-4 AST/ALT (n = 6) and bilirubin (n = 4) elevations. Thus, monitoring liver function tests, including ALT, AST, and total bilirubin, every two weeks during the first 3 months of treatment, then once a month or whenever clinically indicated, is strongly advisable [10]. This case report has some limitations. We did not perform a liver biopsy, and the association between crizotinib and the liver damage was established only on clinical and temporal criteria, which reflects real-world practice.
Moreover, there is no specific recommendation regarding the use of alectinib after recovery from crizotinib-induced hepatitis. However, the patient had disease progression after crizotinib interruption, and alectinib was the best option in the second-line setting at that time, based on phase II studies; no other TKI targeting ALK rearrangement was available in Brazil [11]. It is important to highlight the contribution of the present report: although liver toxicity has been described with both TKIs, hepatitis induced by one drug does not exclude the possibility of treatment with another specific ALK-TKI. While evidence from clinical trials is scarce, experiences like this one, in a real-world scenario, may add to clinical decision-making. Because this class of drugs has changed the natural history of the disease, its definitive discontinuation could impact the patient's overall survival. In conclusion, this case suggests that alectinib could be an alternative agent when crizotinib is the etiology of hepatitis. Therefore, patients might still derive benefit from targeted therapy.
Experimental Validation of Model-Based Prognostics for Pneumatic Valves Because valves control many critical operations, they are prime candidates for the deployment of prognostic algorithms. But, as with most other components, examples of failures experienced in the field are hard to come by. This lack of data impacts the ability to test and validate prognostic algorithms. A solution sometimes employed to overcome this shortcoming is to perform run-to-failure experiments in a lab. However, the mean time to failure of valves is typically very high (possibly lasting decades), preventing evaluation within a reasonable time frame. Therefore, a mechanism to observe the development of fault signatures considerably faster is sought. Described here is a testbed that addresses these issues by allowing the physical injection of leakage faults (the most common fault mode) into pneumatic valves. What makes this testbed stand out is the ability to modulate the magnitude of the fault almost arbitrarily fast. With that, the performance of end-of-life estimation algorithms can be tested. Further, the testbed is mobile and can be connected to valves in the field. This mobility helps to bring the overall process of prognostic algorithm development for this valve a step closer to validation. The paper illustrates the development of a model-based prognostic approach that uses data from the testbed for partial validation. Introduction Valves, and pneumatically-actuated valves in particular, play a critical role in many systems: in cryogenic propellant loading systems for controlling the flow of propellant (Daigle & Goebel, 2011), in aircraft carrier steam catapults (Shevach et al., 2014), in the residual heat removal system of a nuclear power plant (Lin, Li, & Zio, 2014), and in air bleed systems in aircraft (Lorton, Fouladirad, & Grall, 2013). In these kinds of systems, valve failures can have an adverse impact on system safety and availability.
Hence, there is a critical need for valve health monitoring and failure prediction, and for developing prognostic methods that compute end of life (EOL) and remaining useful life (RUL). The contributions of this work are twofold. In the first phase, a hardware-in-the-loop testbed was developed for pneumatic valves used in cryogenic propellant loading operations. The testbed allows four different kinds of leakage faults to be injected, with their magnitudes controlled according to any desired fault progression function. The setup is similar to that of actual propellant loading systems in the field. In the second phase, a model-based prognosis framework is implemented for two types of pneumatic valves; the approach is extended to enable prognostics in real time and demonstrated using real data from the pneumatic valve testbed. Unlike earlier work based on particle filters (Daigle & Goebel, 2011), this paper discusses a new model-based method using measurements of valve open and close times, recently developed in (Daigle, Kulkarni, & Gorospe, 2014). In real valve operations, typically only valve position is measured, from which the only meaningful information for prognostics is the valve open and close times. Open and close times are computed against fully-open and fully-closed position thresholds defined for the respective valve operation: since the valves are operated discretely and position is measured, the open or close time is simply the time difference between when the valve is commanded to move and when the corresponding threshold position is reached. No air-leakage sensor is available in the field, so leakage must be inferred in order to perform prognostics. While this information is sparse compared to an environment with rich sensor information, the overall model is simpler and requires significantly less computation to isolate and identify faults and to predict EOL and RUL.
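The open/close time computation described above can be sketched as follows; the function name, the threshold tolerance, and the illustrative position trace are assumptions for demonstration, not details from the paper:

```python
def transition_time(times, positions, t_cmd, target_frac, stroke, tol=0.015):
    """Return seconds from the command instant t_cmd until the valve position
    first settles within tol*stroke of the target position, or None if it
    never does (e.g., the valve fails to fully open or close)."""
    target = target_frac * stroke
    for t, x in zip(times, positions):
        if t >= t_cmd and abs(x - target) <= tol * stroke:
            return t - t_cmd
    return None

# Illustrative trace: valve commanded open at t=0, 20 mm stroke reached at t=3.5 s
ts = [i * 0.5 for i in range(20)]
xs = [min(0.020, 0.020 * t / 3.5) for t in ts]
open_time = transition_time(ts, xs, t_cmd=0.0, target_frac=1.0, stroke=0.020)
```

The same function computes close times by using `target_frac=0.0` with the instant of the close command.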
However, tradeoffs are made with regard to the prognostic horizon. The approach follows the general estimation-prediction framework for model-based prognostics (Orchard & Vachtsevanos, 2009). The paper is organized as follows. Section 2 describes the valve prognostics testbed. Section 3 explains the valve models. Section 4 provides the valve prognosis framework, and Section 5 presents prognosis results using testbed data. Section 6 discusses related work. Section 7 concludes the paper. The testbed is powered through a power supply that has a fail-safe mode, which in turn isolates the valve prognostics testbed from the field cryogenic loading system interface in case of an emergency. The testbed contains a discretely-controlled valve (DV), a solenoid valve (SV), a continuously-controlled valve (CV), a current-pressure transducer (IPT), and a number of proportional valves for injecting leakage faults. The components are described in the following subsections. The fault injection testbed is portable: it can be moved from the lab environment and connected to the actual propellant loading system in the field. This gives the testbed the unique ability to test faults on any of the discretely- and continuously-controlled valves, not only during development in the lab but also for validation in the production environment. DV Operation The discretely-controlled valve (DV), illustrated in Fig. 3, is a normally-open valve with a linear cylinder actuator. The valve is closed by filling the chamber above the piston with pressurized air up to the supply pressure, and opened by evacuating the chamber to atmosphere. The spring returns the valve to its default position. A three-way, two-position solenoid valve (SV), illustrated in Fig. 4, is used for controlling the operation of the DV. The cylinder port connects to the DV, the normally closed (NC) port connects to the supply pressure, and the normally open (NO) port is left unconnected, allowing venting to atmosphere.
When the solenoid is energized, the path from the NC port to the cylinder port is open, allowing pressurized air to pass from the supply to the valve, thus actuating the valve. When de-energized, the supply pressure is closed off and the path from the cylinder port to the NO port is opened, thus venting the actuation pressure in the DV and allowing the valve to open due to the return spring. The solenoid is powered by 24 V DC, either through the power supply or by a backup battery. DV Fault Injection Pneumatic valves can suffer from leaks, an increase in friction due to wear, and spring degradation (Daigle & Goebel, 2011). Because friction and spring faults cannot easily be injected, nor their rate of progression controlled, only leak faults are discussed in this work. Leaks are, however, the most common faults found in pneumatic valves. For the DV, two different leak faults may be considered: (i) a leak to atmosphere, and (ii) a leak from the supply. The former can be manifested as a leak across the NO seat of the solenoid valve, or a leak in the pressure line going to the pneumatic valve. The latter can be manifested as a leak across the NC seat of the solenoid valve. To emulate these faults, two remotely-operated proportional valves, V1 and V2, were installed as shown in Fig. 1. One valve, V1, leaks to atmosphere (henceforth called the vent valve), while the other, V2, is installed on a bypass line around the solenoid valve (henceforth called the bypass valve). The positions of the vent and bypass valves can be controlled through a current signal, continuously between 0 and 100% open. In this way, one can control the fault progression (growth of leak size) according to various progression profiles. Fig. 5 illustrates a leak to atmosphere using the vent valve (V1). The leak through V1 emulates a leak at the cylinder port or across the NO seat. Similarly, Fig. 6 illustrates a leak from the supply using the bypass valve (V2).
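Since the proportional valves accept a continuous 0-100% command, a fault progression profile reduces to a per-cycle command schedule for the vent or bypass valve. A minimal sketch (the function name and the available profile shapes are illustrative assumptions):

```python
def leak_profile(n_cycles, shape="linear", final_pct=100.0):
    """Percent-open command for the fault-injection valve at each operating cycle,
    growing from fully closed to final_pct over n_cycles."""
    cmds = []
    for k in range(n_cycles):
        frac = k / (n_cycles - 1) if n_cycles > 1 else 1.0
        if shape == "linear":
            pct = final_pct * frac
        elif shape == "quadratic":
            pct = final_pct * frac ** 2
        else:
            raise ValueError(shape)
        cmds.append(min(max(pct, 0.0), 100.0))  # clamp to the valve's 0-100% range
    return cmds

cmds = leak_profile(5)  # linear growth: [0.0, 25.0, 50.0, 75.0, 100.0]
```

Because the testbed can modulate the fault magnitude arbitrarily fast, one command per operating cycle suffices to emulate a degradation that would otherwise take years.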
The leak through V2 emulates a leak across the NC seat. CV Operation The CV, illustrated in Fig. 7, is a normally-closed valve with a linear cylinder actuator with dual pressure chambers. The valve is positioned by a pressure difference between the primary pressure chamber, which is at standard operational pressure, and the secondary chamber, whose pressure can vary as controlled through the IPT. The IPT output pressure is regulated down from the input pressure and is directly proportional to the control current supplied to the transducer. Thus, a low current will create a lower output pressure and a higher current will increase the output pressure. CV Fault Injection As shown in Fig. 8, two different leak faults for the CV are considered: (i) a leak to atmosphere from the signal line, through valve V3; and (ii) a leak to atmosphere from the supply line, through valve V4. Like V1 and V2, V3 and V4 are proportional valves that can be controlled from 0 to 100% to implement any desired fault progression profile. Valve Modeling In this work, a model-based approach to valve prognostics (Daigle & Goebel, 2011) is developed and implemented, which requires dynamic models of the components that describe both nominal and degraded operation. A physics-based approach is adopted, where the model is described using ordinary differential equations. For implementation purposes, the models are converted to discrete time using a sample time of 1 × 10−4 s. Models for both discretely- and continuously-opening pneumatically actuated valves are developed; these were originally presented in (Kulkarni, Daigle, Gorospe, & Goebel, 2015) and are summarized here for completeness. Along with providing the dynamics of the respective components, this section presents how EOL is defined for these valves. In (Daigle & Goebel, 2011) the authors concluded that corrosion-based leaks are not a function of usage, i.e., cycling, but are correlated to environmental conditions only.
Usage would have an effect on other fault modes, which are out of scope for this study. The work discussed herein focuses on faults that can be controlled in experiments. Discrete Valve Modeling A normally-open, discretely-opening valve (as seen in Fig. 3) is considered in this work. Normally, the chamber above the piston is open to atmosphere, and so the piston is forced up by the return spring. The valve is closed by filling the chamber up to the supply pressure. The pressure force overcomes the spring force, moving the piston downward and closing the valve. The valve is opened by evacuating the gas in the chamber to atmosphere. The valve model is based on mass and energy balances. The system state includes the position of the valve, x(t), the velocity of the valve, v(t), the mass of gas in the volume above the piston, m_t(t), and the mass of gas in the pipe connecting the solenoid valve to the pneumatic valve port, m_p(t): x(t) = [x(t), v(t), m_t(t), m_p(t)]^T. (1) The position is defined as x = 0 when the valve is fully closed, and x = L_s when fully open, where L_s is the stroke length of the valve. The derivatives of the states are ẋ(t) = v(t), v̇(t) = a(t), ṁ_t(t) = f_t(t), and ṁ_p(t) = f_p(t), where a(t) is the valve acceleration, f_t(t) is the mass flow going into the pneumatic port from the pipe, and f_p(t) is the total mass flow into the pipe. The single input is considered to be u(t) = [u_t(t)], where u_t(t) is the input pressure to the pneumatic port, which alternates between the supply pressure and atmospheric pressure depending on the commanded valve position.
The acceleration is defined by the combined mass of the piston and plug, m, and the sum of forces acting on the valve, which includes the force from the pneumatic gas, F_p = (p_t(t) − p_atm)A_p, where p_t(t) is the gas pressure on the top of the piston and A_p is the surface area of the piston; the weight of the moving parts of the valve, F_w = −mg, where g is the acceleration due to gravity; the spring force, F_s = k(x(t) + x_o), where k is the spring constant and x_o is the amount of spring compression when the valve is open; friction, F_f = −rv(t), where r is the coefficient of kinetic friction; and the contact forces F_c(t) at the boundaries of the valve motion, where k_c is the (large) spring constant associated with the flexible seals. Overall, the acceleration is given by the sum of these forces divided by the mass m. The pressure p_t(t) and the pipe pressure, p_p(t), are calculated as p_t(t) = m_t(t)R_gT/(V_t0 + A_p(L_s − x(t))) and p_p(t) = m_p(t)R_gT/V_p, where an isothermal process is assumed in which the (ideal) gas temperature is constant at T, R_g is the gas constant for the pneumatic gas, V_t0 is the minimum gas volume for the gas chamber above the piston, and V_p is the pipe volume. The gas flows are given by f_t(t) = f_p,t(t) and f_p(t) = f_p,in(t) − f_p,leak(t) − f_p,t(t), where f_p,in is the flow into the pipe from the supply or atmosphere, f_p,leak is a leak term with p_leak being the pressure outside the leak, f_p,t is the flow from the pipe to the chamber above the piston, and f_g defines gas flow through an orifice for choked and non-choked flow conditions (Perry & Green, 2007). Non-choked flow for p_1 ≥ p_2 is given by f_g,nc(p_1, p_2) = C_s A_s p_1 √((2γ/(ZR_gT(γ − 1)))[(p_2/p_1)^(2/γ) − (p_2/p_1)^((γ+1)/γ)]), (13) where γ is the ratio of specific heats, Z is the gas compressibility factor, C_s is the flow coefficient, and A_s is the orifice area. Choked flow for p_1 ≥ p_2 is given by f_g,c(p_1, p_2) = C_s A_s p_1 √((γ/(ZR_gT))(2/(γ + 1))^((γ+1)/(γ−1))). (14) Choked flow occurs when the upstream-to-downstream pressure ratio exceeds ((γ + 1)/2)^(γ/(γ−1)). The overall gas flow equation is then given by f_g(p_1, p_2) = f_g,nc(p_1, p_2) below this critical pressure ratio and f_g,c(p_1, p_2) at or above it. (15) As shown by Eq. 13 and Eq.
14, the leak rate is determined by pressure differences, gas properties, and the valve parameters C_leak and A_leak. As the leak grows (by the corresponding leak valve opening), this is reflected as a change in A_leak. Based on the developed testbed's experimental data, it is observed that the leak area is proportional to the square of the leak valve position, i.e., A_leak = K_leak x_v^2 (16) for some proportionality constant K_leak, where x_v is the leak valve position. As the leak area increases, it directly affects the position travelled by the valve during operation. The relationship in Eq. 16 is specific to the valves under test; a generalized relationship is discussed in (Richer & Hurmuzlu, 1999). The only available measurement is the valve position, y(t) = x(t). Fig. 9 shows an example nominal valve cycle. The valve starts in its default open state. The valve is commanded to close at 0 s. Supply pressure (75 psig) is delivered to the pipe and to the valve, causing the piston to lower, closing the valve just after 1 s. At 4 s, the valve is commanded to open, and the pipe is opened to atmosphere. The pipe pressure and valve pressure drop, and once the pressure drops low enough, the spring overcomes the pressure force and the piston moves upwards. The valve completes opening just after 6 s. The valve parameters were identified from known valve specifications, and the unknown parameters were estimated to match the nominal opening and closing times, which, for the actual valve, are both around 3.5 s. As discussed in Section 2, two different leak faults are considered: one in which there is a leak from the supply pressure input to the valve (p_leak is the supply pressure), emulated using the bypass valve, and one in which there is a leak out to atmosphere (p_leak is atmospheric pressure), emulated using the vent valve. In the former case, the valve will close more slowly and open faster; in the latter, the valve will open more slowly and close faster. With a large enough leak, the valve may fail to open or close completely. Fig.
10 shows the changes in valve timing with the leak from the supply, and Fig. 11 shows the changes in valve timing with the leak to atmosphere. In this work a damage progression model is considered in which the leak hole area increases linearly with time (Fontana, 1986; Ahammed, 1998). The growth curve used in this work is based on assumed operating corrosion conditions, such as humidity, airborne salt, and temperature, which stay more or less constant over the experiment cycles. Fluctuations such as seasonal effects are averaged out because the degradation phenomenon progresses at a very slow rate and does not change with each operating cycle. This growth curve can be controlled systematically through the developed testbed by injecting a specific damage progression profile. With additional knowledge of the corrosion type, a corresponding damage progression profile can be programmed into the system. End of life (EOL) is defined through open/close time limits on the valves, as in real valve operations (Daigle & Goebel, 2011). The valve in the testbed is required to open within 8.5 s and close within 6 s. Continuous Valve Modeling The actuator has two pressure ports, one for the supply pressure and one for the signal pressure, as seen in Fig. 7 for a normally-closed continuously-controlled valve. External to the valve, the signal pressure is controlled between 3 and 15 psig in order to move the valve between fully closed and fully open. A pressure regulator maintains a loading pressure on top of the valve piston, and the piston moves by modulating the actuating pressure via the pilot valve. The pilot valve, balanced by the spring and the diaphragm assembly, moves up or down according to the signal pressure. When the pilot moves up, the volume below the piston is opened up to the atmosphere, and when the pilot moves down, the volume below the piston is opened up to the supply pressure. Similar to the DV, the CV model is based on mass and energy balances.
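Both valve models rely on the orifice gas flow function f_g with choked and non-choked regimes (Eqs. 13-15). A sketch using the standard compressible orifice flow relations; the signature and the default gas constants are assumptions (for air, γ ≈ 1.4 and R_g ≈ 287 J/(kg·K)):

```python
import math

def orifice_flow(p1, p2, Cs, As, T, gamma=1.4, Rg=287.0, Z=1.0):
    """Mass flow (kg/s) through an orifice, choked or non-choked.
    Positive from the higher- to the lower-pressure side."""
    if p1 < p2:                  # flow reverses if downstream pressure is higher
        return -orifice_flow(p2, p1, Cs, As, T, gamma, Rg, Z)
    crit = ((gamma + 1.0) / 2.0) ** (gamma / (gamma - 1.0))
    if p1 / p2 >= crit:          # choked: flow no longer depends on p2
        return Cs * As * p1 * math.sqrt(
            (gamma / (Z * Rg * T))
            * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (gamma - 1.0)))
    r = p2 / p1                  # non-choked regime
    return Cs * As * p1 * math.sqrt(
        (2.0 * gamma / (Z * Rg * T * (gamma - 1.0)))
        * (r ** (2.0 / gamma) - r ** ((gamma + 1.0) / gamma)))
```

For air, the critical pressure ratio evaluates to about 1.89, so a 75 psig supply venting to atmosphere is well into the choked regime.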
The system state includes the position of the piston, x_p(t), the velocity of the piston, v_p(t), the position of the pilot/spring assembly, x_s(t), the velocity of the pilot/spring assembly, v_s(t), the mass of gas in the volume below the piston, m_b(t), the mass of gas in the pipe connecting to the supply input, m_sp(t), and the mass of gas in the pipe connecting to the signal input, m_sg(t): x(t) = [x_p(t), v_p(t), x_s(t), v_s(t), m_b(t), m_sp(t), m_sg(t)]^T. The piston position is defined as x_p = 0 when the valve is fully closed, and x_p = L_s when fully open, where L_s is the stroke length of the valve (about 20 mm). When fully closed, the pilot/spring assembly position is also defined as x_s = 0. The derivatives of the states are ẋ_p(t) = v_p(t), v̇_p(t) = a_p(t), ẋ_s(t) = v_s(t), v̇_s(t) = a_s(t), ṁ_b(t) = f_b(t), ṁ_sp(t) = f_sp(t), and ṁ_sg(t) = f_sg(t), where a denotes acceleration and f denotes mass flow. The two inputs are u(t) = [u_sp(t), u_sg(t)], where u_sp(t) is the input pressure to the supply port, which is nominally 75 psig, and u_sg(t) is the input pressure to the signal port, which varies between 3 and 15 psig depending on the commanded valve position. The acceleration of the piston is defined by the combined mass of the piston and plug, m_p, and the sum of forces acting on the piston, which includes the force from the actuating pressure, F_a = p_b A_p, where A_p is the area of the piston in contact with the actuating pressure; the force from the loading pressure, F_l = A_l p_l, where A_l is the area of the piston in contact with the loading pressure; friction, F_f = −r_p v_p(t), where r_p is the coefficient of kinetic friction; the spring force, F_s = k(x_p + x_o − x_s), where x_o is the spring compression at the closed position; the weight, F_w = −m_p g; and the contact forces, F_c(t), at the boundaries of the valve/piston motion, where k_c is the (large) spring constant associated with the flexible seals.
Overall, the acceleration of the piston is given by the sum of these forces divided by the mass m_p. The loading pressure p_l is assumed to be constant and known, and the pressure p_b is computed as p_b(t) = m_b(t)R_gT/(V_t0 + A_p x_p(t)), where an isothermal process is assumed in which the (ideal) gas temperature is constant at T, R_g is the gas constant for the pneumatic gas, and V_t0 is the minimum gas volume for the gas chamber below the piston. The acceleration of the pilot/spring assembly is defined by their combined mass, m_s, and the sum of forces acting on the assembly, which includes the force from the spring, F_s (as defined above); the force from the signal pressure, F_sg = (p_sg − p_atm)A_d, where A_d is the area of the diaphragm in contact with the signal pressure and p_atm is atmospheric pressure; friction, F_fs = −r_s v_s(t), where r_s is the coefficient of kinetic friction; the force from the supply pressure, F_sp = (p_sp − p_atm)A_sp, where A_sp is the area of the pilot in contact with the supply pressure; the weight, F_ws = −m_s g; and the contact forces, F_cs (defined as above but with L_ss, the stroke length of the pilot/spring assembly). The pressures p_sg and p_sp are computed as p_sg(t) = m_sg(t)R_gT/V_sg and p_sp(t) = m_sp(t)R_gT/V_sp, where V_sg is the volume of the pipe containing the signal pressure and V_sp is the volume of the pipe containing the supply pressure. The mass flows are f_sp(t) = f_g(u_sp(t), p_sp(t)) − f_sp,leak(t) − [x_s < 0]·f_g(p_sp(t), p_b(t)) and f_sg(t) = f_g(u_sg(t), p_sg(t)) − f_sg,leak(t), where f_sp,leak and f_sg,leak are leak terms (both leaks to atmosphere) and [·] equals 1 when its condition holds and 0 otherwise. The flow f_b(t) into the volume below the piston likewise depends on the position of the pilot/spring assembly: gas flows in from the supply line when the pilot opens the supply path, and out to atmosphere when the pilot opens the vent path. Here, f_g defines gas flow through an orifice for choked and non-choked flow conditions (Eq. 15). The only available measurement is the valve position, y(t) = x_p(t). Fig. 12 shows an example nominal valve cycle. The valve starts in its default closed state. The valve is commanded to 50% open using a signal pressure of 9 psig.
The pilot valve moves, allowing gas from the supply line to enter below the piston, increasing the mass of gas below the piston and increasing the pressure. When there is enough pressure, the piston begins to move up, and when the valve reaches 50% open, the forces balance and the pilot valve closes. Due to small fluctuations in pressure, the pilot intermittently moves up and down to keep the pressures balanced, causing slight disturbances in position. Leak faults affect the behavior of the valve. With a leak from the supply line, the trends observed are shown in Figs. 13 and 14. Due to the decrease in effective supply pressure, it takes longer to close the valve, and the steady-state position decreases because the valve is set up based on a nominal supply pressure. With a leak from the signal line, the effect on valve timing is not very significant, but since the signal pressure will be lower due to the leak, the steady-state position will decrease. End of life (EOL) is defined through timing limits on the valves, as is done in real valve operations (Daigle & Goebel, 2011), and through the error in steady-state position. The valve in the testbed is required to open within 7.5 s, close within 5 s, and, when commanded to open to 100%, open up to at least 98.5%. Valve Prognosis In this section, the prognosis framework developed for the valves is presented, following the general estimation-prediction framework of model-based prognostics defined in the literature (Luo, Pattipati, Qiao, & Chigusa, 2008; Orchard & Vachtsevanos, 2009). However, since only valve timing values are used for prognosis, a simpler estimation approach, similar to that developed in (Teubert & Daigle, 2013), is implemented, as opposed to the more complex and computationally intensive filtering approaches used in previous works (Daigle, Saha, & Goebel, 2012; Orchard & Vachtsevanos, 2009).
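The CV EOL conditions just given (open within 7.5 s, close within 5 s, and at least 98.5% steady-state opening when commanded fully open) amount to a simple threshold function over the measured timing values. A minimal sketch (the function name is an assumption; the limits are from the testbed requirements stated above):

```python
def cv_eol_reached(open_time, close_time, steady_pct,
                   open_limit=7.5, close_limit=5.0, min_open_pct=98.5):
    """True once any CV performance requirement is violated,
    i.e., the event threshold evaluates to 1."""
    return (open_time > open_limit
            or close_time > close_limit
            or steady_pct < min_open_pct)
```

The analogous check for the DV uses its 8.5 s open and 6 s close limits, with no steady-state position term.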
Section 4.1 formulates the prognostics problem, followed by a description of the estimation approach and a description of the prediction approach. Problem Formulation The system model assumed may be generally defined as x(k+1) = f(k, x(k), θ(k), u(k), v(k)) and y(k) = h(k, x(k), θ(k), u(k), n(k)), where k is the discrete time variable, x(k) ∈ ℝ^(n_x) is the state vector, θ(k) ∈ ℝ^(n_θ) is the unknown parameter vector, u(k) ∈ ℝ^(n_u) is the input vector, v(k) ∈ ℝ^(n_v) is the process noise vector, f is the state equation, y(k) ∈ ℝ^(n_y) is the output vector, n(k) ∈ ℝ^(n_n) is the measurement noise vector, and h is the output equation. In prognostics, the key factor is predicting the occurrence of some event E that is defined with respect to the states, parameters, and inputs of the system. The event is defined as the earliest instant at which some event threshold T_E : ℝ^(n_x) × ℝ^(n_θ) × ℝ^(n_u) → B, where B ≜ {0, 1}, changes from the value 0 to 1 (Daigle & Sankararaman, 2013). That is, the time of the event k_E at some time of prediction k_P is defined as k_E(k_P) ≜ inf { k ∈ ℕ : k ≥ k_P ∧ T_E(x(k), θ(k), u(k)) = 1 }. The time remaining until that event, Δk_E, is defined as Δk_E(k_P) ≜ k_E(k_P) − k_P. In the context of systems health management, T_E is defined via a set of performance constraints that define what the acceptable states of the system are, based on x(k), θ(k), and u(k). In this context, k_E represents end of life (EOL), and Δk_E represents remaining useful life (RUL). As described in Section 3, for the valves, timing and steady-state position requirements define T_EOL. The prognostics problem is to compute estimates of EOL and/or RUL. This is done in two steps: an estimation step that computes estimates of x(k) and θ(k), followed by a prediction step that computes EOL/RUL using these values as initial states. For the case of the valve, the future inputs are known, i.e., the valve is simply cycled open and closed, so there is no uncertainty with respect to future inputs.
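The event-time definition above amounts to a first-hitting-time search over a predicted trajectory. A minimal sketch, with a toy scalar damage state standing in for (x, θ, u):

```python
def compute_eol(T_E, trajectory, k_P):
    """k_E(k_P): the first cycle k >= k_P at which the event
    threshold T_E(x, theta, u) evaluates to 1 (True)."""
    for k in range(k_P, len(trajectory)):
        x, theta, u = trajectory[k]
        if T_E(x, theta, u):
            return k
    return None  # event not reached within the simulated horizon

# Toy example: damage grows by one unit per cycle; event at damage >= 8.
traj = [(k, None, None) for k in range(12)]
k_E = compute_eol(lambda x, th, u: x >= 8, traj, k_P=3)
rul = k_E - 3  # Delta k_E(k_P) = k_E(k_P) - k_P
```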
Fault Detection Since valve position is measured, only valve timing values and steady-state position values are useful for prognostics. Timing information is obtained from the continuous position measurement data by extracting and computing the difference in time between when the valve is commanded to move and when it reaches its final position. As discussed in Section 3.1, open and close times are used for faults in the DV, and, as discussed in Section 3.2, close times and steady-state position are used for faults in the CV. To detect faults, predefined thresholds are set on the opening times, closing times, and steady-state position. If the mean value, averaged over the last 3 cycles, is over the threshold, then a fault is detected. Estimation Using the model, measurements of valve timing and steady-state position are mapped back to the fault size (i.e., equivalent leak area). In order to perform the estimation, an offline lookup table is constructed using the simulation models of the valves to compute, for different values of leak size in the expected ranges, the open and close times (for the DV) and the close times and steady-state position (for the CV) (Teubert & Daigle, 2013; Daigle et al., 2014). With fine enough granularity, a lookup table provides accurate estimates at a fraction of the computational cost of online estimation methods. The developed testbed allows for modular use of different corrosion propagation models. If an alternative corrosion growth model is deemed a more desirable choice, it can be swapped in easily through replacement of a function call in the governing program. The prognostics approach is similarly flexible, because the open/close times are mapped to leak sizes. While it is assumed here that the leak sizes grow linearly, different leakage behavior can be used without impacting the rest of the prognostics framework. The calculated equivalent leak area is mapped back to the position of the leak valve. According to Eq.
16, the leak area increases linearly with the square of the leak valve position; hence, the square root of the leak size is calculated, i.e., x_leak = sqrt(A_leak / K_leak). The leak valve position, x_leak, is assumed to be increasing linearly, so the linear coefficients are estimated (where the slope is lumped with K_leak). Given the estimated values of damage progression, a regression step is performed to find the line that fits this data, using the last N cycles. For the leak to atmosphere of the DV, only closing times can be used. This is because, in the presence of this leak, the valve may not get up to the full supply pressure when the valve closes in time for the next cycle; since the internal valve actuator pressure is not measured, a correct initial condition is not available for the simulation with which to estimate the leak parameter value for the following opening time. For the supply leak of the DV, an analogous situation arises, and only opening times can be used for leak parameter estimation. For the signal line leak fault of the CV, steady-state values are used. The signal pressure controls the open/close position of the valve, while the supply pressure is used for regulating the pressure inside the valve. When this fault is injected, there is no change in the supply pressure, but the signal pressure decreases, and so the valve is not able to reach its desired steady-state final value. For the supply line leak fault of the CV, open time values are used. When this leak is injected, there is a decrease in the supply pressure, which leads to an increase in the valve opening time (since the corresponding pressure forces take longer to develop). As the leak increases, the open time increases accordingly, while the steady-state values remain relatively constant. Fault Isolation Faults are isolated by inspecting open/close timing and steady-state position trends (see Fig. 11, Fig. 10, Fig. 13, and Fig. 14).
For the DV, since the two faults produce different qualitative changes in the valve timing, the observed trends indicate which fault is actually present. For the CV, both faults have the same qualitative effects; they produce an increase in valve opening time and a decrease in steady-state position. However, their quantitative effects are different: the signal pressure leak has a greater effect on steady-state position, and the supply pressure leak a greater effect on opening time. Therefore, faults can be isolated based on the more significant trend. For a signal leak, the deviation from nominal behavior will be observed first in the steady-state position, and for a supply leak, the deviation will be observed first in the opening time. Depending upon the fault isolated, the predictions for RUL are computed. Prediction Given the current estimated leak parameter value and the regression parameters, the leak parameter value at any future time can be calculated using the damage progression equation (i.e., a linearly progressing leak valve position). Using the lookup table, maximum valve open/close times and/or steady-state position values are mapped to maximum leak parameter values for the leak faults, and this defines the EOL thresholds in the leak parameter space. Using the relationship between leak size and leak valve position, we obtain the corresponding maximum values, and then solve for the time at which that threshold is crossed, given the fitted line, and thus compute EOL. Prediction is not performed until a fault is detected. The regression is performed only over the data obtained since fault detection, so that nominal valve behavior is not used to estimate the fault progression parameters. The use of a filter on the data for fault detection introduces a slight lag; however, in practice fault progression is very slow, so this lag is negligible relative to the true EOL.
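Putting the estimation and prediction steps together, the pipeline described above (lookup-table inversion, mapping leak area to leak-valve position, line fit, threshold crossing) can be sketched as follows; the quadratic table model, the K_leak value, the leak-growth rate, and the EOL threshold are all illustrative stand-ins for the valve simulation and testbed parameters:

```python
import bisect
import math

# --- Offline: lookup table from leak area to close time (stand-in model).
leak_areas = [i * 1e-6 for i in range(101)]           # 0 .. 1e-4 m^2
close_times = [2.0 + 3.0e4 * a for a in leak_areas]   # monotone in leak area

def estimate_leak(observed_close_time):
    """Invert the lookup table by linear interpolation."""
    i = bisect.bisect_left(close_times, observed_close_time)
    if i == 0:
        return leak_areas[0]
    if i == len(close_times):
        return leak_areas[-1]
    frac = (observed_close_time - close_times[i - 1]) \
        / (close_times[i] - close_times[i - 1])
    return leak_areas[i - 1] + frac * (leak_areas[i] - leak_areas[i - 1])

def fit_line(ks, vs, N=10):
    """Least-squares line v ~ slope*k + intercept over the last N samples."""
    ks, vs = ks[-N:], vs[-N:]
    k_mean, v_mean = sum(ks) / len(ks), sum(vs) / len(vs)
    slope = sum((k - k_mean) * (v - v_mean) for k, v in zip(ks, vs)) \
        / sum((k - k_mean) ** 2 for k in ks)
    return slope, v_mean - slope * k_mean

K_leak = 1e-2  # hypothetical coefficient in A_leak = K_leak * x_leak^2
cycles = list(range(20, 40))
# Synthetic close times for a linearly growing leak-valve position x = 0.001*k.
observed = [2.0 + 3.0e4 * K_leak * (0.001 * k) ** 2 for k in cycles]
# Estimation: close time -> leak area -> leak-valve position.
positions = [math.sqrt(estimate_leak(t) / K_leak) for t in observed]
# Regression over the last N cycles, then solve for the EOL threshold x_max.
slope, intercept = fit_line(cycles, positions, N=10)
x_max = 0.05
k_eol = (x_max - intercept) / slope
rul = k_eol - cycles[-1]
```

Because the damage progression is modeled as a line, EOL follows in closed form from the fitted coefficients; a different growth model would only change this last solve.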
In general, more robust fault detection strategies may also be used, but for our purposes a simple threshold works well. Experimental Results In this section, experimental results using the valve prognostics testbed are discussed. The valve is continually cycled open and closed in each experiment, with one cycle every 10 seconds, until the end-of-life condition is reached. For fault injection, the leak valve is opened in increments of 1% at each cycle. The time of 10 seconds is chosen such that the valve has sufficient time to perform the given operations under normal operating conditions. In the following sections, results for the discrete valve and the continuous valve are presented, respectively. To evaluate the experiments, two metrics, prognostics horizon and relative accuracy (Saxena, Celaya, Saha, Saha, & Goebel, 2010), are computed. Relative accuracy is computed from the difference between the true and predicted values divided by the true value (in this case, for EOL): RA(k) = 1 − |k_E* − k̂_E(k)| / k_E*, where k_E* denotes the true value and k̂_E(k) the prediction made at cycle k. We define the prognostics horizon, k_PH, as the first time point after fault detection (k_d) from which the relative accuracy remains within a fraction α of the true value; in this case α = 0.15 is used. To compare experiments with different detection times and EOLs, the metric is normalized by computing it as the fraction PH = (k_PH − k_d) / (k_E* − k_d), where a smaller value, which means accurate results earlier, is better. An averaged relative accuracy is computed over all prediction points from k_d to k_E. Discrete Valve For the discrete valve, the leak to atmosphere and leak to supply faults are discussed. Leak to Atmosphere-A total of 5 experiments were performed for this fault. As described in Section 2, the leak to atmosphere fault is injected by controlling the position of the leak valve V1. This emulates a leak across the NO seat of the solenoid valve. As described in Section 3, this fault causes an increase in closing times and a decrease in opening times. Fig.
15 shows the open times of the valve during the fault progression, with a noticeable downward progression, in agreement with the model. Fig. 16 shows the close times, but any trend is masked by the noise in the computed closing times. A fault is detected at the 59th cycle based on the opening times. The estimated leak parameter values, based on the open times of the DV, are shown in Fig. 17. In order to estimate the fault progression parameters, all values since detection are used. The EOL predictions are given in Fig. 18 and the RUL values in Fig. 19, where α = 0.15 represents a desired accuracy constraint, EOL* denotes the true EOL, and RUL* denotes the true RUL. The predictions converge soon after the fault is detected, with PH = 47.83%. RA averages to 98.33%. Over all experiments, PH averages to 66.31% and average RA to 95.42%. For this fault, the progression of the fault is not very large relative to the nominal opening times, and so predictions are accurate only after halfway to EOL. Leak from Supply-A total of 6 experiments were performed for this fault. As described in Section 2, the leak from supply fault is injected by controlling the position of the leak valve V2. This emulates a leak across the NC seat of the solenoid valve. As described in Section 3, this fault causes an increase in opening times and a slight decrease in closing times. Fig. 20 shows the open times of the valve during the fault progression, with a clear upward progression, in agreement with the model. Fig. 21 shows the close times, but any trend is masked by the noise in the computed closing times. A fault is detected at the 52nd cycle based on the opening times. The estimated leak parameter values, based on the open times of the DV, are shown in Fig. 22. In order to estimate the fault progression parameters, the last 15 values are used. The EOL predictions are given in Fig. 23 and the RUL values in Fig.
24, where α = 0.15 represents a desired accuracy constraint, EOL* denotes the true EOL, and RUL* denotes the true RUL. The predictions converge relatively quickly after the fault is detected, with PH = 13.04%. RA averages to 99.07%. Over all experiments, PH averages to 14.83% and average RA to 98.22%. For this fault, the progression of the fault is relatively clear in the opening times, and so predictions are very accurate, and accurate early. Continuous Valve For the CV, the leak from the signal line and the leak from the supply line faults are discussed. Leak from Signal Line-A total of 4 experiments were performed for this fault. As described in Section 2, the leak from signal line fault is injected by controlling the position of the leak valve V3. As described in Section 3, this fault causes an increase in opening times and an increase in steady-state position error. Fig. 25 shows the open times of the valve during the fault progression, without a clear trend. Fig. 26 shows the steady-state position values, with a clear downward trend. A fault is detected at the 48th cycle based on the steady-state position. The estimated leak parameter values, based on the steady-state positions of the CV, are shown in Fig. 27. In order to estimate the fault progression parameters, the last 10 values are used. The EOL predictions are given in Fig. 28 and the RUL values in Fig. 29. The predictions converge more slowly than for other faults, with PH = 60.00%. Due to the slower convergence, RA over the period from fault detection to EOL averages to 88.82%. Results are similar for the other experiments. Over all 4 experiments, PH averages to 63.15%, and average RA to 83.90%. Leak from Supply Line-A total of 6 experiments were performed for this fault. As described in Section 2, the leak from supply line fault is injected by controlling the position of the leak valve V4. As described in Section 3, this fault causes an increase in opening times and an increase in steady-state position error.
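The two evaluation metrics reported in these experiments can be computed as in the following sketch; the normalization and RA form shown here are our reading of the definitions above (after Saxena et al., 2010), and the prediction values are made up for illustration:

```python
def relative_accuracy(eol_true, eol_pred):
    """RA = 1 - |EOL* - EOL_hat| / EOL*."""
    return 1.0 - abs(eol_true - eol_pred) / eol_true

def prognostics_horizon(eol_true, k_d, predictions, alpha=0.15):
    """Normalized prognostics horizon: the first prediction time after
    detection (k_d) from which all predicted EOLs stay within
    alpha * EOL* of the truth, expressed as a fraction of the
    detection-to-EOL span (smaller is better)."""
    cycles = sorted(predictions)
    for i, k in enumerate(cycles):
        if all(abs(eol_true - predictions[c]) <= alpha * eol_true
               for c in cycles[i:]):
            return (k - k_d) / (eol_true - k_d)
    return 1.0

preds = {50: 140, 60: 110, 70: 102, 80: 100}  # cycle -> predicted EOL
ph = prognostics_horizon(eol_true=100, k_d=50, predictions=preds)
ra = relative_accuracy(100, 102)
```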
Fig. 30 shows the open times of the valve during the fault progression, with a clear trend. Over all 6 experiments, PH averages to 29.85%, and average RA to 93.06%. Related Work Despite their prevalence in many domains, and their criticality in many kinds of system operations, applying prognostics to valves has only recently received attention in the scientific literature. In (Gomes, Ferreira, Cabral, Glavão, & Yoneyama, 2010), a valve in a pressure control system was investigated. The probability integral transform was used to compute a dissimilarity measure for the identification of anomalies and trends in anomalous behavior. However, no prediction method was developed. The unscented particle filter is used by (Tao, Zhao, Zio, Li, & Sun, 2014) for the estimation of the health state of a pneumatic valve. Based on the predicted health distribution, a replacement strategy is developed. The approach is validated only in simulation. The prognostics of a launch valve in the steam catapult of an aircraft carrier is considered in (Shevach et al., 2014). A risk-sensitive particle filter is used for state estimation, and an exponential moving average filter is used for prediction. Like our approach, valve timing data is used for fault detection and as the basis for prediction. However, our approach predicts EOL/RUL based on a dynamic model, whereas this approach uses a trend learned from data with the moving average filter. Pneumatic valves for air bleed systems in aircraft are considered in both (Lorton et al., 2013) and (Ribeiro, Yoneyama, Souto, & Turcio, 2015). In the former, a piecewise-deterministic Markov process (PDMP) modeling framework is used, with a Monte Carlo-based prediction approach. In the latter, only the degradation level is identified and no prediction is performed. A PDMP modeling framework with Monte Carlo-based prediction is also used in (Lin et al., 2014), but for a pneumatic valve in a nuclear power plant residual heat removal system.
Conclusions This paper described the development of a model-based prognostics approach for two types of pneumatic valves, for which a custom testbed provided run-to-failure data. The system health management functions exercised included fault detection, fault isolation, damage estimation, and remaining life prediction. The algorithms were validated on experimental results from the testbed, which allowed faults to be injected and fault magnitude to be modulated according to a fault progression model. The function governing the fault progression model can be updated based on the preferred fault propagation model choice. Faults were detected late (with an average normalized prognostic horizon of ~0.2) due to masking of the fault signatures in indirect sensor measurements (a fairly common problem in systems health management). Prediction after detection was quite accurate for the DV valve (with all predicted estimates falling within the 20% alpha cone at PH = 0.28) but not as accurate for the CV valve (most values outside of the alpha-lambda cone), owing to a convolution of sensor noise and model shortcomings. Nonetheless, the convergence performance was very high for all valves and fault modes. A limitation of the current approach is that the fault progression was carried out using a linear increase of the valve leakage. Although there is a nonlinear relationship between percent open and leak size/flow, this behavior does not necessarily represent the progression of a fault due to corrosion well. A better model reflecting that relationship can be imposed on the testbed without any other change to the model of the valve or the testbed. Additional fault progression profiles representing other fault modes (besides corrosion) could easily be implemented, since this would just be a change of the opening times of the proportional valves. A CV valve would typically be opened to different positions.
Whereas this information should be used (and possibly help to further improve performance), the work here only considered open/close information. Initial work in that direction for a rotary valve has been performed in (Daigle, 2015). A further direction for future work is to consider uncertainty (Sankararaman, Daigle, & Goebel, 2014). Currently, uncertainty is ignored, although there is substantial uncertainty in the fault estimates and in the future valve operation, which can result in corresponding prediction uncertainty that should be captured. Another aspect to look into is correlating accelerated aging of the components with real life aging. Additional field usage data may help in mapping accelerated aging experimental data with real usage data. Shevach G, Blair M, Hing J, Venetsky L, Martin E, & Wheelock J (2014, September). Towards performance prognostics of a launch valve. In Annual conference of the prognostics and health management society. Tang L, Hettler E, Zhang B, & DeCastro J (2011). A testbed for real-time autonomous vehicle phm and contingency management applications. In Annual conference of the prognostics and health management society 2011. Tao T, Zhao W, Zio E, Li Y-F, & Sun J (2014). Condition-based component replacement of the pneumatic valve with the unscented particle filter. In Prognostics and System Health Management Conference (pp. 290-296). Teubert C, & Daigle M (2013, October). I/P transducer application of model-based wear detection and estimation using steady state conditions. In Proceedings of the annual conference of the prognostics and health management society 2013 (p. 134-140). Figure 1. Prognostics demonstration testbed schematic. Kulkarni et al. Page 22 Int J Progn Health Manag. Author manuscript; available in PMC 2020 August 03.
Cerebral microsporidiosis manifesting as progressive multifocal leukoencephalopathy in an HIV-infected individual - a case report Microsporidia have become increasingly recognized as opportunistic pathogens since the genesis of the AIDS epidemic. The incidence of microsporidiosis has decreased with the advent of combination antiretroviral therapy, but it is frequently reported in non-HIV immunosuppressed patients and as a latent infection in immunocompetent individuals. Herein, we describe an HIV-infected male (46 years) with suspected progressive multifocal leukoencephalopathy that had not responded to optimal antiretroviral therapy, steroids, or cidofovir. Post-mortem examination revealed cerebral microsporidiosis. No diagnostic clue, however, was found while the patient was alive. This report underscores the need for physicians to consider microsporidiosis (potentially affecting the brain) when no other etiology is established, in HIV-infected and non-HIV immunosuppressed patients as well as in immunocompetent individuals. A 46-year-old homosexual male presented at the emergency room on February 22, 2002 for a 5-month progressive visual impairment, headache and occurrence of right hemiparesis in the last week. He was diagnosed with HIV infection in December 1987. His past history was not significant except for varicella in July 1996. At this previous time, his CD4+ count was 650 cells/μL (24%) with a CD4+/CD8+ ratio of 0.37. In July 2000 and November 2001, the CD4+ cell counts were 420 and 330 cells/μL, respectively. No HIV-1 viral load measurements were available for these dates. Until he was admitted to our hospital, the patient had declined any antiretroviral therapy. At admission, he was afebrile and his physical examination was unremarkable except for right hemiparesis and left homonymous hemianopsia. His complete blood count (CBC), liver function tests and routine biochemistry were within normal limits.
The CD4+ count was 340 cells/μL (16%) with a CD4+/CD8+ ratio of 0.20. A brain CT scan and magnetic resonance imaging (MRI) revealed multifocal coalescent lesions with no mass effect and very little or no enhancement in the white matter of the upper left parietal and left occipital, right temporal and frontal lobes, with cerebral atrophy, suggestive of progressive multifocal leukoencephalopathy (PML) (Figure 1, panels a and b). Serologic tests for syphilis (RPR and Treponema/TP-PA), EBV-VCA IgM and toxoplasmosis (IgG and IgM), as well as the search for cryptococcal antigen, were negative. The patient had positive results for EBV-EBNA-1 IgG, CMV IgG, hepatitis C (anti-HCV), hepatitis A, anti-HBs and anti-HBc. His condition improved, and on June 10, 2002, the CD4+ count showed 440 cells/μL (26%) with a CD4+/CD8+ ratio of 0.38, whereas the viral load decreased to 91 HIV-1 RNA copies/mL. Moreover, the patient reported a subjective improvement of his right hemiparesis. On June 26, 2002, AZT was replaced by D4T to avoid anemia due to AZT and probenecid interaction. The patient was again hospitalized on August 22, 2002 for fever (39°C), seizures and status epilepticus, necessitating admission to the Intensive Care Unit (ICU) for intubation and mechanical ventilation. Anti-convulsive therapy with phenytoin and lamotrigine was then initiated. Laboratory analysis revealed this time a CD4+ count of 380 cells/μL (21%) with a CD4+/CD8+ ratio of 0.38 and an HIV viral load below the limit of detection (<50 HIV-1 RNA copies/mL). The brain lesions had not changed since the previous brain MRI. An ophthalmologic examination confirmed bilateral blindness of central origin, but showed no retinitis, keratoconjunctivitis or deep corneal stromal infection. He was discharged on September 19, 2002, and was seen every two weeks at the Ambulatory Unit. On September 23, 2002 his HIV viral load was again below the limit of detection; the CD4+ count was 350 cells/μL (22%) and the CD4+/CD8+ ratio was 0.36.
Despite optimal control of HIV infection and continuous combination antiretroviral therapy (cART), the patient's status did not improve, and he was re-admitted to the ICU for status epilepticus on April 28, 2003, intubated and mechanically ventilated. A subsequent brain MRI showed no change as compared with the previous examinations, but was still suggestive of PML. At admission, lactic acid and CK levels, as well as platelet count, remained within normal limits. AST and ALT were slightly elevated: 65 and 67 U/L, respectively. While in the ICU, the patient developed multiorgan failure with rhabdomyolysis (CK 47,500), elevated liver enzymes (AST 2,557 U/L), elevated LDH (4,070 U/L) and disseminated intravascular coagulation: thrombocytopenia (12 × 10^9/L), diminished fibrinogen levels and increased prothrombin time (INR). Lactic acid levels rapidly increased to 17.11 mmol/L on April 29, 2003, before he expired. Rhabdomyolysis and lactic acidosis were probably the consequences of repeated muscular convulsions. Blood and urine cultures remained negative. The search for the cryptococcal antigen was again negative. Permission for post-mortem examination was obtained. At macroscopy, the cerebral hemispheres were unremarkable. The circle of Willis showed a normal architecture without significant arteriosclerotic lesions. Leuko-encephalopathic lesions associated with secondary atrophy, predominantly localized in the white matter, were noted. These lesions were multifocal and bilateral, the largest observed in the orbito-frontal lobes. Another grey lesion measuring 0.8 × 0.6 cm, which differed from the leuko-encephalopathic lesions, was noted in the right internal pallidal nucleus. The corpus callosum had secondary atrophy. Transverse sections of the brain stem showed atrophy of the right bulbar pyramid. Sagittal sections of the cerebellum revealed ill-demarcated grey areas within and around the dentate nuclei.
At microscopic examination, cerebral microsporidiosis was documented affecting predominantly the white matter (mostly on the right side) of the orbito-frontal lobes with central, right temporo-parietal and cerebellum extension and a right internal pallidal abscess. Intracellular clusters of microsporidial spores were found, some of them with tubular extensions (Figure 1 panels c and d). In addition, a generalized severe anoxic ischemic encephalopathy was noted. Numerous microscopic foci of wallerian degeneration of the perivascular white matter, predominantly fronto-temporo-parietal, compatible with cerebral arterial ischemia were evidenced. No suggestive criteria for PML, such as oligodendrocytic inclusions (Papova type) or reactive dysmorphic gliosis were noted; the immunoreactivity for simian virus (SV) 40 was absent. Furthermore, the absence of giant multinucleated cells rendered a diagnosis of HIV encephalopathy improbable. Nonetheless, several extensively calcified small arteries as noted in HIV encephalopathy were present. No immune reactivity to anti Toxoplasma antibodies was present. The Grocott and PAS stainings were negative. The PCR for JCV on brain tissue was not performed. Discussion Microsporidia are widely recognised pathogens in both invertebrates and vertebrates [1]. Microsporidia belong to the phylum Microsporidia, with more than 144 genera and 1200 species [1,2]. The most common human pathogens are: Encephalitozoon, Enterocytozoon, Pleistophora and Nosema. Microsporidial hepatitis, sclerosing cholangitis, peritonitis, cardiac, sinusal, urinary, pulmonary, renal or ocular involvement have been reported [3]. Microsporidia have been detected in clinical samples from intestines, livers, muscles, corneas, kidneys, adrenals, gonads, ganglia, small arteries, biliary tracts, urine, sinuses, and brain [4,5]. 
Whereas the incidence of microsporidiosis has decreased in HIV-infected people since the availability of cART, this infection has been increasingly reported in non-HIVinfected individuals, such as solid organ and bone marrow transplant recipients, as well as in cancer, diabetic and elderly patients [6]. Furthermore, microsporidiosis has even been reported in immunocompetent persons [6,7] and in solid organ transplant recipients of latently infected donors [8]. We decided to report this case 12 years later because of this new emerging evidence and increased interest. Moreover, we now seek to alert physicians to potentially include microsporidiosis in the differential diagnosis not only in HIV-infected patients. Cerebral microsporidiosis was first reported in 1959 [cited by reference 5] and 12 cases due to E. cuniculi, all in HIV-infected persons, can be found in the medical literature from 1991 to 1998 [9]. Several other cases were described in immunosuppressed, transplant recipients and HIV-infected individuals [6,10]. In addition, one case was reported in an immunocompetent patient, displaying hemiparesis and epilepsy [7]. Some diagnosed patients benefited from treatment with albendazole, which is active against E. cuniculi [11]. In this case report, cerebral microsporidiosis was documented post mortem by morphologic examination of brain samples. Interestingly, this diagnosis was not initially considered when the patient was living. The patient presented with no other clinical manifestations such as diarrhea, keratoconjunctivitis, sinusitis, cholangitis, hepatitis, renal injury, which may have suggested a microsporidial infection. In addition, his CD4 + count at admission and 2 months prior was greater than 330 cells/μL in the absence of antiretroviral therapy. Moreover, the brain CT-Scan and MRI findings were suggestive of PML [12]. 
Unfortunately, no tests for microsporidia were performed and no treatment was initiated while the patient was living, thus, there was no logical reason to suspect microsporidiosis. At necropsy, no other techniques, such as tissue culture, monoclonal antibodies staining, PCR amplification of ribosomal RNA or DNA were performed to identify and characterize the implicated microsporidian species. It is unclear as to how and when the patient acquired this infection. Microsporidiosis can be transmitted by a respiratory route, contaminated water or food, contact with animals (such as dogs and rabbits), birds, invertebrates or by contact with an infected person [2]. In spite of well-preserved CD4 + counts and CD4 + /CD8 + ratios, this patient presented with diminished CD16 + 56 + cell counts (10-60 cells/μL; normal 130-700) -the main subpopulation of natural killer (NK) cells. This reduced cell count may have, in part, contributed to his illness. Unfortunately, this finding was not considered during his hospitalisations. Although the T-cell mediated responses are the main protective mechanisms against microsporidiosis, the NK cells may contribute to the immune response and control of this infection [13]. There is growing evidence that latent microsporidiosis is common in immunocompetent individuals and could, therefore, be reactivated during immunosuppression, such as in HIV-infected and immunosuppressed persons, the elderly, transplant recipients, as well as in patients with malignancies or diabetes [6,14]. It is, therefore, possible that our patient experienced a reactivation of latent microsporidiosis that he had acquired before becoming HIV-infected. The diminished CD16 + 56 + cell counts may likely be responsible, at least in part, for the reactivation. 
We would suggest that cerebral microsporidiosis should be considered in the differential diagnosis of brain lesions in HIV-infected as well as in other immunosuppressed patients or transplant recipients, particularly when the etiology is unknown. We would suggest that urinary and CSF specimens be submitted for detection of Microsporidia. A pre-emptive treatment with albendazole may be considered when the brain lesions do not improve despite optimal HIV control, improved immunity and treatment for other suspected brain lesions. Collectively, given the ubiquitous nature of microsporidia, their multiple routes of transmission, the potential that a latent infection may be reactivated or transmitted through donated organs, and the multitude of clinical manifestations, this infection should be considered in the differential diagnosis when no definite etiology is established. Consent Written informed consent for autopsy was obtained from his mandatary and friend, the only next of kin to the patient. A copy of the written consent is available for review by the Editor-in-Chief of this journal. At the time of manuscript writing (11 years after the patient's death), we were unable to identify an individual from whom to seek consent for publication. We informed the Ethical Research Committee and a waiver was granted for consent to publish this case report.
Diagnosis of Prostate Cancer Using GLCM-Enabled KNN Technique by Analyzing MRI Images. Introduction: The prostate is a small but essential organ in the male reproductive system. It produces the fluid component of semen, which carries sperm through the reproductive tract, and it is situated between the urinary bladder and the upper urethra, the conduit through which urine passes from the bladder. Prostate cancer (PC) is the most common cancer in men aside from skin cancer, and it has become one of the most pressing global public health issues. It develops from uncontrolled growth of cells within the prostate gland [1]. Prostate cancers may progress in one of two ways, gradually or rapidly. Slow-growing tumours usually remain confined to the prostate; an estimated 85 percent of prostate cancer cases involve slow-growing tumours, and active monitoring is an essential component of their management [2]. The second kind of prostate cancer, in contrast, grows quickly and metastasizes to other areas of the body. Reliable monitoring techniques are required to differentiate between these two types of progression. In most cases, early detection of PC is accomplished through routine physical examinations, and the first step in devising a treatment plan is to pinpoint the precise location of the tumour within the prostate. Effective and dependable screening approaches are therefore essential to achieving a high survival rate.
The PSA test, transrectal ultrasonography, and magnetic resonance imaging (MRI) are the three most widely used prostate cancer screening methods [3]. Whereas the original prostate MR guidelines focused primarily on the classification of clinical relevance, subsequent revisions have focused on developing worldwide standards for MRI acquisition and reporting, with each new release intended to keep the standards for image capture and reporting up to date. Several recent studies have assessed the effect of recommendations made on the basis of these criteria. Any one of these approaches may be used to classify a clinically relevant PC lesion, although there are limits to be considered when identifying lesions that are small but severe. The PI-RADS guideline has been shown to assist in detecting cancer that has spread outside the prostate, a factor with a substantial impact on cancer staging [4]. Biological databases contain a tremendous amount of information for researchers [5], and gaining insights from such massive data collections is increasingly challenging. Machine learning, which arose as data mining became an important component of knowledge discovery, is a form of learning in which a machine improves itself using examples, comparisons, and past experience.
The fundamental concept behind machine learning is recognizing patterns in data and drawing rapid conclusions from a variety of datasets. Machine-learning methods can, for example, automate the screening of ligand libraries [6,7]. Histopathology, which is used for the diagnosis and study of illnesses that damage the body's tissues, requires careful microscopic examination of tissues and/or cells; histopathologists provide diagnoses based on tissue samples to assist other medical professionals in treating patients. The machine-learning approach presented in this article identifies prostate cancer through the examination of MRI images. Histogram equalization is used in the preprocessing stage to improve overall picture quality, the fuzzy C-means algorithm performs image segmentation, the Gray Level Co-occurrence Matrix method extracts features, and the KNN, random forest, and AdaBoost algorithms perform classification. Literature Survey: Rampun et al. [8] used a combination of an anisotropic diffusion filter and a median filter. Because noise and edges both produce strong gradients, it is challenging to remove noise from images with a low signal-to-noise ratio: thresholding can recognize a noise gradient, but the edges are smoothed in the process. Samarasinghe et al. [9] carried out their work using a three-dimensional sliding Gaussian filter; because this filtering strategy cannot eliminate the noise distribution in MP-MRI images, more sophisticated alternative strategies have been proposed to address such problems.
MP-MRI images can benefit from the sparsity provided by wavelet decomposition and shrinkage techniques; the wavelet transform is one example of an orthogonal transformation. The Rician noise distribution, however, persists in the wavelet transform domain, so the wavelet and scaling coefficients must be adjusted to account for the noise distribution in the data. Lopes et al. [10] therefore used a joint detection and estimation approach to filter the noise in T2W images, computing a maximum a posteriori estimate of the noisy wavelet coefficients to obtain the noise-free coefficients. Each picture was then normalized so that the PZ region had zero mean and unit standard deviation, and the normalized MP-MRI pictures were used for training and evaluation in the study. This brought the dynamic ranges of the various MP-MRI sequence intensities into alignment, which increased segmentation stability. Raw images are distorted not only by noise but also by a bias field produced by an endorectal coil [11]. The bias field appears in MRI images as spatial variation in signal intensity: the intensity of similar tissues changes greatly depending on their location in the image, which makes the subsequent stages of a computer-aided diagnosis system more challenging. Because both segmentation and classification involve a learning component, training images are necessary.
Therefore, to make a correct automated diagnosis, it is essential to collect signal-intensity images from patients whose readings are comparable and who belong to the same group (cancerous or non-cancerous). Even when all patients are examined with the same scanner, the same technique, and the same settings, some variation remains in the resulting images. Viswanath et al. [12] used a piecewise linear normalization strategy on T2W images to eliminate variability across patients and ensure repeatability; in that work, piecewise linear normalization was used to locate and extract the original foreground. Atlas-based segmentation is the method most often employed in medical image analysis because it copes better with pixel intensities and poorly defined regions. When analyzing prostate MRI data, Tian et al. [13] used a graph-cut segmentation strategy with the superpixel concept; graph-cut segmentation is helpful because it reduces the computing and memory resources required, but because it is only partially automated, the procedure must be initialized manually. Martin et al. [14] separated the prostate from the MRI using an atlas-based deformable-model segmentation technique: an atlas-based approach moved the contour closer to the borders of the prostate, and a deformable model provided a probabilistic representation of the prostate's location. Vincent et al. [15] developed a fully automated technique for segmenting the prostate in MRI images using an active appearance model.
In that method, the model is carefully matched to the test images through a multistart optimization process. Klein et al. [16] used an atlas-based matching strategy to segment the prostate automatically, applying nonrigid registration to compare the target image with a large number of hand-segmented, prelabeled atlas images; after registration, the matching segmentations are combined to produce an MR segmentation of the prostate. Deformable models use both internal and external energies to segment the prostate: internal energy smooths the prostate boundaries, while external energy propagates the shape. Chandra et al. [17] developed a method that can quickly and automatically segment prostate images scanned without an endorectal coil; during the training phase of this case-specific deformable system, a patient-specific triangulated surface and image-feature system is constructed, and the initialization surface can then be adjusted using template matching. In recent years, multi-atlas techniques and deformable models have been applied increasingly to automatic prostate segmentation. Yin et al. [18] employed a fully automated and very reliable prostate segmentation method: after a normalized gradient field has been cross-correlated with the prostate, a graph-search approach refines the prostate mean-shape model. Deformable models are helpful in situations where noise or sampling irregularities produce spurious prostate boundaries.
A simple strategy for obtaining a complete solution while overcoming segmentation challenges is to use graph cutting. Mahapatra and Buhmann presented a graph-cut strategy for prostate segmentation [19] that makes use of collected semantic information: random forests were used in a supervoxel segmentation strategy to estimate the volume and location of the prostate, the volume estimate was further optimized using random forest classifiers trained on images and contextual signals, and a Markov random field was used to optimize the graph cuts for prostate segmentation. Puech et al. [20] created a set of rules for predicting test results using data obtained from medical decision-support systems. Data can be classified using similarity measures and the fundamental supervised machine-learning method known as k-nearest neighbor (k-NN). The k-means clustering technique, by contrast, is an unsupervised algorithm that iteratively splits the data into k groups: every point in the feature space is assigned to the geographically closest of the k centroids, a new mean is then calculated for each cluster, and each cluster's centroid is moved to the new mean. Assigning and updating centroids continues until the centroids no longer change; k denotes the number of clusters. Linear discriminant analysis (LDA) is a classification method used to establish an optimal linear separation between two classes.
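The assign-and-update loop of k-means described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the deterministic initialization (evenly spaced samples) and the toy two-blob data are illustrative choices.

```python
import numpy as np

def kmeans(X, k, iters=100):
    """Plain k-means: assign each point to its nearest centroid, then move
    each centroid to the mean of its assigned points, until nothing changes."""
    # Initialize centroids with k evenly spaced samples (a simple deterministic choice).
    centroids = X[np.linspace(0, len(X) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        # Assignment step: label each point with the index of the closest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: recompute each centroid as the mean of its cluster.
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Two well-separated blobs should be recovered as two clusters.
X = np.vstack([np.zeros((5, 2)), np.ones((5, 2)) * 10.0])
labels, centroids = kmeans(X, k=2)
```

Because the update step only ever moves centroids toward the means of their current clusters, the loop terminates once an assignment reproduces itself.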
That is, LDA maximizes the between-class variance while minimizing the within-class variance. The Naive Bayes classifier is among the most widely used; it is a probabilistic classifier based on the assumption that each feature dimension is independent, and it assigns images to the class with the greatest posterior probability. Another widely used approach to classification is adaptive boosting, or AdaBoost for short, an ensemble learning technique created in [21] in which many weak learners are merged to produce a single strong classifier. The AdaBoost (AdB) classifier is superior to the random forest classifier in terms of performance and favours weak learners such as decision stumps, classification trees, and regression trees; in their research, Lopes et al. used an AdaBoost classifier for the classification procedure. Class labelling with Gaussian processes is one way to perform classification within a sparse-kernel approach, so called because it generates labels by applying a kernel over the training dataset. To assign a category to an unlabeled image, sparse-kernel classification algorithms rely on a restricted number of labelled samples from the training dataset [22]. The support vector machine (SVM), an example of a sparse-kernel technique, selects the linear hyperplane that separates the two label classes with the largest margin.
Choosing the most appropriate linear hyperplane on which to classify the data achieves this goal. Support vector machines are trustworthy and extensible, which makes them useful classifiers in real-world applications. Methodology: This section presents machine-learning techniques for prostate cancer detection by analyzing MRI images. Image preprocessing is done using histogram equalization, which improves image quality. Image segmentation is performed using the fuzzy C-means algorithm, features are extracted using the Gray Level Co-occurrence Matrix algorithm, and classification is performed using the KNN, random forest, and AdaBoost algorithms. Figure 1 shows the machine-learning pipeline for prostate cancer detection from MRI images. Clearer and more detailed pictures may be obtained from medical imaging procedures such as digital X-rays, MRI, CT, and PET scans by using the basic image-processing method of histogram equalization; high-definition images are required to determine the pathology and arrive at a diagnosis from these pictures. Note that histogram equalization can also make noise that was previously hidden in the picture visible; the method is nonetheless often used in medical imaging analysis [23]. After determining the image's gray mapping through gray-level operations, the approach produces a gray-level histogram with a smooth, nearly uniform distribution of gray levels. Clustering is a strategy that groups similar patterns together in an effort to find the underlying relationships between the pixels in a picture.
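The gray-level remapping behind histogram equalization described above can be sketched as follows. This is a minimal illustration using the cumulative histogram, not the paper's exact implementation; the low-contrast test image is a synthetic placeholder.

```python
import numpy as np

def histogram_equalize(img, levels=256):
    """Remap gray levels so the cumulative histogram is approximately uniform."""
    img = np.asarray(img, dtype=np.int64)
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                                        # normalize the CDF to [0, 1]
    mapping = np.round(cdf * (levels - 1)).astype(np.int64)
    return mapping[img]                                   # apply the gray-level mapping

# A low-contrast image whose gray levels sit in the narrow band [100, 120]
# gets stretched across almost the full [0, 255] range.
low_contrast = np.repeat(np.arange(100, 121), 10).reshape(21, 10)
equalized = histogram_equalize(low_contrast)
```

Gray levels that occur often are pushed far apart by the steep part of the cumulative distribution, which is what enhances local contrast and can also amplify previously invisible noise.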
Clustering refers to the practice of grouping objects based on the fundamental features they share. In the FCM approach, data objects are sorted into groups based on their membership values; the least-squares technique is used while optimizing the objective function, and the final partition of the data is produced once the computation is complete [24]. Feature extraction is an image-processing method that reduces the amount of stored data by deleting dimensions from a feature subset that are unnecessary or irrelevant. The GLCM approach recovers texture properties while preserving the relationship among pixels, which is accomplished by calculating the co-occurrence values of the gray levels. The GLCM is constructed from the conditional probability density functions p(i, j | d, θ) for a selected direction θ = 0, 45, 90, or 135 degrees and distances d ranging from 1 to 5. For instance, the probability that two pixels with gray levels i and j are spatially connected is given by p(i, j | d, θ), where d is the intersample distance. Among its many significant attributes, the GLCM emphasizes contrast, correlation, energy, entropy, and homogeneity [25]. KNN is a supervised method used particularly for classification. An important property of this method is that it is deterministic: it always produces the same results on the same training data. A sample is assigned a class based on the values closest to it in the training population.
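The GLCM construction and texture features described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it counts directional (non-symmetric) pixel pairs for a single offset, uses a tiny 4-level example image, and computes contrast, energy, homogeneity, and entropy (correlation is omitted for brevity).

```python
import numpy as np

def glcm(img, d=1, theta=0, levels=8):
    """Gray Level Co-occurrence Matrix p(i, j | d, theta) for one offset.

    theta is the angle in degrees (0, 45, 90, or 135); counts are directional.
    """
    offsets = {0: (0, d), 45: (-d, d), 90: (-d, 0), 135: (-d, -d)}
    dr, dc = offsets[theta]
    P = np.zeros((levels, levels), dtype=np.float64)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[img[r, c], img[r2, c2]] += 1        # count the ordered gray-level pair
    return P / P.sum()                                # normalize to a joint probability

def texture_features(P):
    i, j = np.indices(P.shape)
    eps = 1e-12                                       # avoid log(0) in the entropy term
    return {
        "contrast": np.sum((i - j) ** 2 * P),
        "energy": np.sum(P ** 2),
        "homogeneity": np.sum(P / (1.0 + np.abs(i - j))),
        "entropy": -np.sum(P * np.log2(P + eps)),
    }

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
feats = texture_features(glcm(img, d=1, theta=0, levels=4))
```

For this 4x4 example there are 12 horizontal pairs, so every feature is an average over those pair probabilities; in practice one feature vector is built per image (or per region) and passed to the classifier.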
Similarity between two pixel locations is quantified using the Euclidean distance, so pixels end up in the group to which they most probably belong. In KNN, K denotes the number of nearest neighbours considered, and the choice of K is the most essential consideration; when there are just two classes, K is usually chosen to be odd. The simplest case of the algorithm is the nearest-neighbour rule, K = 1 [26]. The random forest (RF) model, as its name suggests, constructs a forest of decision trees, each trained in a distinct way; the resulting forest represents the set of feasible responses, and the trees' outputs are combined to produce more accurate estimates [27]. AdaBoost is a method that can be applied to weak classifiers to increase their classification accuracy. The AdaBoost algorithm first distributes initial weights equally across the observations; after each iteration, incorrectly classified observations are given greater weight, while correctly classified observations are given less weight. Because the observation weights reflect how difficult each observation is to classify, the efficacy of the classifier is significantly improved and instances of incorrect categorization are reduced.
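The KNN majority-vote rule described above can be sketched in a few lines. The 2-D feature vectors and the benign/malignant labels below are toy placeholders, not data from the paper's experiments.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training samples (Euclidean)."""
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to every training sample
    nearest = np.argsort(dists)[:k]               # indices of the k closest samples
    votes = y_train[nearest]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]              # majority label wins

# Toy 2-D feature vectors (e.g. two texture features per image);
# labels: 0 = benign, 1 = malignant (illustrative only).
X = np.array([[0.10, 0.20], [0.20, 0.10], [0.15, 0.15],
              [0.90, 0.80], [0.80, 0.90], [0.85, 0.85]])
y = np.array([0, 0, 0, 1, 1, 1])
pred = knn_predict(X, y, np.array([0.82, 0.88]), k=3)  # → 1
```

An odd k avoids ties in the two-class case, which is why the text above recommends it.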
In the boosting strategy, many weak learners are fitted sequentially in an adaptive manner: each subsequent model in the series places greater emphasis on observations that earlier models handled poorly [28]. Result Analysis: In this experimental setup, the PROMISE dataset [29] is used. Eighty MRI images are used in the study: 55 images for training the model and 25 images for testing. Image preprocessing is done using histogram equalization, which improves image quality. Image segmentation is performed using the fuzzy C-means algorithm, the Gray Level Co-occurrence Matrix technique is used for feature extraction, and the KNN, random forest, and AdaBoost classification algorithms are used in the classification process. The performance of the different algorithms is evaluated and compared in terms of accuracy, sensitivity, and specificity; performance is shown in Figures 2-4. Conclusion: Cancer is the leading cause of mortality among those over the age of 65. If a diagnosis of the patient's condition can be made as quickly as possible, it will significantly improve the patient's chances of surviving the illness. Medical imaging, much like traditional diagnosis, is analyzed by skilled specialists who search for any indicators of malignancy. Manual diagnosis, however, may be time-consuming and subjective owing to the wide interobserver variability caused by the huge quantity of medical imaging data, which can make providing an appropriate diagnosis to a patient challenging.
Accomplishing tasks that require machine learning and the processing of intricate images calls for state-of-the-art computing technology. For decades, efforts have been made to create computer-aided diagnostic systems with the intention of supporting medical professionals in the early diagnosis of various types of cancer. It is expected that one man in every seven will be diagnosed with prostate cancer at some point in his life; an unacceptably high percentage of men are told they have prostate cancer, and each year this illness claims the lives of an increasing number of people. Due to the high quality and the multidimensional nature of MRI pictures, it is necessary to use a suitable diagnosis system in conjunction with computer-aided diagnosis (CAD) tools, and the present work was developed with these goals in mind. Because existing CAD technology has been shown to be beneficial, researchers are currently focusing their efforts on strategies to increase the accuracy, specificity, and speed of these systems. This research presents a model that is effective with regard to image processing, feature extraction, and machine learning. Data Availability: The data shall be made available on request. Conflicts of Interest: The authors declare that they have no conflicts of interest.
Exploring Impedance Growth in High Voltage NMC/Graphite Li-Ion Cells Using a Transmission Line Model A discrete transmission line model (TLM) for the impedance of the positive electrode in a Li-ion cell was studied to investigate causes of impedance increase for Li[Ni 0.42 Mn 0.42 Co 0.16 ]O 2 (NMC442) positive electrodes operated at high voltage ( > 4.4 V vs. Li/Li + ). The TLM included contact resistance between the conductive carbon and the active particles (R c ), electrical path resistance through the carbon network (R e ), ionic path resistance through the bulk electrolyte (R i ) and transfer resistance/capacitance (R s , C) through the SEI layers formed on the active particles. It was found that an increase in any of R e , R i or R c was necessary to increase the high frequency intercept of the impedance spectra. A limited increase in the spectrum diameter of the TLM was achievable by increasing R e or R i , but an unlimited increase was only possible by increasing the SEI resistance R s . Comparison with experiment concluded that the high voltage impedance growth observed in NMC442/graphite cells is primarily due to increases in R s , while minor increases in R e , R i or R c may occur. A brief investigation of inhomogeneous SEI capacitance/resistance produced impedance spectra with a range of heights and asymmetries. This can explain in part the variety of shapes of impedance spectra from real impedance measurements of Li-ion cells. Lithium-ion batteries have enjoyed widespread use in portable electronics for over two decades, and are increasingly relevant for electric vehicles as operational life and energy density continue to increase. The use of high voltage LiMO 2 (M = Ni, Mn, Co), or NMC (Li[Ni x Mn y Co 1-x-y ]O 2 ), positive electrode materials improves the energy density of the cell, but difficulties arise in maintaining calendar life and coulombic efficiency, especially at voltages greater than 4.3 V.
1 The internal impedance of a cell is closely associated with its health, 2-4 so electrochemical impedance spectroscopy (EIS) is a powerful diagnostic technique. It is important to connect the features seen in EIS spectra with the correct internal processes occurring within a cell, and a comprehensive circuit model for the impedance of a cell is needed to understand these processes. Figure 1a shows capacity vs. cycle number for two NMC442/graphite Li-ion pouch cells (240 mAh). (In the impedance spectra discussed below, the high magnitude of the high-frequency impedance is due to the two-wire measurement apparatus: the impedance of the wires contributes to the overall series resistance measured.) The exact electrode compositions in the studied cells were: positive electrode, 96.2%:1.8%:2.0% = active material : carbon black : PVDF binder; negative electrode, 95.4%:1.3%:1.1%:2.2% = active material : carbon black : CMC : SBR. The pouch cells were the so-called 402035 size: 40 mm long by 20 mm wide by 3.5 mm thick. The positive electrode coating had a total (both coatings and current collector) thickness of 105 μm and was calendered to a density of 3.55 g/cm 3 ; the negative electrode coating had a total thickness of 110 μm and was calendered to a density of 1.55 g/cm 3 . The positive electrode coating had an areal density of 16 mg/cm 2 (one side) and the negative electrode 9.5 mg/cm 2 (one side). Both electrodes were coated on both sides, and the electrodes were spirally wound, yielding an active area of around 100 cm 2 . One cell was cycled between 2.8 V and 4.4 V with a 24 h hold at 4.4 V, and the other between 2.8 V and 4.5 V with a 24 h hold at 4.5 V (see Figure 1b). Additional information on the preparation of these cells is outlined by Nelson et al. 1 The 4.5 V cell shows considerable capacity fade after 30 cycles, whereas the 4.4 V cell has good capacity retention for the first 80 cycles.
Figure 1c shows the impedance spectra of the 4.4 V cell taken at 4.4 V, and Figure 1d shows spectra for the same cell at 3.8 V. Cycle number increases from red to blue in both Figures 1c and 1d. As cycle number increases, the spectra shift to the right but their diameters do not grow significantly. Figure 1e shows the impedance spectra of the 4.5 V cell taken at 4.5 V: the shifting of the spectra is again seen, accompanied by large growth of the diameter of the semicircle in the mid-frequency regime at 4.5 V. Figure 1f shows impedance spectra for the same cell taken at 3.8 V. The diameters of the semicircles are much smaller than in Figure 1e, indicating that the impedance growth at high voltage is largely reversible; however, the diameters of the spectra still increase slowly and irreversibly with cycle number. The spectra in Figures 1c-1f have inductive tails at high frequency due to the internal inductance of the measurement device used to collect the data. It has been previously established by Chen et al. that the impedance growth exemplified by Figure 1 is mainly due to degradation of the positive electrode. 5 To support this claim, Figure 2 shows symmetric-cell impedance spectra for LaPO 4 -coated Li[Ni 0.42 Mn 0.42 Co 0.16 ]O 2 /graphite (NMC442) pouch cells. 6 Figure 2a plots the positive electrode symmetric-cell impedance and Figure 2b the negative electrode impedance; the green stars mark the 1 Hz points. The positive electrode has greater impedance than the negative electrode by a full order of magnitude. Additionally, the positive electrode symmetric cells exhibit extremely high impedance at frequencies lower than 1 Hz, whereas the negative electrode symmetric cells exhibit much lower impedance at frequencies higher than 1 Hz. Both the order of magnitude and the frequency regime therefore indicate that the high impedance seen in Figure 1e is due to positive electrode effects.
The exact cause of the reversible and irreversible increases in impedance at high voltage is still a matter of discussion. Kerlau et al. postulate that electrical contact resistances develop between the carbon black particles and the active particles in the cathode as the cell cycles. 7 These additional resistances are claimed to cause a substantial increase in the diameter of the Nyquist spectra without significantly affecting the high frequency intercept. Metzger et al. claim that anodic oxidation of carbon black in the cathode and of ethylene carbonate in the electrolyte prevents the cell from operating close to 5 V, 8 and that the oxidation of the carbon black leads to degradation of the electronic conduction path in the electrode. Nelson et al. suggest that reversible impedance growth is caused by surface compounds that form on the cathode particles at high voltage and return to solution at low voltage, 1 i.e. a dynamic solid electrolyte interphase (SEI), while irreversible impedance growth is primarily due to continued SEI growth on the positive electrode surface during cycling as well as electrolyte degradation. The transmission line model (TLM) theory of the internal impedance of an electrode mimics the geometry and interfaces in the electrode. Figure 3b shows a simplified diagram of a cell positive electrode and Figure 3a the corresponding transmission line model. 9 R e represents the electronic path resistance through the carbon black and R i the ionic path resistance through the electrolyte solution in the electrode pores; in this model, R e and R i have the same overall effect on the circuit. C represents the capacitance of the SEI double layer that forms on the surface of the active particles as the cell cycles, and R s the associated charge transfer resistance of this SEI layer. R c represents the contact resistance between the carbon black and the active particles, as outlined by Kerlau et al. in Figure 7b of their paper.
Their model placed R c resistors in series with the RC pairs that represented the SEI of the active particles. 7 The R c resistors in this TLM have been added in the same place in order to make a direct comparison with their results. Further complexity can also be included, such as a parallel RC pair to represent the contact impedance between the electrode and current collector as outlined by Atebamba et al., but that will not be included here. 9 In addition, Warburg impedances can be added to mimic solid state diffusion, 10,11 which affects the lowest frequency sections of the impedance spectra, but Warburg impedances will not be considered here. "Constant phase elements", which have questionable physical significance, are often used to replace transmission line circuits with less complex equivalent circuits. 12,13 Constant phase elements are not used in this paper, in order to maintain simplicity and physical reality. Continuous transmission line models for cylindrical pores have already been developed and explored. [14][15][16][17] While they are mathematically concise, they often assume constant resistivity and permittivity for the associated components in the model. As such, they cannot account for inhomogeneities in the positive electrode particles. In this study, a discrete transmission line model is solved using circuit dynamics for various circuit parameters. By systematically altering the circuit parameters and observing the changes in the impedance spectra, a guide to understanding changes in measured impedance spectra is developed.

Theoretical

The complex impedance of the transmission line model shown in Figure 3 was calculated both analytically and numerically via SPICE simulation. 18 Figure 4 shows the process by which the transmission line was reduced from 5 links to 4 links using Y-transforms. 19 This process was executed repeatedly until the circuit was solved.
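The Y-transform reduction lends itself equally to direct numerical solution. The sketch below is a hypothetical implementation, not the authors' code: it computes the complex impedance of an n-link TLM by nodal analysis with NumPy, and the choice of injecting current at the first ionic node while grounding the last electronic node is our reading of the Figure 3 geometry.

```python
import numpy as np

def tlm_impedance(omega, n=5, Re=0.25, Ri=0.25, Rs=1.0, Rc=1.0, C=1.0):
    """Complex impedance of an n-link discrete transmission line model.

    Ionic-rail nodes u_0..u_{n-1} and electronic-rail nodes v_0..v_{n-1}
    are joined along each rail by Ri and Re resistors; rung k connects
    u_k to v_k through Rc in series with the parallel pair (Rs, C).
    A 1 A current enters at u_0 and v_{n-1} is grounded, so Z = V(u_0).
    Any parameter may be an array to model an inhomogeneous electrode.
    """
    Re = np.broadcast_to(Re, (n - 1,)).astype(complex)
    Ri = np.broadcast_to(Ri, (n - 1,)).astype(complex)
    Rs = np.broadcast_to(Rs, (n,)).astype(complex)
    Rc = np.broadcast_to(Rc, (n,)).astype(complex)
    C = np.broadcast_to(C, (n,)).astype(complex)

    # each rung: contact resistance in series with the SEI (Rs || C) pair
    z_rung = Rc + Rs / (1.0 + 1j * omega * Rs * C)

    N = 2 * n                      # nodes u_0..u_{n-1}, then v_0..v_{n-1}
    Y = np.zeros((N, N), dtype=complex)

    def stamp(a, b, y):            # add admittance y between nodes a and b
        Y[a, a] += y; Y[b, b] += y
        Y[a, b] -= y; Y[b, a] -= y

    for k in range(n - 1):
        stamp(k, k + 1, 1.0 / Ri[k])             # ionic rail
        stamp(n + k, n + k + 1, 1.0 / Re[k])     # electronic rail
    for k in range(n):
        stamp(k, n + k, 1.0 / z_rung[k])         # rungs

    keep = list(range(N - 1))      # ground v_{n-1}: drop its row/column
    I = np.zeros(N - 1, dtype=complex)
    I[0] = 1.0                     # 1 A injected at u_0
    V = np.linalg.solve(Y[np.ix_(keep, keep)], I)
    return V[0]                    # Z = V(u_0) / 1 A

# Nyquist sweep: zs = [tlm_impedance(w) for w in np.logspace(-3, 5, 60)]
```

Because this two-terminal network is reciprocal, swapping R e and R i leaves Z unchanged, which matches the statement that the two have the same overall effect on the circuit.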
In the homogeneous case, the value of each circuit component was constant from link to link. In the inhomogeneous case, each individual circuit element could have a unique value. The number of links in the TLM was chosen to be five, motivated by SEM images of NMC positive electrode particles (see Table I). [Figure 4 caption: Circuit components were consolidated to simplify the circuit to a ladder geometry (1-3). Y-transforms were implemented to simplify the circuit (3-4), and further consolidation reduced the complexity of the circuit (4-5). This process was repeated until the circuit was reduced to one element.] The analytical and SPICE solutions were found to be in agreement. In the homogeneous case, a closed-form expression for the impedance of the five-link TLM circuit was obtained in terms of the circuit parameters and jω, where j is the imaginary unit. For the inhomogeneous case, or in the case of more links, the impedance was solved procedurally using a computer algorithm written in Python. 23 This TLM excludes the resistance or capacitance due to the movement of electrons through the aluminum current collector. 9 The surface area of the aluminum current collector is much lower than the total surface area of the SEI on the active particles. Since the resultant capacitance is lower, any capacitive effect due to the metal substrate will be seen at high frequency. 9 This offers an explanation for the small peak at high frequency in Figures 1e and 1f. During manufacturing of a Li-ion cell, the positive electrode material is calendered at high pressure onto the current collector, minimising the resistance. Its contribution is evidently small compared to the mid-frequency impedance growth and has been omitted from the model for the sake of simplicity. Figure 5 shows impedance spectra generated by the discrete TLM. The spectra were generated to roughly coincide with the measured data in Figures 1c and 1e. The red curve in Figure 5a corresponds to R e = 0.26 Ω. R e increases linearly for the intermediate spectra and ends at 0.32 Ω, represented by the blue spectrum.
If R e were held at 0.25 Ω and R i were allowed to vary between 0.26-0.32 Ω, the resultant spectra would be the same. This reflects the fact that R i and R e have identical effects on the circuit. As R e increases, the diameters of the spectra increase slightly, but the dominant effect is an appreciable increase in the real part of the impedance intercept at high frequency. In the context of Figure 1a, this indicates that the electronic path or the ionic path resistance is increasing in the cell during the aggressive charge-hold-discharge cycling in Figure 1. A change in the charge transfer resistance is not necessary to explain the shift in the real part of the cell's impedance. Figure 5b was generated similarly to Figure 5a, with R e increasing from 1.25 Ω (red) to 1.5 Ω (blue). Additionally, the red curve corresponds to R s = 0.3 Ω and the blue curve corresponds to R s = 3.5 Ω. The black curves correspond to intermediate values of R s and R e . An increase in the SEI resistance of the double layer was necessary to produce the large diameter growth between the red curve and the blue curve. Changing R s alone was ineffective in shifting the high frequency intercept. [Figure 5 caption: (a) Impedance spectra engineered to coincide with the spectra seen in Figure 1c. R e increases from red to blue; only a change in R e is necessary to produce good agreement. (b) Impedance spectra engineered to coincide with the spectra seen in Figure 1e. A change in both R e and R s is necessary to produce good agreement.] This is consistent with the model outlined in Figure 3, because the capacitors have near-zero impedance at high frequency. This causes the SEI resistors to be in parallel with a short circuit at high frequency, rendering them ineffective. As a result, the position of the high frequency intercept is not affected by changes in the SEI resistance. This indicates that the dominant process causing the shift in the high frequency intercept was still present in the cell that was cycled to 4.5 V.
An additional effect causing the diameter increase seen in Figures 1e and 1f is attributed to an increase in the charge transfer resistance, R s , in the TLM. This agrees well with the hypothesis of Nelson et al. 1 A closer examination of the individual effects of each circuit component was conducted. Figure 6a shows the transmission line model used to examine the effects of different circuit elements. Figure 6b shows Nyquist plots for impedance spectra. The base values for the circuit parameters were R s = R c = 1 Ω, R e = R i = 0.25 Ω and C = 1 F. The Nyquist spectrum of the circuit with these parameters is shown in black. The blue Nyquist curves represent the impedance of the TLM when R s was doubled to 2 Ω and quadrupled to 4 Ω respectively, with all other parameters held constant. The red impedance spectra were produced by increasing R e or R i to 0.5 Ω and 1 Ω with R s = 1 Ω and all other parameters held constant. The spectra shown in green represent R c being increased to 2 Ω and 4 Ω with all other parameters held constant. R s has the most significant effect on the diameter of the spectra. An increase in R e or R i also increased the spectrum diameter, but it was accompanied by a significant increase in the high frequency resistance. A change in R c had no effect on the spectrum diameter. However, it caused an appreciable increase in the real part of the high frequency impedance. This is in disagreement with the hypothesis of Kerlau et al., whose model implies that contact resistances have no effect on the high frequency intercept but significantly increase the spectrum diameter. 7 To understand the individual effects of each resistor element on the spectrum diameter, one must examine the high and low frequency intercepts in the context of the DC resistance of the TLM. The value of the low frequency intercept is the DC resistance of the TLM if the capacitors were removed from the circuit.
The value of the high frequency intercept is the DC resistance of the TLM if the capacitors were replaced with short circuits. Because a short circuit in parallel with a resistor is equivalent to a short circuit, this is equivalent to shorting the charge transfer resistors and removing the capacitors. In this context, it is clear that increasing R s in the TLM will have the most significant effect on the spectrum diameter. Increasing R s alone cannot cause the high frequency intercept to shift, but it will cause the value of the low frequency intercept to increase because the DC resistance of the TLM is now larger. This increases the distance between the two intercepts and causes the diameter of the spectrum to increase accordingly. Increasing R e has a small effect on the spectrum diameter because current in the circuit favors certain charge transfer paths, specifically ones that minimize transit through the increased resistance of the electronic path resistors. Because the bulk of the current travels through fewer charge transfer pathways which are in parallel, the charge transfer resistance increases. This is analogous to the fact that 2 resistors in parallel constitute a more resistive system than 5 resistors of the same value in parallel due to the reduction in electronic pathways. In the limit of large R e , all the current will travel through the closest charge transfer pathway. Thus, if R s is held constant and R e increased, the total charge transfer resistance of the circuit (R ct ) will asymptotically approach R s . The presence of contact resistances has no effect on the spectrum diameter, because it is present in the circuit at both low and high frequency, and it does not increase the conductivity of some electronic pathways over others. The effect of each of these components on the impedance of the TLM is summarized in Figure 7. Figure 7a plots total charge transfer resistance (the spectrum diameter or R ct ) vs. R e or R i , R s and R c . 
The base parameters of the circuit were R c = R s = 1 Ω, R e = R i = 0.25 Ω and C = 1 F. The order of magnitude of these values was motivated by empirical observation of the experimental resistance and frequency data in Figure 1. As one parameter was varied, the others in each graph were held constant. The effect of R s (shown in blue) is the most significant. Changes in R e or R i have a small effect on R ct , and R c has virtually no effect at all. This indicates that a large increase in the spectrum diameter, as seen in Figure 1e for example, must be attributed to an increase in the charge transfer resistance of the active particles in the positive electrode. Figure 7b shows the position of the high frequency intercept vs. R e or R i , R s and R c . The base parameters are the same as in Figure 7a. Changes in R s have no effect on the high frequency intercept, as previously discussed. However, R e , R i and R c can all potentially contribute to the shift on the real axis.

Results and Discussion

[Figure 7 caption: (a) R ct (the spectrum diameter) plotted vs R e or R i (red), R s (blue) and R c (green). R c has no effect on R ct ; R e and R i have a moderate effect that is asymptotically bounded and R s has a large effect. (b) Position of the high frequency intercept plotted vs R e or R i , R s and R c . R s has no effect on the intercept; R c has a moderate, unbounded effect while R e and R i have the largest effect.] Learning how to determine experimentally which of these components is responsible for increases in the high frequency intercept is a project we are now pursuing. One method currently under investigation is to measure impedance as a function of temperature, as the underlying mechanisms behind carbon resistance and electrolyte resistance have temperature dependencies that are distinct from each other. As of yet, no publishable data have been collected. The TLM was also used to investigate the effect of inhomogeneities in a cell cathode.
Realistically, the cathode particles have a distribution of sizes, surface species have non-uniform thickness, and some active particles have stronger contact with the carbon black network than others. This suggests that each circuit component in the TLM can take on a range of values rather than just one. Figure 8a examines inhomogeneity in the electrical path resistance. The curve in blue is for a TLM with no electrical path resistance. The other circuit parameters are R i = 0.25 Ω, R s = 0.5 Ω and C = 1 F. The red curves each represent a TLM where each electrical path resistor takes a random value between 0-0.2 Ω picked from a square (uniform) probability distribution. The overall DC resistance of the circuit increases with the addition of electrical path resistors, both at high and low frequency. More current pathways are favored with the addition of electrical pathway resistors, so the diameter necessarily decreases. Figure 8b shows the impedance spectra of the TLM when the electrical contact resistors are inhomogeneous. The blue curve represents a TLM with no R c resistors. The circuit parameters are R s = 0.5 Ω, R e = R i = 0.25 Ω and C = 1 F. The red curves show impedance spectra for the TLM with each R c resistor having a random value between 0-1 Ω. The red spectra are all shifted on the real axis for the same reason as in Figure 8a. An increase in the charge transfer resistance is possible because inhomogeneity in the electrical contact resistors can increase the favorability of some electronic pathways. As discussed previously in the context of increasing R e , this effect limits the total charge transfer resistance to being less than or equal to R s . Thus, while the development of inhomogeneous contact resistances can increase the spectrum diameter, it is not sufficient to explain impedance growth seen on the scale of Figure 1e. Inhomogeneities in R s and C create a distribution of RC times for the charge transfer pathways.
This can cause a "spreading" of the high and low frequency intercepts without a proportional change in the peak position, or it can cause the peak position to depress without a proportional change in the charge transfer resistance. Figure 9a shows the variety of spectrum shapes that can result from inhomogeneity in the double layer capacitance of the active particles. The TLM circuit parameters are R s = 0.5 Ω, R e = R i = 0.25 Ω and R c = 0 Ω. 100,000 trials were conducted in which each SEI capacitor was assigned a random value between 0.1-0.4 F. The red curve is a perfect semicircle. The region highlighted in black is the space that the 100,000 Nyquist spectra occupied. Two sample curves in green have been plotted to demonstrate how this black region can be occupied by a Nyquist plot. Depending on the set of C values, the Nyquist spectrum could be near semicircular, or asymmetric and depressed. A "depression coefficient" is defined to quantify the degree of "flattening" of the Nyquist spectrum: 0% denotes a straight line along the real axis and 100% represents a perfect semicircle, where the height and width of the Nyquist spectrum are used for the calculation of the depression coefficient. Figure 9b shows a histogram of the depression coefficients for the 100,000 trials conducted. Spectra as flat as 76% were observed, with the most common depression being close to 90%. Figure 9c shows a similar 100,000-trial graph where R e = R i = 0.25 Ω, C = 1 F and the charge transfer resistors were each assigned a random value between 0.4-1.0 Ω. The upper red curve and the lower red curve represent the R s arrangements that gave the highest and lowest R ct respectively. The black region is the space that the intermediate Nyquist spectra occupied, and two example spectra have been plotted in green. Figure 9d shows a histogram of the spectrum depressions for all 100,000 trials. Depression is evenly centered close to 89%.
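The depression coefficient can be illustrated with a deliberately simplified toy model, which is our own construction rather than the paper's five-link circuit: a chain of series-connected Rs||C elements, whose arc is a perfect semicircle when all time constants match and which flattens when the capacitances are drawn at random, here from the same 0.1-0.4 F window used for Figure 9a.

```python
import numpy as np

rng = np.random.default_rng(7)
w = np.logspace(-3, 3, 1200)                  # angular frequencies, rad/s

def spectrum(Rs, C):
    """Impedance of series-connected parallel-RC elements (rails neglected)."""
    return sum(R / (1.0 + 1j * w * R * c) for R, c in zip(Rs, C))

def depression(z):
    """Arc height over half the arc width: ~100% for a perfect semicircle."""
    width = z.real.max() - z.real.min()
    return 100.0 * 2.0 * (-z.imag).max() / width

# identical time constants -> a single semicircle
z0 = spectrum(np.full(5, 0.1), np.full(5, 1.0))

# random capacitances in 0.1-0.4 F -> depressed, asymmetric arcs
flats = [depression(spectrum(np.full(5, 0.1), rng.uniform(0.1, 0.4, 5)))
         for _ in range(200)]
```

With identical elements the coefficient evaluates to essentially 100%, while the randomized trials spread downward, qualitatively matching the histograms of Figures 9b and 9d; the series-chain simplification omits rail effects, so the numbers are not directly comparable to the paper's.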
In Figure 9e, the range of capacitances was increased to span a factor of 20. The black region terminates at a significantly lower value of -Im{Z}, and the example curves in green show that the spectra can exhibit large asymmetries and multiple peaks. Figure 9f plots the spectrum depressions for this trial; spectra with depressions under 60% were observed. In Figure 9g, the range of R s was increased to span a factor of 20, and depression down to 80% was observed in Figure 9h. The purpose of this figure is to explain why real impedance spectra can have asymmetries and can be measurably flatter than perfect semicircles. This effect can be compounded further if the RC time constant of the SEI in the negative electrode differs moderately from that of the positive electrode. The two peaks will overlap and resemble a flattened spectrum.

Conclusions

A discrete transmission line model for the impedance of a Li-ion positive electrode was explored. The impedance of the TLM circuit was solved analytically using Y-transforms and the solutions were verified using SPICE. The TLM was applied to example data that showed large growth in R ct at high voltage during aggressive charge-hold-discharge cycling. It was found that in order to increase the real resistance at high frequency, any of R e , R i or R c had to increase. However, only R s could be responsible for the large increase in R ct . It was found that any effect other than changing R s that was able to change the total charge transfer resistance was asymptotically limited by R s . This suggests that high voltage impedance growth is due to continued SEI growth on the active particles, or the presence of surface species at high voltage. An inhomogeneous set of electronic path resistances increased the high and low frequency resistance, but reduced the total charge transfer resistance of the circuit.
An inhomogeneous set of contact resistances was able to cause an increase in the spectrum diameter, but it was limited to the value of R s and could not explain high voltage impedance growth seen in aggressively cycled real cells. A range of random SEI capacitance/resistance values were used to simulate surface inhomogeneity in the positive electrode. The simulations produced "flattened" and asymmetric spectra in comparison to a semi-circle. This agreed qualitatively with the flat and asymmetric spectra seen in impedance measurements of real cells. These simulations and comparisons to real data suggest avenues for improving the positive electrodes in NMC/graphite cells destined for high voltage operation: 1) The electrode/electrolyte interface must be improved by changes to the electrolyte or the electrode surface. 2) The integrity of both the electronic path and the ionic path must be maintained. The conducting diluent cannot be oxidized and pores cannot be blocked by electrolyte decomposition products. 3) The current collector/electrode interface must also be maintained. Based on the work shown here, dealing with the electrode/ electrolyte interface is most critical, but all of 1), 2) and 3) must be solved to guarantee success.
Designing a Pseudo R-Squared Goodness-of-Fit Measure in Generalized Linear Models

The coefficient of determination is a function of residuals in the general linear model. The deviance, logit, standardized and studentized residuals were examined in generalized linear models in order to determine the behaviour of residuals in this class of models and thereby design a new pseudo R-squared goodness-of-fit measure. The Newton-Raphson estimation procedure was adopted. It was observed that these residuals exhibit patterns that are unique to the subpopulations defined by levels of categorical predictors. Residuals block on the basis of signs, where positive signs indicate success responses and negative signs failure responses. It was also observed that the deviance residual is a close approximation of the studentized residual, and the logit residual is two times the size of the standardized residual. Borrowing from Nagelkerke's improvement of Cox and Snell's goodness-of-fit measure in generalized linear models and the coefficient of determination counterpart of the general linear model, a new pseudo R-squared goodness-of-fit test which uses predicted probabilities and a monotonic link function is here proposed to serve both the general linear and generalized linear models.

Introduction

A generalized linear model is one in which each component of the response variable Y has a distribution in the exponential family, taking the form

f(y; θ, ϕ) = exp{[yθ - b(θ)]/a(ϕ) + c(y, ϕ)}

for some specific functions a(·), b(·) and c(·, ·) (McCullagh & Nelder, 1990). The functions a and c are such that a(ϕ) = ϕ/w and c = c(y, ϕ/w), where w is a known weight for each observation. The model can be stated as

z i = Σ j x i j β j + e i ,

where z i is the adjusted dependent variate, x i j is the (i, j)th element of the design matrix, h(µ i ) is the link function and e i is the residual error. The link between y i and z i is in the expression

z i = h(µ i ) + (y i - µ i ) h′(µ i ),

where y i is a binomial random response variable.
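For concreteness, the binomial case can be checked against this exponential-family form: with θ = log[p/(1 - p)], b(θ) = m log(1 + e^θ), a(ϕ) = 1 and c(y, ϕ) = log C(m, y), the density reproduces the ordinary binomial probability mass function. A small numerical sketch (function names are illustrative):

```python
import math

def binom_pmf(y, m, p):
    """Ordinary binomial probability mass function."""
    return math.comb(m, y) * p**y * (1 - p)**(m - y)

def binom_exp_family(y, m, p):
    """The same mass written as exp{y*theta - b(theta) + c(y)}."""
    theta = math.log(p / (1 - p))            # canonical (logit) parameter
    b = m * math.log(1 + math.exp(theta))    # cumulant function b(theta)
    c = math.log(math.comb(m, y))            # normalising term c(y, phi)
    return math.exp(y * theta - b + c)

for y, m, p in [(0, 4, 0.3), (2, 5, 0.5), (7, 10, 0.8)]:
    assert abs(binom_pmf(y, m, p) - binom_exp_family(y, m, p)) < 1e-12
```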
From (1), a residual in a generalized linear model can be defined as

e i = (y i - µ i ) / √V(µ i );

e i so defined is called the Pearson residual. Standard theory for this type of distribution expresses the mean and variance of the response y as

E(y) = µ = b′(θ) and var(y) = b″(θ) a(ϕ) = V(µ) a(ϕ),

where V is the variance function. The log-likelihood function, a goodness-of-fit measure, can be written for each of the exponential family models. Generally, the log-likelihood function is of the form L(y, µ, ϕ) = Σ i log( f (y i , µ i , ϕ)), with the individual contribution for the binomial distribution being

y i log(µ i ) + (m i - y i ) log(1 - µ i ) + log C(m i , y i ).

2. The Newton-Raphson Method

The Newton-Raphson estimation scheme is given as

β (t+1) = β (t) - H -1 g,

where H, the Hessian matrix, is the matrix of second derivatives of the log-likelihood l with respect to β. For a binary response variable, l can be written as

l = Σ i {y i log(µ i ) + (1 - y i ) log(1 - µ i )},

and H = -X′WX, where W, the weight matrix, is given as W = diag{m i (dµ i /dη i ) 2 /[µ i (1 - µ i )]}. m i is the row subtotal in the cross-tabulation table. For the logit link, the gradient vector g reduces to

g = X′(y - µ),

where the response or fitted probability µ i is defined as

µ i = exp(Σ j x i j β j ) / [1 + exp(Σ j x i j β j )].

An alternative estimation procedure is the Iterative Weighted Least Squares method, which is often adopted in order to avoid the computational tedium associated with the Hessian matrix.
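A minimal sketch of the scheme for a binary logit model, where Newton-Raphson coincides with Fisher scoring; the variable names and convergence rule below are our own, not the paper's:

```python
import numpy as np

def logistic_irls(X, y, n_iter=25, tol=1e-10):
    """Newton-Raphson (equivalently IRLS) for a binary logit-link GLM."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))      # fitted probabilities
        W = mu * (1.0 - mu)                  # diagonal of the weight matrix
        g = X.T @ (y - mu)                   # gradient of the log-likelihood
        H = X.T @ (W[:, None] * X)           # Fisher information (= -Hessian)
        step = np.linalg.solve(H, g)         # Newton step: H^{-1} g
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```

At convergence the score X′(y - µ) vanishes, so when X contains an intercept column the fitted probabilities average to the observed success rate.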
Residuals in Generalized Linear Models

The coefficient of determination, R 2 , is a function of the residuals. It was originally developed for the normal-theory model. Cameron and Windmeijer (1996) designed an R 2 for the Poisson and related count data after observing that it was rarely used for count data. Nagelkerke (1991) generalized the definition of R 2 in what is called the generalized R 2 . The generalized R 2 is consistent with the classical R 2 and is also maximized by the maximum likelihood estimation of a model. The generalized coefficient of determination is given as

R 2 = 1 - {L(0)/L(θ)} 2/n ,

where L(0) is the likelihood of the model with only an intercept, L(θ) is the likelihood of the estimated model and n is the sample size. Residuals in a logistic model can be defined as the difference between y i and the predicted probability θ for y i . We define the predicted probability in cross-classified data as the probability that an object or a person selected from a subgroup is a success (Stroke et al., 1997). The monotonic link function relates the predicted probability to the set of linear predictors. For logistic regression, where the underlying distribution is binomial, the link function is the logit. The deviance, Pearson χ 2 , standardized, logit and studentized residuals are the residuals normally associated with generalized linear models. The analysis of residuals made in this paper shows that the logit residual is approximately twice the size of the standardized residual. The standardized residual is approximately equal to the deviance residual. This can be seen in the appendix.
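The sign-blocking behaviour noted in the abstract (positive residuals for successes, negative for failures) can be inspected directly; the sketch below computes Pearson and deviance residuals for a binary fit, with a clipping guard and function names of our own choosing:

```python
import numpy as np

def binary_residuals(y, mu):
    """Pearson and deviance residuals for a binary (0/1) GLM fit."""
    mu = np.clip(mu, 1e-12, 1 - 1e-12)           # guard the logarithms
    pearson = (y - mu) / np.sqrt(mu * (1 - mu))  # (y - mu) / sqrt(V(mu))
    unit_dev = -2.0 * (y * np.log(mu) + (1 - y) * np.log(1 - mu))
    deviance = np.sign(y - mu) * np.sqrt(unit_dev)
    return pearson, deviance

y = np.array([0., 1., 1., 0., 1.])
mu = np.array([0.2, 0.7, 0.9, 0.4, 0.6])
p_res, d_res = binary_residuals(y, mu)
# signs agree (success -> positive), and sum(d_res**2) is the model deviance
```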
Goodness-of-Fit Measures in Generalized Linear Models

The deviance and the generalized Pearson χ 2 statistic are two measures of goodness of fit in generalized linear models. Both have exact χ 2 distributions for normal-theory linear models if the models are true (McCullagh & Nelder, 1990). The deviance uses the log of the ratio of likelihoods. Cox and Snell's R squared, another measure of goodness of fit in generalized linear models, is a pseudo R squared and a modification of the deviance which configures the test interval to lie between 0 and 1 (excluding 1), such that a smaller ratio implies a greater improvement.

The deviance for the set of distributions in generalized linear models is given as follows. For the normal distribution it is Σ i (y i - µ̂ i ) 2 . For the Poisson and binomial distributions we have 2 Σ i {y i log(y i /µ̂ i ) - (y i - µ̂ i )} and 2 Σ i {y i log(y i /µ̂ i ) + (m i - y i ) log[(m i - y i )/(m i - µ̂ i )]} respectively, and for the gamma distribution 2 Σ i {-log(y i /µ̂ i ) + (y i - µ̂ i )/µ̂ i }. For the inverse-Gaussian distribution the deviance is Σ i (y i - µ̂ i ) 2 /(µ̂ i 2 y i ); analogous expressions hold for the multinomial and negative binomial distributions. Cox and Snell's R 2 is defined as

R 2 = 1 - {L(m int )/L(m full )} 2/n ,

where L(m int ) is the conditional probability of the dependent variable for the intercept model and L(m full ) that for the fitted model.

In this paper a new goodness-of-fit test that makes use of fitted probabilities, a monotonic link function and the Nagelkerke range of possible values is proposed. The test is designed to serve both the general linear and the generalized linear models, and is denoted R 2 G&G . Designed for the generalized linear models, R 2 G&G can be adapted for use as a goodness-of-fit measure in the general linear model by replacing the fitted probabilities and the link function values with fitted y values and the mean of y respectively. The values of R 2 G&G range from 0 to 1, with higher values implying better fits.

Illustrative Example

The hypothetical data below are used for the illustration of residual analysis in generalized linear models: the probability that a person from the ith sex level and the jth location status is infected with a certain virus.
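The deviance expressions above can be sanity-checked against the deviance's definition as twice the log-likelihood gap between the saturated and fitted models; the Poisson case, with invented data and the convention 0·log 0 = 0, is sketched below:

```python
import math

def poisson_loglik(y, mu):
    """Poisson log-likelihood; a y=0, mu=0 saturated term contributes 0."""
    total = 0.0
    for yi, mi in zip(y, mu):
        total += (yi * math.log(mi) if yi > 0 else 0.0) - mi - math.lgamma(yi + 1)
    return total

def poisson_deviance(y, mu):
    """2 * sum{ y log(y/mu) - (y - mu) }."""
    total = 0.0
    for yi, mi in zip(y, mu):
        total += (yi * math.log(yi / mi) if yi > 0 else 0.0) - (yi - mi)
    return 2.0 * total

y, mu = [2, 0, 5, 3], [1.5, 0.4, 4.0, 3.2]
gap = 2.0 * (poisson_loglik(y, y) - poisson_loglik(y, mu))
# poisson_deviance(y, mu) agrees with gap up to rounding
```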
The model

Let y i j be a binomial random response variable corresponding to the ith sex status and the jth location, which assumes the value 0 or 1. The probability θ i j that a person of the ith sex and jth location is infected by the virus is modeled through the logit link as a linear function of sex and location effects, where i = 1, 2 and j = 1, 2.

Stat Computing (2011) gave three interpretations of R 2 as follows:

(i) R 2 as explained variability: the denominator of the ratio indicates total variation in the dependent variable while the numerator is the variability in the dependent variable that is not predicted by the model. The ratio is the proportion of the total variability explained by the model, which agrees with R 2 in ordinary linear models (Koutsoyiannis, 1983). Thus a higher ratio implies a better model.

(ii) R 2 as improvement from null model to fitted model: a smaller ratio implies a greater improvement.

(iii) R 2 as the square of the correlation between predicted values and the actual values: a higher R 2 implies a greater improvement of fit.

It can be seen that the proposed R 2 goodness-of-fit measure compares favourably with the Nagelkerke/Cragg & Uhler's R 2 (0.180 against 0.187).
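The comparison quoted above can be reproduced in outline from fitted probabilities alone. The sketch below computes Cox and Snell's measure and its Nagelkerke normalization for a binary response; the data are invented for illustration and are not the paper's virus table:

```python
import numpy as np

def binary_loglik(y, p):
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

def pseudo_r2(y, p_fitted):
    """Cox-Snell and Nagelkerke pseudo R-squared from fitted probabilities."""
    y = np.asarray(y, dtype=float)
    p = np.clip(np.asarray(p_fitted, dtype=float), 1e-12, 1 - 1e-12)
    n = len(y)
    ll_full = binary_loglik(y, p)
    ll_null = binary_loglik(y, np.full(n, y.mean()))  # intercept-only model
    cox_snell = 1.0 - np.exp(2.0 * (ll_null - ll_full) / n)
    max_cs = 1.0 - np.exp(2.0 * ll_null / n)          # Cox-Snell ceiling < 1
    return cox_snell, cox_snell / max_cs              # second value: Nagelkerke

y = [1, 0, 1, 1, 0, 0, 1, 0]
p = [0.8, 0.3, 0.6, 0.7, 0.2, 0.4, 0.9, 0.1]
cs, nk = pseudo_r2(y, p)
```

Dividing by the Cox-Snell ceiling is exactly Nagelkerke's rescaling, which is why the Nagelkerke value is never smaller than the Cox-Snell value for the same fit.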
Crystal structure and Hirshfeld surface analysis of 4-(4-chlorophenyl)-5-methyl-3-{4-[(2-methylphenyl)methoxy]phenyl}-1,2-oxazole

In the crystal, the title molecules are linked by intermolecular C—H⋯N, C—H⋯Cl, C—H⋯π contacts and π–π stacking interactions. A Hirshfeld surface analysis was undertaken to quantify the intermolecular interactions.

Chemical context

Azoles are five-membered heterocycles that have been widely used as promising scaffolds in designing novel therapeutics, in particular anticancer agents (Ahmad et al., 2018). Among them, isoxazole, a five-membered heterocycle with consecutive nitrogen and oxygen atoms in the ring, is found to be a key structural component of many commercial drugs or drug candidates in clinical development (Barmade et al., 2016). Moreover, a number of vicinal diaryl isoxazoles reported in the literature exhibit anticancer and COX-2 inhibitory activities, such as luminespib and valdecoxib, respectively (Murumkar & Ghuge, 2018). One of the critical steps in rational drug design is obtaining knowledge of the structure of the new drug candidates, and single-crystal X-ray diffraction (SCXD) is one of the most powerful methods for gaining this fundamental information, which can be used to guide drug-design studies in connection with other technologies such as pharmacophore model elaboration, 3D QSAR, docking, and de novo design. SCXD has thus become an essential tool for drug development, used to unambiguously determine the three-dimensional structures of molecules, which eventually paves the way for the rapid development of new molecules (Wouters & Ooms, 2001). Moreover, during the drug-development process, another important issue lies in understanding the crystal packing of the active pharmaceutical ingredient (drug substance) for suitable formulation development.
Since most drug molecules comprise solid dosage forms in the crystalline state, it is imperative to truly understand the relationships between the crystal structures and the solid properties of pharmaceutically active substances, which helps the best form of an active pharmaceutical ingredient to be chosen for development into a drug product (Aitipamula & Vangala, 2017). Based on the above and our continuing interest in structural studies and biological applications of diaryl heterocycles (Çalışkan et al., 2011; Dündar et al., 2009; Eren et al., 2010; Ergun et al., 2010; Garscha et al., 2016; Levent et al., 2013; Pirol et al., 2014; Ünlü et al., 2007), we report herein the crystal structure and Hirshfeld surface analysis of the title compound.

[Figure 3 caption: A view of the C—H⋯N, C—H⋯π and π–π interactions in the unit cell of the title compound. Dashed lines show short intermolecular contacts.]

Hirshfeld surface analysis

Hirshfeld surface analysis (Hirshfeld, 1977; Spackman & Jayatilaka, 2009) of the title compound was carried out to investigate the location of atoms with potential to form hydrogen bonds and other intermolecular contacts, and the quantitative ratio of these interactions. CrystalExplorer17.5 (Turner et al., 2017) was used to generate the Hirshfeld surfaces and two-dimensional fingerprint plots (Rohl et al., 2008). The Hirshfeld surfaces were generated using a standard (high) surface resolution with the three-dimensional d norm surfaces mapped over a fixed colour scale of −0.0800 (red) to 1.5787 Å (blue) (Fig. 4). The red points, which represent closer contacts and negative d norm values on the surface, correspond to the C—H⋯N (C17—H17A⋯N1), C—H⋯Cl (C8—Cl1⋯H1C—C1) and C—H⋯π (C6—H6⋯phenylene) interactions (Table 2).
Except for the red spots, the overall surface mapped over d norm is white and blue, indicating that the distances between the contact atoms in intermolecular contacts are nearly the same as the sum of their van der Waals radii, or longer. The shape-index of the Hirshfeld surface is a tool for visualizing π–π stacking by the presence of adjacent red and blue triangles; if there are no such triangles, then there are no π–π interactions. The plot of the Hirshfeld surface mapped over shape-index clearly suggests that there are π–π interactions in the title compound (Fig. 5).

[Table 2: Summary of selected van der Waals contacts (Å) involving H atoms in the title compound; columns: Contact, Distance, Symmetry operation.]
[Figure 4 caption: The Hirshfeld surface of the title compound mapped with d norm .]
[Figure 5 caption: Hirshfeld surface of the title compound plotted over shape-index.]

In compound (I), the asymmetric unit contains two molecules, A and B, with different conformations. In molecule A, the C=O group of the ester points away from the benzene ring [C—C—C=O = −170.8 (3)°], whereas in molecule B, it points back towards the benzene ring [C—C—C=O = 17.9 (4)°]. The dihedral angles between the oxazole and benzene rings are also somewhat different [46.26 (13)° and 41.59 (13)° for molecules A and B, respectively]. Each molecule features an intramolecular C—H⋯O interaction, which closes an S(6) ring. In the crystal, the B molecules are linked into C(12) chains along the c-axis direction by weak C—H⋯Cl interactions. In the crystal of (II), the components are linked by O—H⋯N and N—H⋯O hydrogen bonds, where the water molecule acts as both an H-atom donor and an acceptor, into a tape along the a-axis direction with an R 4 4 (16) graph-set motif. The water molecule is located on a twofold rotation axis. In (III), the dihedral angle between the benzene and isoxazole rings is 59.10 (7)°. In the crystal, the components are linked by N—H⋯O and O—H⋯O hydrogen bonds into a three-dimensional network.
The crystal structure is further stabilized by π-stacking interactions [intercentroid distance = 3.804 (2) Å].

Synthesis and crystallization

Step 1: To a solution of N-hydroxy-4-[(2-methylbenzyl)oxy]benzimidoyl chloride (275 mg, 1 mmol) in diethyl ether (6 ml) was added Et3N (139.4 μl, 1 mmol). The resulting mixture was stirred for 2 h in an ice bath, and the precipitate formed was filtered off. The filtrate was evaporated under vacuum to obtain the arylnitrile oxide intermediate. Step 2: To a solution of NaH (60% in mineral oil, 64 mg, 1.6 mmol) in dry THF (4 ml), 4-chlorophenylacetone (168.6 mg, 1.0 mmol) was added dropwise and stirred for 1 h under a nitrogen atmosphere in an ice bath. At the end of this period, the arylnitrile oxide intermediate was dissolved in dry THF (4 ml) and added to the reaction mixture, which was then stirred at room temperature overnight. Upon completion of the reaction, aqueous ammonium chloride solution was added, and the product was extracted with EtOAc (2 × 50 ml). The combined organic extracts were dried over anhydrous Na2SO4, filtered and evaporated to dryness. The crude product was purified by automated flash chromatography on silica gel (12 g), eluting with a gradient of 0 to 40% EtOAc in hexane. The pure product obtained was recrystallized from methanol. Crystals for the structural study were obtained by slow cooling of the solution; yield 77%, m.p. 387.2–388.6 K. 13C NMR (δ, ppm): 21, 18.42, 67.98, 113.96, 114.97, 120.84, 125.77, 128.15, 128.59, 128.86, 129.43, 130.12, 131.44, 132.57, 134.58, 136.64, 159.49, 160.09, 166.93. Software used to prepare material for publication: PLATON (Spek, 2020) and WinGX (Farrugia, 2012).

Special details. Geometry: all e.s.d.'s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix.
The cell esds are taken into account individually in the estimation of esds in distances, angles and torsion angles; correlations between esds in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell esds is used for estimating esds involving l.s. planes.
v3-fos-license
2018-04-03T01:20:56.371Z
2016-02-12T00:00:00.000
16785758
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0148866&type=printable", "pdf_hash": "4f71bb1e0ab03b1f8f776e3dbcefcb86ed17f011", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1682", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "4f71bb1e0ab03b1f8f776e3dbcefcb86ed17f011", "year": 2016 }
pes2o/s2orc
The Role of Endothelin System in Renal Structure and Function during the Postnatal Development of the Rat Kidney

Renal development in rodents, unlike in humans, continues during the early postnatal period. We aimed to evaluate whether the pharmacological inhibition of the endothelin system during this period affects renal development, both at the structural and the functional level, in male and female rats. Newborn rats were treated orally from postnatal day 1 to 20 with vehicle or bosentan (Actelion, 20 mg/kg/day), a dual endothelin receptor antagonist (ERA). The animals were divided into 4 groups: control males, control females, ERA males and ERA females. At day 21, we evaluated renal function, determined the glomerular number by a maceration method and by morphometric analysis, and evaluated possible structural renal alterations by three methods: α-smooth muscle actin (α-SMA) immunohistochemistry, Masson's trichrome and Sirius red staining. The pharmacological inhibition of the endothelin system with a dual ERA during the early postnatal period of the rat did not lead to renal damage in the kidneys of male and female rats. However, ERA administration decreased the number of glomeruli, the juxtamedullary filtration surface area and the glomerular filtration rate, and increased proteinuria. These effects could predispose to hypertension or renal diseases in adulthood. Moreover, these effects were more pronounced in male rats, suggesting that there are sex differences that could become greater later in life. These results provide evidence that endothelin has an important role in rat renal postnatal development. However, these results do not imply that the same occurs in humans, since human renal development is complete at birth.

Introduction

The endothelin (ET) system is represented by three structurally similar endogenous 21-amino-acid peptides, named ET-1, ET-2 and ET-3, that activate two G-protein-coupled receptors, ET A and ET B, and two activating proteases [1].
Each isoform is encoded by a separate gene, and both their synthesis and secretion are highly regulated at the transcriptional level by hormonal and environmental factors [2]. Newborn rats were treated with vehicle (water) or with Bosentan (Actelion, 20 mg/kg/day), a dual ERA, which was administered orally with a micropipette. Blockade of ET receptors was performed during the first 20 days of life, comprising the whole lactation period, because in rats growth and maturation of the kidney continue after the completion of nephrogenesis, and nephrons are considered to reach terminal differentiation at the time of weaning [24,25]. The weight of the animals was recorded daily with an electronic scale (Precision; model TH-1000 series). After weaning (day 21), the animals were placed in metabolic cages to obtain 24-h urine samples. The animals were then anesthetized with urethane (1 g/kg, i.p.) and blood samples were obtained by cardiac puncture; both kidneys were immediately removed and weighed. The right kidney of each animal was used to count the glomerular number, while the left kidney was used for histological evaluation, morphometric analysis and immunohistochemistry. Kidney weight was measured and expressed per 100 g of body weight. Femur length was measured using a caliper.

Determinations in the 24-hour metabolic cage studies

Twenty-four-hour urine samples were collected using metabolic cages. Urine volume was measured gravimetrically. Urine samples were analyzed for total protein using a kit provided by Wiener (Proti U/LCR; Wiener Lab., Rosario, Argentina). Urinary sodium and potassium concentrations were evaluated using an ion analyzer (Tecnolab; Mod. T-412). Kinetic determinations of serum and urinary creatinine concentrations were performed using a kit provided by Wiener (Wiener Lab., Rosario, Argentina).
It is known that creatinine clearance can overestimate the glomerular filtration rate (GFR) in rodents on account of tubular secretion of creatinine. However, other methods such as inulin clearance are not simple and also have practical limitations [26], which are magnified when applied to 21-day-old rats.

Count of glomerular number

A modification of the maceration method described by Damadian et al. was used to count the glomerular number [27,28]. Briefly, the kidneys were decapsulated, cut into small pieces and incubated in 1% NH4Cl at room temperature, followed by incubation in 30 mL of 50% HCl for 90 min at 37°C with gentle agitation. After slow-speed centrifugation, the pellet containing the glomeruli was suspended in 25 mL of distilled water. Twenty 20-μL aliquots were pipetted onto slides and all glomeruli were counted at 400× magnification. Although some experts favor stereology over maceration, the latter technique was chosen because it is simple and rapid and allows the detection of differences between groups [29]. The number of total glomeruli per rat was calculated as follows: (N° of glomeruli per right kidney / right kidney weight) × weight of both kidneys.

Histological evaluation and morphometric analysis

The left kidneys were decapsulated, cut longitudinally, fixed in phosphate-buffered 10% formaldehyde, pH 7.4, embedded in paraffin wax and cut to a thickness of 5 μm. Renal tissue sections were stained with hematoxylin and eosin (H-E). To count the glomerular number, ten consecutive cortical and juxtamedullary areas from two renal sections per animal (five animals per group) were examined. The number of glomeruli was measured at 1 × 100 magnification and expressed per mm². To determine glomerular areas, we analyzed at least ten cortical and ten juxtamedullary glomeruli from five animals of each group.
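The scaling arithmetic of the maceration count described above (aliquots scaled up to the 25-mL suspension, then the right-kidney count scaled to both kidneys) can be sketched in a few lines; the function and variable names are ours, not the authors':

```python
def glomeruli_per_kidney(aliquot_counts, suspension_ml=25.0, aliquot_ul=20.0):
    """Scale glomerulus counts in 20-uL aliquots up to the whole suspension volume."""
    mean_per_aliquot = sum(aliquot_counts) / len(aliquot_counts)
    return mean_per_aliquot * (suspension_ml * 1000.0 / aliquot_ul)

def total_glomeruli(n_right, right_kidney_g, both_kidneys_g):
    """Total glomeruli per rat, as in the paper:
    (N of glomeruli per right kidney / right kidney weight) x weight of both kidneys."""
    return n_right / right_kidney_g * both_kidneys_g
```

For example, counting a mean of 25 glomeruli per 20-μL aliquot implies 25 × (25 000/20) = 31 250 glomeruli in the right kidney, which is then scaled by the relative weight of both kidneys.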
Both the total glomerular area and the glomerular capillary area were expressed in μm², and the ratio of capillary glomerular area to total glomerular area (CGA/TGA) of cortical and juxtamedullary nephrons was determined at 1 × 400 magnification. The filtration surface area of renal juxtamedullary and cortical tissue was calculated as the product of the mean glomerular capillary area and the number of glomeruli per mm². To determine the presence of early fibrosis in the renal cortex, kidney sections were subjected to α-smooth muscle actin (α-SMA) immunohistochemistry, Masson's trichrome and Sirius red staining. At least ten cortical and ten juxtamedullary fields from five animals of each group were analyzed.

Immunohistochemistry

To perform immunohistochemistry, a primary mouse monoclonal antibody to α-SMA was used (Biogenex, Canyon Road, San Ramon, CA 94583, USA). The kidney sections were then incubated with a secondary biotinylated donkey anti-mouse antibody (Jackson ImmunoResearch, West Grove, PA, USA), followed by the streptavidin-biotin-peroxidase reaction (Dako Cytomation, Glostrup, Denmark), and visualized by exposure to diaminobenzidine (DAB)-H2O2. Negative controls were performed by omitting the primary antibody, and endogenous peroxidase activity was quenched with hydrogen peroxide to prevent unspecific staining. Tissue sections were counterstained with hematoxylin.
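The two morphometric quantities defined above (CGA/TGA ratio and filtration surface area) reduce to simple products and ratios; a minimal sketch with our own illustrative names:

```python
def filtration_surface_area(mean_capillary_area_um2, glomeruli_per_mm2):
    """Filtration surface area per mm2 of tissue, as defined in the paper:
    mean glomerular capillary area x number of glomeruli per mm2."""
    return mean_capillary_area_um2 * glomeruli_per_mm2

def cga_tga_ratio_percent(capillary_area_um2, total_area_um2):
    """Capillary glomerular area / total glomerular area, as a percentage."""
    return 100.0 * capillary_area_um2 / total_area_um2
```

A glomerulus whose capillary tuft occupies half its profile area therefore has a CGA/TGA of 50%, and halving either the capillary area or the glomerular density halves the computed filtration surface.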
Expression of α-SMA in the renal cortex was scored as follows: 0, normal staining confined only to smooth muscle cells of blood vessels; 1 (mild), additional weak staining in the peritubular interstitium, glomeruli and periglomerular structures; 2 (moderate), moderate segmental or focal staining of the peritubular interstitium, periglomerular structures and a small minority of glomeruli; 3 (severe), strong staining in <25% of the cortical area, in the majority of glomerular cells and tubular and peritubular structures; and 4 (very severe), strong staining in >25% of the cortical area, in the majority of glomerular cells and tubular and peritubular structures. A score was assigned to each section, mainly reflecting changes in the extent rather than the intensity of staining [30].

Sirius red staining

Collagen accumulation was examined in the renal sections with the collagen-specific stain picrosirius red (Sirius Red 3 in a saturated aqueous solution of picric acid, with fast green as a counterstain). Sirius red staining is a method for collagen determination that enables quantitative morphometric measurements to be performed in locally defined tissue areas [31]. Staining was scored as 0 (normal, slight staining surrounding the tubular, glomerular and vascular structures), 1 (weak staining, double the normal label, surrounding the tubular, glomerular and vascular structures), 2 (moderate staining in the peritubular interstitium and inside the glomeruli), 3 (strong staining replacing the glomerular and tubular structures, compromising <25% of the cortical area), or 4 (strong staining replacing the glomerular and tubular structures, compromising >25% of the cortical area).

Image capture and analysis

Images from histological and immunohistochemical sections were captured using a Nikon Alphaphot-2 YS2 light microscope (Nikon Instrument Group, Melville, NY) coupled to a Sony digital color video camera (Model N° SSC-DC50A).
All determinations were performed blindly and under similar light, gain and offset conditions by the same researcher. Image-Pro Plus 5.1 software (Media Cybernetics, LP, Silver Spring, MD) was used to evaluate glomerular areas and fibrosis.

Statistical analysis

Data are presented as the mean ± standard error of the mean. Data were analyzed using two-way ANOVA, where one factor was the treatment (control or ERA) and the other was the sex of the animals (male or female). The main effect of each factor was tested, as well as the interaction between the two factors. Bonferroni's post-test was used for multiple comparisons. When the interaction was found to be statistically significant, the main effect of each factor was not reported (as each factor is influenced by the other) and simple main effects were reported separately. Data were analyzed using GraphPad Prism version 5.0 for Windows, GraphPad Software (San Diego, CA, USA). The null hypothesis was rejected when p<0.05.

Effect of ERA administration on growth parameters

The body weight of ERA-treated rats (male and female) decreased at the end of treatment when compared with the control groups (male and female, respectively). However, there were no differences between groups in femur length. We did not find differences in kidney weight expressed per 100 g of body weight. These results are shown in Table 1.

ERA administration during early postnatal life decreases nephron number and affects renal filtration surface area

The number of total glomeruli determined by the maceration method significantly decreased in ERAm vs Cm (Fig 1). In the cortical area, treatment with the ERA decreased the CGA to the same extent in male and female rats. The female groups (control and ERA-treated) showed a lower CGA than their respective male groups.
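The balanced two-way layout described in the Statistical analysis section (treatment × sex, with main effects and an interaction term) can be sketched in plain Python. This is an illustrative sums-of-squares computation for a balanced design, not the authors' actual analysis (they used GraphPad Prism); p-values and the Bonferroni post-test are omitted:

```python
def two_way_anova(cells):
    """Balanced two-way ANOVA on a dict {(a_level, b_level): [replicate values]}.
    Returns sums of squares, degrees of freedom and F ratios for
    factor A, factor B and the A x B interaction (in that order)."""
    a_levels = sorted({a for a, _ in cells})
    b_levels = sorted({b for _, b in cells})
    n = len(next(iter(cells.values())))              # replicates per cell (balanced)
    a, b = len(a_levels), len(b_levels)

    mean = lambda xs: sum(xs) / len(xs)
    cell_mean = {k: mean(v) for k, v in cells.items()}
    grand = mean([x for v in cells.values() for x in v])
    a_mean = {ai: mean([x for (aa, _), v in cells.items() if aa == ai for x in v])
              for ai in a_levels}
    b_mean = {bi: mean([x for (_, bb), v in cells.items() if bb == bi for x in v])
              for bi in b_levels}

    # Partition the total variation into main effects, interaction and error.
    ss_a = b * n * sum((a_mean[ai] - grand) ** 2 for ai in a_levels)
    ss_b = a * n * sum((b_mean[bi] - grand) ** 2 for bi in b_levels)
    ss_ab = n * sum((cell_mean[(ai, bi)] - a_mean[ai] - b_mean[bi] + grand) ** 2
                    for ai in a_levels for bi in b_levels)
    ss_e = sum((x - cell_mean[k]) ** 2 for k, v in cells.items() for x in v)

    df_a, df_b = a - 1, b - 1
    df_ab, df_e = df_a * df_b, a * b * (n - 1)
    ms_e = ss_e / df_e
    return {"SS": (ss_a, ss_b, ss_ab, ss_e),
            "df": (df_a, df_b, df_ab, df_e),
            "F": (ss_a / df_a / ms_e, ss_b / df_b / ms_e, ss_ab / df_ab / ms_e)}
```

For example, `two_way_anova({("control", "m"): [...], ("control", "f"): [...], ("ERA", "m"): [...], ("ERA", "f"): [...]})` yields the three F ratios that would be compared against an F distribution; a significant interaction F is the case in which the paper reports simple main effects instead of overall main effects.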
There were no significant changes in the other morphometric parameters evaluated in the cortical area, although there was a tendency for the renal filtration surface area and the CGA/TGA % to decrease in ERAm rats. The main changes in the morphometric analysis were observed in the juxtamedullary area (JA). The morphometric evaluation showed that the number of glomeruli/mm² decreased significantly in the JA of both ERAm and ERAf compared with their respective controls. In addition, the juxtamedullary renal filtration surface area significantly decreased in both ERAm and ERAf. The juxtamedullary capillary glomerular area (CGA) significantly decreased in both ERAm and ERAf. The effect of ERA treatment on the juxtamedullary capillary glomerular area/total glomerular area (CGA/TGA) ratio was different in females than in males: there was a significant decrease in the CGA/TGA ratio of the juxtamedullary nephrons in ERAf, whereas there was only a tendency for this parameter to decrease in ERAm. When we determined the number of glomeruli/mm² without differentiating between CA and JA, we found significant differences in ERAm compared with Cm, a result concordant with that obtained by the maceration method. These results are shown in Table 2. Two-way ANOVA showed a statistically significant interaction (p < 0.05) between the effects of ERA treatment and sex on CGA/TGA % (JA) and on the number of glom/mm² (CA + JA). There was no interaction between the effects of ERA treatment and sex on the other parameters. ERA treatment had a significant overall effect on CGA (CA) (p<0.01), CGA (JA) (p<0.05), N° of glom/mm² (JA) (p<0.01) and renal filtration surface area (JA) (p<0.01). Sex had a significant overall effect (p<0.05) on CGA (CA).

Effect of ERA administration on renal functional parameters

There was a significant increase in proteinuria in both male and female ERA-treated rats versus their respective control groups, this increase being higher in male than in female animals (Fig 2A).
GFR, estimated as the clearance of creatinine, significantly decreased in both male and female ERA-treated rats (Fig 2B). This decrease could be explained by the diminished renal filtration surface, which in turn could be a consequence of the smaller capillary glomerular area and/or the decreased number of glomeruli. There were no significant changes in the other renal functional parameters evaluated, although there was a tendency for diuresis to increase. These results are shown in Table 3. There was no interaction between the effects of ERA treatment and sex. ERA treatment had a significant overall effect on proteinuria and creatinine clearance (* p<0.05 vs control males; # p<0.05 vs control females).

ERA administration during early postnatal life does not lead to early renal morphologic alterations in the kidneys of male and female rats

The histological structure of the rat kidneys in the H-E sections seemed to be unaffected (Fig 3). The scores for both Masson's trichrome and Sirius red staining were <1 for all groups, i.e. normal and slight staining surrounding the tubular, glomerular and vascular structures (Table 4). The score for α-SMA was 0 for all groups, the staining being normal and confined only to the smooth muscle cells of blood vessels. Representative images of the three techniques can be observed in Figs 4-6.

Discussion

It is well known that ET plays a central role in renal sodium and water balance, arterial blood pressure regulation and the development and maintenance of kidney disease in adult animals [33]. However, there are only a few studies on the participation of the ET system in postnatal renal development. The importance of this study lies in the fact that the stimuli or impacts that an organ receives during its perinatal development (in this case the administration of a dual ERA) may affect its function during adulthood.
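The creatinine clearance used above to estimate GFR follows the standard clearance relation CrCl = (U_cr × V) / (P_cr × t). A minimal sketch, in which the units and the per-100-g normalization are our illustrative assumptions rather than the paper's reported procedure:

```python
def creatinine_clearance(urine_cr_mg_dl, urine_vol_ml, plasma_cr_mg_dl,
                         minutes=1440.0):
    """Creatinine clearance in ml/min from a timed urine collection:
    (urinary creatinine x urine volume) / (plasma creatinine x collection time).
    Defaults to a 24-h (1440-min) collection, as used in the metabolic cages."""
    return (urine_cr_mg_dl * urine_vol_ml) / (plasma_cr_mg_dl * minutes)

def clearance_per_100g(clearance_ml_min, body_weight_g):
    """Normalize clearance to 100 g of body weight, as is common in rat studies."""
    return clearance_ml_min * 100.0 / body_weight_g
```

Because the same concentration units appear in the numerator and denominator, they cancel, leaving ml/min; in rodents this value overestimates true GFR somewhat because of tubular creatinine secretion, as noted in the Methods.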
The present study demonstrates that pharmacological blockade of ET during the early postnatal period reduces body weight in both male and female rats. However, this result cannot be explained by a defect in the growth of the animals, given the absence of changes in femur length between the experimental groups. In addition, there were no significant differences in renal weight between control and ERA-treated rats. The differences seen in body weight could reflect a greater water loss in ERA-treated rats than in control rats. Although diuresis did not differ significantly between control and ERA-treated rats, there was a tendency for this parameter to increase; a possible explanation is that the increased water loss could be due to decreased water reabsorption at the tubular level. Alternatively, the differences in body weight of ERA-treated rats may reflect a diminished food intake. No signs of structural renal damage were observed at the end of the treatment in the ERA-treated rats, as evaluated by both histochemical and immunohistochemical methods. However, it would be interesting to evaluate whether there are signs of fibrosis in these animals at a later stage of their lives. Sometimes early morphological changes are not evident, but changes at the molecular level are present and manifest later in life. The histological structure of the rat kidneys in the H-E sections seemed to be unaffected. However, the morphometric analysis showed a decrease in glomerular number in the juxtamedullary area of the renal cortex for both ERAm and ERAf. The decrease in glomerular number seen in the morphometric analysis in ERAm was consistent with the decrease observed by the maceration method. However, the maceration method did not show a decrease in glomerular number in ERAf.
A decrease in nephron number has been found to be associated with susceptibility to develop diseases such as hypertension and chronic renal failure [34-37]. The reduced nephron number seen in ERA-treated rats may be a consequence of decreased cellular proliferation or increased apoptosis in the renal cortex of the animals. Tight regulation of apoptosis is required in normal renal morphogenesis, being controlled by genetic, epigenetic and environmental factors; thus, dysfunctions in apoptosis can manifest as developmental abnormalities [38]. In fact, Yoo et al. have shown that inhibition of endogenous endothelins during the postnatal period impairs renal growth, in which decreased cellular proliferation, increased apoptosis and decreased expression of renal Bcl-X(L) and Bax are possibly implicated [39]. However, they used a selective ET A receptor antagonist, and only from postnatal day 1 to 7. It has been shown that ET-1 receptor activation results in the stimulation of several signaling pathways, including MAPKs/ERK and PI3-K [40]. Activation of the ET A or the ET B receptor results in phosphorylation of ERK, which is an important regulator of cellular proliferation, migration, differentiation and vascular smooth muscle constriction [41]. In addition, ET-1 induces activation of RAS in rat and human renal mesangial cells, which is dependent upon the formation of the Shc/Grb2/Sos1 signaling complex and results in ERK activation [42]. On the other hand, PI3-K signaling is involved in growth and survival, inducing inhibition of apoptosis. In mesangial cells, ET-1 receptor activation has been shown to stimulate PI3-K phosphorylation through Ras and to increase the catalytic activity of PI3-K [43,44]. Thus, it is probable that the blockade of both ET receptors leads to the principal changes reported here through inhibition of MAPK and/or PI3-K signaling.
The decrease in the number of juxtamedullary glomeruli, and consequently in renal filtration surface area, observed in ERAm and ERAf rats could be due to decreased cellular proliferation and increased apoptosis. These cellular events could be mediated by endothelin-dependent signaling pathways that are implicated in regulating proliferation, survival and apoptosis. The reduced nephron number that we observed was accompanied by a reduction of the renal filtration surface area at the juxtamedullary level in both ERAm and ERAf. There were no significant changes in the renal filtration surface at the cortical level, although there was a tendency for those values to decrease, especially in ERAm. As can be seen in Table 2, both the total and the glomerular capillary areas were larger in the juxtamedullary than in the cortical zone. It is known that the glomeruli that will be located in the juxtamedullary region develop first and are larger than superficial glomeruli at birth and during early postnatal life [45,46]. Another interesting finding is that proteinuria was significantly higher in ERA-treated rats than in control rats. In addition, proteinuria in ERAm was higher than in ERAf. The molecular mechanisms that lead to proteinuria are poorly understood [47], but bearing in mind that ET-1, ET A and ET B are expressed in both podocytes and glomerular endothelial cells [48], the increase in proteinuria observed in ERA-treated rats suggests that ET regulates the composition and/or the function of the glomerular filtration barrier during postnatal development. There is recent evidence that TGF-β1 plays crucial roles in podocyte differentiation, glomerulogenesis and nephrogenesis during kidney development and in podocyte injury responses [49]. Bearing in mind the interaction of ET-1 and TGF-β in diverse tissues and organs, including the kidney, it is tempting to speculate that ET and TGF-β interact during renal postnatal development, acting on the formation of the glomerular filtration barrier.
The differences seen in neonatal male and female rats (hormone-independent) can be explained by the presence of epigenetic mechanisms that affect males and females distinctively. Epigenetic mechanisms that affect genes include insertion of histone variants, post-translational modifications of histones, expression of non-coding RNAs (ncRNAs) and methylation of DNA. These epigenetic effectors alter both the availability of genes for transcription and the rates of transcription. Two epigenetic mechanisms known to regulate the genes of the ET pathway are DNA methylation and histone modification. Most of the available evidence on epigenetic regulation of the ET pathway focuses on EDN1 (the gene that encodes ET-1) and EDNRB (the gene that encodes the ET B receptor) [50]. The pharmacological inhibition of the ET system with a dual ERA during the early postnatal period of the rat decreases the number of glomeruli, the juxtamedullary filtration surface area and the glomerular filtration rate, and increases proteinuria. These effects could predispose to hypertension or renal diseases in adulthood. On the other hand, these effects were more pronounced in male rats, suggesting that there are sex differences that could become greater later in life, when sex hormones play a role. It is known that sex hormones affect endothelin plasma levels, which are increased by testosterone and decreased by oestradiol [51]. In addition, sex steroids influence almost every component of the ET system [52-55]. Renal ET receptors function somewhat differently between males and females. ET A receptor activation leads to unfavorable effects in male kidneys, including renal medullary vasoconstriction and renal injury. In contrast, females are relatively protected against high blood pressure and kidney damage by virtue of increased ET B receptor function and perhaps reduced ET A-dependent haemodynamic effects [56].
These results provide evidence that ET has an important role in rat renal postnatal development. However, these results do not imply that the same occurs in humans, since human renal development is complete at birth. Our study could be clinically compared with the third trimester of human gestation or with the premature human kidney, and it highlights the importance of this type of study for developing therapeutic approaches during perinatal life.
v3-fos-license
2019-12-05T09:37:28.059Z
2019-11-28T00:00:00.000
213100946
{ "extfieldsofstudy": [ "Materials Science" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "http://matmod.dstu.dp.ua/article/download/185101/184722", "pdf_hash": "427c7904377458aad197f1dafbb1f95d282ae973", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1683", "s2fieldsofstudy": [ "Materials Science" ], "sha1": "e57cb95c7fa8701f48bc2ba3538a8c564256e671", "year": 2019 }
pes2o/s2orc
MATHEMATICAL MODELING OF CONTINUOUS METAL BEHAVIOR WITH DEFECTS OF MACROSTRUCTURE IN THE ROLLING PROCESS

A mathematical model of the process of rolling a continuously cast billet with macrostructure defects on a smooth barrel and in calibers is presented. The boundary conditions are given by the rotation speed of the rolls, the restriction of the degrees of freedom of the workpiece and the rolls, and the friction coefficient on the roll-workpiece contact surface. For hot rolling, it was considered permissible to treat the rolling rolls as a rigid, non-deformable body. An elastic-plastic model of material behavior was used for the workpiece. Two cases were considered: caliber-free rolling on a smooth barrel and rolling in a rectangular caliber. The roll material is steel and the roll surface is smooth. Using the example of rolling in the first pass of the crimp mill, a comparative analysis of caliber-free rolling and rolling in calibers was carried out. It is shown that rolling on a smooth barrel has the potential to be used for rolling continuously cast billets with macrostructure defects. The influence of the basic rolling parameters, absolute reduction and temperature, on the "healing" of macrostructure defects is analyzed.

Problem's Formulation

Under conditions of constant technological innovation and globalization of markets, enterprises that manufacture rolled metal face the problem of increasing production efficiency and ensuring the competitive quality of their products.
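The basic rolling parameters mentioned in the abstract (absolute reduction, roll geometry, friction at the roll-workpiece contact) obey simple textbook relations. The following sketch is our illustrative addition, not part of the paper's finite-element model; it uses the standard rolling-theory formulas for draught Δh = h0 − h1, projected contact length l ≈ √(R·Δh) and the bite angle α with cos α = 1 − Δh/(2R):

```python
import math

def draught(h0_mm, h1_mm):
    """Absolute reduction (draught): entry thickness minus exit thickness."""
    return h0_mm - h1_mm

def contact_length(roll_radius_mm, dh_mm):
    """Projected length of the roll-workpiece contact arc, l = sqrt(R * dh)."""
    return math.sqrt(roll_radius_mm * dh_mm)

def bite_angle(roll_radius_mm, dh_mm):
    """Bite (contact) angle in radians, from cos(alpha) = 1 - dh / (2R)."""
    return math.acos(1.0 - dh_mm / (2.0 * roll_radius_mm))

def bite_condition_ok(roll_radius_mm, dh_mm, friction_coeff):
    """Natural bite requires tan(alpha) <= mu (the contact friction coefficient)."""
    return math.tan(bite_angle(roll_radius_mm, dh_mm)) <= friction_coeff
```

For example, taking a 30 mm reduction on 500 mm radius rolls gives a bite angle of about 0.25 rad, so the billet is drawn in naturally only if the friction coefficient exceeds roughly 0.25; this is the kind of constraint that a contact friction coefficient chosen as a boundary condition in the simulation must satisfy.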
The use of continuously cast billets (CCB) of small cross section has revealed a number of problems that did not previously occur when hot-rolled billets were used. This is most typical for the production of long products from quality structural and spring steel grades.

Formulation of the problem. CCB defects form during solidification and may develop both inside the cast billet and on its outer surface.

Analysis of recent research and publications

In most cases defects have a negative effect on production, causing rejection or increasing production costs as a result of the need to bring blanks into line with the required specifications by rolling. Depending on the cause, defects can be divided into two groups [1][2][3]:
- defects specific to a particular strand (found only on one strand due to hardware problems or its settings, for example, due to mold defects, irregularities of secondary cooling, improper settings of the pulling and straightening rolls, etc.);
- defects specific to a particular heat (associated with the properties of the liquid steel and caused by overheating, the presence of impurities, or deoxidation at the stage of out-of-furnace steel processing).

For metal rolled from CCB produced on a high-speed billet caster with a small bending radius, the average reject rates by defect type are: violations of CCB geometry, 36.4%; macrostructure defects, 17.2%; cracks, 31.9%; slag inclusions, 4.5% [2]. On many macro-templates, axial porosity, segregation bands and cracks are observed, together with a developed columnar structure and asymmetry of the ingot zones.
In this regard, questions related to the behavior of macrostructure defects of continuously cast billets (shrinkage cavity, gas bubbles, axial porosity) during deformation are relevant. The behavior of such defects is increasingly studied by mathematical (computer) simulation using software packages such as ANSYS, DEFORM, QFORM, PLAST, etc., in which modeling is carried out by the finite element method. Moreover, the variety of processes in each case requires its own approach to the modeling technique. Thus, in [3,5] the results of simulating cutting and screw-rolling processes for solid blanks in the Deform-3D software package are presented; the influence of the workpiece diameter, feed angle, roll calibration of the crimp mill and the initial shape of the workpiece ends on the depth of the defect was studied. In [4], mathematical modeling of the process of screw-piercing large-diameter workpieces was carried out. The aim of that work was to study, using the Deform-3D software package, such parameters as the stress-strain state of the metal, the accumulated deformation over the volume of the workpiece, the nature of the development of the deformation, the power parameters, the piercing time, etc. Of great interest are the works [5][6][7], in which mathematical modeling of the process of metal deformation in calibers of various shapes was carried out. In those studies, the influence of the stress-strain state on the flow of the deformed metal in various zones of section calibers, depending on their shape, was analyzed; the Deform-3D software package was also used. In [6], problems of calibration design were studied with the aim of reducing energy consumption and the likelihood of defect formation.
Thus, the aim of the work was to develop a mathematical model and to study the behavior of macrostructure defects of continuously cast metal during rolling.

Formulation of the purpose of the study

For mathematical modeling of the behavior of macrostructure defects during rolling, the Deform-3D finite element simulation package was used. The rolling process is quite complicated to model, since it combines the rotational movement of the rolls and the translational motion of the workpiece. To obtain correct results, it is necessary to accurately position the workpiece relative to the rolling rolls and take into account the
Age and Racial Differences among PSA-Detected (AJCC Stage T1cN0M0) Prostate Cancer in the U.S.: A Population-Based Study of 70,345 Men Purpose: Few studies have evaluated the risk profile of prostate-specific antigen (PSA)-detected T1cN0M0 prostate cancer, defined as tumors diagnosed by needle biopsy because of elevated PSA levels without other clinical signs of disease. However, some men with stage T1cN0M0 prostate cancer may have high-risk disease (HRD), thus experiencing inferior outcomes as predicted by a risk group stratification model. Methods: We identified men diagnosed with stage T1cN0M0 prostate cancer from 2004 to 2008 reported to the surveillance, epidemiology, and end results (SEER) program. Multivariate logistic regression was used to model the probability of intermediate-risk disease (IRD) (PSA ≥ 10 ng/ml but <20 ng/ml and/or GS 7) and high-risk disease (HRD) (PSA ≥ 20 ng/ml and/or GS ≥ 8), relative to low-risk disease (LRD) (PSA < 10 ng/ml and GS ≤ 6), adjusting for age, race, marital status, median household income, and area of residence. Results: A total of 70,345 men with PSA-detected T1cN0M0 prostate cancer were identified. Of these, 47.6, 35.9, and 16.5% presented with low-, intermediate-, and high-risk disease, respectively. At baseline (50 years of age), risk was higher for black men than for white men for HRD (OR 3.31, 95% CI 2.85–3.84). The ORs for age (per year) for HRD relative to LRD were 1.09 (95% CI 1.09–1.10) for white men and 1.06 (95% CI 1.05–1.07) for black men. Further, among a subgroup of men with low-PSA (<10 ng/ml) T1cN0M0 prostate cancer, risk was also higher for black men than for white men at baseline (50 years of age) (OR 2.70, 95% CI 2.09–3.48); the ORs for age (per year) for HRD relative to LRD in this subgroup were again 1.09 (95% CI 1.09–1.10) for white men and 1.06 (95% CI 1.05–1.07) for black men. Conclusion: A substantial proportion of men with PSA-detected prostate cancer as reported to the SEER program had HRD. 
Black race and older age were associated with a greater likelihood of HRD. INTRODUCTION Prostate cancer is the most common malignancy in U.S. men. In 2013 alone, an estimated 241,000 new cancer cases will be diagnosed and 28,000 deaths will be attributed to prostate cancer (1). Current screening methods for prostate cancer include prostate-specific antigen (PSA) testing and digital rectal examination, although benefits of the former remain controversial (2,3). The concern is that early detection and treatment of clinically insignificant prostate cancer may cause unnecessary side effects without added benefit. Ever since FDA approval of PSA as a screening tool, many men have had a prostate biopsy because of an elevated PSA, despite a normal digital rectal examination. Prostate cancer diagnosed in this setting is classified as stage T1c disease based on the American Joint Committee on Cancer (AJCC) (4). Early studies have shown that stage T1c disease is heterogeneous in its pathological features. After a retrospective review of 257 patients with stage T1c prostate cancer who underwent prostatectomy and nodal dissection from 1987 to 1991, Lerner et al. (5) from the Mayo Clinic reported that 45% of patients had non-organ confined pT3 disease and 4% had node-positive disease. Of 240 men with stage T1c disease who underwent radical prostatectomy at Johns Hopkins from 1994 to 1996, 28% had extracapsular extension, seminal vesicle, or lymph node involvement (6). From 1988 to 1998, 638 men with stage T1c prostate cancer underwent radical prostatectomy at Washington University, and 30% had non-organ confined pT3 or node-positive disease (7). Many studies have shown that adverse pathological features such as pT3 disease (extracapsular extension and seminal vesicle involvement), high Gleason score (GS) and positive surgical margins increased the risk of disease recurrence resulting in inferior outcomes (8)(9)(10)(11)(12)(13)(14)(15)(16)(17). 
These studies suggested that some men with stage T1c disease might have high-risk prostate cancer. However, current PSA screening protocols are not able to identify clinically significant disease in this cohort. The purpose of this study is to provide a contemporary profile of stage T1c prostate cancer based on demographic features and a risk stratification scheme developed by D'Amico et al. (18,19) that includes stage, GS, and PSA level. This risk stratification has been validated and widely used, including by the National Comprehensive Cancer Network (NCCN). We analyzed demographic and tumor characteristics of over 70,000 men who, according to data reported to the Surveillance, Epidemiology, and End Results (SEER) program (2004–2008), were diagnosed with prostate cancer based on an elevated PSA level and without other clinical signs of disease (stage T1cN0M0). We report the probability of high-risk prostate cancer in these patients adjusting for characteristics such as age, race, marital status, median household income, and area of residence, while taking into account pre-biopsy PSA levels. These findings may provide important information in our efforts to develop more effective prostate cancer screening tools. PATIENT DATABASE Men diagnosed with AJCC stage T1cN0M0 prostate adenocarcinoma at age ≥ 18 years between 2004 and 2008 and reported to the SEER 17 Registries were identified. Year 2004 was chosen as the start year, since it was in February 2004 that SEER initiated collection of detailed T, N, and M staging information. "Death certificate only" and "autopsy only" cases were excluded. A total of 78,367 cases were identified with stage T1cN0M0 disease. A total of 70,345 cases had PSA or GS available for analysis. The youngest man in the cohort with stage T1cN0M0 disease was 37 years of age. We identified 262,172 men ≥37 years of age with all stages of prostate cancer between 2004 and 2008. 
The SEER program provided the number of all men age ≥37 years in the SEER catchment area based on 2000 U.S. population estimates. Based on PSA levels and GS, we divided cases into the three risk groups as described by D'Amico et al. (18). Low-risk disease (LRD) was defined as PSA <10 ng/ml and GS ≤6; intermediate-risk disease (IRD) as PSA ≥10 and <20 ng/ml and/or GS 7; and high-risk disease (HRD) as PSA ≥20 ng/ml and/or GS ≥8. Patient socioeconomic status was evaluated using "median household income" and "rural-urban continuum code" provided in the SEER program, as in prior studies (20). Median household income is an aggregate measure based on county attributes derived from the 2000 U.S. census. Rural-urban continuum code, which provides information about potential accessibility to cancer care, allows classification of counties by population size, degree of urbanization, and proximity to metropolitan areas. We grouped patients into three categories by area of residence: within metropolitan areas, adjacent to metropolitan areas, and not adjacent to metropolitan areas. Marital status at diagnosis was classified as follows: never-married (single), married, and others (widowed, divorced, and separated). STATISTICAL ANALYSIS The means of continuous variables (e.g., age and median household income) were compared between groups using t-tests with unequal variances. Chi-square tests were used to assess differences in categorical variables (e.g., race and gender) between groups. Multivariate logistic regression analyses were conducted to model the probability of developing IRD and HRD. The list of potential predictors to be included in the models was age, race, marital status, median household income, area of residence, and all possible first-order interactions. The analyses treated "age" as a continuous variable and set 50 years of age as the baseline. 
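The three-group rule above is simple enough to express as a small classifier. The following is a hypothetical sketch of the D'Amico-style stratification described in the text; the function name and the HRD-first ordering are our illustration, not the paper's actual SAS code:

```python
def damico_risk(psa: float, gleason: int) -> str:
    """Classify a T1cN0M0 case into D'Amico-style risk groups.

    The groups are checked hierarchically: HRD criteria trump IRD
    criteria, which trump LRD, matching the "and/or" definitions.
    """
    if psa >= 20 or gleason >= 8:
        return "HRD"   # PSA >= 20 ng/ml and/or GS >= 8
    if psa >= 10 or gleason == 7:
        return "IRD"   # PSA 10 to <20 ng/ml and/or GS 7
    return "LRD"       # PSA < 10 ng/ml and GS <= 6
```

Note that checking HRD first is what makes the overlapping "and/or" clauses unambiguous: a man with PSA 12 ng/ml and GS 9 is HRD, not IRD.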
To select predictors significantly associated with disease risk, the data set was randomly partitioned into a training and a validation data set of equal sizes (50% of the original data set each). We first ran a backward model selection procedure on the training data set to identify candidate predictors potentially associated with disease risk. Once this training step was completed, we fitted the identified model to the validation data set; predictors with p-values smaller than 0.05 were considered significant and included in the final models. This analysis was performed for IRD and HRD separately. The final models were also used to model the probability of developing IRD and HRD in a subgroup analysis for patients with PSA <10 ng/ml stage T1cN0M0 prostate cancer. In all of our statistical analyses, tests were two-sided and the significance level (probability of type-1 error) was set at 0.05. Analyses were conducted using the SAS statistical package, version 9.13 (SAS Institute, Cary, NC, USA). RESULTS The age distribution of men ≥37 years of age in the SEER catchment areas is shown in Table 1 (Frontiers in Oncology | Genitourinary Oncology). A total of 70,345 men with stage T1cN0M0 prostate cancer had GS and PSA information available and these men were evaluated (median age 69 years, range 37-105) (Table 2). There were 11,600 men (16.5%) with HRD. Men with HRD were significantly older (median ages 72, 70, and 67 for HRD, IRD, and LRD, respectively, p < 0.01); more likely to be black (19.4 and 15.6% for black and white men, respectively, p < 0.01); less likely to be married (16.7 and 15.5% for never-married and married, respectively); and had a lower median household income (17.7, 16.6, and 15.6% for the first, second, and third tertiles, respectively, p < 0.01). Based on age and PSA level, Table 3 shows the distribution of GS among the 70,345 men with stage T1cN0M0 disease. 
For example, a 72-year-old man with a PSA of 8 ng/ml and stage T1cN0M0 prostate cancer has a 7.2% chance of having GS ≥8 HRD. For an 82-year-old man with the same PSA level, the chance of HRD is 16.5%. Older men were more likely to have higher PSA levels at the time of diagnosis ( Figure 1A, p < 0.01). The percentage of men with PSA ≥20 ng/ml stage T1cN0M0 prostate cancer increased with age (5.3, 6.4, 7.6, and 13.3% for age groups 37-49, 50-64, 65-74, and ≥75, respectively). Furthermore, the percentage of men with higher GS increased with age ( Figure 1B, p < 0.01). There were 3.6, 6.1, 9.1, and 17.5% of men with GS ≥8 stage T1cN0M0 prostate cancer for age groups 37-49, 50-64, 65-74, and ≥75, respectively. Black men were more likely to have higher PSA levels than white men at diagnosis (Figure 2A, p < 0.01). For example, 10.6% of black men and 7.5% of white men had PSA of ≥20 ng/ml. There was a small but significant difference in GS profile between black and white men ( Figure 2B, p < 0.01). Multivariate logistic regression analyses identified a significant interaction between age and race, indicating that risk increased faster with age among white men compared to blacks for both HRD and IRD relative to LRD ( Table 4). At baseline (50 years of age), risk was higher among black men than among whites for HRD and IRD (OR 3.31 and 2.02, 95% CI 2.85-3.84, and 1.81-2.25, respectively). The ORs for age (per year) for HRD and IRD relative to LRD were estimated as 1.09 and 1.06 respectively (95% CI 1.09-1.10, 1.05-1.06) for white men, and as 1.06 and 1.04 respectively (95% CI 1.05-1.07, 1.03-1.04) for black men. Compared to baseline (50 years of age), the ORs at 75 years of age for HRD and IRD relative to LRD were estimated as 9.25 (95% CI 8.45-10.13) and 3.97 (95% CI 3.72-4.24) respectively for white men, and as 4.24 (95% CI 3.56-5.04) and 2.58 (95% CI 2.24-2.96) respectively for black men. 
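The per-year and 25-year odds ratios quoted above are mutually consistent: on the log-odds scale, the OR accumulated over 25 years is the per-year OR raised to the 25th power. A quick arithmetic check (ours, using the reported point estimates) shows that the 25-year ORs of 9.25 and 4.24 imply per-year ORs that round to the reported 1.09 and 1.06:

```python
import math

def or_at_age(per_year_beta: float, years: float) -> float:
    """OR relative to baseline after `years`, given the per-year log-OR."""
    return math.exp(per_year_beta * years)

# Recover the per-year OR implied by the reported ORs at 75 vs. 50 years:
white_hrd_25y = 9.25   # reported OR at age 75, white men, HRD vs. LRD
black_hrd_25y = 4.24   # reported OR at age 75, black men, HRD vs. LRD

white_per_year = white_hrd_25y ** (1 / 25)   # ~1.093, rounds to 1.09
black_per_year = black_hrd_25y ** (1 / 25)   # ~1.059, rounds to 1.06
```

This also explains why the 25-year ORs differ so sharply (9.25 vs. 4.24) even though the per-year ORs look close (1.09 vs. 1.06): small per-year differences compound over decades.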
Men in the "never-married" category compared with "married" men were more likely to have HRD and IRD (OR 1.35 and 1.22, 95% CI 1.25-1.46 and 1.15-1.29, respectively), but no significant association was observed between median household income and disease risk. As one may argue that the elevated PSA levels in the elderly patients with HRD may be a function of lead-time detection of younger individuals with lower PSA, we also analyzed age and racial effects in a subgroup of men with PSA <10 ng/ml stage T1cN0M0 prostate cancer ( Table 5). We found that in this group of men with low PSA (<10 ng/ml), again, older men were more likely to have GS ≥8 HRD (12.3, 6.6, 4.3, and 1.9% for age groups ≥75, 65-74, 50-64, and 37-49, respectively). Black men were more likely to have GS ≥8 HRD (7.6 and 6.7% for black and white men, respectively). Multivariate logistic regression analyses for this cohort of men showed that risk increased faster with age among white men compared to black men for both HRD and IRD relative to LRD ( Table 6). At baseline (50 years of age), risk was higher among black men than among whites for HRD and IRD (OR 2.70 and 1.94, 95% CI 2.09-3.48 and 1.70-2.20, respectively). The ORs for age (per year) for patients with GS ≥8 HRD relative to LRD in this subgroup are reported in Table 6. DISCUSSION To our knowledge, this is the largest population-based study focused only on PSA-detected (stage T1cN0M0) prostate cancer in the U.S. in the contemporary era of widespread PSA testing. A significant number of men (16.5%) in this cohort had HRD. We found that men of older age and black race were more likely to have HRD than younger and white men. According to the U.S. census, 4.6% of the total U.S. male population in 2010 was ≥75 years of age (21). However, 40.3% of men with HRD were ≥75 years of age. Further, 36.8% of men with PSA <10 ng/ml and GS ≥8 stage T1cN0M0 prostate cancer were ≥75 years of age. 
Our finding that older men were more likely to have HRD, albeit limited to PSA-detected stage T1cN0M0 disease in this survey, is consistent with recently published studies (22,23). After evaluating all prostate cancer cases reported to the SEER program from 1998 to 2007, Scosyrev et al. (22) reported that men ≥75 years of age were more likely to present with either metastatic or locally advanced disease, and experienced the highest prostate cancer-specific mortality. Bechis et al. (23) reviewed the database of the Cancer of the Prostate Strategic Urologic Research Endeavor (CaPSURE) and reported that 26% of men age ≥75 had HRD based on the Cancer of the Prostate Risk Assessment (CAPRA) score. In addition to clinical stage, PSA, and GS, CAPRA scores take into consideration patient age and the percentage of biopsy cores involved with prostate cancer. The specific indications for PSA testing and prostate biopsy could not be confirmed in our series, given the limitations of the SEER database. Nonetheless, all cases reported to the SEER program were diagnosed by a needle biopsy because of an elevated PSA level with no other clinical signs of disease (AJCC T1cN0M0) (4). The higher proportion of older men with HRD reported here cannot be explained by the potential bias that older men with inherently higher PSA levels were more likely to have biopsy. Even among men with PSA <10 ng/ml stage T1cN0M0 disease, older men had significant risk of HRD. One of many possible explanations is that older men may harbor aggressive disease that is not reflected by PSA level. After pathological review of 211 autopsied prostate glands from deceased men with no known prostate cancer at the time of death, Delongchamps et al. (24) reported that older men had significantly larger tumors, higher GS, and were more likely to have extraprostatic extension or microscopic invasion of the bladder neck (4). 
After a prospective review of 268 men with stage T1c prostate cancer who underwent radical prostatectomy at seven U.S. medical centers, Southwick et al. (25) noted that age was one of the significant predictors of unfavorable pathological outcome, including extracapsular extension, seminal vesicle invasion, invasion of bladder neck/rectum, and lymph node involvement. Further study is needed to evaluate the many confounding factors underlying the observed age effect. African Americans have the highest prostate cancer burden and mortality of any racial group (1). Our study showed that in the cohort of men with PSA screen-detected stage T1cN0M0 disease, African Americans had a higher likelihood of harboring HRD than whites, including the cohort of men with PSA <10 ng/ml. Many socioeconomic and intrinsic factors may contribute to such a racial difference. But even among U.S. servicemen with equal access to care, the racial difference in prostate cancer risk remains (26). In an early study among men with non-palpable prostate cancer (clinical stage T1c disease) who underwent prostatectomy, Sanchez-Ortiz et al. reported that African American men had higher GS and greater tumor volume (27). Therefore, prostate cancer in black men may be biologically different from that in whites. In fact, several genetic and biological mechanisms have been identified that may contribute to the aggressiveness of prostate cancer in African American men (28)(29)(30)(31)(32). Our study was not designed to address the question of whether earlier detection would improve survival of men with HRD. Also, to our knowledge, there have been no reported randomized studies that evaluate the outcome of early intervention vs. active surveillance for men found to have stage T1cN0M0 prostate cancer. 
There are two UK-based ongoing prostate cancer trials, the CAP (Comparison Arm for ProtecT) and ProtecT (Prostate Testing for Cancer and Treatment) trials (33), that may help address issues of screening and treatment, but the results will not be available until 2016. Based on results from many published studies (34)(35)(36) including stage T1cN0M0 disease, it is conceivable that a subgroup of men may indeed benefit from early detection and treatment. The challenge at this time is to distinguish these patients from the many men with clinically inconsequential disease. Our study has several weaknesses due to the retrospective design and inherent deficiencies of data reported to the SEER program. We had no way to independently verify staging accuracy. SEER did not provide detailed information about biopsy templates/schemes. We were not able to analyze patient outcomes due to the short follow-up in the cohort. Further, there was no information about patients' performance status, medical co-morbidities, voiding symptoms, or family history of prostate cancer in the SEER database; these factors might have influenced screening decisions. Eight percent (5,605/70,345) of men in this study had a PSA level <4 ng/ml. It is unclear why these men proceeded to biopsy, though likely explanations may include patient preference, family history of prostate cancer, or incorrect staging. We used a risk stratification scheme developed by D'Amico et al. (18,19) that has been validated and widely used. In addition to clinical stage, PSA, and GS, there are many other factors that may influence prostate cancer outcome. These include primary and secondary GS, tertiary GS (37)(38)(39)(40)(41), percentage of positive biopsies (42)(43)(44)(45)(46)(47), or presence of perineural invasion in the biopsy specimen (48). 
The SEER database does not provide such important information; therefore we limited our risk estimation based on clinical stage (in this case, stage T1cN0M0), PSA, and GS. Despite the above limitations, our large population-based study shows that a substantial proportion of men with PSA-detected stage T1cN0M0 prostate cancer may have HRD in the contemporary era. Older and black men were more likely to have HRD than younger and white men. Analytic studies with independently verified staging information are needed to confirm these findings, and examine clinical outcomes in these men, especially those of older age and of black race. AUTHOR CONTRIBUTIONS Conception and design: Hong Zhang, Edward M. Messing, Lois B. Travis, and Yuhchyau Chen; Collection and assembly of data: Hong Zhang; Data analysis and interpretation: all authors; Manuscript writing: all authors; Final approval of manuscript: all authors. ACKNOWLEDGMENTS The authors would like to thank Ms. Laura Finger for expert editorial assistance. This study was presented in part at the February 2013 ASCO Genitourinary Cancers Symposium, Orlando, Florida.
Alleviation of drought stress by melatonin foliar treatment on two flax varieties under sandy soil Melatonin treatment is known to improve plant tolerance of drought stress, but its specific role and mechanisms remain poorly investigated. This study therefore examined the effect of foliar treatment with different concentrations (2.5, 5.0 and 7.5 mM) of melatonin on the growth, some biochemical aspects and yield of two flax varieties (Letwania-9 and Sakha-2) under normal irrigation [100% of water irrigation requirements (WIR)] and drought stress conditions (75% and 50% WIR) in sandy soil. Drought stress significantly decreased the growth parameters, photosynthetic pigments, and yield and yield components of the two flax varieties, while it significantly increased phenolic contents, total soluble sugars (TSS), proline and free amino acids, as well as some antioxidant enzymes (superoxide dismutase, catalase, peroxidase and polyphenol oxidase). Meanwhile, foliar treatment with melatonin (2.5, 5.0 and 7.5 mM) significantly increased the growth and yield parameters as well as the studied biochemical and physiological aspects under 100% WIR. Melatonin treatment also alleviated the adverse effects of drought stress, significantly increasing the growth parameters, yield and quality of the two flax varieties by improving photosynthetic pigments, indole acetic acid, phenolics, TSS, proline, free amino acid contents and antioxidant enzyme systems, compared with the corresponding untreated controls. Foliar treatment with 5.0 mM melatonin produced the greatest growth, biochemical responses and yield quantity and quality in the Letwania-9 and Sakha-2 flax varieties under both normal irrigation and stress conditions. 
In conclusion, melatonin treatment alleviated the adverse effects of drought stress on the growth and yield of the two flax varieties by enhancing photosynthetic pigments, osmoprotectants and antioxidant enzyme systems; 5 mM was the most effective concentration. Introduction Flax (Linum usitatissimum L.), one of the most important crops grown in Egypt, is used as a seed, fiber or dual-purpose (fiber and seed) plant. Flax seeds contain 30-40% edible oil of high nutritional value, resulting from the high amounts of essential fatty acids (linoleic, linolenic and oleic acids), as well as proteins, mucilage and cyanogenic glycosides. In Egypt, flax is considered the second fiber crop after cotton. The plant is used in the production of feed for poultry and animals, as well as of different types of compact wood (particle board) (Bakry et al. 2013). Flax varieties differ greatly in yield and yield components (Darja and Trdan 2008). Drought stress, an environmental stress, is a severe deficiency of water that depresses plant growth, development and productivity, especially in arid and semiarid regions (Battipaglia et al. 2014). Aridity is expected to increase with global climate change in various regions all over the world (Blum 2017). Drought stress adversely affects plant growth, photosynthetic pigments, and water and nitrogen use efficiency, and causes alterations in cell structure and in the activities of key enzymes in various plant species (He et al. 2016;Chen et al. 2019). Drought stress also causes oxidative damage to plant cells through increased accumulation of reactive oxygen species (ROS), which reduce photosynthesis, cause stomatal closure and alter the activities of enzymes. ROS formation is considered a threat to the cell, as it causes electron leakage, lipid peroxidation and subsequent membrane damage, as well as damage to nucleic acids and proteins (Maksup et al. 2014). 
To decrease these damages, plants have evolved different pathways, such as increasing antioxidant compounds, either non-enzymatic antioxidants (e.g., glutathione, ascorbic acid, carotenoids, α-tocopherols) or enzymatic antioxidants (including superoxide dismutase (SOD), ascorbate peroxidase (APX), catalase (CAT) and guaiacol peroxidase (GPX)) (Abd Elhamid et al. 2014). Phenolic compounds are further antioxidants that improve tolerance in plant tissue; they are potential antioxidants acting as ROS-scavenging compounds (Rice-Evans et al. 1997). Thus, more studies are needed on plant responses to drought stress (Petit et al. 1999). Recently, the use of efficient, economic and inexpensive compounds for improving and enhancing plant tolerance to biotic and abiotic stresses such as drought has been reported. One of these compounds is melatonin, a new plant growth regulator effective in enhancing the environmental stress tolerance of different crops. Melatonin is present in various living organisms (Tan et al. 2012), at various levels in plants (Arnao and Hernández-Ruiz 2014;Fleta-Soriano et al. 2017;Alam et al. 2018). The lipophilic and hydrophilic nature of melatonin allows it to pass through morpho-physiological barriers easily, resulting in rapid transport of the molecule into plant cells (Tan et al. 2012). Melatonin plays many important roles in improving vegetative growth, rooting and flowering (Arnao and Hernández-Ruiz 2014;Hardeland 2015). Melatonin can also enhance plant tolerance of multiple stresses and helps in the homeostasis of various ions (Arnao and Hernández-Ruiz 2015;Wei et al. 2015;Li et al. 2016, 2018, 2019). Melatonin is a well-documented antioxidant in various crops (Zhang and Zhang 2014). Improving the antioxidant abilities of the plant is a general effect of melatonin, thereby increasing plant stress tolerance (Arnao and Hernández-Ruiz 2015;Zhang et al. 
2015). Exogenous treatment with melatonin has been found to increase the stress tolerance of plants (Zuo et al. 2017;Sun et al. 2018). Even though many investigations have reported that external melatonin treatment can improve drought tolerance, its specific role and the underlying mechanism of its effect on plant drought tolerance are poorly understood. First, the effect of melatonin on plant drought tolerance has been studied in only a few plant species, and only a small number of these investigations have focused on highly important crops. Second, these investigations have applied melatonin by adding it either to the soil or to a nutrient solution, both of which are inconvenient in field crop production. Third, the majority of these investigations have been conducted under environmentally controlled conditions, such as growth chambers or greenhouses, so their results cannot accurately reflect the performance of melatonin with respect to stress tolerance in the field environment. Therefore, the performance and mechanism of melatonin's effect on drought tolerance need further study, especially in highly important crops under field conditions. In this investigation, our aim was therefore to study the enhancing role of foliar melatonin treatment on the growth and yield of two flax varieties grown under drought stress in sandy soil. Materials and methods Two field experiments were carried out at the experimental station of the National Research Centre, Al Nubaria district, El-Behira Governorate, Egypt, in the 2015/2016 and 2016/2017 winter seasons. The soil of both experimental sites was sandy. Mechanical, chemical and nutritional analyses of the experimental soils are reported in Table 1 according to Chapman and Pratt (1978). 
The experimental design was a split-split-plot design with three replicates, where the water irrigation requirements (100%, 75% and 50%) occupied the main plots, the two flax cultivars (Letwania-9 and Sakha-2) were allocated to the sub-plots, and the concentrations of melatonin (0.0, 2.5, 5.0 and 7.5 mM) were allocated at random to the sub-sub-plots. Flax seeds of the Letwania-9 and Sakha-2 cultivars were sown on 17 November in both winter seasons in rows 3.5 m long, with rows 20 cm apart; the plot area was 10.5 m² (3.0 m in width and 3.5 m in length). The seeding rate was 2000 seeds/m². Pre-sowing, 150 kg/fed of calcium superphosphate (15.5% P₂O₅) was applied. Nitrogen was applied after emergence in the form of ammonium nitrate (33.5%) at a rate of 75 kg/fed in five equal doses. Potassium sulfate (48% K₂O) was added in two equal doses of 50 kg/fed. Irrigation was carried out using a sprinkler irrigation system, with water added every 7 days as scheduled in Table 2 for the water requirements/fed. Irrigation water requirements The three irrigation water requirements were calculated using the Penman-Monteith equation and crop coefficients according to Allen et al. (1989). The average amounts of irrigation water applied with the sprinkler irrigation system were 2500, 1875 and 1250 m³ fed⁻¹ season⁻¹ (for 100%, 75% and 50%, respectively) in both the 2015/2016 and 2016/2017 seasons. The amounts of irrigation water were calculated according to the following equation: IWR = (ETo × Kc × Kr × I)/Ea + LR, where IWR = irrigation water requirement (m³/fed/irrigation), ETo = reference evapotranspiration (mm/day), Kc = crop coefficient, Kr = reduction factor (Keller and Karmeli 1975), I = irrigation interval (days), Ea = irrigation efficiency (90%), and LR = leaching requirement (10% of the total water amount delivered to the treatment).
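The irrigation-requirement calculation described above can be sketched numerically. This is an assumed reconstruction built only from the symbol list in the text: the unit conversion (1 mm of water over 1 feddan of 4200 m² equals 4.2 m³) and the treatment of LR as a multiplicative 10% addition are our assumptions, not values stated in the source:

```python
def iwr_m3_per_fed(eto_mm_day: float, kc: float, kr: float,
                   interval_days: float, ea: float = 0.90,
                   lr_frac: float = 0.10) -> float:
    """Irrigation water requirement per irrigation event, in m3/feddan.

    Assumptions (not stated in the source): 1 mm of water depth over
    1 feddan (4200 m2) = 4.2 m3, and the leaching requirement adds
    `lr_frac` of the delivered amount.
    """
    net = eto_mm_day * kc * kr * interval_days * 4.2  # net demand, m3/fed
    gross = net / ea                                  # correct for efficiency
    return gross * (1 + lr_frac)                      # add leaching fraction
```

For instance, with ETo = 5 mm/day, Kc = Kr = 1.0 and a 7-day interval, the sketch yields roughly 180 m³/fed per irrigation, which is on the same order as the seasonal totals reported in the text once summed over the season's irrigation events.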
Foliar applications of the different concentrations of melatonin (0.0, 2.5, 5.0 and 7.5 mM) were carried out twice at a rate of 200 L/fed; plants were sprayed 30 and 45 days after sowing. Plant samples were taken 60 days after sowing for measurement of growth characters and some biochemical parameters. The growth parameters were shoot length (cm), shoot fresh and dry weight (g), root length (cm), and root fresh and dry weight (g). The chemical analyses measured were photosynthetic pigments, total phenol contents and some antioxidant enzymes, namely polyphenol oxidase (PPO), peroxidase (POX), catalase (CAT) and superoxide dismutase (SOD). Plant samples were dried in an electric oven with a drift fan at 70 °C for 48 h to constant dry weight for determination of total soluble sugars (TSS), free amino acids and proline contents. Flax plants were pulled when signs of full maturity appeared, then left on the ground to dry completely. Capsules were removed carefully. At harvest, plant height (cm), fruiting zone length (cm), number of fruiting branches/plant, number of capsules/plant, seed yield/plant (g), biological yield/plant (g) and 1000-seed weight (g) were recorded on random samples of ten guarded plants in each plot. Seed yield (kg/fed), straw yield (kg/fed), biological yield (kg/fed) and oil yield (kg/fed) were also studied. Chemical analysis: Photosynthetic pigment contents (chlorophylls a and b and carotenoids) in fresh leaves were estimated using the method of Lichtenthaler and Buschmann (2001). Total phenol content was measured as described by Danil and George (1972). Total soluble sugars (TSS) were extracted by the method of Homme et al. (1992) and analyzed using a Spekol spectrocolourimeter (VEB Carl Zeiss) (Yemm and Willis 1954). Free amino acids were extracted according to Vartanian et al. (1992) and estimated according to Yemm and Cocking (1955). Proline was extracted as a free amino acid and assayed according to Bates et al. 
(1973). The enzyme extraction method was that of MuKherjee and Choudhuri (1983). Polyphenol oxidase (PPO, EC 1.10.3.1) activity was assayed using the method of Kar and Mishra (1976). Peroxidase (POX, EC 1.11.1.7) activity was assayed using the method of Bergmeyer (1974). Catalase (CAT, EC 1.11.1.6) activity was assayed according to the method of Chen et al. (2000). Superoxide dismutase (SOD, EC 1.15.1.1) activity was measured according to the method of Dhindsa et al. (1981). The enzyme activities were calculated according to Kong et al. (1999). Seed oil content was determined using a Soxhlet apparatus and petroleum ether (40-60°C) according to AOAC (1990).

Statistical analysis

The data were statistically analyzed as a completely randomized design under a split-split plot system according to Snedecor and Cochran (1980). Since the trend was similar in both seasons, Bartlett's homogeneity test was applied and the combined analysis of the two seasons was carried out according to the method of Gomez and Gomez (1984). Means were compared using the least significant difference (LSD) at 5%.

Growth parameters

The data presented in Table 2 show the effect of foliar treatment of the two flax varieties with different concentrations of melatonin (0.0, 2.5, 5.0 and 7.5 mM) grown under different water irrigation requirements (WIR; 100%, 75% and 50%) on growth parameters. Drought stress (75% and 50% WIR) gradually and significantly decreased shoot length and shoot fresh and dry weight, while it gradually and significantly increased root length and root fresh and dry weight, relative to plants irrigated with 100% WIR (control plants) in both varieties. It is clear that the Letwania-9 variety was more tolerant to drought stress than Sakha-2 under the two drought stress levels (75% and 50%).
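The mean-separation step described above, the least significant difference at the 5% level, can be sketched as follows. The error mean square, replicate count and degrees of freedom below are illustrative assumptions, not values from the paper.

```python
import math

# Sketch of the LSD comparison used in the statistical analysis:
# LSD(5%) = t(0.975, df_error) * sqrt(2 * MSE / r), where MSE is the error
# mean square from the ANOVA and r the number of replicates (3 in this trial).
# All numeric inputs below are illustrative assumptions, not paper values.

def lsd_5pct(mse, replicates, t_crit):
    """Least significant difference at the 5% level."""
    return t_crit * math.sqrt(2.0 * mse / replicates)

def significantly_different(mean_a, mean_b, lsd):
    """Two treatment means differ at 5% if their gap exceeds the LSD."""
    return abs(mean_a - mean_b) > lsd

if __name__ == "__main__":
    T_CRIT = 2.179  # two-tailed t at alpha = 0.05 for an assumed 12 error df
    lsd = lsd_5pct(mse=0.80, replicates=3, t_crit=T_CRIT)
    print(f"LSD(5%) = {lsd:.3f}")
    print(significantly_different(10.5, 8.2, lsd))  # gap of 2.3 vs the LSD
```

Any two treatment means whose difference exceeds this LSD value are declared significantly different at the 5% level.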
The 75% irrigation water requirement caused decreases of 10.48%, 8.90% and 30.71% in shoot length, fresh weight and dry weight of the Letwania-9 variety, while the corresponding decreases were 15.54%, 16.53% and 24.18% in Sakha-2, compared with plants irrigated with the 100% irrigation water requirement. On the other hand, foliar treatment of the two tested flax varieties with the different melatonin concentrations (2.5, 5.0 and 7.5 mM) increased the above-mentioned growth parameters (shoot length, fresh and dry weight) and caused further increases in root length and root fresh and dry weight relative to the untreated controls, whether at the normal WIR (100%) or under the drought-stressed WIRs (75% and 50%). The 5.0 mM melatonin foliar treatment was the most effective concentration, exceeding the other two (2.5 and 7.5 mM), as it caused the highest increases in most of the studied parameters (Table 2).

Photosynthetic pigments

Irrigation of the two flax varieties (Letwania-9 and Sakha-2) with the lower water irrigation requirements (75% and 50%) caused significant and gradual decreases in all photosynthetic pigment components (chlorophylls a and b, carotenoids and, consequently, total photosynthetic pigments) relative to the control plants irrigated with 100% WIR (Fig. 1). On the other hand, melatonin foliar treatment at the different concentrations (2.5, 5.0 and 7.5 mM) improved the photosynthetic pigments of the two flax varieties under both normal and stressed conditions compared with untreated plants. The 5.0 mM treatment was the most effective, causing the highest increases in all photosynthetic pigment components of both flax varieties under the different water irrigation requirements.
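The relative decreases quoted above (e.g. the 10.48% drop in shoot length under 75% WIR) are plain percent changes against the 100% WIR control. A minimal sketch, with invented trait values for illustration:

```python
# Percent decrease of a stressed-treatment mean relative to the 100% WIR
# control, as used for the figures quoted in the text. The example trait
# values below are invented for illustration, not taken from the paper.

def percent_decrease(control, stressed):
    """Return the decrease under stress as a percentage of the control."""
    return (control - stressed) / control * 100.0

if __name__ == "__main__":
    shoot_length_control = 62.0  # hypothetical mean at 100% WIR (cm)
    shoot_length_75pct = 55.5    # hypothetical mean at 75% WIR (cm)
    drop = percent_decrease(shoot_length_control, shoot_length_75pct)
    print(f"shoot length decrease under 75% WIR: {drop:.2f}%")
```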
Changes in phenolics

Subjecting the flax plants (Letwania-9 and Sakha-2 varieties) to the reduced water irrigation requirements (75% and 50% WIR) caused significant and gradual increases in the phenolic contents of both varieties relative to their control plants (100% WIR) (Fig. 2). Melatonin foliar treatment at the different concentrations (2.5, 5.0 and 7.5 mM) likewise caused gradual increases in the phenolic contents of both flax varieties compared with the corresponding untreated controls (Fig. 2). It is clear that 5.0 mM was the most effective concentration, causing the highest increases in phenolics under the different WIRs in both tested varieties (Letwania-9 and Sakha-2).

Changes in some osmoprotectants

The changes in some osmoprotectants, namely total soluble sugars (TSS, %), proline and free amino acid contents, of the two flax varieties Letwania-9 and Sakha-2 in response to foliar treatment with the different melatonin concentrations (2.5, 5.0 and 7.5 mM) under the different water irrigation requirements (100%, 75% and 50%) are presented in Table 3. Decreasing the WIR to 75% and 50% gradually and markedly increased TSS, proline and free amino acid contents in the two flax varieties compared with plants at 100% WIR. Moreover, the different melatonin concentrations (2.5, 5.0 and 7.5 mM) caused marked increases in the studied osmoprotectants (TSS, proline and free amino acids) in both varieties compared with the corresponding untreated controls, under normal irrigation (100%) as well as stressed conditions (75% and 50%). The 5.0 mM treatment was the most effective at increasing the different osmoprotectant contents of the flax varieties (Table 3).

Changes in antioxidant enzyme activities

The antioxidant enzyme data presented in Fig.
3a-d show that exposing the two flax varieties to drought stress (by decreasing the water irrigation requirements to 75% and 50%) significantly increased the activities of the tested enzymes, superoxide dismutase (Fig. 3a, SOD), catalase (Fig. 3b, CAT), peroxidase (Fig. 3c, POX) and polyphenol oxidase (Fig. 3d, PPO), compared with plants irrigated with 100% WIR (control plants). Moreover, the different melatonin concentrations (2.5, 5.0 and 7.5 mM) caused further significant increases in the studied enzymes (Fig. 3a-d) compared with the untreated control plants under the corresponding WIRs (100%, 75% and 50%). The highest enzyme activities were obtained with foliar treatment with 5.0 mM melatonin under the different WIRs in both tested varieties, Letwania-9 and Sakha-2, compared with the other two melatonin concentrations (2.5 and 7.5 mM).

Yield and yield components

Melatonin treatment also improved yield and yield components under the normal irrigation water requirement (100%), as well as under the reduced water irrigation requirements (75% and 50%). The data show the superiority of the Letwania-9 variety over Sakha-2 in yield and yield components.

Discussion

Drought stress is one of the environmental stresses responsible for decreases in plant growth and productivity. In this investigation, growth parameters were significantly decreased in the two flax varieties (Letwania-9 and Sakha-2) under drought (decreasing WIR), as shown in Table 2. In harmony with our drought-stress results, Dawood and Sadak (2014), Sadak (2016a), Elewa et al. (2017) and Ezzo et al. (2018) stated that different growth criteria of canola, wheat, quinoa and moringa plants decreased under drought stress, and they attributed these decreases to disorders induced by drought and the generation of reactive oxygen species (ROS). The decreases in plant height might be due to decreases in cell elongation, cell turgor, cell volume and, eventually, cell growth (Banon et al. 2006).
Moreover, drought affects plant-water relations, decreases shoot water content, causes osmotic stress, and inhibits cell expansion and cell division as well as the growth of the plant as a whole (Alam et al. 2014). Melatonin treatment alleviated these inhibitions, an effect attributed to the action of melatonin as a growth regulator that can improve the growth of various plants and protect against abiotic stress (Li et al. 2012). In addition, melatonin can act as a potential modulator of plant growth and development in a dose-dependent manner (Gao et al. 2018). Photosynthesis is the physico-chemical process which uses light energy to drive the biosynthesis of different organic compounds and, consequently, plant production (Ye et al. 2016). Drought reduced the photosynthetic pigments of the two studied flax varieties (Fig. 1). These data are congruent with those obtained earlier on canola (Dawood and Sadak 2014), fenugreek (Sadak 2016b), quinoa (Elewa et al. 2017) and Moldavian balm (Kabiri et al. 2018; Ezzo et al. 2018). These decreases might have resulted from photo-oxidation of the pigments, causing oxidative damage to the photosynthetic system and leading to a reduction in photosynthetic carbon assimilation (Din et al. 2011; Pandey et al. 2012). Moreover, a principal reason for the decrease in photosynthetic rate is the limited diffusion of ambient CO2 to the site of carboxylation, induced by the stomatal closure that results from water stress (Liu et al. 2013). Melatonin treatment, by contrast, has been reported to improve the ultrastructure of chloroplasts under drought stress. In addition, melatonin treatment played an important role in the preservation of chlorophyll and the promotion of photosynthesis through the antioxidant enzyme activities (as in Fig. 3a-d) and antioxidant contents, thereby inhibiting the production of reactive oxygen species (Ezzo et al. 2018).
Many authors have attributed the promotive effect of melatonin to its interaction with other plant growth regulators, such as kinetin and ABA, on leaf senescence (Arnao and Hernandez-Ruiz 2017). Under various environmental stresses such as drought, plants have developed different physiological and biochemical mechanisms to adapt to or tolerate the stress. Figure 2 shows that drought stress and/or melatonin treatments enhanced the accumulation of phenolic content. Similar results were obtained under abiotic stress in different plant species by El-Awadi et al. (2017a) and Ezzo et al. (2018). These increases might be due to drought stress inducing disturbances in various metabolic processes, leading to increased synthesis of phenolic compounds (Keutgen and Pawelzik 2009). Indeed, different abiotic stresses such as drought induce the accretion of reactive oxygen species (ROS), and this is generally coupled with changes in net carbon gain which may strongly affect the biosynthesis of carbon-based secondary compounds, particularly leaf polyphenols (Radi et al. 2013). Moreover, the promotive role of melatonin on phenolic contents could result from its signaling function, inducing various metabolic pathways and stimulating the production of various substances, preferentially operating under stress (Tan et al. 2012). The accumulation of soluble carbohydrates in plants has been widely reported as a response to drought stress, despite a significant decrease in the net CO2 assimilation rate. The increased levels of TSS in response to different abiotic stresses were confirmed earlier by Elewa et al. (2017) on quinoa and Ezzo et al. (2018) on moringa.
Soluble carbohydrates could act as scavengers of ROS and contribute to increased membrane stabilization. The increased levels of TSS might help in turgor maintenance and the stabilization of cellular membranes (Hosseini et al. 2014). The results of the present work show that melatonin treatments decreased the harmful effect of drought stress on the two flax varieties and increased their drought tolerance. Melatonin is a free-radical scavenger and broad-spectrum antioxidant that might directly eliminate ROS produced under stressful conditions. In the present work, drought stress caused marked increases in proline and free amino acids; moreover, melatonin treatment caused further increases in proline and free amino acid contents (Table 3). Osmotic adjustment in plants subjected to drought stress occurs through the accumulation of high concentrations of osmotically active compounds known as compatible solutes, such as proline, glycinebetaine, soluble sugars, free amino acids and polyamines (Abd Elhamid et al. 2016). Earlier studies by Elewa et al. (2017) on quinoa and Ezzo et al. (2018) on moringa agree with our results. They revealed that osmoprotectants (TSS, proline and free amino acids) play an important role in the adaptation of cells to various adverse environmental conditions by raising the osmotic pressure of the cytoplasm, stabilizing proteins and membranes, and maintaining the relatively high water content obligatory for plant growth and cellular functions. Proline accumulation is considered an indicator in several plant species under drought stress, acting as an osmotic protectant and contributing to the turgor maintenance of cells (Elewa et al. 2017). Furthermore, the increases in proline content could be attributed to a decrease in proline oxidase activity under drought conditions (Bakry et al. 2012).
Free amino acid accumulation associated with stress may actually be part of an adaptive process contributing to osmotic adjustment (Sadak et al. 2010). Drought stress caused significant increases in the different enzymes of the two flax varieties (Fig. 3). These increases could be considered an indicator of increased ROS production and the build-up of a protective mechanism to reduce the oxidative damage triggered by the stress experienced by the plants, as mentioned by Abdelgawad et al. (2014), El-Awadi et al. (2017a), Kabiri et al. (2018) and Ezzo et al. (2018). Antioxidative enzymes are key elements in these defense mechanisms. The higher levels of enzyme activity in flax plants under water deficit may reflect their high resistance. NADP+ regeneration and CO2 fixation in the Calvin cycle decrease under drought stress, causing damage to cell membranes due to the increase in free radicals. Adverse environmental stresses increase catalase activity across several physiological processes; under stress conditions, the accompanying higher content of ROS (especially H2O2) is detoxified by catalase (Dat et al. 2000). Superoxide dismutase (SOD) is the first defense enzyme: it converts superoxide to H2O2, which can then be scavenged by catalase (CAT) and different classes of peroxidases (POX). Shi et al. (2007) confirmed the essential role of antioxidant systems in plant tolerance of various environmental stresses, especially in tolerant cultivars, which had higher activities of ROS-scavenging enzymes than susceptible ones. Melatonin and some of its metabolites are considered endogenous free-radical scavengers and antioxidants that can directly scavenge ROS such as H2O2. Moreover, one of the main functions of melatonin, along with the activities of SOD and CAT, may be to preserve intracellular H2O2 concentrations at steady-state levels. Li et al.
(2017) showed that melatonin, a potent long-distance signal, may be translocated from the treated leaves or roots of a plant to distant untreated tissues via the vascular bundles, leading to systemic induction of tolerance to different abiotic stresses. Moreover, studies on how melatonin interacts with stress-signaling mechanisms have identified a complex relationship with ROS. These studies concluded that melatonin is a broad-spectrum direct antioxidant which can scavenge ROS with high efficiency, and detailed knowledge of melatonin chemistry and its molecular interactions with ROS and with strong oxidants has been documented. In addition, melatonin treatments modulate the antioxidant enzymes both by up-regulating transcript levels and by increasing activity levels (Zhang et al. 2014a). Improving plant antioxidant systems has been considered the primary function of melatonin in plant stress tolerance. Zhao et al. (2011) proposed that melatonin protected Rhodiola crenulata cells against oxidative stress during cryopreservation by increasing SOD and CAT activities. Plant responses to water stress include changes in growth parameters and biochemistry that lead first to acclimation and later, as the water stress becomes more severe, to damage and the loss of plant parts (Chaves et al. 2003). Water stress reduced the yield and yield components of the flax varieties (Table 4a, b). Similar results were obtained by Dawood and Sadak (2014) on canola, Abd Elhamid et al. (2016) on fenugreek and Elewa et al. (2017) on quinoa. Water deficits affect plants in different ways: slowly developing water deficits decrease growth by slowing the rates of cell division and expansion due to loss of turgor (Lawlor and Cornic 2002), and/or the osmotic effect of water stress causes disturbances in the water balance of the stressed flax plants, leading to decreases in photosynthetic pigments (Fig. 1) and, consequently, a retarded growth rate (Table 2).
Regarding the melatonin effect, similar results were obtained by Li et al. (2012), Janas and Posmyk (2013), Zhang et al. (2014b) and Sadak (2016a, 2016b). The increases in growth characters caused by the different melatonin concentrations might be due to the role of melatonin in alleviating growth inhibition, thus enabling plants to maintain a robust root system and improve photosynthetic capacity (Posmyk and Janas 2009), and hence to increase yield and yield attributes.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Rethinking the Enlightenment, or thinking the Enlightenment for the first time?

In his famous comment on Kant's Was ist Aufklärung?, Foucault considers that the debate 'for' or 'against' the Enlightenment has no meaning as such, and calls for a new space of inquiry that would take into account our own determination, as subjects, by the Enlightenment, making it the object of a new history, still to be written. Although this short text has been quoted over and over, it is still a sort of empty programme that does not overcome the antinomies of modern rationality. I would like to draw on one of Foucault's most suggestive remarks, albeit in some ways enigmatic: '[m]any things in our experience convince us that the historical event of the Enlightenment did not make us mature adults, and we have not reached that stage yet.' Starting from this statement, I would like to delineate the possibilities of a New Enlightenment, that would not be the first one made better, or rendered adequate to its original project (as an extended rationality or as a reflexive normativity, for instance), but that would take really seriously the potential reflexivity encrypted in the Aufklärung, redefining the legitimate use of reason and the fair distribution of knowledge in a 'post-rationalist' age. In order to contribute to the collective reflection, I will use my own ongoing research on two different, but not unrelated topics: the question of the public as it appears in the new 'cultural public sphere' and the sociological analysis of the rationalisation process.
Jean-Louis Fabiani

How can the history of the Enlightenment be rewritten?

The social and cultural history of the Enlightenment is still unfinished. Two major books, Jonathan Israel's provocative work on radical Enlightenment (2001) and Antoine Lilti's innovative analysis of the salons in eighteenth-century intellectual life (2005), have triggered new lines of debate.
Let's go back to the main argument of the symposium: something went wrong because the very idea of 'cosmopolis' involved a gross oversimplification of the notion of the natural and social order. Three principles illustrate its shortcomings: the taste for homologies, which is a way of popularising the mathesis universalis; the imperialism of unilinearity, so obvious in the variable-based social sciences, one of the strongest legacies of the Enlightenment; and finally the hypostasis of abstract universality, which appears most of the time now as local visions mistaken for universal statements. Perhaps the most convincing element in the list is the idea of universalism as localism in disguise. We, the French people, know this too well, as colonisation French style was the direct and explicit consequence of a desire to bring French universalism to the world. The 'Parti coloniste', which advocated the conquest of the 'primitive' regions of the world in the name of the civilising duty of the Republic, was crowded with enlightened and secularised intellectuals who believed that their claims were directly derived from the 'Lumières'. Jules Ferry, the Minister of Public Instruction, who brought in free and mandatory primary school education, was one of the main theoreticians of the colonising process, seeing it as part of the same project as that of educational development in France. The identification of the colonising process with the civilising process was a major feature of the Third Republic and contributes to an explanation of the difficulties that France faced at the time of decolonisation. In the same way, the very local and peculiar Révolution française had become the symbol of modernity and emancipation in a constructed mythology that lasted a very long time, until François Furet and Mona Ozouf deconstructed it. The promises of the Enlightenment were not homogeneous across the various national cultures and they were drastically reshaped in different styles by the rise
of the nation states in the nineteenth century. However, something was common to all the local receptions of the Aufklärung: the overlapping of a political order and an epistemological order. Politics can be reduced to epistemology, and epistemology is a political endeavour. Louis Althusser's theory epitomises the overlap. When he claimed that, as the Greeks had discovered the continent of mathematics and Galileo the continent of physics, Karl Marx had discovered the continent of history, he gave a sort of naïve definition of the political/epistemological mix that has been one of the main outcomes of the reception of the Enlightenment since the mid-nineteenth century. Politics (and the social order as a whole) can be 'scienticised'. Thus, the revolutionary professional may be equated with the great scientist, not only in Stalin's caricature as a major linguist, but in Louis Althusser's and Alain Badiou's definition of Lenin as the hero of a true scientific revolution. You might argue that it has nothing to do with an Enlightenment wrongly understood by its inheritors, but with a totally different set of intellectual phenomena, for example, the historical growth of a proletarian intelligentsia. I am just pointing out the issue of the nexus between knowledge and politics that is at the core of the very idea of Aufklärung, as Foucault noticed. Can we rethink that nexus? Jonathan Israel has recently come up with a very exciting but questionable 'rethinking' of the Enlightenment. He has reloaded it with radicalism and subversion. His work is an explicit attack on the still dominant neo-Kantian interpretation of the Aufklärung, mainly popularised by Jürgen Habermas and focused on the critical paradigm and on the rise of the public sphere. Israel has put Spinoza's legacy at the forefront, as a new matrix suitable for rethinking the Enlightenment. After Israel's reloading, it would no longer be the making of a bourgeois order, which it is even when it is not read with Kantian glasses,
but a clandestine movement and a model (or a secret prototype) for later radicalism. Here again, we are provided with a new universalism in its own right. Although it is not so obviously plagued by the ideological illusion that universalises a 'local' phenomenon, the rise of the European bourgeois order, Israel's thesis hypostatises the 'radical view' of the world, which is almost as Eurocentric as the Habermasian public sphere. Israel's reinterpretation leads us to a radical, materialistic and democratic definition of the Enlightenment. Obviously, he leaves aside big chunks of the intellectual history of the phenomenon: Voltaire, the Scots and many others do not fit nicely into the picture. I consider Israel a symptom of the flourishing academic leftist intelligentsia in the first years of the new century. There is a return to the grand narrative: a local phenomenon, Spinoza's reception in Europe, is mistaken for a global and unitary explanatory factor of a hugely diverse movement. Israel's obsessively political reading of the intellectual field looks quite commonplace for a senior analyst of radical thought: deterministic arguments, unilinear reasoning and a plea for the multitudes against the bourgeois sphere of the contract go along with a refusal of uncertainty, ambivalence and multilinear approaches that should be considered as playing a central role in the social sciences. Antoine Lilti (2009) has brilliantly shown that Israel was doing a very traditional history of philosophy, strictly limited to the reading of texts and not interested in their circulation, selective appropriations, misreadings and misunderstandings, which are central in what I have called the social life of concepts (Fabiani 2010).

[Figure: Lecture de Molière dans un salon by Jean-François de Troy, 18th century.]
Contrasting this unilinear interpretation, Lilti has shown the irreducible plurality of the philosophy of the Enlightenment and its theoretical eclecticism, due to the varieties of local social settings and the diversity of sources. At some point, I would be close to writing of the philosophies of the Enlightenment, as the object is more the emergence of a controversial space than the construction of a cohesive doctrine. The radical Enlightenment is a contradiction in terms, or at least an anachronism.

Linearity, discipline, public

Is it meaningful to go back to Foucault to make further comments on the necessity of 'rethinking' the Enlightenment? In my introduction I quoted his famous sentence on the fact that the Enlightenment has not reached a point of maturity yet. By discarding the criticisms against the tyrannies of rationalism that have become common parlance in the twentieth century, Foucault invites us to contribute to the archaeology of a 'moment', or of an 'event', or even of a sequence of events, that led to the autonomisation of reason, not as a stage that could be delimited by a beginning and an end, but as an ongoing process that goes well beyond the historical circumstances of its emergence. I would claim that we should disentangle the obviously 'local' elements of the process, which identify the Aufklärung with a very narrow European time and space, from the epistemological consequences of the process. As Foucault, after Kant, reminds us, the autonomy of reason does not imply the notion of an absolute reason, nor does it imply the universalisation of local principles. Autonomising reason implies that we know its limits and its terms of use. Criticisms of old-fashioned rationalism (Bruno Latour is very good at rationalism bashing) are in fact addressed to an ideology that mixes up the Enlightenment idea of the sovereignty of reason and the imperialist idea of a sovereign European power over the rest of the world. It is still possible to delocalise the very
notion of critique and to use it in a non-imperialistic way. Of course, as Judith Revel (2007) has reminded us, different national traditions within Europe have taken up the issue in quite different ways. In Germany, Hegelianism imposed a quite stable agenda for philosophical (and later sociological) research as a historical reflection on society. In France, after Auguste Comte, the issue was centred on epistemology: philosophers and scientists took up the issues of the boundaries between science and non-science and between knowledge and belief. Foucault did not go further in his analysis. In both cases, he tried to identify the ethos of modernity that is still our ethos to a large extent: it is centred on the present (what is the novelty brought about by today as compared with yesterday?). Foucault, like Israel, is strongly anti-Habermasian, but for totally different reasons. According to Foucault, Habermas is desperately looking for an ideal linguistic community that unites critical reason and the social project. He thinks that this is not the point. The question Was ist Aufklärung? is not about our belonging to a universal community, but about our belonging to the present, to what he calls a 'certain us', always related to a cultural configuration defined by its own present and not by a tradition. I would like to follow Foucault, at least to some extent, by using a non-Foucauldian path. If we want to give a tentative answer to Yehuda Elkana's initial impulse when he asked: 'What went wrong with the Enlightenment?'
we cannot dream of going back to an original project that would have gone off track, since such a project is a kind of post-factum anachronism. Israel's attempt shows us the shortcomings brought on by too cohesive a view of the process. I would rather take seriously the potential reflexivity encrypted in the Aufklärung and redefine the legitimate use of reason by extending it to new territories. One of the main issues is undoubtedly the fair distribution of knowledge in a so-called post-rationalist age. The democratisation of knowledge is not an issue of the past. Rationalism and contractualism, which seem to be the backbone of an enlightened social project, have recently been challenged in two ways:

• The first is epitomised by Bruno Latour's critique of the 'national rationalism' that plagues, among other things, the French Republic. Rationalism is not dead. It just has to be differentiated from unilinear and deterministic thinking.

• The second is, among a large crowd of radical thinkers, Toni Negri's anti-contractualist theme of the multitudes, or of the privilege given to the common against the public. The idea of a social contract is not dead: it is still possible to raise the question of the common good in terms of the public. We just have to redefine the public along the lines of the democratisation and de-commodification of knowledge.

I have identified three areas where the sociologist could play an active role in improving the explanatory style of the social sciences and in redefining, in post-Habermasian terms, the issue of the public.
1) The first area is mainly epistemological. It aims at improving the explanatory tools of sociology, a discipline deeply divided between analysis and interpretation. Using Andrew Abbott's Time Matters (2001) as a point of departure, I suggest that we objectify the reasons for the reification of causal analysis and the domination of 'fixed entities' to give room to an 'eventful sociology', as Bill Sewell Jr puts it, that would put an end to the decontextualisation of action (Sewell 2005: 81-123). This goes against the mainstream in sociology, either quantitative or qualitative. Getting rid of the unilinear patterns of causation is very often considered as an act of murder against sociology as a professionalised discipline.

2) The second area is related to the organisation of knowledge. One of the most tangible consequences of the Enlightenment is the rise of the universities, organised around disciplinary boundaries. Are these boundaries still efficient? Can we propose alternative models that would not be an attempt to deregulate knowledge and to diminish the social weight of learned communities? Would the social model of the enlightened conversation be of some use in a democratic age? Are social networks and electronic exchanges a way of constructing a new cultural public sphere?
3) Contemporary 'high' culture is still the reserve of a social elite. Many sociologists and experts in the humanities have made very pessimistic statements about the imminent death of learned cultures. This is the paradox of contemporary cultural institutions. In some cases that I have analysed in ethnographic as well as in quantified ways, the public can be turned into a participant that constructs a collective entity: it has nothing to do with a multitude, but can be described as an ephemeral community that can develop contractual and reflexive links (Fabiani 2008). These links can be documented and allow us to describe a cultural public sphere in statu nascendi. Thus, cultural institutions are not mere surviving features of a dying bourgeois order, but the promise of a new social contract.

Jean-Louis Fabiani received his PhD in Sociology in 1980. He has wide teaching experience in various institutions in France and elsewhere. He is currently a member of the Raymond Aron Centre. Since September 2008, he has been Senior Professor of Sociology and Social Anthropology at Central European University in Budapest. His latest book, Qu'est-ce qu'un philosophe français?, was published in October 2010 in France. Fabiani is currently working on a collaborative project focusing on the history of French rationalism, with special attention to the fate of Marxism in the country of Descartes, and plans to resume fieldwork in the sociology of the environment, one of his early interests. He edited the first book devoted to the topic in France in 1987 and published the results of his fieldwork on the ecology of restoration in the late 1990s. He plans to work on citizens' responses to climate change in Europe and Latin America. Email: fabianij(at)ceu.hu.

(Figure caption: A critical debate in Avignon.)
Embryogenesis and tadpole description of Hyperolius castaneus Ahl, 1931 and H. jackie Dehling, 2012 (Anura, Hyperoliidae) from montane bog pools Abstract Tadpoles of Hyperolius castaneus and Hyperolius jackie were found in the Nyungwe National Park in Rwanda and adjacent areas. Tadpoles of both species were identified by DNA-barcoding. At the shore of a bog pool three clutches of Hyperolius castaneus of apparently different age, all laid on moss pads (Polytrichum commune, Isotachis aubertii) or grass tussocks (Andropogon shirensis) 2–5 cm above the water level, were found. One clutch of Hyperolius castaneus was infested by larval dipterid flies. The most recently laid clutch contained about 20 eggs within a broad egg-jelly envelope. The eggs were attached to single blades of a tussock and distributed over a vertical distance of 8 cm. A pair of Hyperolius castaneus found in axillary amplexus was transported in a plastic container to the lab for observation. The pair deposited a total of 57 eggs (15 eggs attached to the upper wall of the transport container, 42 eggs floated in the water). Embryogenesis of the clutch was monitored in the plastic container at 20 ± 2 °C (air temperature) and documented by photos until Gosner Stage 25. The description of the tadpole of Hyperolius castaneus is based on a Gosner Stage 29 individual from a series of 57 tadpoles (Gosner stages 25–41). The description of the tadpole of Hyperolius jackie is based on a Gosner Stage 32 individual from a series of 43 tadpoles (Gosner stages 25–41). Egg laying behavior and embryogenesis are unknown for Hyperolius jackie. The labial tooth row formula for both species is 1/3(1) with a narrow median gap of the tooth row. Variation in external morphology was observed in size and labial tooth row formula within the species. 
With the tadpole descriptions of Hyperolius castaneus and Hyperolius jackie, 36 tadpoles of the 135 known Hyperolius species have been described, including five of the eleven Hyperolius species known from Rwanda. Introduction The reed frog genus Hyperolius currently comprises 135 species (Frost 2015). Taxonomy of this genus is known to be complicated (e.g., Ahl 1931, Schiøtz 1975, Lötters et al. 2004, Rödel et al. 2010) because of high intraspecific variability, high interspecific morphological similarity, and sympatric distributions (e.g., Channing et al. 2013, Liedtke et al. 2014). Not surprisingly, the tadpoles of only 34 (24.8%) Hyperolius species have been described to date (Viertel et al. 2007, Channing et al. 2012), a serious drawback for a reliable assessment of the presence of species in remote regions where adults are not easily caught (e.g. Greenbaum et al. 2013). During our recent field work in Rwanda, we focussed on the estimation of Hyperolius diversity, specifically in the Nyungwe National Park (about 970 km² cloud forest, Plumptre et al. 2003; for a map see Dehling 2012: page 60, figure 4). Despite a century of taxonomic studies (Ahl 1931, Hinkel and Fischer 1990, 1995, Fischer and Hinkel 1992, Hinkel 1996, Sinsch et al. 2011, Dehling 2012), diversity of the cloud forest Hyperolius from that area is not yet clear. The checklist of Hinkel (1996) mentions H. adolfifriederici Ahl, 1931, H. alticola Ahl, 1931, H. castaneus Ahl, 1931, H. discodactylus Ahl, 1931, H. raveni Ahl, 1931 francoisi Laurent, 1951, several of which are now considered junior synonyms (Frost 2015). Our current view integrating morphological, bioacoustics and molecular data gives credit to the presence of only four species in the Nyungwe National Park: H. castaneus, H. discodactylus, H. frontalis Laurent, 1950 and the recently described H. jackie Dehling, 2012 (Sinsch et al. 2011, Dehling 2012, Greenbaum et al. 2013, Liedtke et al. 2014).
Analysing habitat preferences and distribution of these four species within the cloud forest and the adjacent areas, now deforested and in agricultural use, would be easier if encountered tadpoles could be assigned to either taxon. Yet, none of the tadpoles are currently described (Channing et al. 2012). Consequently, we surveyed lentic water bodies for Hyperolius tadpoles of these four species at all localities where we previously detected the presence of either species by collection of specimens or based on advertisement calls (Sinsch et al. 2011, Dehling 2012, Greenbaum et al. 2013, Liedtke et al. 2014). This survey yielded a large number of tadpoles which we identified as those of H. castaneus and H. jackie by DNA-barcoding. Herein we describe the morphological features of the tadpoles and provide new information on the egg-laying behavior of H. castaneus and embryogenesis in their terrestrial clutches. Study areas and field surveys Presence of larval and adult individuals of Hyperolius castaneus and H. jackie was monitored in the Nyungwe National Park, Rwanda (Sinsch et al. 2011, Dehling 2012) and adjacent areas used for agriculture (Table 1). Daytime surveys (9.00-17.00) for tadpoles and nightly records (18.00-21.00) of calling males were conducted in March 2009, March and April 2011, and in March 2012. Hyperolius castaneus egg laying behavior was studied in the Uwasenkoko swamp. Tadpoles of H. castaneus were collected at the same site and additionally in the Karamba swamp together with those of H. jackie (Table 1). Additional tadpole specimens were collected from multiple localities in the Albertine Rift in Democratic Republic of Congo and Uganda. Museum acronyms are: UTEP = University of Texas at El Paso, ZFMK = Zoologisches Forschungsmuseum Alexander Koenig, Bonn (Appendix I). Larval characters The format of the tadpole description follows that of Viertel et al. (2007) but excludes description of oral cavities. Tadpoles were preserved in 5-10% formalin.
Body measurements follow the primary landmarks defined by McDiarmid and Altig (1999: see figure 3.1 on page 26 for a tadpole drawing with the defined primary landmarks). In our descriptions, we use the terminology of Altig (1970) and McDiarmid and Altig (1999), with the labial tooth row formula (LTRF) written as a fraction in line, with rows carrying median gaps given in parentheses; P1 = first posterior tooth row. Ecomorphological types for larvae follow McDiarmid and Altig (1999) and Orton (1953). Tadpoles were staged according to Gosner (1960). Preserved tadpoles were observed on tiny glass beads (1 mm) shallowly covered with water to allow proper positioning. Most measurements were taken to the nearest 0.1 mm using a stereomicroscope. Recorded measurements include: body length (distance from the tip of the snout to the body terminus, which is the junction of the posterior body wall with the tail axis); tail length (distance from the body terminus to the absolute tip of the tail); total length (sum of body length and tail length); body width (measured at the widest point right behind the eyes); body height (at level of eye); eye diameter; interorbital distance (measured between the centers of the pupils); internarial distance (measured between the centers of the nostrils, indicated by reduced pigmentation when closed); distance between tip of snout and naris (from the center of the naris to the middle of the snout); distance between naris and eye (from the center of the nostril to the anterior edge of the eye); spiracle length (medially to opening); spiracle tube width (at level of opening); and oral disc width (at middle between outer marginal papillae). Drawings of tadpoles were done with a camera lucida attached to a microscope. Descriptions of coloration in life are based on photos taken by JMD shortly after collection in the field.
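The LTRF notation described above ("1/3(1)": one anterior row, three posterior rows, a median gap in P1) can be parsed mechanically. The helper below is a hypothetical illustration, not part of the original methods; for simplicity it ignores gap annotations on the anterior rows.

```python
import re

def parse_ltrf(formula):
    """Parse a labial tooth row formula such as '1/3(1)'.

    Returns (anterior_rows, posterior_rows, gapped_posterior_rows), where the
    parenthesised numbers mark posterior rows carrying a median gap. Gap
    annotations on the anterior side (e.g. '1(1)/3') are ignored in this sketch.
    """
    anterior, posterior = formula.split("/")
    n_anterior = int(re.match(r"\d+", anterior.strip()).group())
    m = re.fullmatch(r"(\d+)(?:\((.*)\))?", posterior.strip())
    n_posterior = int(m.group(1))
    gaps = tuple(int(g) for g in m.group(2).split(",")) if m.group(2) else ()
    return n_anterior, n_posterior, gaps
```

For example, `parse_ltrf("1/3(1)")` yields `(1, 3, (1,))`, and the variant `"1/3(1, 3)"` reported below yields `(1, 3, (1, 3))`.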
DNA sampling and barcoding We isolated DNA from the tail tip of the tadpole morphotypes collected at the Karamba and Uwasenkoko localities (Table 1). DNA was used to sequence a fragment of the 16S mitochondrial rRNA gene, a suggested universal marker to barcode amphibians for species allocation (Vences et al. 2005). Protocols of DNA extraction, PCR, purification, and sequencing follow Dehling and Sinsch (2013) and Greenbaum et al. (2013). The obtained sequences were compared with our own sequences from adult frog specimens collected in southwestern Rwanda and are deposited in GenBank (Table 2). Editing and alignment were completed in MEGA5 (Tamura et al. 2011). Sequences were trimmed to the same length. The final alignment consisted of 548 base pairs. Calculations of pairwise distances and phylogenetic analysis (Maximum Likelihood) were carried out in MEGA5. A Maximum Likelihood analysis was run with 1000 bootstrap replicates using the GTR + G + I model and the Nearest-Neighbor-Interchange, as proposed by jModelTest 2 (Darriba et al. 2012) using the Akaike information criterion. Distribution and habitat preferences of Hyperolius spp. in the Nyungwe region Based on call surveys and collection of adult specimens, H. castaneus populations were detected at seven localities, five inside the Nyungwe National Park and two outside (Table 1). They occurred in sympatry with H. discodactylus, H. jackie, Leptopelis karissimbensis Ahl, 1929, L. cf. kivuensis 2 (sensu Portillo et al. 2015), Phrynobatrachus acutirostris Nieden, 1912 "1913", P. cf. versicolor Ahl, 1924, Xenopus wittei Tinsley, Kobel & Fischberg, 1979 and an undetermined species of Amietia Dubois, 1987 "1986". Hyperolius castaneus tadpoles shared the same lentic water bodies with those of H. jackie, Leptopelis karissimbensis and L. cf. kivuensis 2 (Fig. 1).
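The species allocation described above rests on uncorrected p-distances between aligned tadpole and adult 16S sequences (computed in MEGA5 by the authors). Purely as an illustration of the metric itself, not of the authors' pipeline, the distance reduces to a few lines:

```python
def p_distance(seq1, seq2):
    """Uncorrected p-distance between two aligned sequences of equal length:
    the fraction of compared sites that differ. Sites where either sequence
    has an alignment gap ('-') are excluded from the comparison."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to the same length")
    pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a != "-" and b != "-"]
    differences = sum(a != b for a, b in pairs)
    return differences / len(pairs)
```

Identical sequences give a distance of 0.0, as reported below for the tadpole/adult matches; for example, `p_distance("ACGT-A", "ACGA-A")` compares five ungapped sites with one difference, i.e. 0.2.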
Hyperolius jackie populations are currently known only from the type locality (a natural pond at Karamba, Nyungwe National Park) and a stream at the west end of the Nyungwe National Park (Table 1). Adults were found in sympatry with H. castaneus, H. discodactylus, Leptopelis karissimbensis and Xenopus wittei, and tadpoles syntopically with those of H. castaneus and L. cf. kivuensis 2. Hyperolius discodactylus tadpoles were found syntopically with tadpoles of Phrynobatrachus acutirostris in a slow-flowing stream passing through the Uwasenkoko swamp. Males of Hyperolius castaneus and H. jackie were observed vocalizing from shrubs and sedges bordering forest swamps. Hyperolius castaneus also called from the ground in moist swamp areas. While H. jackie never started vocalizing before dusk, H. castaneus gave advertisement calls throughout the day, but more frequently at night. Bog pools close to calling sites and containing tadpoles had a pH of 5.5-6.0 and a water depth varying from a few centimetres to a maximum of 35 cm (Fig. 1). Egg-laying behavior and embryogenesis of H. castaneus The natural history observations reported here were made on 22 March 2012 between 13:00 and 16:00 hrs, at a small breeding pond forming part of the Uwasenkoko swamp (2379 m a.s.l.; Fig. 1B). During an initial survey of a 25 m² area, we located two males advertising at the ground and an unpaired female, all individuals staying 3-8 m apart from each other. At the shore of the pond we detected three clutches of different ages, laid on moss pads and grass tussocks 2-5 cm above the water level (Fig. 2). The first clutch was placed on a moss pad (Polytrichum commune, Isotachis aubertii) and consisted only of the gelatinous remains of the egg envelopes (Fig. 2A). Based on the duration of embryogenesis (see below), we estimate that this clutch was at least seven days old. The second clutch was found upon depressed blades of mainly Andropogon shirensis (Fig.
2B) and had a similar consistency to the first one. However, with the exception of three undeveloped eggs, it contained a large number of undetermined insect larvae, probably of parasitic dipterid flies. The third clutch was recently laid, with about 20 eggs within a broad egg-jelly envelope. The eggs were attached to single blades of an Andropogon shirensis tussock and distributed over a vertical distance of 8 cm (Fig. 2C). We conclude that a reproductive burst of several pairs had occurred 1-2 weeks prior to the survey, but that the reproduction period is prolonged, with little synchronisation among the several hundred local H. castaneus adults. During the same survey we observed a pair in axillary amplexus on shore close to the open water surface (Fig. 3A). The male did not call, and during the next two hours the pair moved occasionally along the shoreline. As the pair did not oviposit during this period, they were transferred into a small plastic container (5 cm diameter, 12 cm height, containing water to a height of 4 cm) and transported to the laboratory in Butare at 1643 m a.s.l. Reaching the laboratory two hours later, we found that the pair had laid 15 eggs attached to the upper wall of the transport container and another 42 eggs floating in the water (Fig. 3B). Eggs were deposited one by one, using the egg-jelly envelope as glue for attachment to the wall and among single eggs. The pair, which had already finished amplexus, was removed from the box. Embryogenesis of the clutch was monitored in the same transport container at 20 ± 2 °C, but at a significantly higher air temperature compared to the native Uwasenkoko locality, where daily fluctuations between 5 and 19 °C occur. Six hours after oviposition the first eggs of the upper egg mass showed signs of cleavage (Gosner Stage 2; Fig. 4A). The egg envelope was not swollen by moisture uptake, and each single egg remained distinguishable. After 48 h most eggs were in a stage of gastrulation (Gosner stages 10-13).
After 5 d the most advanced embryos had reached Gosner Stage 19 (Fig. 4B), and after 6 d embryos reached Gosner Stage 22 and the egg envelopes had fused into a single swollen gelatinous mass (Fig. 4C). Between 6 and 7 d following oviposition the egg-jelly became more fluid and the late embryos and early tadpoles of Gosner stages 24-25 started moving within the egg mass. At the end of day 7 the most advanced tadpoles had moved downwards within the egg-jelly, reaching the water level and beginning their free-swimming tadpole stage (Figs 4D, 5). In general, embryonic development of the 15 eggs was slightly asynchronous, and two eggs did not seem to be fertilized (Fig. 5). In contrast, eggs deposited in water failed to develop further than Gosner Stage 10. DNA-barcoding of tadpoles DNA-sequences of representative specimens of the three morphologically distinct tadpole types collected in the Karamba pond and of the two tadpole types collected in the Uwasenkoko swamp were unequivocally associated (uncorrected p-distance 0.0% between tadpole and corresponding adult sequence) with adult sequences of H. castaneus, H. jackie, Leptopelis karissimbensis, and L. cf. kivuensis 2 (Fig. 6). Tadpole of Hyperolius castaneus Ahl, 1931 The following description is based on a Gosner Stage 29 individual from the Uwasenkoko swamp, Rwanda (Figs 7A, B, ZFMK 97190, selected from a series of 52 tadpoles, Gosner stages 25-38, ZFMK 97191, and a series of 5 tadpoles, Gosner stages 34-41, ZFMK 97192 from Karamba, Figs 8-10). Exotrophous lentic benthic Type IV tadpole with the following measurements (mm): total length 24.0, body length 9.0, tail length 15.0, body width 4.7, body height 3.6, eye diameter 1.0, interorbital distance 4.0, internarial distance 2.7, snout-naris-distance 1.9, distance-naris-eye 1.6, spiracle length 1.7, spiracle width 1.0, distance-snout-spiracle 6.4, tail muscle height at its beginning 2.4, tail muscle height at tail mid-length 1.8, greatest tail height 4.0, oral disc width 2.3.
In dorsal view the body is elongated and ovoid and is widest at the level of the spiracle opening. The snout is rounded both in lateral and dorsal views. The interorbital distance is about twice the snout-naris distance, and internarial distance is 68% of interorbital distance. The eyes are positioned laterally, directed dorsolater- ally, and are not visible in ventral view. The external nares are nearly round (slightly elongated horizontally), very small, and positioned laterally. They are more closely positioned to the eyes than to the snout (naris-eye-distance to snout-naris-distance 84%). In lateral view the body is highest at the mid-body length (approximately at the level of the spiracle opening). The body height is 40% of the body length, the body width is about half (52%) the length of the body, and the body height is 77% of the body width. The spiracle is single, sinistral, and attached to the body wall. Its shape is cylindrical and its length is about twice (170%) the eye diameter. The spiracle opening is rounded, directed posteriorly, and located at mid-body with its upper margin below the lower margin of the eye in lateral view. The length of the tail represents 63% of the total length. The tail is highest at about mid-tail and represents about a quarter (27%) of the tail length. The greatest tail height is located at the anterior quarter of the tail. The greatest tail height is less than half (44%) of the body length, and slightly larger (111%) than the body height. The dorsal fin does not extend onto the body. Dorsal and ventral fins are about equal in height throughout their length. The tip of the tail is narrowly pointed and rounded. The height of the tail musculature at mid-body is about half (45%) of the maximum tail height. The vent tube is dextral, short, posteriorly directed, and linked to the tail musculature.
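The percentages quoted in this description are simple ratios of the raw measurements listed above; as a quick arithmetic cross-check (values in mm for the Gosner Stage 29 specimen):

```python
# Raw measurements (mm) of the Gosner Stage 29 H. castaneus specimen
total_length, body_length, tail_length = 24.0, 9.0, 15.0
body_width, body_height = 4.7, 3.6
eye_diameter, spiracle_length = 1.0, 1.7
interorbital, internarial = 4.0, 2.7
snout_naris, naris_eye = 1.9, 1.6

ratios = {
    "body height / body length":  body_height / body_length,      # 0.40
    "body width / body length":   body_width / body_length,       # ~0.52
    "body height / body width":   body_height / body_width,       # ~0.77
    "tail length / total length": tail_length / total_length,     # 0.625 ("63%")
    "internarial / interorbital": internarial / interorbital,     # ~0.68
    "naris-eye / snout-naris":    naris_eye / snout_naris,        # ~0.84
    "spiracle / eye diameter":    spiracle_length / eye_diameter, # 1.70
}
for name, value in ratios.items():
    print(f"{name}: {value:.0%}")
```

The same arithmetic applies to the H. jackie description below; only the raw measurements change.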
The oral disc (Figs 7B, 8) is anteroventral, not emarginated, about half (49%) of the body width, and bordered at its lateral and posterior margin by a row of short and round papillae. Few submarginal papillae are present laterally and below the third lower tooth row. The LTRF is 1/3(1) with a narrow median gap in P1. The first two tooth rows are about equal in length, occupying nearly the entire width of the oral disc; the third tooth row is slightly shorter, and the shortest is the most posterior one. Jaw sheaths are finely serrated. The upper jaw sheath is inversely U-shaped and the lower V-shaped and narrower. The variation in external morphology of the larval series is limited to size (Table 3) and LTRF. Fourteen tadpoles differ from the above described LTRF: seven tadpoles had a LTRF of 1/3(1, 3), three of 1/3(1, 2), two of 1/3(1, 2, 3), and one of 1(1)/3.

Table 3. Measurements (mm) of 57 larvae of Hyperolius castaneus. Mean followed by one standard deviation, and range in parentheses for sample sizes larger than 2.

In preservative the larvae are entirely pale grayish brown to tan. The body is darker dorsally compared to the translucent venter. Tail musculature is tan and the fins are translucent, both bearing dark gray melanophores in various degrees. The coloration in life (Figs 9, 10) of the body was dorsally tan with minute brownish-orange spots and translucent whitish on the venter. The tail musculature was greenish tan and the fins were translucent tan with irregular dark marbling. Black spots and flecks were scattered dorsally and laterally on the body, tail musculature and dorsal fin. The ventral fin has fewer black spots and flecks or none at all. Younger stages (e.g., Gosner Stage 25, Fig. 10A) are paler compared to older stages (e.g., Gosner Stage 38, Fig. 10B). The series from Uwasenkoko was overall darker (e.g., Gosner Stage 38, Fig. 10B) compared to the series from Karamba (e.g., Gosner Stage 37, Fig.
9), possibly reflecting phenotypic plasticity. From stage 38 on in both series, distinct tan or whitish yellow dorsolateral stripes are present on each side, extending from the snout to the end of the body. The iris was brownish orange with a few dark gray reticulations. Tadpole of Hyperolius jackie Dehling, 2012 The following description is based on a Gosner Stage 32 individual from the Karamba swamp (Fig. 11, ZFMK 97193, from a series of 43 tadpoles, Gosner stages 25-41, ZFMK 97194). Exotrophous lentic benthic Type IV tadpole with the following measurements (mm): total length 31.5, body length 9.5, tail length 22.0, body width 5.2, body height 3.4, eye diameter 1.2, interorbital distance 4.8, internarial distance 3.0, distance-snout-naris 1.5, distance-naris-eye 1.6, spiracle length 1.9, spiracle width 0.6, distance-snout-spiracle 7.2, tail muscle height at its beginning 3.3, tail muscle height at tail mid-length 2.8, greatest tail height 6.8, oral disc width 1.6. In dorsal view the body is elongated and ovoid and is widest just posterior to the eye. The snout is rounded both in lateral and dorsal views. The interorbital distance is about three times the snout-naris distance, and the internarial distance is 62.5% of the interorbital distance. The eyes are positioned laterally, directed dorsolaterally, and are slightly visible in ventral view. The external nares are ovoid (slightly elongated horizontally), very small, and positioned laterally. They are positioned nearly midway between the eyes and the snout (naris-eye distance to snout-naris distance 106.6%). In lateral view the body is highest at the mid-body length (approximately at the level of the spiracle opening). The body height is 36% of the body length, the body width is about half (55%) the length of the body, and the body height is 65% of the body width. The spiracle is single, sinistral, and attached to the body wall.
Its shape is cylindrical and its length is 158% of the eye diameter. The spiracle opening is rounded, directed posteriorly, and located at mid-body with its upper margin reaching the level of the lower margin of the eye in lateral view. The length of the tail represents 70% of the total length. The tail is highest at about mid-tail and represents 31% of the tail length. The greatest tail height is 72% of the body length, and twice the body height. The dorsal fin does not extend onto the body. The dorsal fin is slightly higher than the ventral fin for about two thirds of the anterior tail length. The dorsal and ventral fins are of equal height for the posterior third of the tail. The tip of the tail is pointed and rounded. The height of the tail musculature at mid-body is slightly less than half (41%) of the maximum tail height. The vent tube is dextral, short, posteriorly directed, and linked to the tail musculature. The oral disc (Figs 11B, 12) is anteroventral, not emarginated, 31% of the body width, and bordered at its lateral and posterior margin by a row of short and round papillae. Few submarginal papillae are present laterally and below the third lower tooth row. The LTRF is 1/3(1) with a narrow median gap in P1. The first two tooth rows are about equal in length, occupying nearly the entire width of the oral disc, the third tooth row is slightly shorter, and the shortest is the most posterior one. Jaw sheaths are finely serrated. The upper jaw sheath is inversely U-shaped and the lower V-shaped and narrower. In preservative the larvae are entirely pale grayish brown to tan. The body is darker dorsally compared to the translucent venter. The tail musculature is tan and the fins are translucent, both bearing dark gray melanophores in various degrees. The coloration in life (Figs 13, 14) of the body was tan dorsally with minute brownish-orange and grayish-green spots and translucent whitish ventrally. 
The tail musculature was greenish tan and the fins were translucent tan with irregular dark marbling. Dark gray spots and flecks were scattered dorsally and laterally on the body, tail musculature and dorsal fin. Younger stages (Fig. 13A) are paler in overall coloration pattern compared to older Gosner stages (e.g., Gosner Stage 30, Fig. 13B). Individuals greatly differ in the amount of gray spots and flecks. Some have few gray spots and flecks scattered on the body and tail (Fig. 13C), whereas others have either numerous spots or flecks (Fig. 14C) or a tail tip that can be nearly uniformly black (Fig. 13D). From Gosner stage 38 on, distinct tan or whitish yellow dorsolateral stripes are present on each side, extending from the snout to the end of the body. The iris was brownish orange with a few dark gray reticulations. Differential diagnosis of bog pool tadpoles In the Nyungwe National Park, Hyperolius castaneus and H. jackie tadpoles may co-occur and share the same pool with Leptopelis karissimbensis or L. cf. kivuensis 2. The tadpole of L. karissimbensis has been described in detail before (Roelke et al. 2009), and that of the morphologically similar L. kivuensis briefly in Channing et al. (2012). Duméril & Bibron, 1841) are currently known to occur in Rwanda (Dehling 2012, unpubl. data, Sinsch et al. 2011, 2012). Viertel et al. (2007) were the first to describe oral disc and buccal cavity morphology in Hyperolius tadpoles and their value for taxonomy. Applying scanning electron microscopy, Viertel et al. (2007) noted inter- and intraspecific differences in the types of labial teeth as well as interspecific differences in the buccal cavity. However, such methodology is relatively expensive and time intensive. Regarding external morphology, proportions, coloration and LTRF, Hyperolius tadpoles are very similar with only minor differences, which makes species identification unreliable, especially in areas with high species diversity, syntopic distributions, or areas that have not been surveyed.
This is the case for both H. castaneus and H. jackie larvae, which only differ externally in their size (H. jackie larvae are larger). We therefore consider DNA barcoding the most reliable method for identification of larval Hyperolius, which was already noted by Viertel et al. (2007). Dipteran predation on arboreal frog eggs in Africa was first described by Vonesh and Ross (2000) for four species of Hyperolius from Uganda. An infestation rate of 40% was recorded within the 1261 observed clutches of Hyperolius lateralis, H. cinnamomeoventris, H. platyceps (Boulenger, 1900), and H. kivuensis. Larvae of ephydrid and phorid flies feed on frog ova and cause high embryonic mortality, and the surviving tadpoles hatch at a smaller size (Vonesh and Ross 2000, Vonesh 2005). Our observation of an infestation of an egg mass by larval dipterid flies in H. castaneus is, to our knowledge, the first record for this species. With continuing fieldwork in Rwanda and other African countries, we are confident that the knowledge on reproduction, embryogenesis and species diversity of Hyperolius will increase.
Association between regular physical exercise and depressive symptoms mediated through social support and resilience in Japanese company workers: a cross-sectional study Background Regular physical exercise has been reported to reduce depressive symptoms. Several lines of evidence suggest that physical exercise may prevent depression by promoting social support or resilience, which is the ability to adapt to challenging life conditions. The aim of this study was to compare depressive symptoms, social support, and resilience between Japanese company workers who engaged in regular physical exercise and workers who did not exercise regularly. We also investigated whether regular physical exercise has an indirect association with depressive symptoms through social support and resilience. Methods Participants were 715 Japanese employees at six worksites. Depressive symptoms were assessed with the Center for Epidemiologic Studies Depression (CES-D) scale, social support with the short version of the Social Support Questionnaire (SSQ), and resilience with the 14-item Resilience Scale (RS-14). A self-report questionnaire, which was extracted from the Japanese version of the Health-Promoting Lifestyle Profile, was used to assess whether participants engage in regular physical exercise, defined as more than 20 min, three or more times per week. The group differences in CES-D, SSQ, and RS-14 scores were investigated by using analysis of covariance (ANCOVA). Mediation analysis was conducted by using Preacher and Hayes’ bootstrap script to assess whether regular physical exercise is associated with depressive symptoms indirectly through resilience and social support. 
Results The SSQ Number score (F = 4.82, p = 0.03), SSQ Satisfaction score (F = 6.68, p = 0.01), and RS-14 score (F = 6.01, p = 0.01) were significantly higher in the group with regular physical exercise (n = 83) than in the group without regular physical exercise (n = 632) after adjusting for age, education, marital status, and job status. The difference in CES-D score was not significant (F = 2.90, p = 0.09). Bootstrapping revealed significant negative indirect associations between physical exercise and CES-D score through the SSQ Number score (bias-corrected and accelerated confidence interval (BCACI) = −0.61 to −0.035; 95 % confidence interval (CI)), SSQ Satisfaction score (BCACI = −0.92 to −0.18; 95 % CI), and RS-14 score (BCACI = −1.89 to −0.094; 95 % CI). Conclusion Although we did not find a significant direct association between exercise and depressive symptoms, exercise may be indirectly associated with depressive symptoms through social support and resilience. Further investigation is warranted. Background Depressive symptoms are common in the workplace and can result in outcomes such as suicide, impaired job performance [1], long absences due to sickness [2], and the need to pay disability pensions [3]. Depressive symptoms therefore represent a substantial economic burden to society [4,5]. Preventing the development of depressive symptoms in the workplace is therefore of great importance for both employees and employers, as well as for society as a whole. Depressive symptoms in the workplace have been associated with psychosocial factors, such as poor social support and job strain, defined as high demands and low decision latitude in the workplace [6]. Accumulated evidence has shown that moderate-intensity regular physical exercise has beneficial effects on depressive symptoms, as well as on diseases such as type 2 diabetes and coronary heart disease.
A meta-analysis revealed that exercise has moderate beneficial effects on depressive disorders [7], and several studies have suggested that exercise can reduce the risk of depressive symptoms in the workplace [8,9]. Guidelines for the treatment of depressive disorders developed by the Japanese Society of Mood Disorders recommend exercise three or more times per week for mild depressive disorders [10], although the precise dose of physical exercise needed to treat depression remains elusive. Both biological factors and psychosocial factors have been proposed as possible mechanisms for the beneficial effect of regular physical exercise on depression. Social support is an important preventive factor for depressive symptoms [11][12][13]. Physical exercise is often undertaken in a social environment, leading to the 'social interaction' hypothesis [14]. For example, contact with the person supervising the exercise in interventional trials of exercise may have provided social support, resulting in improvement of depressive symptoms [15][16][17]. As discussed in systematic reviews of the effects of physical exercise interventions on depressive symptoms, a number of studies did not control for variables such as social support, although participants were required to exercise under supervision or in group situations [7,14,16]. Thus, physical exercise may prevent depression by promoting social support. Resilience, which is defined as a dynamic process and the ability to adapt to challenging life conditions [18][19][20], is key to adapting to the daily psychological burden in the workplace and to preventing the development of depressive symptoms. Resilience has been negatively associated with depressive symptoms, and positively associated with emotional regulation [21,22]. 
Compared with persons with low resilience scores, persons with high resilience scores were reported to have more positive emotions even in stressful situations [22] and to have more emotional flexibility in response to a rapidly changing stressful psychological task [23]. Resilience is also associated with quick recovery from cardiovascular arousal [22]. Exercise has been shown to have effects similar to those of resilience. It is well recognized that physical exercise has a beneficial effect on positive mood [24]. Childs and de Wit demonstrated that those who reported exercising at least once per week also reported a lesser decline in positive affect after an emotional stress task than those who did not report physical exercise [25]. A meta-analysis demonstrated a positive effect of acute aerobic exercise on stress-related blood pressure responses [26]. Furthermore, exercise increases brain-derived neurotrophic factor, which protects neurons in regions of the brain such as the striatum and hippocampus in stressful situations [27,28]. Zschucke et al. demonstrated that physical exercise activated the hippocampus, inactivated the prefrontal cortex, and reduced the cortisol response to an emotional task. Physical exercise might thus enhance resilience by regulating the hypothalamic-pituitary-adrenal axis to buffer the effect of daily stress [29]. Physical exercise may therefore prevent depression by promoting resilience. To the best of our knowledge, however, no studies have investigated the association of regular physical exercise and resilience by using a validated resilience scale. The aim of this study was to investigate differences in depressive symptoms, social support, and resilience between a group of Japanese company workers who engaged in regular physical exercise and a group of workers who did not, and to determine whether regular physical exercise has an indirect association with depressive symptoms through social support and resilience.
Participants We conducted this research at six workplaces, located in the Kanto area, of a company that agreed to cooperate. We briefed the occupational health staff at the six workplaces, and they explained the details of the research to the company workers in face-to-face interviews. Participants were provided with a written explanation of the research, a consent form, and the self-report questionnaires by the company's occupational health staff. Workers who agreed to participate in this study provided consent by returning the consent form and questionnaires by postal mail. This study was conducted by using a database that was collected in a previous study [20,30]. Of the 15,071 workers at six separate worksites of a large company located in an urban area of Japan, 2159 workers (13.4 %) were approached. Among them, 741 (34.3 %) agreed to participate in the study. We excluded 26 participants with missing responses to items related to the subscales used, leaving 715 participants for analysis in this study. The workers who did not participate did not differ significantly from the participants in terms of age or sex. Measures Demographic information on sex, marital status, educational attainment, and job status was collected by self-report. Assessment of depressive symptoms The Center for Epidemiologic Studies Depression (CES-D) questionnaire was administered to assess depressive symptoms. The CES-D is a self-report questionnaire consisting of 20 items, and the scores are summed to yield a total score between 0 and 60, with a higher score indicating more severe depressive symptoms. This scale is one of the most widely used scales to assess depressive symptoms in the past week [31]. The reliability and validity of the Japanese version have been verified [32]. Assessment of social support The short version of the Social Support Questionnaire (SSQ) was administered to assess social support. The short version of the SSQ consists of six items with 12 questions [33]. Each item has two parts.
The first part assesses the number of others to whom the individual feels he or she can turn in times of need in various situations. The second part measures the individual's degree of satisfaction with the perceived support available in that particular situation. Responses are rated on a 6-point Likert scale (1 = "very dissatisfied"; 6 = "very satisfied"). Two scores are obtained: the SSQ Number score for the perceived number of social supports, and the SSQ Satisfaction score for satisfaction with the social support that is available. The scores for each participant were calculated by averaging the scores of all items. Sarason et al. developed the SSQ as a reliable, valid, and convenient index of social support [34]. The Japanese version of the SSQ has been verified to be reliable and valid [35]. Assessment of resilience The 14-item Resilience Scale (RS-14) was administered to assess resilience. The RS-14 is an abbreviated version of the Resilience Scale (RS), which is a self-report questionnaire consisting of 25 items that measure the degree of individual resilience [18]. Each item is rated on a 7-point Likert scale (total score range, 14-98), with a higher score indicating more resilience [18]. The RS was developed through a qualitative study of people who had experienced a recent loss (e.g., of a spouse, health, or employment) and had adapted successfully [18,[36][37][38][39][40]. The RS was recommended as an excellent and widely used scale to assess psychological resilience in a review by Ahern [41]. The RS-14 strongly correlates with the RS. The reliability and validity of the Japanese version have been verified [42]. Assessment of frequency of physical exercise To evaluate physical exercise habits, we extracted a single item from the Japanese version of the Health-Promoting Lifestyle Profile [43]. Physical exercise was assessed with a frequency question: "The next question is about your physical exercise habits.
In the last six months, how often did you do relatively hard exercise for more than 20 min, such as jogging or running, cycling, aerobics, and stepping exercise?" Four response options were given for each question: 1) never, 2) 1-3 times a month, 3) 1-2 times a week, and 4) 3 or more times per week. Statistical analysis All of the analyses were performed using SPSS, version 23 (SPSS Inc., Chicago). Alpha levels were all set at p < 0.05. We divided the participants into two groups based on their frequency of relatively hard exercise: those exercising more than 20 min, three or more times per week, were defined as the regular physical exercise group, and all others were defined as the group without regular exercise. We also dichotomized the participants by demographic data, as follows: marital status into whether married or not, educational attainment into whether graduated from college or university or not, and job status into whether in a management position or not. Age was compared between the two groups with Student's t test. Differences in the categorical variables of marital status, educational attainment, and job status were analyzed with chi-square tests or Fisher's exact test. The group differences in CES-D score, SSQ Number score, SSQ Satisfaction score, and RS-14 score were compared between the groups with and without regular physical exercise after adjusting for age, sex, marital status, educational attainment, and job status by using analysis of covariance (ANCOVA). Additionally, to investigate indirect associations between regular physical exercise and depressive symptoms through social support and resilience, we conducted a mediation analysis using the statistical analysis framework defined by Baron and Kenny [44], as follows. First, a regression analysis was conducted to evaluate the c path (Fig. 1), in which CES-D score was the dependent variable and regular physical exercise was the independent variable.
Second, regression analysis was conducted to evaluate the a_n paths; each mediator variable (n = 1: SSQ Number score; n = 2: SSQ Satisfaction score; n = 3: RS-14 score) was entered as a dependent variable, and regular physical exercise was the independent variable. Third, regression analysis was conducted to evaluate the b_n paths and c′ path, with CES-D score as the dependent variable and each mediator variable as an independent variable. Next, the sizes of the indirect associations between regular physical exercise and the CES-D score through the SSQ Number score (a_1 × b_1), SSQ Satisfaction score (a_2 × b_2), and RS-14 score (a_3 × b_3) were estimated, using a bias-corrected bootstrapping method [45] with 5000 replications, and bootstrap 95 % confidence intervals (CIs) were obtained. The mediation model and any indirect associations were assessed by using Preacher and Hayes' bootstrap script for SPSS [45], which can handle nonparametric data. The CES-D score was the dependent variable; regular physical exercise was entered as the independent variable; the RS-14, SSQ Number, and SSQ Satisfaction scores were entered as mediator variables; and age, sex, marital status, educational attainment, and job status were entered as control variables. When the bootstrap 95 % CI did not include zero, the indirect association was taken to be significant, equivalent to testing for significance at the 0.05 level. Demographics All 715 participants were Japanese. Other demographic characteristics, and mean scores on the CES-D, SSQ, and RS-14 instruments, are shown in Table 1. In a univariate analysis of background variables and regular physical exercise, only low educational attainment was significantly associated with regularly engaging in physical exercise (p < 0.01; Table 2).
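The mediation procedure described above — the a_n and b_n path regressions, the indirect effects a_n × b_n, and a bootstrap over resampled participants — was run in SPSS with Preacher and Hayes' script; the same logic can be sketched in Python on synthetic data. Everything here (variable names, simulated effect sizes, a single control covariate, and a plain percentile CI in place of the bias-corrected one) is illustrative, not the authors' script:

```python
import numpy as np

def slope(x, y, controls):
    """OLS coefficient of y on x, adjusting for control columns."""
    X = np.column_stack([np.ones(len(y)), x, controls])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def indirect(x, m, y, controls):
    a = slope(x, m, controls)                        # a path: exercise -> mediator
    b = slope(m, y, np.column_stack([x, controls]))  # b path: mediator -> outcome, controlling exercise
    return a * b

def bootstrap_ci(x, m, y, controls, n_boot=5000, seed=0):
    """Percentile bootstrap CI for the indirect effect (resample participants)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    draws = [indirect(x[i], m[i], y[i], controls[i])
             for i in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.percentile(draws, [2.5, 97.5])

# Synthetic data: exercise raises the mediator, the mediator lowers CES-D.
rng = np.random.default_rng(1)
n = 715
age = rng.normal(40, 10, n)                          # control variable
x = (rng.random(n) < 0.116).astype(float)            # ~11.6 % regular exercisers
m = 0.5 * x + 0.01 * age + rng.normal(0, 1, n)       # mediator (e.g., resilience)
y = -0.8 * m + 0.02 * age + rng.normal(0, 1, n)      # depressive symptoms
controls = age.reshape(-1, 1)

lo, hi = bootstrap_ci(x, m, y, controls)
print(f"indirect effect = {indirect(x, m, y, controls):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Because both endpoints of the CI are negative under this simulation, the indirect association would be judged significant by the same zero-exclusion rule the authors used.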
Regular physical exercise and depressive symptoms, social support, and resilience There was no significant difference in CES-D score between the group with regular physical exercise and the group without regular physical exercise (F = 2.90, p = 0.09; Table 3). The group with regular physical exercise had significantly higher SSQ Number (F = 4.82, p = 0.03), SSQ Satisfaction (F = 6.68, p = 0.01), and RS-14 (F = 6.01, p = 0.01) scores. Fig. 1 Models of associations between exercise and depressive symptoms. a Illustration of a direct association between regular physical exercise and depressive symptoms. Path c represents the total effect of regular physical exercise on the total score of the Center for Epidemiologic Studies Depression (CES-D) scale. b Illustration of an indirect association between regular physical exercise and depressive symptoms (CES-D) mediated by resilience (14-item Resilience Scale, RS-14) and social support (Social Support Questionnaire, SSQ). The paths a_n represent the association between regular physical exercise and each mediator. The paths b_n represent the association between each mediator and depressive symptoms (CES-D). Path c′ is the association between regular physical exercise and depressive symptoms (CES-D), without mediators. Indirect association between regular physical exercise and depressive symptoms through social support and resilience The results of the regression analysis using Preacher and Hayes' bootstrap script revealed significant negative indirect associations between regular physical exercise and CES-D score through the SSQ Number score (BCACI = −0.61 to −0.035), SSQ Satisfaction score (BCACI = −0.92 to −0.18), and RS-14 score (BCACI = −1.89 to −0.094). Discussion We investigated the association between physical exercise and depressive symptoms, social support, and resilience in Japanese workers. The participants in the current study were mainly men who were highly educated and worked for a large Japanese company that provides good job security and a relatively good balance of effort and reward. Only 11.6 % of participants indicated that they engage in regular physical exercise, which we defined as at least 20 min, three or more times per week, as recommended by the guidelines for treating depressive disorders from the Japanese Society of Mood Disorders.
CES-D scores were numerically lower in participants who engaged in regular physical exercise, but this did not reach statistical significance in the ANCOVA analysis. This result does not seem to support previous findings, which demonstrated a benefit of physical exercise on depressive symptoms [7,46,47]. This might be because our participants did not have depressive symptoms severe enough to prevent them from performing the routine duties of their company jobs. Accumulated evidence supports depression as a continuum of disorders, with severity being the only difference between major depression and minor depression [48]. Consistent with our results, a previous randomized, controlled, intervention study of a workplace physical exercise program for white-collar employees with minimal symptoms of depression did not show a statistically significant improvement compared with a control group [9]. Thus, exercise might have more limited effects in individuals with mild depressive symptoms. Another possible explanation for our results is the dose of physical exercise. There have been several studies showing a U-shaped association between physical exercise and depressive symptoms [49][50][51]. The risk of depressive symptoms was found to gradually decrease from no exercise to a high dose of leisure-time exercise (16.5 to <25 metabolic equivalent [MET] hours per week), and then to increase slightly again at a very high dose (above 25.5 MET hours per week) in a cohort study of Japanese company workers [49]. A U-shaped association was also found between vigorous-intensity exercise and depressive symptoms in a cohort study of American Black women, with the greatest risk reduction (18 %) occurring at 3-4 h per week of vigorous exercise [50]. The dose of physical exercise in the present study therefore may not have been enough to alleviate depressive symptoms, or very high doses of exercise in some participants might have attenuated the benefits of exercise on depressive symptoms.
However, the results of the current study suggest that regular exercise might have a benefit on depressive symptoms in the workplace through social support and resilience. The ANCOVA analysis indicated that participants engaging in regular exercise had significantly higher social support and resilience compared with those who did not engage in regular physical exercise. Furthermore, in the mediation analysis, the bootstrap result showed a statistically significant indirect association between depressive symptoms and physical exercise through resilience and social support. Although our results did not meet the statistical framework in which Baron and Kenny [44] defined mediation as occurring if the a_n, b_n, and c paths are significant and the c′ path is not significant, because the c path was not significant in our analysis, some authors have proposed that a significant total effect is not necessary to show mediation if the indirect effect is significant [52,53]. Thus, the findings of this study were not inconsistent with the hypothesis that regular physical exercise attenuates depressive symptoms in part by promoting social support and resilience, but further investigation is warranted. Chou reported a beneficial effect of Tai Chi, a traditional Chinese exercise, on depressive symptoms, but found that the effect disappeared when changes in social support were controlled for, indicating that social support might be partly responsible for the effect of the exercise on depressive symptoms. Many kinds of physical exercise need a supervisor or instructor, some require a partner, and some are played in groups or teams. The improvements in mental health following physical exercise are at least partly related to the mutual support and social relationships that are provided when participating in physical exercise with others [54].
Although there are several lines of evidence linking resilience to regular physical exercise, to the best of our knowledge, this is the first study to investigate the association between regular physical exercise and resilience by using a validated resilience scale. In this study, only 11.6 % of participants engaged in regular physical exercise. It might not be easy for a busy company worker to get into the habit of regular physical exercise. Substantial drop-out rates have been reported in studies of physical exercise interventions [7], and sustaining physical exercise as a fitness habit for the long term is difficult, although it is important for preventing depressive symptoms [55]. Developing the habit of physical exercise itself might reinforce self-esteem because it is a difficult accomplishment; this is one proposed mechanism for the effect of physical exercise on depressive symptoms [56]. Interventional studies for the prevention of depression might also promote resilience even in the absence of a significant change in depressive symptoms themselves. In fact, several approaches that increase resilience, such as well-being therapy, are used to treat depression, not by attenuating and preventing negative symptoms but by promoting positive emotions in order to increase psychological well-being [57][58][59][60]. Our study had several limitations. First, due to the cross-sectional nature of the study design, causal relationships between the factors could not be determined. It is also possible that social support and resilience attenuated the depressive symptoms through regular physical exercise, rather than the effects of exercise being mediated by social support and resilience. However, the findings of the current study do not seem to support such mediation models, because the association between regular physical exercise and depression (the b path in such models) was weaker than the association between resilience and depressive symptoms and did not reach statistical significance.
There are also possible mutual or bidirectional associations among physical activity, social support, and resilience. These associations might be helpful for developing the habit of physical exercise. Second, the response rate was not satisfactory, so we could not exclude the risk of bias. Those who were depressed and sedentary might have been more reluctant to participate in the research than those who were not depressed and were active, such as those engaging in regular physical exercise. These biases may have attenuated the associations among regular exercise, resilience, and depression. The lack of a statistically significant difference in CES-D score between the group with regular physical exercise and the group without regular physical exercise in the current study might be due to this low response rate. Third, the participants were mainly men, they were highly educated, and they worked for a large Japanese company that provides good job security and a relatively good balance of effort and reward. Workers from a single large company may be a select group, and their self-reports may be biased. These characteristics leave open the possibility that the participants are not representative of workers more generally. Further studies should be conducted at the community level or across multiple companies. Fourth, information on the frequency of exercise was self-reported, and nondifferential misclassification may be inevitable and could attenuate the observed associations. Finally, residual confounding by uncontrolled or unmeasured factors may have distorted genuine associations. Conclusion We assessed the association between regular physical exercise, which is recommended by guidelines for maintaining health, and depressive symptoms in Japanese company workers, taking into account social support and resilience. The results suggest that regular physical exercise might not affect depressive symptoms directly, but might attenuate depressive symptoms indirectly through social support and resilience.
In conclusion, the findings of the current study are not inconsistent with regular exercise providing a benefit for reducing depression through social support and resilience, but further investigation is warranted. Abbreviations ANCOVA, analysis of covariance; BCACI, bias-corrected and accelerated confidence interval; CES-D, Center for Epidemiologic Studies Depression scale; CI, confidence interval; RS-14, 14-item Resilience Scale; SSQ, short version of the Social Support Questionnaire
v3-fos-license
2016-05-04T20:20:58.661Z
2015-10-01T00:00:00.000
9578774
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.3201/eid2110.150840", "pdf_hash": "9f6025fe7b8b9ee4140c00bb2a6f5bda8ca6420e", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1693", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "9f6025fe7b8b9ee4140c00bb2a6f5bda8ca6420e", "year": 2015 }
pes2o/s2orc
Utility of Oral Swab Sampling for Ebola Virus Detection in Guinea Pig Model To determine the utility of oral swabs for diagnosing infection with Ebola virus, we used a guinea pig model and obtained daily antemortem and postmortem swab samples. According to quantitative reverse transcription PCR analysis, the diagnostic value was poor for antemortem swab samples but excellent for postmortem samples. Ebola virus (EBOV) causes Ebola virus disease (EVD), which results in a high number of deaths in humans. EBOV is the etiologic agent of the ongoing EVD outbreak in West Africa. Nonadapted EBOV causes disease in nonhuman primates, but adaptation is required for the virus to cause disease in rodent models (1)(2)(3)(4). Fatal disease has been observed in 20% of guinea pigs infected with wildtype (WT) nonadapted EBOV, but a uniformly lethal guinea pig-adapted EBOV isolate was found to have developed after a limited number of serial infection passages in guinea pigs (3,5,6). Real-time quantitative reverse transcription PCR (qRT-PCR) is used to detect EBOV in the current West Africa outbreak. Appropriate sample collection and knowledge of interpreting results on the basis of specimen type are essential for accurate triage of patients thought to have EVD. Oral swab sampling for postmortem EBOV diagnosis has been supported by use of a nonhuman primate model (7), and oral swab sampling for antemortem EVD diagnosis has been a major consideration in the current outbreak because collection of swab samples is less invasive than collection of serum samples and poses a much lower risk of transmitting EBOV to the person obtaining the sample than traditional phlebotomy. However, the utility of oral swabs for antemortem testing has not been investigated in detail under controlled experimental conditions. In addition, Bausch et al. have suggested that the oral milieu, such as saliva composition and oral cavity tissue structure, may potentially inhibit diagnostic capabilities of oral swab sampling (8). Wong et al.
have shown that oral swabbing can be used to detect virus and shedding in guinea pigs at isolated intervals after infection (9). We investigated oral swab sampling as an antemortem means of diagnosing EVD and used qRT-PCR to detect EBOV RNA in daily oral swab samples obtained from guinea pigs infected with guinea pig-adapted EBOV (GP-EBOV) and with WT-EBOV. The Study Procedures and experiments described herein were approved by the Centers for Disease Control and Prevention (CDC) Institutional Animal Care and Use Committee and conducted in strict accordance with the Guide for the Care and Use of Laboratory Animals (10). CDC is a fully accredited research facility of the Association for Assessment and Accreditation of Laboratory Animal Care International. Healthy adult male and female strain 13/N guinea pigs, 1.0-2.5 years of age, were housed in a Biosafety Level 4 laboratory in microisolator cage systems filtered with high-efficiency particulate arrestance filters. Groups of 5 animals, distributed proportionally by age and sex, were inoculated intraperitoneally with virus at a low (5 TCID50, where TCID50 is the 50% tissue culture infectious dose) or a high dose. To serve as negative controls, 3 animals were inoculated intraperitoneally with Dulbecco's Modified Eagle's Medium. Animals were monitored for signs of clinical illness, and body weight and temperature readings were obtained daily. Oral swab samples were collected daily for isolation of RNA and analyzed by qRT-PCR. Postmortem oral swab samples were obtained from 10 animals that were euthanized because of severe clinical illness consistent with EVD. Carcasses of the dead animals were kept in an incubator at 30°C to simulate conditions in equatorial Africa. Samples were obtained from 9 of the 10 animals for up to 5 days after death and from 1 animal at 2 days after death.
In addition to oral swab samples, paired blood samples were collected from the cranial vena cava of anesthetized animals at 3 days postinfection (dpi) and by cardiac puncture at the time of death for euthanized animals. Low and high doses of GP-EBOV-Mayinga were uniformly lethal. Clinical illness was delayed in 1 animal in the high-dose group; the animal was euthanized at 12 dpi, but all other animals were euthanized by 9 dpi. One animal infected with nonadapted WT-EBOV-Mayinga was euthanized. Fever developed in all animals infected with low- and high-dose GP-EBOV-Mayinga, in 20% of animals infected with WT-EBOV-Makona or WT-EBOV-Mayinga, and in none of the negative control animals. Hypothermia, typical during the terminal phases of many disease processes, was observed in animals with end-stage EVD (Figure, panel B). Substantial weight loss (>15%) was observed in all febrile animals (Figure, panel C). The 1 animal infected with WT-EBOV-Makona that showed clinical signs experienced transient fever and weight loss but started to regain weight by 9 dpi. Oral swab samples were analyzed by qRT-PCR targeting the EBOV nucleoprotein gene; 18s ribosomal RNA levels were also analyzed to serve as a sampling control. EBOV RNA abundance was calculated by comparing the cycle threshold values to an in vitro-transcribed small-segment RNA standard of known copy number. All oral swab samples that were collected 0-4 dpi were negative for EBOV nucleoprotein RNA (Table). At 3 dpi, blood samples from 7 (41%) of 17 infected animals from which blood samples could be obtained were positive for EBOV, but no viral RNA was detected in any of the paired oral swab samples. The earliest detection of EBOV RNA by oral swabbing was at 5 dpi in an animal infected with WT-EBOV-Mayinga.
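Quantification of the kind described above — converting a qRT-PCR cycle threshold (Ct) to an RNA copy number by comparison with a standard of known concentration — rests on the log-linear standard curve Ct = slope × log10(copies) + intercept. A minimal sketch in Python; the dilution-series values below are invented for illustration and are not the study's data:

```python
import numpy as np

# Hypothetical dilution series of an in vitro-transcribed RNA standard:
# known copy numbers and their measured Ct values (illustrative only).
std_copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3, 1e2])
std_ct     = np.array([14.1, 17.5, 20.9, 24.3, 27.8, 31.2])

# Fit the standard curve: Ct is approximately linear in log10(copies).
slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)

def copies_from_ct(ct):
    """Invert the standard curve to estimate RNA copies from a sample Ct."""
    return 10 ** ((ct - intercept) / slope)

# PCR efficiency implied by the curve (perfect doubling gives slope ~ -3.32).
efficiency = 10 ** (-1 / slope) - 1

print(f"slope = {slope:.2f}, efficiency = {efficiency:.0%}")
print(f"Ct 25.0 ~ {copies_from_ct(25.0):.3g} copies")
```

A steeper (more negative) slope than about -3.32 indicates sub-100% amplification efficiency, which is why real assays validate the curve before reporting copy numbers.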
At 6 dpi, coinciding with the time of overt clinical signs of disease (i.e., fever, weakness, anorexia, and ruffled fur), qRT-PCR of oral swab samples detected EBOV RNA in 8 (73%) of 11 animals in which fatal illness developed and in 10 (50%) of 20 infected animals. EBOV RNA was detected by qRT-PCR in all postmortem swab samples. Conclusions Our data suggest that oral swab samples obtained early in the course of infection, before death, are not a reliable method for diagnosing infection with EBOV. Paired oral swab and blood samples collected at 3 dpi and at time of euthanasia showed that sensitivity of oral swab samples was low compared with the sensitivity of traditional blood samples. Testing of oral swab samples did not indicate infection until 3 days after EBOV RNA was detectable in blood samples, with the exception of 1 animal in which oral swab samples revealed viral RNA 2 days after the blood sample. At the time of overt clinical disease, the utility of oral swab samples for diagnostics improved but was not completely consistent with infection until postmortem time-points. Our studies also enabled us to investigate whether the virulence of the WT-EBOV-Makona variant in guinea pigs was as low as that of the prototypic WT-EBOV-Mayinga variant. As shown in previous studies (3,5,6), WT-EBOV is less pathogenic than GP-EBOV, regardless of variant, in this animal model. Investigating the utility of oral swab samples for diagnosing EVD in humans is challenging because paired blood and oral swab samples are rarely available and because the timing of sample collection relative to onset of disease and course of infection is often estimated. Although EVD in the nonhuman primate model mimics many aspects of the disease in humans, sampling from nonhuman primates in an experimental setting is problematic because of the species' temperament, which requires anesthesia during specimen collection and venipuncture. 
The guinea pig model of EVD (3,5,6) offers the convenience of daily oral swab sampling without the need for anesthesia. Although suggestive, as with any animal model system, when extrapolating these data to human diagnostics, the effect of potential differences in oral milieus (e.g., saliva composition and oral cavity tissue structure) must be considered. In the future, additional studies that use paired oral swab and blood samples from humans would provide information for continued discussion of antemortem swab sampling as a useful diagnostic modality of EVD in humans. Our data support the use of oral swab samples as a sensitive modality for postmortem diagnostics; however, the utility of oral swab samples under field conditions, especially those collected before death, may decrease because of inherent problems with sampling techniques and specimen handling conditions (i.e., delays in transport and storage at typically high ambient temperatures). Despite these considerations, oral swab sample collection could be a useful sampling strategy for humans and animals with unknown causes of death when EVD is suspected and when other types of samples are more prohibitive to obtain.
v3-fos-license
2023-02-24T16:59:32.586Z
2023-02-21T00:00:00.000
257134337
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1999-4923/15/3/716/pdf?version=1676968321", "pdf_hash": "dfba2d46db256c02d2be7c97e22302d524d0cb5a", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1694", "s2fieldsofstudy": [ "Medicine" ], "sha1": "a6af90a0664cc0d11df2c18c7ec8f5e8ccaeded7", "year": 2023 }
pes2o/s2orc
Oral Administration as a Potential Alternative for the Delivery of Small Extracellular Vesicles Small extracellular vesicles (sEVs) have burst into biomedicine as a natural therapeutic alternative for different diseases. Considered nanocarriers of biological origin, sEVs have been shown in various studies to be feasible for systemic administration, even at repeated doses. However, despite oral delivery being the preferred route of physicians and patients, little is known about the clinical use of sEVs in oral administration. Different reports show that sEVs can resist the degradative conditions of the gastrointestinal tract after oral administration, accumulating regionally in the intestine, where they are absorbed for systemic biodistribution. Notably, observations demonstrate the efficacy of using sEVs as a nanocarrier system for a therapeutic payload to obtain a desired biological (therapeutic) effect. From another perspective, the information to date indicates that food-derived vesicles (FDVs) could be considered future nutraceutical agents since they contain or even overexpress different nutritional compounds of the foods from which they are derived, with potential effects on human health. In this review, we present and critically analyze the current information on the pharmacokinetics and safety profile of sEVs when administered orally. We also address the molecular and cellular mechanisms that promote intestinal absorption and that command the therapeutic effects that have been observed. Finally, we analyze the potential nutraceutical impact that FDVs would have on human health and how their oral use could be an emerging strategy to balance nutrition in people. Introduction The enteral route, including oral administration of drugs, is the preferred delivery method to treat systemic diseases or local gastrointestinal (GI) pathologies due to its minimal invasiveness (pain-free), relatively low cost, and suitability for self-administration [1].
However, these advantages are challenged by acidic conditions in the stomach and degrading conditions in the intestine, which affect the stability, absorption, and bioavailability of various therapeutic molecules, limiting the diversity of therapeutic compounds that can be prescribed orally [2]. Indeed, macromolecules such as proteins, peptides, or nucleic acids as free agents show only slight absorption when administered orally, as they are degraded by GI enzymes, have low stability at acidic pH, and have limited permeation through biological barriers [3]. Likewise, several hydrophilic and lipophilic drugs also have limitations for their oral intake since their absorption is greatly conditioned by their molecular weight. Figure 1. Diagram of the native structure of small extracellular vesicles and the functionalization strategies that can be performed on them to provide them with specific therapeutic properties. Small extracellular vesicles (sEVs) have a structure formed by a membrane composed of a lipid bilayer. Different proteins are expressed in it, which may be common to the vast majority of sEVs (such as tetraspanins for sEVs derived from eukaryotic cells) or specific proteins according to the origin of their parental cell. The core of sEVs is composed of nucleic acids, lipids, proteins, and metabolites. One of the characteristics of sEVs that make them good nanocarriers is that they can be easily modified to endow them with specific therapeutic properties. For example, to acquire a certain therapeutic efficacy, sEVs can be engineered to carry a specific therapeutic payload: drugs, proteins, or different types of nucleic acids (siRNA, miRNA, shRNA). Depending on the molecule's type and the therapeutic function to be triggered, the payload can be incorporated into or anchored to the surface of the sEVs membrane. It can also be loaded into the sEVs core (1).
To provide them with a better safety profile, sEVs can be functionalized to target a specific cell or tissue by incorporating a targeting moiety into their surface membrane (2). This strategy reduces off-target interactions while improving the bioavailability of the therapeutic molecule at the site of interest. Both the strategy of loading therapeutic molecules and the strategy of targeting sEVs to a specific tissue can be performed together in sEVs (3), providing the nanovesicles with better efficacy and safety profiles at the same time (created with http://www.biorender.com (accessed on 16 November 2022)). Orally administered sEVs must survive the harsh degrading conditions of the digestive system, such as moisture, lubricants, mechanical forces, digestive enzymes, emulsifiers, pH neutralizers [2], and the commensal microbiota and their derivates [18], to successfully reach the intestine and deliver their therapeutic payload regionally or be absorbed intact for systemic distribution (Figure 2). The latter is the most challenging since, once the sEVs penetrate the intestinal mucus layer, they must cross the intestinal epithelium to reach the lamina propria and then cross the endothelium of blood vessels for systemic distribution [19].
When synthetic nanoparticles are orally ingested, most are degraded or eliminated, and a small fraction is effectively absorbed [20].
Nonetheless, it is currently unknown whether the same proportion of orally administered sEVs is degraded or eliminated, since sEVs differ in their surface molecules, expressing receptors, peptides, saccharides, and lipids from their biological progenitor. To date, it has been described that different types of nanoparticles can cross the intestinal epithelium using different mechanisms, such as paracellular transport [1]. Paracellular transport consists of the diffusion of particles between cells through the tight junctions that form the intestinal epithelial barrier. However, due to the limited physical dimensions between cells in physiological conditions, only particles ranging between 0.5 and 20 nm should be considered for this mechanism in a relevant proportion [21]. Conversely, a pro-inflammatory context in the intestine disrupts the epithelial barrier, allowing the passage of larger particles, as demonstrated by Tulkens et al. [22]. Thus, based on the reported results, the paracellular transport of orally administered sEVs (>200 nm) should only be considered in inflammatory diseases of the intestine or with treatments that disrupt the epithelial barrier as a side effect. Figure 2. Scheme of the gastrointestinal tract and the physiological factors that influence the absorption of therapeutic molecules. Several physiological barriers in the gastrointestinal (GI) tract challenge drug administration by the oral route. In the GI environment, the presence of factors such as pH, degradative enzymes and salts, motility, and interaction with the microbiota can alter the solubility and stability of drugs, which finally affect their permeability across the mucosal barriers. This figure is based on a schematic drawing and does not fully represent the accurate structural reality of the intestine (created with http://www.biorender.com (accessed on 21 November 2022)).
The second mechanism described by which nanoparticles cross the intestinal layers and reach systemic circulation is transcellular transport by epithelial cells (mainly enterocytes, which constitute 90-95% of the cells in the GI tract) and M-cells (specialized phagocytic cells that represent 1% of the intestinal epithelium) [1]. Transcellular transport consists of the endocytosis of particles at the apical face, intracellular transit, and posterior exocytosis at the basal face [20]. The main challenge for this route of absorption is avoiding the transport and fusion of the intracellular vesicles with lysosomes enriched in degradative enzymes. A unidirectional transport of bacterial-derived extracellular vesicles (bEVs) was demonstrated by epithelial cells in vitro, where a small proportion was "re-secreted" towards the basal face, partially supporting the proof of concept that orally ingested sEVs can be absorbed while avoiding lysosomes [23]. A better understanding of the molecular mechanisms of uptake by these intestinal cells could contribute to engineering an increased interaction with sEVs. Macropinocytosis and caveolin- and clathrin-dependent endocytosis are the principal mechanisms described for particle uptake in enterocytes. In M-cells, the most studied mechanisms are phagocytosis and receptor-mediated endocytosis [20].
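The size reasoning above can be summarized as a small decision rule. The following sketch is purely illustrative (the function and labels are ours, not from the paper), using the thresholds stated in the text: paracellular diffusion is plausible only for ~0.5-20 nm particles under physiological conditions, typical sEVs (~50-200 nm) would require transcellular transport, and a disrupted epithelial barrier reopens the paracellular route for larger particles:

```python
def likely_route(diameter_nm: float, barrier_disrupted: bool = False) -> str:
    """Suggest the intestinal transport route for a particle of a given size.

    Thresholds follow the text: 0.5-20 nm for paracellular diffusion in
    physiological conditions; larger particles need transcellular transport
    unless the epithelial barrier is disrupted (e.g., intestinal inflammation).
    """
    if 0.5 <= diameter_nm <= 20:
        return "paracellular"
    if barrier_disrupted:
        return "paracellular (disrupted barrier)"
    return "transcellular"

print(likely_route(10))                            # small particle: "paracellular"
print(likely_route(150))                           # typical sEV: "transcellular"
print(likely_route(150, barrier_disrupted=True))
```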
Biodistribution, Stability, and Safety of Oral Delivery of Native and Drug-Loaded sEVs Murine studies identifying the biodistribution pattern of orally administered sEVs are few and focus mainly on cow's milk-derived sEVs, although we have found some studies using plant-derived exosome-like particles. These studies show that sEVs/exosome-like particles manage to withstand the hostile environment of the gastrointestinal tract, associated with their transit through acidic conditions in the stomach and degradative conditions in the gut, in various murine models [24][25][26]. Cow's milk-derived sEVs cross the upper gastrointestinal tract and reach the intestine in relatively short times (1-6 h) [27,28]. The absorption of sEVs seems to occur in the gut through mechanisms that are not well understood but that facilitate the entry of sEVs into the systemic circulation and their distribution to other organs, essentially those localized in the abdominal cavity [27][28][29][30][31]. Unlike the "trapping" of sEVs in the organs of the mononuclear phagocytic system (liver, spleen, and lung) after systemic injection of sEVs [17], oral ingestion allows a considerable accumulation of sEVs in the intestine [27,28,30,[32][33][34]. In the other organs of the body, the accumulation of sEVs is lower but notably shows a homogeneous distribution among them. Figure 3 shows the biodistribution pattern in mice after oral and systemic administration. Interestingly, repeated oral administration of cow's milk-derived sEVs in mice seems not to alter the biodistribution pattern observed after a single oral intake of sEVs [28]. Betker et al. [29] and Samuel et al. [28] also show that orally ingested sEVs can migrate and accumulate in xenograft tumors in vivo. Other sources of sEVs studied in similar investigations are those obtained from yeast [28], beer [28], grape [34], acerola [33], ginger [32], garlic [35], tea leaves [36], and mulberry bark [37].
Yeast-, grape-, acerola-, ginger-, garlic-, tea leaf- and mulberry bark-derived sEVs showed a biodistribution pattern like that previously described for cow's milk-derived sEVs, but beer-derived sEVs could not be detected in the mouse's organs [28]. Interestingly, orally ingested ginger-derived exosome-like particles showed a differential biodistribution after 12 h of gavage depending on the feeding condition of the mice: starved mice accumulated exosome-like particles in the stomach and small intestine, whereas non-starved mice accumulated exosome-like particles in the colon [32]. These conditions open a new variable to consider for the pharmacokinetic profile of the oral administration of sEVs, loaded or not with drugs. Whether or not other cellular sources (of prokaryotic or eukaryotic origin) of sEVs can cross the harsh microenvironment of the gastrointestinal tract and replicate the biodistribution pattern described so far is still unknown. Table 1 summarizes the key aspects of the studies performed to determine the biodistribution pattern of sEVs/exosome-like particles administered orally, including the type of sEVs, the cellular origin of the sEVs, the doses of sEVs administered, the time of detection, and the tissue distribution, among other variables of relevance. Figure 3. Comparative diagram of the biodistribution pattern of sEVs administered orally and intravenously. The illustration shows the pattern of biodistribution of sEVs in different mice tissues after oral or intravenous administration. In the body on the left, the tissues and organs where the sEVs would accumulate after intestinal absorption are identified in red. The considerable accumulation of sEVs in the intestine and, to a lesser extent, in the rest of the body's organs stands out. In the body on the right, the organs where sEVs would accumulate after intravenous administration are identified in gray.
A considerable accumulation of sEVs is observed in the organs associated with the mononuclear phagocytic system (liver, spleen, lung), with little reach to other body organs. These data suggest that the biodistribution pattern is defined by the route of administration of the sEVs, a dependency that can be used strategically to reach a specific organ in patients (created with http://www.biorender.com (accessed on 2 February 2023)).
As mentioned, the intestine is the anatomic site where the absorption of sEVs seems to occur after oral gavage in mice, which results in their entry into the bloodstream. Transendocytosis through intestinal epithelial cells [29,38] or paracellular translocation through the epithelial barrier [22] are some proposed mechanisms for this phenomenon. Figure 4 shows the proposed mechanisms of cellular absorption of orally administered sEVs. The authors of [25] determined, through a series of well-established experiments, that cow's milk-derived sEVs loaded with insulin exhibited efficient internalization into the epithelium by multiple active endocytic routes. Since the sEVs derived from milk (a nutrient), the authors also studied the involvement of the nutrient-assimilation pathway. The data showed that the uptake of milk-derived sEVs is mediated by peptide transporters, amino acid transporters, glucose transporters, and the neonatal Fc receptor (FcRn) [25], as was first proposed by Betker et al. [29]. However, the uptake of sEVs was not affected by the Niemann-Pick C1-like 1 (NPC1L1) protein, which mediates the absorption of cholesterol and phytosterols [25]. According to Sriwastva et al. [37], mulberry bark-derived exosome-like particles were predominantly taken up by gut epithelial cells, Paneth cells, and colon tissue. Furthermore, in the spleen and liver, these particles were predominantly present in F4/80+ macrophages.
In this work, mice showed no adverse effects, no significant changes in body weight, skin rashes, or abnormal fecal discharge, and no abnormal effects regarding the morphology of internal organs, the microscopic structure of gut tissue, blood cholesterol, triglycerides, or the liver enzyme alanine transaminase. Due to the complex composition and structure of sEVs, more receptors and transporters need to be investigated to elucidate the endocytic mechanisms that facilitate uptake in intestinal epithelial cells, as well as to validate the data obtained in in vitro experimental settings under in vivo conditions. Little information exists about other pharmacokinetic parameters of orally administered sEVs. Regarding their stability in circulation, Munagala et al. [31] observed that cow's milk-derived sEVs remained in circulation for at least 24 h after oral administration in nude mice. These results strongly contrast with the numerous studies demonstrating the rapid clearance of circulating exogenous exosomes after systemic injection (~2-30 min), mainly mediated by macrophages [17]. Why sEVs absorbed from the gastrointestinal tract have a longer circulating half-life than that observed for systemically administered sEVs is a question that must be answered to understand the real clinical potential of oral administration of sEVs. It is worth mentioning that the same research group subsequently tested milk-derived sEVs for oral administration of the chemotherapeutic drug paclitaxel (PAC) in a lung tumor xenograft model, demonstrating that orally administered PAC-loaded sEVs significantly inhibited tumor growth compared to the same dose of PAC administered intraperitoneally. These PAC-loaded sEVs showed remarkably lower systemic and immunologic toxicities compared to i.v. PAC [39]. Soo Kim et al. [40] loaded murine RAW 264.7 macrophage-derived sEVs with PAC, showing a more than 50-fold increase in cytotoxicity in drug-resistant MDCK MDR1 (Pgp+) cells in vitro.
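To see why a 24 h circulation time after oral dosing is so striking, assume, purely for illustration (first-order kinetics have not been established for sEVs), that i.v.-injected exosomes are cleared exponentially with the reported ~2-30 min half-lives; even at the slowest reported half-life, essentially nothing would remain after 24 h:

```python
def fraction_remaining(t_min: float, half_life_min: float) -> float:
    """First-order decay: fraction still circulating after t_min minutes."""
    return 0.5 ** (t_min / half_life_min)

# With the slowest reported i.v. half-life (~30 min), virtually nothing
# would survive 24 h (1440 min) in circulation:
print(fraction_remaining(1440, 30))  # ~3.6e-15
```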
The studies that evaluated the safety profile of orally administered sEVs consistently showed that parameters such as body weight, plasma cytokine concentration, and tissue damage remain unchanged, suggesting they are well-tolerated and non-immunogenic. Although these studies were performed mainly with milk-derived sEVs, at different concentrations and even at repeated doses, the toxicity data preliminarily confirm the potential clinical use of milk-derived sEVs. Table 2 details the main results that characterize the safety profile of oral administration of sEVs. Nonetheless, we highlight the fact that vesicles derived from food sources may not be the only sEVs with the capacity to be absorbed after oral ingestion, emphasizing the lack of information on other sEV sources with proven therapeutic properties, such as mesenchymal stem cells [41]. In vitro studies demonstrate that multiple endocytic pathways are involved, including caveola- and clathrin-mediated pathways and macropinocytosis. Additionally, it has been described that sEVs could cross the intestinal epithelium through the intercellular spaces between epithelial cells, passively transported from the intestinal lumen to the circulation in a mechanism called paracellular translocation (created with http://www.biorender.com (accessed on 24 November 2022)). Table 2 (selected rows): [31] No changes in clinical signs, body weight, or dietary intake in animals; biochemical (liver and kidney function) and hematological parameters remained unchanged except for triglycerides; no changes in the cytokine profile (IL-1α, IL-1β, IL-2, IL-4, IL-5, IL-6, IL-10, IL-12, IL-13, GM-CSF, IFN-γ and TNF-α), except for the anti-inflammatory cytokine GM-CSF. [25] Cow's milk-derived sEVs, 2 mg/kg × 7 d, IRC mice, 7 d: no changes in body weight in animals; biochemical (liver function) and hematological parameters remained unchanged; histopathology examination (H&E staining) of the heart, liver, spleen, lung, kidney, and small intestine exhibited no pathological changes. Abbreviations: sEVs, small extracellular vesicles; mg, milligrams; kg, kilograms; h, hours; d, days. sEVs Attributes for an Efficient Oral Administration Very few studies have investigated the use of sEVs formulations for gastric drug delivery. According to Bardonnet et al. [42], nanoparticle size is essential for gastric retention because particles with a diameter < 7 mm are efficiently evacuated. Since sEVs possess a much smaller size, in the range of 50-200 nm [13], in their native (unmodified) state they are unlikely to exert any biological effect in the stomach due to weak gastric retention. However, modifying sEVs with mucoadhesion strategies, using polymers or phospholipids on their surface membrane, could give them time to trigger the desired biological changes [1]. Regarding intestinal drug delivery, using unmodified sEVs as nanocarriers has demonstrated promising results. Several studies have reported systemic absorption of drugs in the intestine from sEVs or a regional effect, as described in the previous section. However, a wide array of modifications has also been tested to improve sEVs stability in the GI tract, uptake by intestinal cells, and even delivery to cells independent of (or far from) the GI system.
Table 3 presents a summary of these articles, classified according to the source of the sEVs, the attribute or modification studied, the use of sEVs as a drug nanocarrier, the observed biological effect, and the type of models utilized. Of note, most articles on the oral administration of sEVs are based on the use of vesicles derived from edible compounds (fruits, vegetables, spices, milk, or its derivatives), as extensively reviewed by Ciéslik et al. [24]. Of all these sources of sEVs, bovine milk-derived vesicles have notably shown increased stability in acidic media that emulate the conditions of the stomach lumen, and structural conservation after boiling owing to the presence of calcium, in comparison to colorectal cancer-derived sEVs (LIM1215 cells) [28]. In addition, the addition of casein (a highly abundant protein in breast milk) has been shown to enhance the uptake of sEVs isolated from human cardiosphere-derived stromal/progenitor cells after oral ingestion [43]. The modification of sEVs with casein also produced a greater biological effect than unmodified sEVs in cardiac dysfunction [43]. These data indirectly support bovine milk-derived sEVs as nanocarriers for oral drug delivery, since the abundant natural presence of casein in milk should confer similar properties on those isolated vesicles. Another compound present in breast milk from various species is folic acid [47]. In a publication by Munagala et al. [31], the addition of folic acid to the surface of sEVs isolated from bovine milk and loaded with withaferin A decreased the tumor volume in a murine model of lung cancer. The modification with folic acid on the sEVs surface increased the therapeutic effect in this cancer model compared to the unmodified sEVs; however, it is not clear whether this response is attributable to enhanced stability in the GI tract or to targeting of tumor cells after reaching the systemic circulation [31].
Another approach for increasing the uptake of milk-derived sEVs was published by Warren et al. [3], in which they modified the surface of the vesicles with polyethylene glycol (PEG). Due to this modification, hydrophobic interactions with mucin (present in the lumen of the intestine) are decreased, thus enhancing the interaction with and uptake by epithelial cells and the delivery of a loaded siRNA in vitro. In addition, adding PEG to the surface of the milk-derived sEVs increased their recovery after incubation in acidic conditions mimicking an infant (pH 4.5) or adult (pH 2.2) stomach [3]. Another studied source of sEVs with therapeutic properties after oral delivery is grape juice. Ju et al. [34] showed that grape exosome-like nanoparticles (GELNs) isolated from grape juice possess bioactivity in intestinal stem cells, protecting against colitis in an in vivo-induced model and facilitating organoid formation in vitro. They assembled liposome-like nanoparticles with lipids from these GELNs and showed their role in the in vivo targeting of intestinal stem cells through oral gavage [34]. As mentioned previously, it is currently unknown how the modifications mentioned above could alter the bioavailability of sEVs isolated from sources other than foods or their derivatives. Cellular and Molecular Mediators for sEVs Uptake after Oral Administration Despite the incipient understanding of the cellular/molecular mechanisms that regulate the biological effect of sEVs through oral administration, it is possible to find in the literature several articles reporting therapeutic efficacy when sEVs are administered orally in models of inflammatory diseases and cancer. In an induced cutaneous delayed-type hypersensitivity (DTH) model, Nazimek et al. [48] and Wasik et al. [49] demonstrated that T cells and B1a cells secrete a subpopulation of immunosuppressive sEVs that contain the inhibitory miRNA-150, which prevents inflammation and DTH after systemic administration in mice [48,49].
In the second of these publications [49], intravenous, intraperitoneal, intradermal, and oral administration of equivalent doses of the immunosuppressive sEVs were tested head-to-head to evaluate the anti-inflammatory response. Unexpectedly, the most potent anti-inflammatory effect was registered for the oral administration of the T cell- and B1 cell-derived sEVs. However, no further data are detailed that could explain these findings [49]. When macrophages were depleted in the murine DTH model by administering clodronate liposomes, the anti-inflammatory properties of the T cell-derived sEVs were significantly lost, suggesting that the response to orally administered T cell-derived sEVs was mediated in part by these myeloid cells [48]. This immunological effect mediated by macrophages after sEVs injection is widely described in the literature, since the significant clearance of sEVs in the systemic circulation occurs through these myeloid cells [17,[50][51][52]. Peyer's patches (PPs) are subepithelial lymphoid follicles present in the intestine that are greatly enriched in innate and adaptive immune cells [53]. These dome-shaped clusters control antigen presentation and the immunological response through several mechanisms, the most studied being transendocytosis by specialized epithelial cells named microfold cells (M-cells) towards resident macrophages and dendritic cells [54]. M-cells have also been studied previously for drug delivery employing synthetic nanoparticles as carrier platforms, since these cells have reduced intracellular enzymatic activity and a thinner mucus layer and glycocalyx in comparison to enterocytes, promoting easier access and intracellular transport [1].
In the field of drug delivery using nanoparticles, several articles study the active targeting of M-cells through surface modifications, adding peptides [55], mannose receptor ligands [56], and lectin ligands [57]; in most cases, however, the modifications do not fully prevent unspecific interactions, since other cellular lineages express the same receptors. In Table 4, we summarize the studies of cells present in the GI tract that potentially contribute to the uptake of sEVs after oral administration. Bovine milk sEVs uptake decreased when incubated at low temperature (4 °C), after proteinase K treatment, and in the presence of endocytosis inhibitors or carbohydrate competitors. Notably, Rubio et al. [23] demonstrated a unidirectional transport from the apical face towards the basal face of fluorescent-labeled B. subtilis-derived bEVs in polarized epithelial Caco-2 cells in vitro, where a fraction of those bEVs did not fuse with the cellular membranes and were secreted to the other side. From this study, three major questions arise: (1) whether this transport mechanism is conserved across all sources of sEVs (e.g., bovine milk- or human-derived EVs); (2) whether those secreted "intact" sEVs retain their capacity to regulate acceptor cell function and transcriptional expression; and (3) whether the observed mechanism also occurs in vivo. Regarding the secretion of sEVs from epithelial cells in vivo, Sakhon et al. [58] reported in transgenic mice that M-cells constitutively release sEVs into the subepithelial space, where the highest co-localization was observed with myeloid cells (CX3CR1+/CD11b+ and CX3CR1+/CD11c+ cells). To date, the available data concerning the therapeutic mechanism of action of orally administered sEVs partially support the concept of epithelial transendocytosis and uptake by the immune cells within PPs. However, further experiments are required to confirm this idea.
The previously mentioned relationship between epithelial and immune cells of the intestine cannot fully explain the phenomena observed in inflammatory bowel disease (IBD) and cancer models. Tulkens et al. [22] demonstrated that intestinal barrier dysfunction in patients allows the paracellular translocation of bacterial bEVs (diffusion through gaps between adjacent cells), which ultimately increased the production of proinflammatory cytokines. This effect was conserved across patients with different conditions and treatments (HIV infection, IBD, and chemotherapy), demonstrating that it is linked to intestinal barrier dysfunction rather than to a particular pathology [22]. In addition, the results published by Samuel et al. [28] showed an increased fluorescent signal in tumors of colorectal cancer murine models after oral gavage of bovine milk-derived sEVs. This result, together with other articles on the biodistribution of orally ingested sEVs, suggests that a fraction of the vesicles enters the systemic circulation, reaching other tissues and organs, as mentioned above. In a recent review, Ciéslik et al. [24] listed the therapeutic effects of orally administered sEVs from various sources assessed in different diseases. Among the cited pathologies, several articles demonstrate an effect in organs that do not belong to the GI system, suggesting that the ingested vesicles reaching the circulation maintain their therapeutic properties [24]. Nevertheless, most articles that study the therapeutic effect of orally ingested sEVs measure and characterize only the outcome, without a detailed explanation of a potential mechanism of action that sustains their results.

Food Derived Vesicles (FDVs)-Based Nutraceutical Perspectives in Infant and Elderly Health

In the last decade, structures morphologically similar to extracellular vesicles (EVs), called "food-derived vesicles" or FDVs, have been isolated from different foods (honey, pollen, milk, fruits, and vegetables, among others).
These findings raise the question of whether FDVs contain or overexpress the nutritional compounds or nutraceutical effects of the foods from which they are derived. Several studies have shown that different FDVs have nutraceutical effects. For example, it was observed that the nanovesicles in Apis mellifera hypopharyngeal gland secretomal products (honey, royal jelly, and bee pollen) participate in the known antibacterial and pro-regenerative properties of bee-derived products [62]. Furthermore, Chen et al. [63] described that honey-derived nanoparticles possess anti-inflammatory properties by inhibiting the NLRP3 inflammasome, thus preventing liver damage in vivo. Beyond the research mentioned, various other studies report the presence of FDVs and their bioactive compounds in different models of diseases such as cancer, intestinal inflammation, and autoimmune diseases [64]. In global terms, the biological effect reported for FDVs is highly associated with the source of identification. However, comparing articles is challenging since different isolation methods are employed, the administered doses differ, and several routes of administration are tested [36,64]. Several studies have revealed the effects of sEVs derived from breast milk on the immune function of infants. The analysis of breast milk-derived sEVs showed that the molecules they contain vary depending upon the maternal allergy status [65]. Malnutrition in the elderly population is an important risk factor for sarcopenia, osteoporosis, and other age-related diseases. Protein and other components are key nutrients for the human body and affect bone and muscle mass and quality. Dairy products are rich in these nutrients, which implies that dairy products or their bioactive components, such as sEVs, might be ideal for the elderly population.
The use of milk sEVs as bioactive ingredients represents a novel avenue to explore in the context of human nutrition, and they might exert significant beneficial effects at multiple levels, including but not limited to intestinal health, bone and muscle metabolism, immunity, modulation of the microbiota, and growth and development [66]. Due to their nutritional and immunological benefits, bovine milk-derived sEVs have become an essential fraction of proteomic research. An open online database of the bovine milk proteome, BoMiProt (http://bomiprot.org, accessed on 17 November 2022), was established with over 3100 proteins from whey, milk fat globule membranes (MFGM), and sEVs [67]. Interestingly, ~70% of the research on bovine milk has focused on whey proteins, followed by MFGM (20%), with a surge of sEVs studies in the past ten years accounting for around 10%. The protein lists across milk fractions were compared to identify common and exclusive proteins among whey, MFGM, and sEVs. Interestingly, more than 1300 proteins were exclusively found in exosomes, while 801 and 294 proteins were identified in whey and MFGM, respectively. In contrast, 131 proteins were common across all the fractions [67]. To complement the content characterization, a recent lipidomic study identified eight major lipid classes in milk sEVs, with more than 200 fatty acid variations, demonstrating the high complexity of the lipid composition of such vesicles. High levels of phosphatidylcholine (PC), phosphatidylethanolamine (PE), cholesterol (Chol), and phosphatidylserine (PS) were also measured. Interestingly, milk sEVs exhibited increased levels of phosphatidylinositol (PI) as compared to sEVs isolated from cultured cells [68]. Other cargoes, including long noncoding RNAs (lncRNAs) identified in human breast milk sEVs, are also proposed to be implicated in adult and infant metabolism and in neonatal development [69].
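The fraction comparison described above is, at its core, a set-operation exercise. A minimal illustrative sketch (using invented protein identifiers, not real BoMiProt accessions):

```python
# Hypothetical example of how exclusive and common proteins across milk
# fractions (whey, MFGM, sEVs) can be identified with set operations.
# The identifiers below are invented for illustration only.

def compare_fractions(whey, mfgm, sevs):
    """Return proteins exclusive to each fraction and common to all three."""
    return {
        "whey_only": whey - mfgm - sevs,
        "mfgm_only": mfgm - whey - sevs,
        "sevs_only": sevs - whey - mfgm,
        "common": whey & mfgm & sevs,
    }

whey = {"P1", "P2", "P3", "P4"}
mfgm = {"P3", "P4", "P5"}
sevs = {"P4", "P6", "P7"}

result = compare_fractions(whey, mfgm, sevs)
print(sorted(result["sevs_only"]))  # ['P6', 'P7']
print(sorted(result["common"]))     # ['P4']
```

Applied to the full BoMiProt protein lists, this kind of comparison yields the exclusive and common protein counts reported above.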
Many relevant questions need to be answered to understand the value of sEVs in milk formulas, since their content of sEVs and sEVs cargo ranges from modest to undetectable [70]. Dietary depletion of milk sEVs elicited phenotypes such as increased purine metabolites in human and murine body fluids and tissues [71]. Dietary depletion of these sEVs also caused a variety of phenotypes in mice, including a moderate loss of grip strength, an increase in the severity of symptoms of inflammatory bowel disease, a decrease in postnatal survival, and changes in bacterial communities in the ceca [72]. These observations raise concerns regarding infant and adult nutrition using milk formulas and the need to fortify them with sEVs supplements. However, the compatibility of added sEVs with existing compounds is crucial for efficient absorption. A recent clinical study by Mutai et al. showed that lectins must be removed for the fortification of soy formulas with milk sEVs to be a viable strategy for delivering bioavailable exosomes and their cargos: lectins in soy formulas bind glycoproteins on the surfaces of milk sEVs, thereby preventing exosome absorption [72]. As a final reflection, this interkingdom relationship has been present throughout the entire existence of animals through dietary consumption, but little was known about it before the identification of FDVs.

Limitations, Future Directions and Conclusions

While oral administration of sEVs presents various physiological and practical advantages over other routes, there is a need to investigate further the safety, stability, pharmacokinetics, and biodistribution attributes before their broad use as drug vehicles or nutritional supplements. Most biodistribution studies rely on DiR-based labeling of the sEVs; although DiR labeling does not affect sEVs morphology [73], dissociation between the dye and the sEVs can occur in an in vivo setting and might lead to misleading pharmacokinetic outcomes.
Using foreign RNA sequences that can be quantified by PCR could provide more precise information devoid of such artifacts. The few high-quality studies presented in the different tables of this review show that sEVs display a remarkable ability to resist the harsh acidic environment of the GI tract and reach the intestine. However, FDVs studies are almost exclusively based on milk-derived sEVs. These findings must therefore be extended to other FDVs of interest, considering the membrane components and surface markers that depend on the source of isolation. Also, proper readouts must be determined to assess their capacity to elicit significant effects following oral intake. So far, milk-derived sEVs are the most feasible option for a drug delivery application, based on their published safety profile, pharmacokinetics, and endogenous stability in the GI tract. Nonetheless, this does not mean, in any case, that these properties are exclusive to this source of sEVs. As we mention in this article, studies evaluating other sources of vesicles in detail could uncover an optimal sEVs-based nanocarrier for oral drug delivery. It is interesting to mention that oral administration is getting closer to clinical-stage application. So far, we have found only one clinical trial seeking to characterize the ability of plant-derived exosomes to deliver curcumin to colon tissue in the context of colon cancer via oral intake (ClinicalTrials.gov Identifier: NCT01294072). This study is ongoing, and no data or results are available yet. Several preclinical and clinical studies have failed to show a positive association between milk intake and serum miRNA levels. However, the role of the miRNA cargo, the extent to which miRNAs are exported via the sEVs route, and whether they contribute to cell-cell communication are still controversial. Albanese et al. found that sEVs did not fuse detectably with cellular membranes to deliver their cargo.
They engineered sEVs to be fusogenic and documented their capacity to deliver functional messenger RNAs. Engineered fusogenic sEVs, however, did not detectably alter the functionality of cells exposed to miRNA-carrying sEVs. These results suggest that sEVs-borne miRNAs do not act as effectors of cell-to-cell communication and that the delivery of different RNA species through sEVs might be an extremely inefficient process [74]. The interaction of xeno-EVs (plant or bovine vesicles with human cells) adds another layer of complexity to deciphering the fusion and cargo delivery processes, and to identifying the glycan features and other determinants on the EVs surface responsible for the interaction between plant/animal EVs and human cells. In conclusion, oral administration presents a reliable delivery route for EVs. The pharmacokinetics and activity of these bioactive compounds will depend on their cellular source, cargo content, and interaction with human cells.

Data Availability Statement: The data that support the findings of this study are available from the corresponding author upon reasonable request.
Determining differences between critical closing pressure and resistance-area product: responses of the healthy young and old to hypocapnia

Healthy ageing has been associated with lower cerebral blood flow velocities (CBFVs); however, the behaviour of hemodynamic parameters associated with cerebrovascular tone (critical closing pressure, CrCP) and cerebrovascular resistance (resistance-area product, RAP) remains unclear. Specifically, evidence supports ageing being associated with greater cerebrovascular tone and resistance, with elevated CrCP and RAP in older individuals at rest and during exercise. Comprehensive hemodynamic assessment of CrCP and RAP during hyperventilation-induced hypocapnia in two distinct age groups (young ≤ 49 and old > 50) has not been described. CBFV in the middle cerebral artery (transcranial Doppler), blood pressure (BP, Finometer) and end-tidal CO2 (EtCO2, capnography) were recorded in 104 healthy individuals (43 young [age 33.8 (9.3) years], 61 old [age 64.1 (8.5) years]) during a minimum of 60 s of metronome-driven hyperventilation-induced hypocapnia. The autoregulation index was calculated as a function of time, using a moving window autoregressive-moving average model. CBFV was reduced in response to age (p < 0.0001) and hypocapnia (p = 0.023) (young 57.3 (14.4) vs. 44.9 cm s−1 (11.1), old 51.7 (12.9) vs. 37.8 cm s−1 (9.6)). Critical closing pressure (CrCP) increased significantly in response to hypocapnia (young 37.6 (18.5) vs. 39.7 mmHg (16.0), old 33.9 (13.5) vs. 39.3 mmHg (11.4); p < 0.0001). Resistance-area product was increased in response to age (p = 0.001) and hypocapnia (p = 0.004) (young 1.02 (0.40) vs. 1.09 mmHg cm s−1 (11.07), old 1.16 (0.34) vs. 1.34 mmHg cm s−1 (0.39)). RAP, and not CrCP, mediates differences in cerebrovascular resistance responses to hypocapnia between healthy young and old individuals.
Introduction

Carbon dioxide (CO2)-induced changes in vasomotor tone are influenced by several interacting factors including blood pressure, hypoxia, and neuronal and autonomic activity [1]. However, the literature surrounding cerebrovascular reactivity (CVR) changes during healthy ageing remains an area of controversy [2,3]. Cerebral hemodynamic parameter behaviours over the physiological range of arterial CO2 (PaCO2), and improvements in cerebral autoregulation (CA) during hyperventilation-induced hypocapnia, have been well described [4][5][6]. During disease states, namely carotid stenosis and stroke, impaired CVR or CA is associated with increased risk for ischaemic events or worse stroke outcome, respectively [7,8]. The lack of capability of cerebral vessels to modify their calibre in response to a CO2 stimulus represents a crucial biomarker differentiating health and disease [7]. Unfortunately, in order to progress our understanding, factors influencing CVR and CA, including age and sex, need to be taken into consideration. Zhu and colleagues [9] demonstrated that advanced age is associated with lower resting CBFV and increased CVR during hypocapnia. Interestingly, they found increased CVR during hypercapnia, which contravenes findings from several other studies [10,11]. The association of ageing with lower CBFV appears to be well accepted [10], though others have found no age-associated differences in CVR, with suggestions that arterial stiffness explains age-related differences in cerebrovascular conductance [12]. Importantly, the behaviour of hemodynamic parameters associated with cerebrovascular tone (critical closing pressure, CrCP) and cerebrovascular resistance (resistance-area product, RAP) with respect to ageing remains unclear. Specifically, evidence supports ageing being associated with greater cerebrovascular tone and resistance, with elevated CrCP and RAP in older individuals at rest and during exercise.
In 2019, a large study demonstrated once more that advancing age is associated with decreases in CBFV, increases in CVR and reduced vasoconstriction during hypocapnia, though no reference to standard hemodynamic parameters was provided [13]. This reflects the lack of data on the separate effects of the interaction of age with hypocapnia using RAP and CrCP, rather than CVR. There is a distinct lack of studies assessing vasoconstrictive stimuli as compared to vasodilatory stimuli [14]. McKetton and colleagues [14] described the limitations of using blood oxygenation level-dependent (BOLD) magnetic resonance imaging (MRI) to assess hypocapnic conditions, including the inability to hyperventilate without motion artefacts and elevated inter-subject variability. Longitudinal assessment of CVR using transcranial Doppler over a time course of 1 to 3 years has shown no significant differences from baseline CVR, demonstrating robust reproducibility and acceptable inter-subject variability [11]. Importantly, Galvin and colleagues [15] demonstrated elevated CVR during hypocapnia in those with coronary artery disease and age-matched controls, suggesting that improved CVR and lower CBFV may provide insight into important mechanisms underlying neurological risk with ageing. Changes in CVR are considered to occur beyond the young-old age category (> 50 years), with the very-old (> 80 years) exhibiting similar responses to the young-old [12]. This study aimed to identify whether healthy ageing is associated with differences in the response to CO2 change. Specifically, it sought to understand the relationship between key CA metrics: the autoregulation index (ARI), calculated using the linear autoregressive-moving average (ARMA) model [6] to quantify the influence of CA and EtCO2 on CBFV, as well as the resistance-area product (RAP, the primary determinant of cerebrovascular resistance) and critical closing pressure (CrCP, a measure of cerebral arterial tonus and intracranial pressure).
We hypothesise that no differences exist in key hemodynamic parameters (CBFV, HR (heart rate), ABP (arterial blood pressure), CrCP, RAP and ARI) between the young (< 50 years) and old (> 50 years) during hypocapnic challenge. By understanding the differences and interactions between these central and peripheral parameters, there exists an opportunity to determine the mechanisms governing the regulation of cerebrovascular tone and dynamic CA processes during healthy ageing.

Subjects and measurements

The study was conducted in accordance with the Declaration of Helsinki (2000). Ethical approval was obtained from the University of Leicester Ethics Committee (Reference: jm591-c033) and the Northampton Research Ethics Committee (11/EM/0369). Healthy volunteers were recruited from a variety of settings including the local community, University departmental staff, students and their relatives. Participants aged above 18 years were included. Exclusion criteria were physical disease in the upper limb, poor insonation of both temporal bone windows and any significant history of cardiovascular, neurological or respiratory disease. Smokers were excluded. All participants provided written, informed consent. The dataset generated from the younger individuals has been used to inform related publications [5,6,16].

Experimental protocol

The research was undertaken in the University of Leicester's Cerebral Hemodynamics in Ageing and Stroke Medicine (CHiASM) research laboratory, maintained at a constant ambient temperature of approximately 24 °C and free of distraction. For the purposes of the study, participants were asked to refrain from caffeine and alcohol for a minimum period of 4 h prior to measurements being undertaken. Beat-to-beat BP was recorded continuously using the Finometer® device (FMS, Finapres Measurement Systems, Arnhem, Netherlands), which was attached to the middle finger.
The servo-correcting mechanism of the Finometer® was switched on and then off prior to measurements. The hand bearing the finger cuff was kept at the level of the heart to negate any hydrostatic pressure bias. HR was recorded using a standard 3-lead electrocardiogram (ECG). EtCO2 was measured throughout using small nasal cannulae (Salter Labs) connected to a capnograph (Capnocheck Plus). Bilateral insonation of the middle cerebral arteries (MCAs) was performed using transcranial Doppler (TCD) ultrasound (Viasys Companion III; Viasys Healthcare) with a 2 MHz probe. The probes were secured in place with a head-frame that was adjusted to ensure comfort at the outset. The MCAs were identified according to two main characteristics: signal depth and velocities. Measurements were continuously recorded at a rate of 500 samples/s in the PHYSIDAS data acquisition system (Department of Medical Physics, University Hospitals of Leicester NHS Trust). Systolic and diastolic brachial BP readings (OMRON Model 705IT) were performed at each stage of the measurements (normocapnia and hypocapnia), with a minimum of three recordings per individual. These values were then used to calibrate the Finometer recordings.

Hyperventilation induction strategy

Following a 20-min supine stabilisation period, a 5-min supine baseline recording was taken of the subject breathing spontaneously at rest. Hyperventilation strategies were conducted at least once, with repeated assessments conducted where possible and 5-min intervals between each to allow stabilisation of all parameters and return to normocapnia. The hyperventilation induction strategies involved 60 s of rest, with hyperventilation being maintained for a minimum of 90 s whilst supine. A continuous metronome (KORG Metronome MA-30) was used, starting at a rate analogous to the subject's baseline resting breathing rate.
After 30 s to 1 min of baseline recording, the rate was increased gradually over a period of 60 s to reach a hyperventilation rate 40% greater than baseline (around 25 breaths per minute). This was maintained for a further minimum of 60 s and followed by a minimum of 90 s rest.

Data analysis

Data collected corresponded to individual recordings for each participant at baseline and during hypocapnia. First, recordings were inspected visually and calibrated against the systolic and diastolic OMRON BP readings. Narrow spikes (< 100 ms) were removed using linear interpolation and the CBFV recording was then passed through a median filter. All signals were then low-pass filtered with a zero-phase Butterworth filter with a cutoff frequency of 20 Hz. Automatic detection of the QRS complex of the ECG was used to mark the R-R interval, with visual inspection and manual correction undertaken whenever necessary. This allowed HR, mean ABP and mean CBFV to be calculated for each cardiac cycle. The peak of the EtCO2 signal was detected, and breath-by-breath values were linearly interpolated and resampled in synchrony with the cardiac cycle. Baseline files were analysed using a moving window autoregressive-moving average (MW-ARMA) model as described by Dineen and colleagues [17]. Initially, an ARMA model was adopted to estimate the CBFV response to a step change in BP, and the autoregulation index (ARI) was estimated by comparison with the 10 template CBFV step responses proposed by Tiecks and colleagues [18] using the first 60 s of data. The 60-s window was then shifted by 0.6 s and a new estimate of ARI was calculated. This process was repeated until the end of the signal was reached, thus generating ARI estimates at 0.6-s intervals [17]. This produced multiple estimates of ARI, which were then averaged to produce a single baseline ARI value for each file.
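The preprocessing chain described above (narrow-spike removal by linear interpolation, median filtering, and zero-phase low-pass Butterworth filtering at 20 Hz) can be sketched as follows. This is a minimal illustration with assumed parameters (spike-detection threshold, median kernel sizes, filter order), not the authors' PHYSIDAS implementation:

```python
import numpy as np
from scipy.signal import medfilt, butter, filtfilt

FS = 500.0  # sampling rate used in the study, samples/s

def remove_spikes(x, threshold):
    """Replace samples deviating sharply from a local median by linear interpolation."""
    x = np.asarray(x, dtype=float).copy()
    bad = np.abs(x - medfilt(x, kernel_size=51)) > threshold  # ~0.1 s local median
    good = ~bad
    if bad.any():
        x[bad] = np.interp(np.flatnonzero(bad), np.flatnonzero(good), x[good])
    return x

def preprocess(cbfv):
    cbfv = remove_spikes(cbfv, threshold=20.0)      # assumed spike threshold
    cbfv = medfilt(cbfv, kernel_size=5)             # median filter
    b, a = butter(4, 20.0 / (FS / 2), btype="low")  # 20 Hz cutoff, assumed 4th order
    return filtfilt(b, a, cbfv)                     # zero-phase filtering

# Synthetic CBFV-like trace with one narrow (< 100 ms) artefact:
t = np.arange(0, 2, 1 / FS)
signal = 50 + 10 * np.sin(2 * np.pi * 1.2 * t)  # ~72 cycles/min pulsatility
signal[300] += 100.0                            # narrow spike
clean = preprocess(signal)                      # spike removed, waveform preserved
```

The zero-phase property of `filtfilt` matters here: a causal filter would delay the CBFV waveform relative to ABP and bias beat-to-beat estimates.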
Having estimates of ARI every 0.6 s is sufficient to represent changes that can take place due to hypocapnia, even at the very high respiratory rates caused by hyperventilation, which produce estimates of EtCO2 with time intervals always longer than 2 s. The critical closing pressure (CrCP) and resistance-area product (RAP) were estimated using the first harmonic method, as demonstrated by the two equations provided by Panerai [19]. CrCP can be estimated using a frequency-domain approach and application of the Fourier transform (Eq. 1):

V(f) = [P_a(f) − CrCP(f)] / RAP (1)

where V(f), P_a(f) and CrCP(f) are the Fourier transforms of CBFV, ABP and CrCP, respectively. RAP is assumed to be constant. If CrCP is also assumed to be constant, its transform will be zero for all values of f > 0. Applying this rule to the first harmonic (f = 1) leads to

RAP = P_a(1) / V(1) (2)

with CrCP then following from the mean (f = 0) components as CrCP = mean ABP − RAP × mean CBFV. For the hyperventilation strategy, continuous estimates of ARI were produced for each file using the same MW-ARMA model. These were then digitally marked at the point of EtCO2 increase (signifying the end of hyperventilation), as this proved to be the most recognisable and reproducible point. Marked files were synchronised at 90 s.

Statistical analysis

Data normality was confirmed with the Kolmogorov-Smirnov test. Baseline measurements were assessed for differences between values derived for the right and left hemispheres using a paired Student's t test; these were averaged when no significant differences were found. All peripheral and cerebral hemodynamic values were compared using multiple independent t tests with Bonferroni correction to counteract multiple comparisons. Values of p < 0.05 were considered significant. Pearson's correlation coefficient analysis was used to assess specific relationships between variables, positive interactions and differences between age groups.

Results

One hundred and four healthy individuals, 43 young (≤ 49 years) and 61 old (≥ 50 years), were studied.
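As a numerical illustration of the first harmonic method, the sketch below recovers RAP from the ratio of the first-harmonic components of ABP and CBFV, and CrCP from their means. The waveforms and parameter values are synthetic, chosen only so that the linear model V = (P − CrCP)/RAP holds exactly; this is not the study's analysis code:

```python
import numpy as np

def crcp_rap_first_harmonic(abp, cbfv):
    """Estimate (CrCP, RAP) from one uniformly sampled cardiac cycle."""
    P = np.fft.rfft(abp)
    V = np.fft.rfft(cbfv)
    rap = np.abs(P[1]) / np.abs(V[1])      # first-harmonic amplitude ratio (Eq. 2)
    crcp = abp.mean() - rap * cbfv.mean()  # zero-frequency (mean) relationship
    return crcp, rap

# Synthetic cardiac cycle obeying V = (P - CrCP) / RAP exactly:
n = 500                                  # one cycle at 500 samples/s
phase = 2 * np.pi * np.arange(n) / n
abp = 90 + 20 * np.cos(phase)            # mmHg
crcp_true, rap_true = 35.0, 1.1          # illustrative values
cbfv = (abp - crcp_true) / rap_true      # cm/s

crcp, rap = crcp_rap_first_harmonic(abp, cbfv)
# recovers rap ≈ 1.1 and crcp ≈ 35.0
```

In practice, such estimates would be computed cycle-by-cycle from the calibrated ABP and CBFV recordings, giving beat-to-beat time series of CrCP and RAP.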
The mean age of the younger group was 33.8 (9.3) years and of the older group 64.1 (8.5) years (Table 1). The younger group had 24 (56%) females and the older group had 28 (46%) females. Figure 1 shows representative recordings of the CBFV response to hyperventilation in a single subject.

Effect of hypocapnia on cerebral and peripheral hemodynamics

As anticipated, there was a significant reduction in EtCO2 with hyperventilation which was not age dependent (p < 0.0001). In addition, this was associated with increased HR (p = 0.016) and MAP (p = 0.026) (Table 1). The population response to the hypocapnic challenge is demonstrated in Fig. 2.

Main findings

The four important findings of this study are (1) the demonstration for the first time that young and old healthy individuals have different cerebrovascular resistance mechanisms during hypocapnia; (2) older individuals mount a significantly greater HR response to hypocapnia; (3) the demonstration that the dynamic response to hypocapnia, as expressed by the ARI, is CO2-dependent; and (4) this study supports previous work demonstrating increases in CrCP and RAP in response to hypocapnia. Importantly, this study distinguishes differences between RAP and CrCP with reference to healthy ageing (Fig. 4).

Effect of age and hypocapnia on cerebral hemodynamics

This study does not demonstrate an age-related elevation in cerebrovascular response to hypocapnic challenge as seen in a prior study [15]. CrCP has previously been shown to increase with hypocapnia, though age was not considered within that particular study [4]. Importantly, in this study, ARI improved with CO2 change (p = 0.008) but did not vary across age groups, demonstrating a comparable ability of older and younger individuals to autoregulate during hypocapnia. This finding concurs with previous work demonstrating a lack of an association between increasing age and ARI change during normocapnia [20].
However, most marked is the dramatic rise in RAP in the older individuals, beyond a non-significant rise in CrCP between the two age groups. (Table 1 legend: values are mean (SD); CBFV, CrCP, RAP and ARI were averaged for the right and left MCAs. CBFV, cerebral blood flow velocity; HR, heart rate; MAP, mean arterial blood pressure; EtCO2, end-tidal carbon dioxide; CrCP, critical closing pressure; RAP, resistance-area product; ARI, autoregulation index.) RAP characterises vascular resistance according to the slope of the linear ABP-CBF relationship, and CrCP refers to the theoretical pressure at which CBF falls to zero and is thought to be a marker of cerebrovascular transmural wall tension (therefore incorporating intracranial pressure effects) [21]. The rise of RAP in older individuals has been demonstrated at baseline, as shown in this study, though not during hypocapnia [21]. This study provides evidence that older normotensive individuals have greater RAP than younger individuals during normocapnia and hypocapnia [19]. Crucially, as has been shown during hypotension, the responses of CrCP and RAP are similar, as was demonstrated in this study, albeit with very mild hypotension induced during hypocapnia. Ogoh and colleagues [22] previously showed that older normotensive adults had an elevated RAP but not CrCP, and reassuringly, this has been confirmed within this study [22]. Prior work has demonstrated a greater relative change in CrCP in younger adults during upright posture, with a suggestion that the CrCP response in younger adults may be more sensitive [21]. This study provides for the first time an arterial gas paradigm to refute inferences from prior studies assessing metabolic demand, which highlight that CrCP is crucial in healthy ageing [21]. Overall, the study provides a more detailed perspective on the influence of CrCP and RAP on the governance of mechanisms involved in vasomotor control during healthy ageing and hypertensive states [22].
The results highlight RAP as a key differentiator which accentuates cerebrovascular resistance during hypocapnia as individuals age.

Effect of age and hypocapnia on peripheral hemodynamics

Nowak and colleagues [23,24] demonstrated that hyperventilation-induced cerebral vasoconstriction led to a greater HR increment in those with orthostatic intolerance. The cerebral vasomotor response to hypocapnia was increased during this hypocapnic period, with an expected lower CBFV. Despite a lack of data supporting this phenomenon in older as opposed to younger individuals, this study confirms the presence of a greater HR response under circumstances of hypocapnia and improved CVR. Brown and colleagues demonstrated that hypercapnia increases minute ventilation with little effect on HR, and prior work has shown that mild hyperventilation does not affect HR variability (HRV) indices [25]. However, this study, in a large cohort of individuals across a wide spectrum of ages, provides a different perspective to the previously accepted HRV literature. At rest, during normocapnia, older individuals have consistently lower HRV than younger people [26]; however, under experimental hypocapnic challenge, there is a paradoxical response. This paradoxical response has been documented during hypoxaemia, with older men having smaller percentage increases in sympathetic nervous system activity from their elevated baselines, though demonstrating an attenuated tachycardia during acute hypoxaemia [26]. This study provides novel data highlighting the presence of a similar attenuated tachycardia during hypocapnia in older adults. The behaviour of RAP with reference to HR during hypocapnia is of interest. The relationship between RAP and posture change has been previously assessed in the young and old [21]. That study showed that a change from supine to sitting to standing (likely precipitating a HR response) was not associated with a significant change in RAP or indeed CrCP [21].
However, the older group had greater RAP during changes in posture, with a positive trend. Our study supports those findings for RAP, though refutes the importance of CrCP demonstrated during posture change, under hypocapnic conditions. However, our study does support the lack of significant differences in the behaviour of CrCP between age groups, as shown with posture change. In addition, our study shows an interaction effect between age and CO2 change for the HR response, though we cannot draw comparisons as to the behaviour of RAP and CrCP with this work, as this study is the first to assess such variables across the adult lifespan during hypocapnic conditions [13].

Clinical considerations

The presence of raised RAP and CrCP in older adults as compared to younger adults highlights cerebrovascular resistance, at rest and during hypocapnia, as an important marker of ageing vasculature. In addition, the importance of standardising PaCO2 operating points for clinical studies comparing patient data with healthy controls is paramount [6,27]. This study does not support a HR response being associated with the cerebrovascular resistance index, RAP. This nonetheless highlights the potential relationship between cerebrovascular disease and cardiac disease, with worsening cerebrovascular resistance possibly propagating effects on HR and overall cardiac demand. Importantly, with prior work demonstrating raised RAP and similar CrCP during hypertensive states, there is a suggestion that chronic hypertension may affect the ability of vessels to respond to a CO2 stimulus as opposed to alterations in tone. A number of previous studies assessing CrCP in health and disease states have shown that CrCP is influenced by structural and physiological parameters and responds to changes in pressure within the cranial cavity [19].
This phenomenon was highlighted by Robertson and colleagues [21] in their reference to "structural vs functional", with the former being more closely related to adaptation to chronic hypertension [21]. The key question is the potential for hypocapnic challenge to be used as a biomarker of healthy ageing, and the feasibility of such assessments. With a well-tolerated response across a wide age range, there is scope to extend this assessment to those with disease states for which a vasoconstrictive response would not be considered adverse (i.e. not in acute ischaemic stroke, but perhaps in vascular dementia syndromes, for which a spectrum of small vessel disease severity exists).

Limitations

Several limitations are to be considered within this study setup. Firstly, when considering TCD studies, to estimate CBF from CBFV it must be presumed that the MCA diameter remains constant. Prior work has shown that changes in MCA diameter occur at extreme hypercapnia; however, changes during hypocapnia are less well understood [28,29]. Secondly, the authors have previously demonstrated sex differences across the physiological range of PaCO2, with a downward parallel shift of the RAP dependency on EtCO2, demonstrating that males have higher RAP values than females across the range of PaCO2. Therefore, with a similar male and female split in both groups, these effects are unlikely to have been significant, though they should be acknowledged. Importantly, recent work has shown that autoregulation is able to withstand the effects of female sex hormones and therefore, pre- and post-menopausal influences are less likely to have affected the results [30]. Thirdly, sampling expired CO2 with nasal prongs during hyperventilation can underestimate EtCO2; however, unlike a face mask, nasal prongs do not precipitate a sympathetic response and are therefore considered an acceptable, pragmatic methodological setup for this cerebral hemodynamic study [31].
Fourthly, the potential influence of current or historical cardiorespiratory fitness is not considered within this study; this has previously been shown to influence cerebrovascular reactivity to CO2 [32] and dynamic cerebral autoregulation [33,34]. Lastly, a decline in hypercapnic reactivity was shown in the Rotterdam cohort in individuals aged between 75 and 90 years [35]. Although that study did not include hypocapnia, this physiological phenomenon, and the different behaviour exhibited at the real extremes of age, lies beyond the age groups considered within this study.

Future directions

This study provides further support for vascular responses to PaCO2 as a biomarker of ageing and describes the mechanistic changes associated with ageing in greater detail than prior studies. However, further work is required to elucidate this phenomenon in disease states and to characterise variation between different cerebrovascular pathologies. Specifically, assessing the RAP response during exercise, where the HR response is broader, would allow reliable correlations along the heart-brain axis to be investigated. In a large group of young and old healthy individuals, we show that resistance-area product (RAP), and not critical closing pressure (CrCP), mediates differences in cerebrovascular resistance responses to hypocapnia between healthy young and old individuals. These novel findings underline the importance of assessing the separate contributions of RAP and CrCP. In addition, our findings support previous studies demonstrating exacerbated heart rate responses in the old.
The effect of feedback timing on category learning and feedback processing in younger and older adults Introduction Corrective feedback can be received immediately after an action or with a temporal delay. Neuroimaging studies suggest that immediate and delayed feedback are processed by the striatum and medial temporal lobes (MTL), respectively. Age-related changes in the striatum and MTL may influence the efficiency of feedback-based learning in older adults. The current study leverages event-related potentials (ERPs) to evaluate age-related differences in immediate and delayed feedback processing and consequences for learning. The feedback-related negativity (FRN) captures activity in the frontostriatal circuit while the N170 is hypothesized to reflect MTL activation. Methods 18 younger (M = 24.4 years) and 20 older (M = 65.5 years) adults completed learning tasks with immediate and delayed feedback. For each group, learning outcomes and ERP magnitudes were evaluated across timing conditions. Results Younger adults learned better than older adults in the immediate timing condition. This performance difference was associated with a typical FRN signature in younger but not older adults. For older adults, impaired processing of immediate feedback in the striatum may have negatively impacted learning. Conversely, learning was comparable across groups when feedback was delayed. For both groups, delayed feedback was associated with a larger magnitude N170 relative to immediate feedback, suggesting greater MTL activation. Discussion and conclusion Delaying feedback may increase MTL involvement and, for older adults, improve category learning. Age-related neural changes may differentially affect MTL- and striatal-dependent learning. Future research can evaluate the locus of age-related learning differences and how feedback can be manipulated to optimize learning across the lifespan.
Introduction

Learning occurs throughout the lifespan and is often an error-ridden process. As new learners make errors, error-detection plays a key role in updating incorrect associations in memory (Luft, 2014). Error detection is the recognition that an action conflicts with what is true relative to internal or external criteria (Ohlsson, 1996; Postma, 2000). Detection of errant behavior can be either internally (i.e., self-monitoring) or externally (i.e., feedback) driven. External error detection via feedback is critical when learners are acquiring new information or are unable to monitor the accuracy of their own responses (McCandliss et al., 2002; Pashler et al., 2005), for example when learners are acquiring new phonological contrasts (e.g., Japanese speakers learning an English /r/-/l/ distinction; McCandliss et al., 2002) or in certain cases of cognitive and/or linguistic deficit such as aphasia, Alzheimer's, and aging (Schreiber et al., 2011; Nitta et al., 2017; Mandal et al., 2020). Thus, feedback is often not only helpful, but critical to the process of learning.

Learning conditions can influence how feedback is detected, processed, and utilized to update memory. Feedback timing (immediate vs. delayed) is one such condition of relevance to the current work. Immediate feedback is hypothesized to recruit dopamine-dependent striatal circuits which code prediction errors and send reward signals to the anterior cingulate cortex (ACC) (Holroyd and Coles, 2002; Nieuwenhuis et al., 2004). When feedback provision is delayed (≥ 3,500 ms), the fast-acting dopamine-mediated learning is disrupted and processing shifts to the medial temporal lobe (MTL), which supports binding information that is separated in time (Foerde and Shohamy, 2011; Peterburs et al., 2016; Arbel et al., 2017).
Event-related potentials (ERPs) collected using electroencephalography (EEG) have been leveraged to elucidate the differences in the processing of feedback during learning. The feedback-related negativity (FRN) is a frontocentral negativity that peaks 250-300 ms after the provision of feedback (Gehring et al., 1995; Miltner et al., 1997). The FRN is hypothesized to capture immediate feedback processing within the fronto-striatal circuit (Holroyd and Coles, 2002; Nieuwenhuis et al., 2004). The amplitude of the FRN is sensitive to feedback valence (negative > positive) (Gehring et al., 1995; Miltner et al., 1997) and feedback timing (immediate > delayed). Reductions in FRN amplitude in response to delayed feedback are consistent with the hypothesis that delays in feedback timing shift processing away from fronto-striatal circuits (Weinberg et al., 2012; Peterburs et al., 2016; Weismüller and Bellebaum, 2016; Arbel et al., 2017; Kim and Arbel, 2019).

The N170 (Bentin et al., 1996) has been used to evaluate the processing of delayed feedback. The N170 is larger for delayed relative to immediate feedback (Arbel et al., 2017; Kim and Arbel, 2019; Höltje and Mecklinger, 2020; Albrecht et al., 2023) and in the context of feedback-based tasks has been hypothesized to reflect activity in the medial temporal lobe (MTL) (Arbel et al., 2017; Kim and Arbel, 2019; Höltje and Mecklinger, 2018, 2020; Albrecht et al., 2023). This is supported by neuroimaging studies that find that MTL activation is heightened by delayed feedback (Foerde and Shohamy, 2011; Lighthall et al., 2018) and by double dissociations in which individuals with MTL damage have been found to learn from immediate but not delayed feedback, with the opposite pattern observed in individuals with basal ganglia damage due to Parkinson's Disease (Foerde et al., 2013). Outside of the context of feedback-based learning, the N170 has been hypothesized to reflect activity in the MTL (Grippo et al., 1996; Baker and Holroyd, 2013) as well as
the adjacent fusiform gyrus (Iidaka et al., 2006; Rossion and Jacques, 2011; Gao et al., 2019). Grippo et al. (1996) associated the N170 with the MTL when they observed a reduction in the amplitude of the N170 with increasing memory load in patients with temporal lobe epilepsy. More recently, Baker and Holroyd (2013) used source localization algorithms to localize the N170 to the MTL during a spatial navigation task. Similar activation during the processing of complex objects is localized to the fusiform gyrus (Iidaka et al., 2006; Rossion and Jacques, 2011). The N170, however, is not restricted to the visual domain and has been found to be elicited and modulated by the timing of auditory feedback, further supporting the notion that in the context of feedback-based learning, the N170 reflects cognitive processes that are not specific to the visual domain (Kim and Arbel, 2019). Albrecht et al. (2023) suggest that there may be two potentially overlapping components that reflect activity in the MTL and fusiform gyrus, respectively. Our decision to use the N170 to gain insight into processes that are potentially supported by the MTL is motivated by (1) research finding that delayed feedback is associated with greater MTL activation, (2) previous studies identifying the MTL as a generator of the N170, and (3) research indicating that the N170 can be elicited by auditory feedback and thus likely does not reflect visual processing in the fusiform gyrus when elicited by feedback. In the current study, a larger amplitude N170 in the delayed relative to the immediate feedback condition will support the claim that the N170 may reflect processing in the MTL, although this cannot be definitively stated in the absence of source localization data, which is beyond the scope of the current study.
Aging is associated with changes in striatal and MTL functioning and may influence older adults' ability to learn successfully from certain types of feedback. Characterizing how age-related changes in neural functioning influence learning in older adults is key to supporting learning across the lifespan and may be particularly useful in rehabilitation contexts. Rehabilitation services, such as speech-language therapy, are primarily provided to adults over the age of 60 (American Speech-Language-Hearing Association, 2019), many of whom have experienced an acquired brain injury (e.g., stroke, traumatic brain injury, tumor resection). Research in rehabilitation continually aims to improve the effectiveness of interventions and retention of treatment gains, which may be informed by understanding how age-related changes in neural functioning interact dynamically with acquired neurologic damage.

In older adults, memory decline has been found to be greater in declarative memory tasks that require recruitment of the MTL relative to non-declarative tasks that require recruitment of the striatal circuit (Hoyer and Verhaeghen, 2006). This is consistent with findings of a more accelerated loss of volume in the MTL relative to the striatum in later life (Raz et al., 2005, 2010; Walhovd et al., 2011). Yet, age-related changes in the striatum are also identified and have been evaluated in the context of reward processing (Mell et al., 2005; Bäckman et al., 2006; Eppinger et al., 2008, 2013; Braver, 2012; Chowdhury et al., 2013; Samanez-Larkin et al., 2014). Older adults show reduced neural response to reward signals such as feedback and differences in the processing of negative and positive rewards relative to younger adults (Holroyd and Coles, 2002; Nieuwenhuis et al., 2002; Eppinger et al., 2008, 2013). Importantly, age-related differences in reward processing have been found to influence learning (Eppinger et al., 2013). Evaluating learning
in older adults from immediate and delayed feedback may elucidate how age-related changes in the striatum and MTL affect feedback-based learning.

Only one study (Lighthall et al., 2018), to our knowledge, has evaluated learning from immediate and delayed feedback in older adults to determine whether this manipulation alters learning outcomes and neural activity. Lighthall et al. (2018) evaluated probabilistic learning and recognition memory in immediate (1,000 ms after response) and delayed (7,000 ms after response) feedback conditions. They compared younger (M = 26.3 years) and older adults (M = 68.7 years) to better understand the effects of healthy aging on feedback processing. Behavioral accuracy analyses demonstrated that older adults showed lower rates of optimal response selection relative to young adults, but that both groups showed learning under both timing conditions. Region-of-interest analyses of functional magnetic resonance imaging (fMRI) data collected during learning demonstrated greater activity in the striatum relative to the hippocampus with immediate feedback in the young adult group, and the opposite pattern with delayed feedback. In contrast, in the older adult group, feedback timing did not lead to significant differences in regional activation during learning. Additional analyses focused on the nucleus accumbens, a region important for dopaminergic reward learning, identified enhanced activation for both groups under conditions of immediate feedback relative to delayed feedback. The authors concluded that the findings provide evidence for age-related change in hippocampal mechanisms of learning more so than in striatal mechanisms.
The current study will characterize age-related differences in category learning with immediate and delayed feedback. During learning, individuals' electrophysiological responses to feedback will be captured using ERPs. ERPs provide high temporal resolution of the processing of feedback under different timing conditions. Category learning is central to human cognition and broadly defined as the ability to organize environmental information into meaningful groups based on patterns (Ashby and O'Brien, 2005). Abstract representations of a concept, or "prototypes," can aid in categorization. In A/B prototype category learning, category exemplars are derived from an "A" prototype and a "B" prototype. "A" category members share relatively more features with prototype "A" while "B" category members share relatively more features with prototype "B." For example, an email may be categorized as phishing because it shares relatively more features with a scam email (from a financial institution, implies urgency, requests personal information, contains typos) relative to an official business email. To our knowledge, no studies have evaluated Prototype A/B category learning under immediate and delayed feedback. The ubiquity of category learning and the novelty of this investigation further support the impact of the current work.
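The A/B feature-overlap rule described above can be made concrete with a small sketch (not code from the study; the 10 binary features and their encodings are invented for illustration, with 1 marking the "B" version of a feature):

```python
# Sketch of A/B prototype categorization by feature overlap.
PROTO_A = (0,) * 10   # prototype "A": all features in their "A" state
PROTO_B = (1,) * 10   # prototype "B": maximally distinct from A

def shared(item, prototype):
    """Number of features an item shares with a prototype."""
    return sum(i == p for i, p in zip(item, prototype))

def categorize(item):
    """Assign the category whose prototype shares more features with the item;
    an item sharing 5 features with each prototype is ambiguous."""
    a, b = shared(item, PROTO_A), shared(item, PROTO_B)
    if a == b:
        return "ambiguous"
    return "A" if a > b else "B"

# An exemplar differing from prototype A on only 2 of 10 features is an "A",
# just as an email sharing most features with a scam template reads as phishing.
print(categorize((0, 0, 0, 1, 0, 0, 0, 0, 1, 0)))  # A
```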
Consistent with previous research (Lighthall et al., 2018), we predict that learning outcomes will be equivalent under both immediate and delayed feedback timing conditions but will be associated with different signatures of neural activity. In younger adults, we expect to see a larger FRN in response to immediate feedback relative to delayed feedback and a larger N170 in response to delayed relative to immediate feedback. As is characteristic of the FRN, we expect a larger magnitude FRN to negative relative to positive feedback (Sambrook and Goslin, 2015) in both the immediate and delayed conditions. Yet, we recognize that a disruption of feedback processing in the striatum may also be evidenced by the absence of a valence effect in the delayed condition (a valence by timing interaction), indicating atypical extraction of outcome-related information conveyed by feedback. We expect to see the same pattern in the FRN for older adults. However, a larger magnitude N170 in response to delayed relative to immediate feedback may be absent in older adults, with this effect potentially related to age-related changes in the MTL. We do not expect to see a valence effect in the N170 because the N170 is hypothesized to reflect the binding of temporally discontiguous events and not the extraction of outcome information conveyed via feedback. The findings of this study aim to further characterize the consequences of age-related neural changes on feedback-based learning, setting foundations for future research examining how age-related changes in neural functioning may interact with learning after acquired neurologic injury.
Methods

Participants

Thirty-eight adults, 18 younger (Female = 13, Male = 5) and 20 older (Female = 15, Male = 5), participated in the study. A power analysis based on previous research evaluating the effect of manipulating feedback timing on the magnitude of the N170 and the FRN (Arbel et al., 2017; Kim and Arbel, 2019; Höltje and Mecklinger, 2020) indicated that the current sample size would enable the detection of the main effect of feedback timing on ERP amplitude with power > 0.95 and α = 0.05. Younger adults ranged from 22 to 30 years old (M = 24.4, SD = 2.5). Older adults ranged from 55 to 82 years old (M = 65.5, SD = 6.3). Fifty-five years old was selected as the lower bound for older adults given changes in neural function and structure that accelerate around 50 years old (Dohm-Hansen et al., 2024), including volume loss in hippocampal and adjacent parahippocampal regions (Fjell et al., 2013; Luo et al., 2020). The mean age of this sample is comparable to other studies evaluating feedback processing in younger and older adults (Nieuwenhuis et al., 2002; Eppinger et al., 2008; Lighthall et al., 2018). Participants did not have a history of developmental delay, neurologic impairment, or learning disability. All participants scored in the 'no cognitive impairment' range of the Mini-Mental Status Examination (MMSE, Folstein et al., 1975). Three older adults and one younger adult were excluded from the EEG analysis due to technical errors or excessive artifacts. Thus, 17 younger (M = 24.2, SD = 2.4) and 17 older adults (M = 64.4, SD = 4.7) were included in the EEG analysis.

Procedure

The study procedure was approved by the Institutional Review Board of the Mass General Brigham Healthcare System. All procedures were completed in a quiet room at the MGH Institute of Health Professions in one 2-3-h session.
Category learning task

Participants completed two Prototype A/B learning tasks administered using E-Prime (Psychology Software Tools Inc, 2016) across two separate learning blocks, one with immediate and one with delayed feedback, separated by a break. We used two stimulus sets, a yellow/grey set and a red/blue set, that were developed by Reed et al. (1999) and adapted by Zeithamova et al. (2008) (Figure 1). We crossed stimulus sets with timing conditions to create four tasks: red/blue immediate, red/blue delayed, yellow/grey immediate, and yellow/grey delayed. Each participant completed one training block with immediate feedback followed by testing and one training block with delayed feedback followed by testing. During each training block, participants saw a different stimulus set. The order of the blocks was counterbalanced across participants.

Each stimulus set varied on 10 binary dimensions (Zeithamova et al., 2008; Vallila-Rohter and Kiran, 2013). For example, for the red/blue stimulus set, binary dimensions included: animal's neck (short vs. long), tail (straight vs. curly), feet (pointed vs. curved), snout (pointed vs. rounded), ears (pointed vs. rounded), color (blue vs. red), body shape (pyramidal vs. round), body pattern (spots vs. stripes), head orientation (downward-facing vs. upward-facing), and leg length (short vs.
long). Prototypes A and B were maximally distinctive and differed on all 10 binary features. The stimulus dimensions of the yellow/grey stimulus set were visually distinct from those of the red/blue set. Category membership was determined by an animal's distance from the prototype (Figure 2). Animals that shared 90-60% of their features with a prototype were considered members of that prototype category. Animals that differed by five features from both prototypes were considered ambiguous and could be correctly categorized into either category. The category structure creates a continuum in which exemplars share 10-40% of their similarity with the opposing prototype. As is typical with Prototype learning tasks, the number of dimensions (n = 10) upon which category members vary likely exceeds working memory capacity and thus, optimal categorization requires that individuals use feedback to acquire cue-outcome relationships among the stimulus dimensions that cannot be easily verbalized (Ashby and Ell, 2001).

Training

Prior to training, individuals were instructed to base their decisions on the overall appearance of each animal rather than on individual features. Participants saw 10 unique animals from category A (2 each at distances of 1 and 4, 3 each at distances of 2 and 3) and 10 unique animals from category B.
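The distance-based category structure above can be sketched by deriving exemplars from a prototype: an exemplar at distance d simply has d of the 10 binary features flipped, so distances 1-4 keep 90-60% of features shared with the parent prototype and a distance of 5 is ambiguous. This is an illustrative reconstruction, not the study's stimulus-generation code, and the function name is invented.

```python
import random

N_FEATURES = 10

def exemplar(prototype, distance, rng):
    """Create an exemplar by flipping `distance` randomly chosen binary
    features of a prototype. Distances 1-4 leave 90-60% of features shared
    (a category member); a distance of 5 is ambiguous."""
    item = list(prototype)
    for i in rng.sample(range(N_FEATURES), distance):
        item[i] = 1 - item[i]
    return tuple(item)

rng = random.Random(0)
proto_a = (0,) * N_FEATURES
ex = exemplar(proto_a, 3, rng)
print(sum(f == p for f, p in zip(ex, proto_a)))  # 7 -> shares 70% of features with A
```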
Each exemplar was presented four times. Prototypical animals never appeared in training. Participants were asked to decide whether each animal lived in the forest (i.e., category A) or in a cave (i.e., category B). In each trial, participants saw one exemplar and a drawing of a forest and a cave. After making a response via button press, participants received feedback immediately (500 ms) or after a delay (6,000 ms). Feedback was in the form of three green checks (correct) or three red X's (incorrect). If participants did not respond within 4,000 ms, they saw a drawing of an hourglass and were instructed to respond faster. Figure 3 shows an example training trial.

Testing

Testing consisted of 28 trials. The trial structure of testing was identical to training except that participants did not receive feedback on their response accuracy. During testing, participants saw 13 exemplars in category A (1 prototype and 3 each at distances of 1, 2, 3, and 4), 13 exemplars in category B, and 2 ambiguous exemplars (distance of 5). A total of 6 stimuli had been included in training while 22 were untrained. The testing list was designed so that participants saw each binary feature (e.g., red/blue) an equal number of times.
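The training-trial feedback logic described above (feedback onset at 500 ms or 6,000 ms, a 4,000 ms response deadline) can be summarised in a small sketch. This is not the actual E-Prime script; the function and constant names are invented for illustration.

```python
# Illustrative sketch of one training trial's feedback logic.
FEEDBACK_ONSET = {"immediate": 0.5, "delayed": 6.0}  # seconds after response
RESPONSE_DEADLINE = 4.0                              # seconds

def trial_feedback(response, correct_category, timing, rt):
    """Return (feedback_onset_s, symbol) for one training trial.

    response / correct_category: "A" (forest) or "B" (cave); rt in seconds.
    """
    if rt > RESPONSE_DEADLINE:
        return (0.0, "hourglass")  # too slow: prompt to respond faster
    symbol = "✓✓✓" if response == correct_category else "XXX"
    return (FEEDBACK_ONSET[timing], symbol)

print(trial_feedback("A", "A", "immediate", rt=1.2))  # (0.5, '✓✓✓')
```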
Data analysis

Behavioral data

Learning performance was evaluated using slope scores. Slope scores were calculated using percent "B" responses (%BResp) (see Vallila-Rohter and Kiran, 2013). %BResp shows participant accuracy as a function of distance from the prototype and accounts for learners' tendency to "probability match," or respond in proportion to the probability that each stimulus-response feature is reinforced during learning (Knowlton et al., 1994; Vallila-Rohter and Kiran, 2013). Similar to prior studies, the %BResp was predicted to increase 10% for each feature shared between the exemplar and prototype B (Vallila-Rohter and Kiran, 2013). For example, an animal that differs by one feature from prototype A would have a predicted 10 %BResp because it shares 10% of its features with prototype B. Within this model, successful category learning would correspond to a %BResp with a linearly increasing slope of 10. Chance responding corresponds to a slope of zero, in which participants have a 50 %BResp at each distance from the prototype. Prior work has identified that when participants make responses based on multiple features, slope scores approach 10 more so than under strategies where one feature determines categorization or a random approach is utilized (Vallila-Rohter and Kiran, 2015).

To determine whether individual results were linear, significance levels of regressions for each participant were compared when the independent variable of distance was squared, cubed, and unadjusted. When the non-squared regression reached significance with an alpha value < 0.05 and the significance of the squared and cubed terms exceeded 0.05, the data were considered linear (Cox and Wermuth, 1994; Gasdal, 2012). Regression lines were fitted to participant results and the regression coefficients were used as slope scores.

EEG data processing and analysis

A 32-channel GES 400 System by Electrical Geodesics Inc.
was used to obtain EEG data with a 32-channel HydroCel Geodesic sensor net. The net comprised Ag/AgCl electrodes attached to an elastic net consistent with the international 10-20 system. Per manufacturer recommendations, impedances were kept below 50 kΩ. Offline analysis of the EEG data included bandpass filtering (0.1-30 Hz) of the raw data and segmentation into 1,000 ms long epochs (200 ms before and 800 ms after feedback presentation). Body movement artifacts were rejected by visually inspecting each trial for drift. Re-referencing to the average reference was performed, followed by baseline correction using the signal 200 ms prior to the presentation of feedback. Independent component analysis (ICA) was used to remove ocular, muscular, and other artifacts.

EEG was recorded during training. Consistent with previous research, the FRN was captured at the fronto-central electrode FCz and the N170 at occipital-parietal electrodes P7 and P8. To allow for the analytic reduction of the temporality of the data (Spencer et al., 2001), averaged data from each participant were submitted to a Temporal Principal Component Analysis (TPCA) using EEGLAB (Delorme and Makeig, 2004). TPCA was used to overcome challenges with other ERP analysis methods (e.g., difference waves and mean amplitude) (Dien, 2012). TPCA eliminates the need to select a time window for analysis or to calculate difference waves, two aspects of ERP analysis that can bias results (Luck and Gaspelin, 2017) and are relevant to the current study. TPCA decomposes the observed signal into underlying factors representing comparable activity patterns across trials, thus eliminating the need to specify a time window for analysis. This is particularly useful for the current study, given that peak amplitude latency can vary between younger and older adults (e.g., Eppinger et al., 2008; Yi et al., 2012). TPCA also avoids the need to take difference waves, which requires the subtraction of two conditions (e.g., positive and negative
feedback). If the two subtracted waves differ by more than just magnitude, but also in the kind of signal conveyed or in peak latency, the resultant difference wave can produce misleading results (Dien and Frishkoff, 2005). TPCA allows for the disentanglement of underlying components that overlap in the time domain and is well suited to the current study (Dien, 2012; Scharf et al., 2022). TPCA was conducted for each electrode: FCz, P7, and P8. We identified the temporal factor with peak latencies which overlapped with the FRN and N170 grand average waveforms, and the corresponding factor scores were extracted. Factor scores reflected the relative magnitude of the FRN and N170 for each participant at a given electrode.

Statistical analysis

All statistical analyses were performed in R (R Core Team, 2021). To evaluate whether learning slope scores varied across groups and feedback-timing conditions, a 2 (group: younger adult vs. older adult) by 2 (feedback timing: immediate vs. delayed) mixed analysis of variance (ANOVA) with slope scores as the dependent variable was conducted. To evaluate the relationship between group, feedback timing, feedback valence, and ERP magnitude, we planned to conduct three 2 (group: younger adult vs. older adult) by 2 (feedback timing: immediate vs. delayed) by 2 (feedback valence: positive vs. negative) mixed ANOVAs with factor scores derived from TPCA as the dependent variables. Because a separate TPCA was performed for each electrode, the resultant factor scores cannot be compared across electrodes and thus were analyzed in separate ANOVAs.

Results

Pairwise comparisons with Holm correction revealed a steadily increasing %BResp with each increase in distance from Prototype A.
This is consistent with the prediction that participants will probability-match their responses based on how many features an exemplar shares with the prototype, and supports the decision to analyze %BResp data over accuracy data, which does not reflect this prediction. 50 out of 76 task response patterns met the criteria used to assess the assumption of linearity, confirming that the predominant relationship between distance and %BResp was linear. 7 of the 26 remaining response patterns had significant non-squared and quadratic and/or cubic terms and thus, by our conservative classification, were not considered "linear" even though these response patterns demonstrated an increase in %BResp when an exemplar shared more features with Prototype B. 57 out of 76 (75%) of participants showed evidence of an increasing %BResp when an exemplar shared more features with Prototype B. See Figure 4 for examples of response patterns that met the criteria for "linear" and "non-linear." Figure 4B shows an example of a response pattern in which the non-squared term was significant as well as the quadratic term.
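The slope-score computation underlying these results can be sketched numerically: regress %BResp on distance and take the coefficient, where a slope near 10 indicates learning of the category structure and a slope near 0 indicates chance responding. The response values below are invented for illustration, not participant data.

```python
import numpy as np

# Distance axis: number of features an exemplar shares with prototype B (0-10).
distances = np.arange(0, 11)

# Hypothetical %B responses from a learner who probability-matches,
# rising roughly 10 points per feature shared with prototype B.
pct_b_resp = np.array([3, 12, 19, 31, 42, 50, 61, 68, 81, 89, 97], dtype=float)

# Slope score = linear regression coefficient of %BResp on distance.
slope, intercept = np.polyfit(distances, pct_b_resp, 1)
print(round(slope, 1))  # 9.6 -- close to the ideal slope of 10

# Chance responding (50 %B at every distance) yields a slope of ~0.
slope_chance, _ = np.polyfit(distances, np.full(11, 50.0), 1)
```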
Of the 26 task response patterns that did not meet the criteria used to assess the assumption of linearity, 14 were in the immediate condition (5 younger adults, 9 older adults) and 12 in the delayed condition (5 younger adults, 7 older adults). Five individuals (3 younger adults, 2 older adults) had response patterns failing to meet the criteria for the assumption of linearity across both the immediate and delayed tasks. Our evaluation of the data revealed that the predominant pattern was linear and that slope scores reflected the degree to which individuals learned the overall category structure. The planned ANOVA was conducted to evaluate the effect of group and feedback timing on slope scores. See Table 1 for full ANOVA results. Levene's test was not significant (p = 0.15). The main effect of group approached but did not achieve significance (p = 0.056). The main effect of feedback timing was significant. Slope scores in the delayed condition indicated better learning (M = 7.40, SD = 4.14) compared to the immediate condition (M = 5.28, SD = 5.18). Differences across groups were revealed in the significant interaction between group and timing. Pairwise comparisons with a Holm correction revealed that older adults had slope scores closer to 10 in the delayed (M = 7.43, SD = 4.35) compared to the immediate condition (M = 3.19, SD = 5.54, p = 0.02). There was no difference in slope scores across timing conditions for younger adults (delayed: M = 7.37, SD = 4.02; immediate: M = 7.59, SD = 3.65; p = 0.8). These findings suggest that while older and younger adults performed comparably in the delayed feedback condition, older adults showed decreased learning under the immediate feedback condition (see Figure 5).
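Although the paper's analyses were run in R, the slope-score idea can be illustrated with a small stand-alone sketch: an ordinary least-squares slope of %BResp regressed on an exemplar's distance from Prototype A (equivalently, the number of features it shares with Prototype B). The distance coding, scale, and %BResp values below are hypothetical, not the study's data.

```python
# Minimal sketch of a slope score: the OLS slope of %BResp on an
# exemplar's distance from Prototype A. All values are hypothetical;
# the study's actual distance coding and scaling are assumptions here.

def slope_score(distances, pct_b_resp):
    """Ordinary least-squares slope of %BResp regressed on distance."""
    n = len(distances)
    mx = sum(distances) / n
    my = sum(pct_b_resp) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(distances, pct_b_resp))
    sxx = sum((x - mx) ** 2 for x in distances)
    return sxy / sxx

distances = [0, 1, 2, 3, 4, 5, 6, 7]      # features shared with Prototype B
pct_b = [3, 13, 23, 33, 43, 53, 63, 73]   # hypothetical %B responses
print(slope_score(distances, pct_b))      # → 10.0
```

A participant responding at random would produce a slope near 0, so a slope approaching the maximum indicates learning of the overall category structure rather than item-by-item memorization.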
ERP data

FRN
Levene's test was significant, indicating unequal variances in FRN magnitude across the younger and older adults (p < 0.001). Thus, we were unable to conduct the planned mixed ANOVA with group as a between-subjects variable for the ANOVA with FRN factor score as the dependent variable. Instead, we conducted two separate ANOVAs for older and younger adults. For each group, we conducted a 2 (feedback timing: immediate vs. delayed) by 2 (feedback valence: positive vs. negative) ANOVA with FRN amplitude as the dependent variable. Figure 6 contains a grand-average waveform of the FRN by group. Figure 7 presents the factor scores which reflect the relative magnitude of the FRN. See Tables 2, 3 for the full ANOVA results.

Figure 5. Mean slope scores for younger and older adults across feedback timing conditions.
Figure 6. Grand-averaged ERPs elicited by positive and negative feedback in response to immediate and delayed feedback at FCz.

FRN: younger adults
There was a main effect of feedback valence. The negative deflection of the FRN was larger for negative (M = −0.21, SD = 1.09) compared to positive (M = 0.41, SD = 0.80) feedback, as expected. The main effect of feedback timing was not significant. However, there was a significant interaction between timing and valence. Pairwise comparisons with a Holm correction revealed a significant difference between positive and negative feedback in the immediate (p = 0.006) but not the delayed (p = 0.3) timing condition. Thus, delaying feedback disrupted the typical signature of the FRN, in which the FRN is larger for negative relative to positive feedback.

FRN: older adults
There were no significant main effects. While numerically the FRN magnitude was larger for negative (M = 0.13, SD = 0.90) relative to positive (M = 0.35, SD = 1.10) feedback, this trend did not reach statistical significance, indicating that older adults did not show the expected pattern of the FRN in either timing condition.
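The equal-variance check that gated these analyses can be sketched in a few lines. The following is the mean-centered form of Levene's statistic (absolute deviations from each group's mean, then a one-way F on those deviations); the analysis itself was run in R, and the factor scores below are invented for illustration.

```python
# Sketch of Levene's test statistic (mean-centered version), the
# equal-variance check used before the group ANOVAs. Data are
# hypothetical FRN factor scores, not the study's.

def levene_w(*groups):
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    # absolute deviations of each observation from its group mean
    z = [[abs(y - sum(g) / len(g)) for y in g] for g in groups]
    z_bar_i = [sum(zi) / len(zi) for zi in z]            # group means of z
    z_bar = sum(sum(zi) for zi in z) / n_total           # grand mean of z
    between = sum(len(zi) * (zb - z_bar) ** 2 for zi, zb in zip(z, z_bar_i))
    within = sum((zij - zb) ** 2 for zi, zb in zip(z, z_bar_i) for zij in zi)
    return ((n_total - k) / (k - 1)) * between / within

young = [-0.2, 0.4, -1.1, 0.3, -0.5, 0.1]   # hypothetical scores
older = [0.1, 2.3, -2.0, 1.8, -1.6, 2.5]    # visibly more spread out
print(round(levene_w(young, older), 2))
```

The statistic is compared against an F distribution with (k − 1, N − k) degrees of freedom; a significant result, as in the FRN data here, argues against pooling the groups into one between-subjects ANOVA.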
Post hoc analyses at FCz
Upon visual inspection, it was observed that the older adult group appeared to show a larger N100 at FCz with immediate relative to delayed feedback. The N100 is evoked by visual stimuli and found to be larger for attended relative to unattended stimuli (Luck et al., 1994). To evaluate whether there was a significant difference in this negative deflection across conditions, we conducted a 2 (feedback timing: immediate vs. delayed) by 2 (feedback valence: positive vs. negative) ANOVA on the factor most closely aligned with the negativity in older adults. There were no significant main effects or interactions.

N170
Levene's test was not significant for P7 or P8 (p > 0.05), suggesting equal variance across groups. Figure 8 contains a grand-average waveform of the N170 by group and electrode. Figure 9 presents the N170 factor scores for each electrode and across conditions. See Tables 4 and 5 for full ANOVA results.

Figure 7. Factor scores reflecting the relative magnitude of the FRN. The y-axis is reversed because the FRN is a negative-going waveform.
Figure 8. Grand-average ERPs elicited by positive and negative feedback in response to immediate and delayed feedback at electrodes P7 (left side of scalp) and P8 (right side of scalp).

P7
There was a main effect of feedback timing. The negative deflection of the N170 was larger for delayed (M = −0.94, SD = 1.03) compared to immediate (M = −0.43, SD = 0.91) feedback. There were no other significant main or interaction effects measured at P7.

P8
There was a main effect of timing. The negative deflection of the N170 was larger for delayed (M = −1.07, SD = 0.99) compared to immediate (M = −0.41, SD = 0.89) feedback. There were no other significant main effects. Differences across groups were revealed within interaction effects.
There was an interaction between group and valence. Pairwise comparisons revealed that the N170 was larger for younger compared to older adults when feedback was negative (younger adults: M = −0.974, SD = 0.803; older adults: M = −0.464, SD = 0.809; p = 0.02) but not when feedback was positive (younger adults: M = −0.78, SD = 1.05; older adults: M = −0.75, SD = 1.25; p = 0.09). There was also a significant three-way interaction. For older adults, N170 magnitude was larger for delayed relative to immediate feedback regardless of feedback valence. For younger adults, N170 magnitude was only larger for delayed relative to immediate feedback when feedback was positive.

Post hoc analyses: P100 at P7 and P8
Upon visual inspection, it was observed that the older adult group appeared to show a larger P100 at P7 and P8 in response to immediate relative to delayed feedback. The P100 is thought to reflect top-down regulation of sensory information processed by the visual cortex (Luck and Kappenman, 2011). Within this interpretation, the P100 amplitude is expected to be larger for attended relative to unattended stimuli. To evaluate the potential for differences in attentional allocation across conditions, we conducted a 2 (group: younger adults vs. older adults) by 2 (feedback timing: immediate vs. delayed) by 2 (feedback valence: positive vs. negative) ANOVA on the factor most closely aligned with the P100. Separate ANOVAs were conducted for P7 and P8. We found a main effect of feedback timing at P7 (F(1, 32) = 9.01, p = 0.005, η² = 0.09) and P8 (F(1, 32) = 4.20, p = 0.049, η² = 0.05). The positive deflection of the P100 was larger for immediate (P7: M = 1.0, SD = 1.0; P8: M = 1.1, SD = 1.06) relative to delayed (P7: M = 0.42, SD = 0.91; P8: M = 0.63, SD = 0.89) feedback. At P7, we also found a group by timing interaction (F(1, 32) = 4.5, p = 0.04, η² = 0.04). The difference between immediate and delayed feedback was significant for older (p < 0.001) but not younger (p = 0.44) adults.
Discussion
The current study aimed to evaluate how altering the timing of feedback influenced feedback processing and learning for younger and older adults. Behavioral data revealed that at the group level participants learned the category structure, as evidenced by an increase in %BResp when an exemplar shared more features with Prototype B. A small subset of participants (~25%) did not show evidence of a linearly increasing relationship between %BResp and number of features shared with Prototype B. Slope data allowed for the reduction of %BResp to a single score to evaluate response as a function of distance from the prototype. Slope score analysis revealed that older and younger adults had comparable learning in the delayed feedback condition but not the immediate feedback condition. Older adults performed worse in the immediate relative to the delayed condition. Delaying feedback may warrant further exploration as a potential means to improve learning in older adults and potentially as a means to equalize performance across younger and older learners.
Electrophysiological data may further clarify behavioral findings. Younger adults showed the expected pattern in the FRN data. There was a larger FRN to negative compared to positive feedback in the immediate but not the delayed condition. These findings are consistent with fast-acting, dopamine-driven processing of immediate feedback that shifts to other circuits when feedback is delayed. Importantly, older adults showed a reduced FRN effect when learning from immediate feedback, suggesting a disruption of feedback processing in the striatum regardless of feedback timing. Age-related limits on older adults' ability to effectively recruit the striatum to process immediate feedback may explain why younger adults outperformed older adults when feedback was immediate and dependent upon fast-acting subcortical reward processing (Nieuwenhuis et al., 2002; Eppinger et al., 2008). The N170 amplitude was larger for both groups in the delayed relative to the immediate feedback condition. Again, this supports the notion that delaying feedback alters the neural processing of feedback and that this is captured by the N170.

Figure 9. Factor scores reflecting the relative magnitude of the N170. Because the N170 is a negative-going waveform, the y-axis is reversed.

One potential neural generator is the MTL (Arbel et al., 2017; Kim and Arbel, 2019; Höltje and Mecklinger, 2020), which supports the integration of temporally discontiguous information. Specifically, research finds that the hippocampus contains "time cells" that encode key events that are separated by a temporal gap (MacDonald et al., 2011, 2013; Kitamura et al., 2015). In the context of the current task, the response (e.g., this animal lives in a cave) and feedback (e.g., incorrect) are temporally discontiguous events in the delayed condition (i.e., separated by a 6,000 ms time gap). What is known about the MTL and the demands of this task make the MTL a potential candidate for the current task. However, further research is
necessary to investigate the neural generator in this context and potential alternative sources (e.g., the fusiform gyrus).

Older adults learned better in the delayed relative to the immediate condition, suggesting that delaying feedback may be a fruitful way to support feedback-based learning in older adults. The locus of the gain in learning for older adults remains to be elucidated. One potential explanation is that delayed feedback allowed older adults to rely on the MTL to update cue-response contingencies when separated by a delay. The MTL may be better suited to support Prototype A/B learning in the setting of age-related neural changes compared to the striatum. While previous research has suggested a steeper age-related decline in the functioning of the MTL relative to the striatum in older adults, there is also a body of work which characterizes age-related dysfunction in reward processing within the striatal system with aging (Mell et al., 2005; Bäckman et al., 2006; Eppinger et al., 2008, 2013; Braver, 2012; Chowdhury et al., 2013; Samanez-Larkin et al., 2014). Changes in reward processing may also disrupt learning within the current task. A series of studies (Holroyd and Coles, 2002; Nieuwenhuis et al., 2002; Eppinger et al., 2008) have evaluated age-related reduction in feedback processing using the FRN. Eppinger et al. (2008) found that even when controlling for performance differences, older adults showed an atypical pattern in the processing of negative and positive feedback. The current findings are consistent with an age-related decline in dopaminergic processing, reflected in symmetric processing of negative and positive feedback in older adults. Considering the current results within this context, age-related changes in the dopaminergic reward system may be more detrimental to performance on a Prototype A/B learning task than age-related changes in the MTL.
Alternatively, it could be that delaying feedback altered attentional recruitment in a manner that was advantageous for older adults. Post hoc analyses revealed an interaction in which, at P7, older but not younger adults showed a larger P100 for immediate relative to delayed feedback. Thus, immediate but not delayed feedback was associated with increased attentional allocation. One potential interpretation of this finding is that when older adults are unable to effectively rely on striatal mechanisms to process immediate feedback, more attention is allocated to the feedback signal.

Differences in demands on cognitive processing speed across the immediate and delayed feedback tasks may also contribute to age-related differences. Aging has long been associated with a gradual reduction in cognitive processing speed (Cerella and Hale, 1994). Delaying feedback may give adults sufficient time to process incoming stimuli and the resultant feedback despite reduced rates of processing. Potential explanations as to why older adults seemingly benefitted from delays in feedback timing are not mutually exclusive and warrant further evaluation to understand neural underpinnings.

10.3389/fnagi.2024.1404128 | Frontiers in Aging Neuroscience | frontiersin.org

The findings of the current study do not align with Lighthall et al. (2018), in which age-related decline was more evident in the hippocampal relative to striatal regions. Differences across studies may come from variations in the timing of immediate feedback. Immediate feedback was presented at 500 ms in the current study and at 1,000 ms in Lighthall et al. (2018). Worthy et al. (2013) suggest that timing differences as small as 500 ms can affect learning by influencing the intracellular chemical concentrations at the time of the dopaminergic reward signal. In support of this hypothesis, Worthy et al.
(2013) found that during an information-integration category learning task, accuracy was highest when feedback was presented at 500 ms compared to 0 ms or 1,000 ms. These findings suggest that differences in timing on the scale of milliseconds may influence learning driven by the striatum and could potentially explain differences in findings across otherwise comparable studies. Of course, other methodological differences between the current study and Lighthall et al. (2018) may also drive differences across studies; thus, future research should evaluate the reproducibility of the current findings. One such difference is the task type. We chose a prototype learning task given its relevance across the lifespan and because the effect of feedback timing had yet to be evaluated. However, in a prototype A/B learning task, feedback may not be informative on a trial-by-trial basis and thus may affect the electrophysiological response to feedback. If a consistent advantage for learning from delayed feedback is identified in older adults, or if individual differences arise, manipulating feedback timing may serve to optimize learning in this population. Techniques aimed at enhancing learning may be particularly useful for older adults with acquired neurologic impairments. Adults with acquired neurologic injury often receive rehabilitation services in which they re-learn skills and learn new compensatory strategies. Rehabilitation specialists must administer treatments in ways that target the impaired system but also successfully engage functioning systems of learning to induce neuroplastic change. As rehabilitation fields continue to move toward theory-driven treatments that delineate which treatment ingredients engage proposed mechanisms of action and how identified ingredients should be administered, feedback timing should not be overlooked.
Limitations
In the current task, similar to probabilistic tasks, feedback was not intended to be useful on a trial-by-trial basis. Individuals needed to use feedback over the course of learning to slowly build a conceptual representation of the category structure. While this type of learning is similar to how humans acquire knowledge of complex category structures, the trial-by-trial utility of feedback may moderate FRN amplitude (Arbel et al., 2013, 2014). Future research evaluating the effect of manipulating feedback timing on feedback processing in declarative learning tasks may provide more insight into how feedback can be leveraged in contexts where adults make responses and receive useful corrective feedback on the accuracy of that response.

Conclusion
This work suggests that in a prototype A/B category learning task, the timing of feedback (immediate vs. delayed) may have distinct consequences for learning in younger and older adults. Differences in learning across groups and timing conditions were associated with differences in electrophysiological response to feedback. Notably, older adults learned better from delayed relative to immediate feedback, potentially due to age-related changes in the neural mechanisms responsible for processing feedback. Future research is needed to determine the probable locus of the learning advantage with delayed feedback in older adults and can work toward understanding how feedback timing may be manipulated to promote learning across the lifespan.

Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Figure 2. Prototype A/B learning task category structure.
Figure 4. Example response patterns. Panel A shows an ideal linear response pattern reflective of learning. Panel B data align with both linear and quadratic trends, suggesting an incomplete increase in %B response with increasing distance from Prototype B. Panel C data do not align with linearly increasing, quadratic, or cubic trends, suggestive of a random response pattern, or no learning.

Table 1. 2 × 2 ANOVA results, with slope score as the dependent variable. dfNum indicates degrees of freedom numerator; dfDen, degrees of freedom denominator; SSNum, sum of squares numerator; SSDen, sum of squares denominator; η²g, generalized eta-squared. * indicates a significant effect.

Table 2. Young adult 2 × 2 ANOVA results, with FRN amplitude as the dependent variable. Abbreviations as in Table 1. * indicates a significant effect.

Table 3. Older adult 2 × 2 ANOVA results, with FRN amplitude as the dependent variable. Abbreviations as in Table 1. No significant effects observed.
Predictors and trajectories of antibiotic consumption in 22 EU countries: Findings from a time series analysis (2000–2014)

Background
This study analyzes the trajectories of antibiotic consumption using different indicators of patients' socioeconomic status and the category and age-group of physicians.

Methods
This study uses a pooled, cross-sectional, time series analysis. The data cover 22 European countries from 2000 to 2014 and were obtained from the European Centre for Disease Prevention and Control, the Organisation for Economic Co-operation and Development, Eurostat and the Global Economic Monitor.

Results
There are large variations in community and hospital use of antibiotics in European countries, and the consumption of antibiotics has remained stable over the years. This applies to the community (b = 0.07, p = 0.267, 95% CI −0.06 to 0.19; b² < 0.01, p = 0.813, 95% CI −0.01 to 0.02) as well as the hospital sector (b = −0.02, p = 0.450, 95% CI −0.06 to 0.03; b² < 0.01, p = 0.396, 95% CI > −0.01 to < 0.01). Some socioeconomic variables, such as level of education, income, Gini index and unemployment, are not related to the rate of antibiotic use. The age-group and category of physicians are associated with the use of antibiotics in the hospital. An increase in the proportion of young doctors (<45 years old) leads to a significant increase in antibiotic consumption, and as the percentage of general practitioners increases, the use of antibiotics in hospitals decreases by 0.04 DDD/1000 inhabitants.

Conclusions
Understanding that age-groups and categories (general/specialist practitioners) of physicians may predict antibiotic consumption is potentially useful in defining more effective health care policies to reduce inappropriate antibiotic use while promoting rational use.
Introduction
Antibiotic resistance is a major public-health problem of global importance because it is related to treatment failure, increased use of health care services and increased mortality [1,2]. Consumption and over-consumption of antibiotics are recognized as the main cause of antibiotic resistance. In response to this problem, the European Union has, over time, put in place community strategies and action plans whose pillar is antimicrobial stewardship, defined as a coherent set of actions which promote using antimicrobials responsibly. Antimicrobial stewardship aimed to provide evidence-based data on possible links between consumption of antimicrobial agents and the occurrence of antimicrobial resistance in humans and food-producing animals, and then to develop EU guidelines for the prudent use of antimicrobials in human medicine and to assist Member States in implementing EU guidelines for the prudent use of antimicrobials in veterinary medicine [3]. At the national level in Europe, many countries have implemented antibiotic stewardship programmes, at national or regional level. These initiatives provide for local surveillance of antibiotic consumption; systematic measuring, evaluating and improving of the quality of antibiotic usage; and regular training of prescribing physicians and other relevant healthcare workers in the diagnostics, treatment and prophylaxis of infections, focusing on appropriate use of antimicrobial agents as well as prevention and control of antimicrobial resistance. For example, some antibiotic stewardship strategies in European countries are based on educational resources (UK and Germany), on public reporting, with data on antibiotic consumption and resistance for hospital and primary care publicly available on a website (UK), or on cross-sectoral antibiotic stewardship networks implemented in different settings (hospital, primary care, long-term care facilities) and at local, regional and national levels (Sweden, France, Spain) [4,5].
Reliable and comparable data on the patterns of national antibiotic drug use and distribution are the starting point for analyzing the antibiotic resistance problem. Since late 2005, the understanding and measurement of inequalities in health and health-care use has been identified as a priority by the WHO Commission on Social Determinants of Health [6]. Several studies have investigated the role of socioeconomic determinants in facilitating inequalities in health and health-care use. Most of the previous studies that assessed the impact of socioeconomic determinants on antibiotic consumption have focused on different countries and time-points [7,8]. The few analyzing the role of socioeconomic determinants in antibiotic consumption at the European level have each focused on a single setting, either the hospital or outside the hospital [9,10]. Several studies measure particular aspects of socioeconomic status, such as educational level, rather than investigating a set of two or more determinants. This study analyzes the trajectories of antibiotic use across 22 EU countries and assesses: (i) how antibiotic consumption has changed in the community and hospital sectors over a 15-year period; (ii) the correlations between antibiotic use and a variety of socioeconomic determinants; and (iii) the correlations between antibiotic use and the category of prescribing physicians or the age of the physician, which is used as a proxy for the experience and background of the prescribing physician.

Methods
This study used a pooled, cross-sectional time series analysis of secondary data for 22 European countries from 2000 to 2014. These countries and years were chosen based on the availability of data. The unit of analysis was each country in each year (country-year).
The countries included in the study were the following: Austria, Belgium, Czech Republic, Denmark, Estonia, Finland, France, Germany, Hungary, Iceland, Ireland, Italy, Latvia, Lithuania, Luxembourg, Netherlands, Norway, Slovakia, Slovenia, Spain, Sweden and the United Kingdom. Official data were obtained from the European Centre for Disease Prevention and Control (ECDC), the Organisation for Economic Co-operation and Development (OECD), Eurostat and the Global Economic Monitor (GEM). The indicators considered relate to: the consumption of antibiotics in two different sectors, namely primary care and hospital; the prescribing physicians, stratified into five age groups, which are 35-44, 45-54, 55-64, 65-74; the physician categories, namely generalist, specialist and not further defined medical doctors; and socio-economic determinants such as the Gini coefficient, household income and the education level of the population aged 25-64, stratified into three classes. These indicators are shown in Table 1, which gives the definition and source for each of them.

α: The DDD (defined daily dose) is the assumed average maintenance dose per day for a drug used for its main indication in adults. This indicator is used to assess antibiotic use in the hospital sector and the community, and it is an internationally accepted parameter for making comparisons between countries.

β: Gini coefficient: the Gini coefficient measures the extent to which the distribution of income within a country deviates from a perfectly equal distribution. Perfect equality is expressed as 0 and full inequality as 100.
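The two indicators defined above can be made concrete with a small sketch. Everything below is illustrative: the consumption figure, the 1 g DDD value and the population are placeholders, not data from the study (the WHO assigns each antibiotic substance its own DDD).

```python
# Illustrative sketch of the study's two headline indicators.
# All numbers are hypothetical placeholders, not the study's data.

def ddd_per_1000_per_day(grams_consumed, who_ddd_grams, population, days=365):
    """Consumption expressed as DDD per 1000 inhabitants per day."""
    return grams_consumed / who_ddd_grams / population / days * 1000

def gini(incomes):
    """Gini coefficient on a 0-100 scale (0 = perfect equality)."""
    xs = sorted(incomes)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum / (n * sum(xs)) - (n + 1) / n) * 100

# e.g., 1.2 tonnes of an antibiotic whose assumed DDD is 1 g,
# consumed over a year by a population of 300,000
print(round(ddd_per_1000_per_day(1_200_000, 1.0, 300_000), 2))  # → 10.96
print(round(gini([10, 10, 10, 10]), 1))                         # → 0.0
```

Normalizing by population and time is what makes DDD/1000 inhabitants/day comparable across countries of very different sizes, which is why the paper can contrast, say, the Netherlands and France on one scale.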
γ: The total disposable income of a household is calculated by adding together the personal income received by all household members plus income received at household level; this includes all income from work (employee wages and self-employment earnings), private income from investment and property, transfers between households, and all social transfers received in cash, including old-age pensions.

Statistical analyses
To assess the relationship between the dependent and independent variables over the 15-year study period, the study adopted a pooled, cross-sectional, time series design. Specifically, this design involved observing the variables for different cross-sections over a given timespan [11]. The dependent variables were indicators 1 and 2, while the independent variables were indicators 3 to 16. A fixed-effects linear regression was chosen after performing a Hausman test with the sigmamore option [12], because the random-effects specification was found to be inappropriate for country-level effects in the adopted model. One advantage of fixed-effects models is that they control for time-invariant heterogeneity among countries [13]. The presence of exogenous time trends in both the dependent and independent variables (i.e., time-fixed effects) was controlled for by adding dummy variables to the model for each of the study years except the first. To avoid model overfitting, indicators 3 to 16 were halved by collapsing age groups, while secondary and tertiary education levels (6 and 7) were merged because they gave similar results. Additionally, no results were reported for indicators 9 and 16 because they are the complements to 100 of the indicators shown in the Results section. The relationships between all the remaining dependent and independent variables were examined separately, resulting in 12 distinct fixed-effects models. This choice was driven primarily by concerns about model over-fitting and multicollinearity.
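The fixed-effects (within) estimator described above can be sketched for the single-regressor case: demean each country's x and y around that country's own means, then pool. Country-specific intercepts, the time-invariant heterogeneity, cancel out exactly. The panel below is a toy example, not the study's data, and the study's actual models (run in Stata, with year dummies and robust standard errors) are richer than this sketch.

```python
# Sketch of a one-regressor fixed-effects (within) estimator on a toy
# country-year panel. Demeaning within each country removes its
# intercept, which is how time-invariant heterogeneity is controlled.

def within_slope(panel):
    """panel: {country: [(x, y), ...]} -> within-estimator slope."""
    sxy = sxx = 0.0
    for obs in panel.values():
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            sxy += (x - mx) * (y - my)
            sxx += (x - mx) ** 2
    return sxy / sxx

# toy panel: y = 2*x + a country-specific intercept
panel = {
    "A": [(1, 12), (2, 14), (3, 16)],   # intercept 10
    "B": [(1, 52), (2, 54), (3, 56)],   # intercept 50
}
print(within_slope(panel))   # → 2.0
```

A pooled OLS on the raw data would be pulled toward the between-country differences in levels; the within transformation recovers the common slope regardless of how far apart the country intercepts sit.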
For all the analyses, the significance level was set at p < 0.05, and listwise deletion was used. The significance of each independent variable was assessed using robust standard errors, based on the results of a modified Wald test for group-wise heteroskedasticity in the regression residuals [14]. All data sets were analyzed using the Stata software package, version 13 (StataCorp. 2013, Stata Statistical Software: Release 13; StataCorp LP, College Station, TX, USA).

Results
The antibiotic use patterns in hospital and community settings are shown in Fig 1. The figure reveals considerable variability among the countries regarding consumption in the primary care sector in 2014. The consumption in the Netherlands was approximately 10 DDD, while in Belgium, France, Italy and Luxembourg it was approximately 30 DDD at the end of the observation period. Five countries (Belgium, France, Italy, Luxembourg, and Slovakia) had DDD values above 24 in 2014. However, the DDD values were already high for these countries in 2000. The results show that hospital use of antibiotics, which is poorly reported in the ECDC database, is very low. Eight countries (Latvia, Finland, France, Lithuania, Denmark, Italy, Slovakia, and the UK) had DDD values above 2 in 2014. In the hospital sector, the variability was very high: three of the countries (Latvia, Finland, France) started with a higher value, two (Lithuania, UK) had a constant DDD, and three (Italy, Slovakia, Denmark) started with a lower value. According to the results, some socioeconomic variables, such as level of education, income, Gini index and unemployment, were not related to the rate of antibiotic consumption, either in hospital or in the community (Table 2). Similarly, the variables related to the age and category of prescribing physicians were not related to the rate of antibiotic consumption in the community.
The only significant findings relate to the association between the consumption of antibiotics in the hospital and some characteristics of the prescribing physicians, namely age and category. Our results highlight that an increase of one percentage point in doctors between 35 and 44 years of age led to an average increase of 0.08 daily doses per 1000 inhabitants, while a one-percentage-point increase in doctors between 45 and 54 years old led to an average consumption decrease of 0.10 doses. The results for the older age classes (55-64; 65-74) were not significant, but they confirm the trend observed in the 45-54-year-old group. Additionally, as the percentage of GPs increases, there is an average decrease of 0.04 DDD per 1000 inhabitants in hospital consumption of antibiotics.

Discussion
It is difficult to compare antibiotic consumption in the community and the hospital sector among the EU countries due to the heterogeneous methodologies used to collect data and the poor availability of data over the years under consideration. Nevertheless, it is clear that there are large variations in community and hospital use of antibiotics in European countries and that the consumption of antibiotics has remained stable over the years. The Netherlands shows the lowest total consumption of antibiotics (11.55 DDD), whereas Belgium, France and Italy had the highest consumption (>30 DDD) in 2014. There was a three-fold difference between the countries with the highest and the lowest use of antibiotics. This difference was already present in 2000. This variability among countries may be affected by the antimicrobial stewardship programmes performed in the different European countries. The Netherlands, which has the lowest levels of antibiotic use, was one of the first countries to adopt a universal, national strategy for AMS in all settings. The Dutch programme was started by a Working Party on Antibiotic Policies founded in 1996 and supported by the government since 1999.
It aimed to provide educational tools in the form of national antibiotic use tests and to publish a national blueprint for an antibiotic policy that can help fellow physicians and pharmacists involved in making such policies at the local level, in hospitals and other settings where antibiotics are dispensed [15]. Italy, by contrast, adopted its first National Action Plan on Antimicrobial Resistance (PNCAR) 2017-2020 in November 2017. The PNCAR is the tool for implementing the Italian strategy. To face the increasing resistance and spread of antibiotic-resistant microorganisms, the PNCAR provides for national coordination, specific objectives and actions through synergy between national, regional and local levels and the different key stakeholders involved, under a governance structure in which the roles of the institutions are clearly defined [16]. In Belgium, the Belgian Antibiotic Policy Coordination Committee (BAPCOC) was officially established in 1999 by royal decree. The overall objective of BAPCOC was to promote judicious use of antibiotics, infection control and hospital hygiene, thereby reducing antibiotic resistance and optimizing care. To address these tasks BAPCOC founded five multidisciplinary working groups, which included ambulatory care, hospital care, awareness campaigns, and infection control. Specifically, BAPCOC focused on the hospital setting through funding of dedicated staff and technical support for antibiotic stewardship (ABS) teams in all Belgian hospitals, the training of over 500 healthcare professionals in ABS, and the integration of surveillance programmes on antibiotic use in hospitals [17,18]. It is not easy to explain the high level of antibiotic use in France. Over the last decade, France implemented three national plans to reduce antibiotic prescriptions.
As part of the plans, the French government initiated a long-term nationwide campaign to reduce antibiotic overuse and control the dissemination of resistant bacteria in the community. The national programme, named "Keep Antibiotics Working," was launched in 2001, targeting both the general public and health care professionals, to encourage surveillance of antibiotic use and resistance and to promote better-targeted antibiotic use. Since 2002, a public service campaign has been launched each winter with the primary goal of decreasing prescriptions. Unfortunately, these policies have merely levelled off antibiotic use, illustrating again the need for effective antimicrobial stewardship [19]. Consumption of antibiotics in the community accounted for between 85% and 95% of total consumption in 2014. The highest community use was in Belgium, France, and Italy (≥28 DDD), and the highest hospital use in Finland and the UK. The Netherlands had the lowest use in both settings (10.6 DDD and 0.9 DDD, respectively). The substantial stability of these patterns may be explained by the behavior of patients and doctors, as underlined by the extant literature. A recent review [20] concluded that patients' demands and the wish to give quick relief to their symptoms seem to favor antibiotic prescriptions, as doctors, especially those in the community, respond to patients' needs by prescribing antibiotics rather than explaining why antibiotics may not be needed. In addition, previous studies suggest that some doctors prescribe because they think that patients expect antibiotics even when the specific case does not require them [21,22]. Indeed, much of the literature agrees that practitioners' judgment of patients' expectations is a major influence on antibiotic prescribing patterns [23].
According to the results, socioeconomic variables such as level of education, income, Gini index and unemployment are not related to antibiotic consumption in either setting. However, some further observations may be made. Although the effect is not significant, it seems that increasing inequality (Gini index) and unemployment rates reduce the consumption of antibiotics in the community. There are no studies that investigate the correlation between income inequality, unemployment and consumption of antibiotics. A study conducted in Switzerland reported a weak association between income inequality and antimicrobial consumption, but no causal link has been established [24]. To account for this correlation, mediating factors are needed, such as unequal access to health services. Inequality in income distribution and poverty can considerably reduce access to health services even when they are available [25]. Another factor is inequity in the clinical consultation, with prescribing physicians favoring patients who are financially better-off in several OECD countries [26]. The results also show that increasing average income reduces antibiotic consumption. Filippini's study on the socioeconomic determinants of regional differences in outpatient antibiotic consumption supports this finding, highlighting that income is negatively related to antibiotic use [7]. In addition, the importance of wealth has been repeatedly reported by the WHO [27,28]. In this regard, the WHO notes that the likelihood of doctors receiving continuing medical education increases with national income, with high-income countries being more likely to implement policies on rational antibiotic use.
Another important finding of this study is that an increase in the proportion of young doctors (<45 years old) may lead to a significant increase in antibiotic prescription and consumption, while an increase in the percentage of older physicians (especially those aged 45-54, with the same trend in older groups) reduces the consumption of antibiotics. One possible hypothesis is that practitioners' experience and professional development activities influence physicians' practice, and that these factors are responsible for the differences in antibiotic prescribing patterns. This suggestion is supported by some of the literature [29,30], and several qualitative studies have demonstrated that physicians tend to meet patients' demands for antibiotics over time instead of properly educating them about rational drug use [23,31]. Another important but unexpected result is that as the number of general practitioners increases, antibiotic consumption is reduced in both the hospital and community sectors. It is difficult to find an explanation for this phenomenon. A previous study has shown some relationship between ambulatory and hospital antibiotic use: when the use of antibiotics in a country is high in the community, it is also likely to be high in hospitals, and vice versa. Assuming that there is a causal link between the percentage of general practitioners and hospital antibiotic consumption, the most important mediating factor may be the pattern of antibiotic consumption in the community. The reduced use of antibiotics may be attributed to the fact that, as the number of general practitioners in an area increases, they are better able to visit patients, establish a relationship of trust and prescribe antibiotic therapy more appropriately. The main weakness of this study is that the analysis of the trend of use was conducted for all antimicrobials, without studying the trajectory of individual antibiotic groups.
A focus on individual antibiotic groups would help to clarify what underlies the stability of the overall trend and which groups behave divergently. A second weakness is that the epidemiology of the populations of the European areas has not been analyzed; a study of the prevalent pathologies would further explain the trend in antibiotic use. A strength of this study is its use of a variety of possible predictors, drawing on different indicators of socioeconomic status, different categories of prescribing physicians and insights from the extant literature [32,33]. Our analyses contribute to the debate on patterns of antibiotic prescription and use. This study provides some understanding of the relevant determinants and suggests that DDDs are significantly shaped by the category and age group of prescribing physicians. Although it is well known that antibiotic consumption varies between countries, the current study confirms this by analyzing a 15-year period and 22 EU countries, and shows that the consumption of antibiotics has remained stable over the years. These results may help to define more effective health care policies to reduce the inappropriate use of antibiotics.
Are diverse societies less cohesive? Testing contact and mediated contact theories

Previous research has demonstrated that there is a negative relationship between ethnic diversity in a local community and social cohesion. Often the way social cohesion is assessed, though, varies across studies and only some aspects of the construct are included (e.g., trust). The current research explores the relationship between diversity and social cohesion across a number of indicators of social cohesion including neighbourhood social capital, safety, belonging, generalized trust, and volunteering. Furthermore, social psychological theories concerning the role of positive contact and its impact on feelings of threat are investigated. Using a sample of 1070 third generation ‘majority’ Australians and structural equation modelling (SEM), findings suggest ethnic diversity is related to positive intergroup contact, and that contact showed beneficial impacts for some indicators of social cohesion both directly and indirectly through reducing perceived threat. When interethnic contact and perceived threat are included in the model there is no direct negative effect between diversity and social cohesion. The theoretical implications of these findings are outlined, including the importance of facilitating opportunities for positive contact in diverse communities.

Introduction

Across Europe and North America, research has found that ethnic diversity is detrimental to a range of social cohesion indicators, most commonly trust and volunteering [1]. These findings are concerning given the high rates of immigration characterizing almost every modern society. Particularly amongst members of majority ethnic groups, it is argued that ethnic diversity causes people to withdraw from society in general [1]. Moreover, people living in ethnically heterogeneous neighbourhoods perceive a greater threat to resources (such as jobs) and to their way of life, which also negatively impacts social cohesion.
Conversely, research also suggests that immigration affords positive contact experiences between members of different ethnic groups, leading to cooperation and respect, which can improve social cohesion [2]. Given these competing theoretical frameworks and findings, it is imperative that further research is conducted in order to gain a clearer understanding of the relationship between ethnic diversity and social cohesion. The current research investigates whether the diversity and social cohesion relationship highlighted by Putnam's [1] seminal work leads people to withdraw from social life. Also examined are the role of perceived threat from minority groups and intergroup contact (contact theory [3]), particularly when contact also reduces threat (mediated contact theory [2]). This work is important as it is the first study of its kind using an Australian sample and offers significant advancement on previous work. The significance and innovation include 1) a test of the role of threat and intergroup contact in helping to explain the relationship between ethnic diversity and social cohesion; 2) the inclusion of a comprehensive range of indicators of social cohesion (i.e., generalized trust, volunteering, neighbourhood social capital, safety and belonging) [4]; 3) the use of an Australian sample, where as a multicultural nation there are more opportunities for intergroup contact; and 4) a total effects analysis to assess whether the relationship between diversity and social cohesion is in fact negative when all of the relevant theoretical constructs are included in the one model [5]. Before the models are outlined in detail, key constructs are defined and support or otherwise for the existing theoretical frameworks is described.
Defining ethnic diversity and social cohesion

Ethnic diversity can be understood in a number of different ways, including as the likelihood that two randomly selected people from the same neighbourhood will have different nationalities (ethnic fractionalization) or as the proportion of immigrants within a particular neighbourhood (relative group size) [6]. Fractionalization has been criticized for being "colorblind", as a community with 80% whites and 20% blacks has the same degree of ethnic diversity as a community with 20% whites and 80% blacks; for this reason relative group size is preferred. Ethnicity can also be defined using a range of characteristics including physical presentation, language, citizenship or country of origin [7]. The measurement of ethnic diversity varies between studies; however, there is a plethora of research to support the use of linguistic measurements, as language is both an objective and salient feature of ethnicity [6,8-12]. Thus, in the current research a measure of relative group size and ethnicity defined by language are used. There is also variability in the definition and measurement of social cohesion [1,4,13]. Schiefer and van der Noll [4] define social cohesion as "a descriptive attribute of a collective, indicating the quality of collective togetherness" (p. 17). Three characteristics can be used to evaluate the quality of togetherness within a given community: 1) social relations (the quality and quantity of people's relationships with other members of their community); 2) attachment or belonging (meaning identification with the social unit to which you belong); and 3) orientation towards the common good (including feelings of responsibility for the common good and willingness to comply with social rules and norms). Having defined the key constructs, the research regarding the relationship between ethnic diversity and social cohesion will be outlined in more detail.
The main findings are 1) that ethnic diversity is negatively related to social cohesion (the ethnic diversity-social cohesion relationship), 2) that perceived threat negatively mediates this relationship (the role of threat), enhancing the detrimental effect of diversity, 3) that intergroup contact positively mediates this relationship directly (contact theory), and 4) that it does so also by reducing threat perceptions (mediated contact theory). Each of these patterns is further investigated in the current research.

The ethnic diversity and social cohesion relationship

Research suggests that neighbourhoods with high levels of ethnic diversity have correspondingly lower levels of social cohesion [1]. More recently, a large body of research has investigated the correlates of neighbourhood-level diversity [1,14]. In Putnam's [1] seminal study, almost all indicators of social cohesion (e.g., attitudes to government and media, happiness, number of close friends, time spent watching television, likelihood of volunteering, and trust in others, amongst others) were negatively related to ethnic diversity, the only exception being organisational involvement. No subsequent study has used such a broad range of indicators; however, the relationship has been consistently replicated with outcomes including trust, volunteering, and organisational involvement [5,15]. The finding that ethnic diversity is negatively related to social cohesion could have important social consequences. Before drawing firm conclusions, though, it should be noted that across a number of studies evidence suggests that the effects of ethnic diversity cannot be generalized across different indicators of social cohesion. Such findings raise questions about social cohesion as a unified construct and qualify conclusions concerning the relationship between diversity and social cohesion.
Even in Putnam's [1] study, aspects of social cohesion such as organizational activity (including religious activity) and political engagement were found to be positively (not negatively) related to ethnic diversity. Research in the UK found that whilst ethnic diversity was negatively related to generalized trust, there was no evidence that it impacted on frequency of volunteering or attitudes towards neighbours [14,16]. The above research suggests that ethnic diversity differentially affects various aspects of social cohesion. These findings raise concerns about much of the research to date, which examines the impact of diversity and perceived threat using only one or two dimensions of social cohesion [3,17,18]. If, as this research suggests, it is not possible to generalize results across measures of social cohesion, then the relationship between ethnic diversity and social cohesion can only be understood by accounting for a diverse range of indicators of social cohesion. A possible explanation for this discrepancy, provided by van der Meer and Tolsma [7], is that the negative effects of heterogeneity are limited to intra-neighbourhood cohesion: ethnic diversity impacts trust and cooperation between neighbours, but does not negatively impact social cohesion at the country or city level. Research examining ethnic diversity and social cohesion at the country or city level has consistently failed to demonstrate a negative relationship, suggesting that this effect can only be found at the neighbourhood level [19-22]. A notable exception is the US, where support for a negative relationship between diversity and social cohesion (referred to as constrict theory, where there is withdrawal from one's own and other ethnic groups [1]) has been found for measures of social cohesion other than intra-neighbourhood cohesion [7].
Causal mechanisms that could explain these findings are currently lacking; working theories concern the salience of ethnic group sizes. In other words, the ethnic profile of your neighbourhood is easier to judge than the ethnic profile of your country, so the influence of neighbourhood ethnic diversity on outcomes such as trust and volunteering is more pronounced. Additionally, socio-structural and economic factors have been found to be important. For instance, in US samples, the relationship between ethnic diversity and social cohesion becomes non-significant when the effects of segregation (when members of different ethnic groups are isolated from each other) are taken into account [2,23,24]. As such, although previous research supports a negative relationship between diversity and social cohesion, there are important caveats and a need for further research.

Explaining the relationship between diversity and social cohesion

Putnam [1] found a negative relationship between diversity and social cohesion using a wide range of indicators. According to Putnam [1] this occurs because in ethnically heterogeneous communities there is increased threat and fear that can lead to a withdrawal from social relationships and community life, either from one's own and other ethnic groups (constrict theory) or from other ethnic groups specifically (conflict theory). Stemming from this work [1], the first aim of this paper is to extend previous work by exploring the relationship between ethnic diversity and social cohesion using a wider range of social cohesion indicators. H1: Ethnic diversity will be negatively related to social cohesion (ethnic diversity-social cohesion relationship). As shown in Fig 1, the relationship from X to Y (path c (c')) will be negative. The second aim is to examine whether perceived threat negatively mediates the diversity-social cohesion relationship, enhancing the detrimental effect of diversity.
The central idea is that ethnic diversity leads to perceived threat (symbolic and realistic), where people fear, hold negative attitudes toward, and withdraw from members of other ethnic groups [24]. Perceived threat refers to the belief, held by majority ethnic group members, that their physical well-being and worldview will be threatened when minority ethnic groups increase in size [24-26]. In a number of studies, threat perceptions amongst members of the majority ethnic group were positively related to immigration levels, exclusionary attitudes and antisocial behaviours, including prejudice towards minority group members [24,27,28]. Consequently, it seems that the negative relationship between ethnic diversity and social cohesion is enhanced by the perception, held by majority ethnic groups, that increased ethnic heterogeneity is detrimental to their well-being [24,27,28]. H2: Threat will negatively mediate the relationship between ethnic diversity and social cohesion (role of threat). As shown in Fig 1, the relationship from X to Y through M2 (path a2, path b2, and indirect effect path d2) will be significantly negative. More recently, social psychology has applied insights regarding the beneficial effects of intergroup contact to help explain the relationship between ethnic diversity and social cohesion [2,29]. Intergroup contact refers to interactions between members of different ethnic groups. Not unexpectedly, such interactions become more common when the number of minority group members within a population increases. There is a well-established field of research regarding the potential for intergroup contact to promote positive intergroup outcomes, especially under the optimal conditions of equal status, common goals, intergroup cooperation, and authority support [2,29].
Much of the research on intergroup contact has focused on the impact of positive contact, such as interethnic friendship, which facilitates these optimal conditions and positive intergroup relations [2,29]. Recent studies have demonstrated that positive interethnic friendship and social interactions in heterogeneous neighbourhoods improve outgroup attitudes, mitigating the negative effects of increased ethnic diversity [3,30,31]. Along these lines, the current research employs a measure of interethnic friendship, which is commonly used as a proxy for positive intergroup contact. H3: Intergroup contact will positively mediate the relationship between ethnic diversity and social cohesion (contact theory). As shown in Fig 1, the relationship from X to Y through M1 (path a1, path b1, and indirect effect path d1) will be significantly positive. Research also suggests that contact impacts social cohesion through a reduction in prejudice; this effect is known as mediated contact theory [5,32]. Thus, intergroup contact can mediate the ethnic diversity and social cohesion relationship in two ways: 1) by directly increasing social cohesion and 2) by reducing threat perceptions. Finally, when all of the direct and indirect effects described above are taken into account, ethnic diversity and social cohesion are no longer negatively related [5]. In a study by Schmid and colleagues [5], when a total effect was calculated (the sum of the direct relationship and the three mediation effects), ethnic diversity was not related to intergroup trust or out-group attitudes, although it was significantly negatively related to intragroup trust and neighbourhood trust. H4: Intergroup contact will negatively mediate the relationship between ethnic diversity and threat, and in turn this effect will be positively related to social cohesion (mediated contact theory).
As shown in Fig 1, the relationship from X to Y through M1 and M2 (path a1, path b1, path a2, path b2, path a3, path d1, path d2 and path d3) will be significantly positive in total. As such, the ethnic diversity and social cohesion relationship cannot be understood without accounting for the mediation effects of negative outgroup attitudes and intergroup contact. The research is important because it integrates the various trajectories of work that have surrounded Putnam's [1] original study and includes a range of indicators of social cohesion [5,32]. Putnam [1] is the only study to examine the relationship between ethnic diversity and social cohesion using a comprehensive measure of social cohesion. The aim of the present study is to extend previous work by exploring the relationships between ethnic diversity, social cohesion, threat, and intergroup contact using a wider range of social cohesion indicators. In line with Chan et al. [13], the current study will measure generalized trust, volunteering, neighbourhood social capital, safety and belonging as indicators of social cohesion.

The present study

The measure of ethnic diversity used in this study was the proportion of people within a neighbourhood who have immigrated to Australia from a non-English-speaking country. It is more common to measure diversity as the likelihood that two randomly selected members of the population will come from different racial groups, using basic categorisations of race (such as Hispanic, non-Hispanic white, non-Hispanic black and Asian) that vary between studies. As discussed above, the present study uses a linguistic ethnicity measure because, as Fearon [9] argues, language is objective and easily observable. Another important aspect of this study is the use of an Australian sample. Two previous studies have been conducted in Australia, but only with respect to trust [33,34].
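The total-effect logic used by Schmid and colleagues [5] can be made concrete with a small numerical sketch. The path coefficients below are invented for illustration only (they are not estimates from this study); the point is how the direct effect and the three indirect effects combine into a total effect.

```python
# Hypothetical path coefficients for the model in Fig 1.
# X = ethnic diversity, M1 = intergroup contact, M2 = perceived threat,
# Y = a social cohesion indicator. All values are illustrative.
c_prime = -0.02        # direct effect X -> Y
a1, b1 = 0.30, 0.25    # X -> M1 and M1 -> Y (contact theory)
a2, b2 = 0.10, -0.40   # X -> M2 and M2 -> Y (role of threat)
a3 = -0.35             # M1 -> M2 (contact reduces threat)

d1 = a1 * b1           # indirect effect via contact
d2 = a2 * b2           # indirect effect via threat
d3 = a1 * a3 * b2      # serial indirect effect via contact, then threat
total = c_prime + d1 + d2 + d3
print(round(total, 3))
```

In this invented example the positive contact paths (d1 and d3) outweigh the direct negative effect and the threat path, so the total effect comes out slightly positive, mirroring the argument that diversity need not be negatively related to cohesion once contact and threat are modelled.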
It has been argued that research regarding social cohesion and social capital has a tradition of being conducted in ethnically homogeneous, traditional societies [18]. By contrast, approximately 1 in every 3 people residing in Australia is an immigrant (comparatively, approximately 14% of the population residing in the United States are migrants) [35]. By conducting this research in an immigrant nation, namely Australia, the current study addresses this gap and explores how social cohesion works where there are many ethnic groups and more opportunities for intergroup contact. This study also controlled for a number of variables, including age, education, employment status, gender and socio-economic status. As discussed above, these variables have been used in past studies and have been shown to be highly correlated with social cohesion [1,14]. It is, therefore, important to account for the effect of these variables on the relationship between ethnic diversity and social cohesion.

Existing data

This study was approved by Monash University Human Research Ethics Committee, reference number CF07/1240-2007000319. Data used in the present study were collected as part of the 2014 Mapping Social Cohesion study, conducted by the Scanlon Foundation [36]. There have been seven Mapping Social Cohesion reports published since 2007. The aim of these reports is to summarise results from an annual survey that explores the social impact of immigration on Australian society. Each survey builds on the previous year and is used to create the Scanlon Monash Index of Social Cohesion. The 2014 Mapping Social Cohesion report used the same data set as the current study [36].

Participants

Participants were all born in Australia, as were both of their parents. The sample thus consists only of third-plus generation Australians, i.e. non-immigrants when immigrants are defined as people born in other countries or whose parents were born overseas [37].
These Australians are the most likely to experience feelings of threat from immigration. Table 1 displays the demographic information for this sample. There were 1070 Australian participants: 557 males (52.06%) and 513 females (47.94%). This is almost equivalent to the proportion of third-plus generation Australians within the general population who are males (49.17%) and females (50.83%) [38]. Given the third generation sample is predominately Anglo-Australian, no ethnicity demographic question was included in the survey. The largest age category was 25-34 (21.40% of participants) [38].

Materials and procedure

Participants were recruited online and invited to participate directly via email. Respondents received 300 points (equivalent to $3.00 AUD) for their participation in this survey. The survey included a range of measures and administrative data concerning diversity levels and demographic backgrounds. Socio-economic indicators for neighbourhoods were available from the Australian Bureau of Statistics [39]. Of the total 80 items in the Scanlon Mapping Social Cohesion survey, the current study analysed 21 in order to test the proposed hypotheses. Each measure that was modelled in the current study is described below (see all items in S1 File). Where possible, the psychological constructs of threat and the social cohesion sub-factors were examined with confirmatory factor analysis (CFA) in order to test whether the items validly indicate the respective latent construct. Reliability analysis results are presented below. Intergroup contact: Mediator 1. Intergroup contact was measured using two behavioural items (α = .74). Both items (e.g., "Overall, approximately how many of your friends are from other cultures?" and "How many of your best friends are from other cultures?") concerned quantitative interethnic friendships. Participants reported how many of their friends are from other cultures.
There were 11 possible responses, from "none" to "more than 10", in the original survey. These items were recoded onto a 1-7 scale from "no friends" to "more than 5". This scale could be used as a continuous variable rather than a truncated count variable, which was preferable, as well as being more reflective of other intergroup contact measures used in the existing literature [40]. Threat: Mediator 2. Threat was measured using five items (α = .81). Four of the items (e.g., "Do you feel positive, negative or neutral about the following categories of people coming to live in Australia as permanent or long-term residents?": (1) "Skilled workers [e.g., doctors or nurses, plumbers etc.]", (2) "Those who have close family living in Australia [i.e., parents or children]", (3) "Refugees who have been assessed overseas and found to be victims of persecution and in need of help", and (4) "Young people who want to study in Australia") were on five-point scales ranging from "strongly positive" to "strongly negative". The fifth item ("What do you think of the number of immigrants accepted into Australia at present? Would you say it is. . .?") was on a three-point scale from "too low" to "too high" and was reverse coded so that a higher score would reflect a greater threat perception. All items were then z-scored before creating the threat measure. Social cohesion sub-factors: Outcome variables. Consideration of item content as well as a review of the literature suggested that 11 of the items provided in the Mapping Social Cohesion survey should be used in the present study to measure social cohesion. These items were theorized to belong to three distinct components, which were named neighbourhood social capital, safety, and belonging. In all cases where there was variability in response formats, items were turned into z-scores before forming scales. 1. The first factor, neighbourhood social capital, consisted of four items.
The first three items (e.g., "People in my local area are willing to help their neighbours?", "My local area is a place where people from different national or ethnic backgrounds get on well together.", "I am able to have a real say on issues that are important to me in my local area.") were on a five-point scale from "strongly agree" to "strongly disagree". The fourth item (e.g., "Would you say that living in your local area is becoming better or worse, or is it unchanging?") was on a five-point scale from "much better" to "much worse". These items measured the amount of social capital (including informal help and social ties) which participants thought they had within their local area (α = .77). Items were reverse coded so that higher scores reflected more perceived neighbourhood social capital within their local area. 2. The second factor, safety, comprised two items. The first item ("How safe do you feel walking alone at night in your local area?") was originally reported on an eight-point scale; however, four of the possible responses ("neither safe nor unsafe", "never walk alone at night", "didn't know" and "preferred not to answer") made up less than five percent of the total responses and were not appropriate for a Likert-type scale. Thus, the first item was recoded to a four-point scale from "very safe" to "very unsafe" and all other responses were treated as missing data. The second item ("Thinking about all types of crime in general, how worried are you about becoming a victim of crime in your local areas?") was on a four-point scale from "very worried" to "not worried at all". Together, these items were thought to reflect participants' beliefs about the safety of their local area (α = .77). Items were reverse coded so that a higher score reflected a greater feeling of safety. 3. The belonging factor comprised three items. The first two items ("To what extent do you take pride in the Australian way of life and culture?"
and "To what extent do you have a sense of belonging in Australia?") were on four-point scales from "to a great extent" to "not at all". The third item ("In the modern world, maintaining the Australian way of life and culture is important.") was on a five-point scale from "strongly agree" to "strongly disagree". These items were thought to reflect whether the Australian way of life was important to participants and the factor was named belonging (α = .79). Items were reverse coded so that perceiving the Australian way of life as important was reflected by a high score. The remaining measures were 4) one item measuring generalized trust ("Generally speaking would you say that people can be trusted or that you can't be too careful?", using a three-point scale from "can't be too careful" to "can be trusted") and 5) one item measuring volunteering ("How often do you participate in voluntary activity?", using a five-point scale from "less than once a year" to "at least once a week"). The generalized trust measure has been widely used [18,31] and represents an important indicator of social cohesion [13]. Diversity index. A diversity index [39] using 2011 census data was merged with the survey data set and used as an independent variable. Diversity is the proportion of people residing in each postcode who were born either in Australia or overseas in an English-speaking country. English-speaking countries were limited to the UK, Ireland, US, Canada, New Zealand, and South Africa. The formula used to calculate the diversity score for each postcode was 1 - (number of residents born overseas in a non-English-speaking country ÷ total number of residents). This item was reverse coded in the current study, so that higher scores would be indicative of greater ethnic diversity. Socio-Economic Indexes for Areas (SEIFA). SEIFA scores were calculated using 2011 census data by the Australian Bureau of Statistics (ABS) [39].
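The diversity formula above is simple enough to state directly in code. The following is an illustrative sketch only; the function and variable names are my own, not from the study or the ABS:

```python
def diversity_score(n_non_english_overseas_born, n_total_residents):
    """Postcode diversity per the formula described in the text.

    The raw index is the proportion of residents born in Australia or
    in an English-speaking country:
        1 - (overseas-born in a non-English-speaking country / total residents)
    It is then reverse coded so that higher scores mean greater diversity.
    """
    homogeneity = 1.0 - (n_non_english_overseas_born / n_total_residents)
    return 1.0 - homogeneity  # reverse coded: higher = more diverse

# Example: 250 of 1000 residents born overseas in a non-English-speaking country
score = diversity_score(250, 1000)  # 0.25
```

Note that the reverse coding of a 0-1 proportion simply returns the overseas-born share itself; the two-step form above mirrors the way the index is described in the text.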
SEIFA combines four indexes that have been created from social and economic census data. A score for a Statistical Area Level 1 (SA1) is created by adding together the weighted characteristics of that SA1 and standardizing the total (mean = 1000, standard deviation = 100; [41]). A lower score indicates that an area is relatively disadvantaged compared to an area with a higher score. Demographic covariates. Based on previous research discussed above, it was deemed necessary to control for a number of individual and socio-economic factors, including gender, age, employment status, education, financial status and the SEIFA score of the participants' local area [1,14]. Analytical strategy. Multilevel analysis was not necessary even though the participants were nested in postcodes, because the mean number of participants in each postcode was 1.81, so design effects were negligible [42]. The main SEM analysis using Mplus included both observed and latent variables. Fig 1 shows the hypothesized paths between diversity, intergroup contact, perceived threat, and social cohesion. The three social cohesion factors, as well as the items measuring trust and volunteering, were tested one by one as outcome variables. Each of the social cohesion and perceived threat factor variables was examined as a measurement model (i.e., as a latent variable with item loadings). Modification indices were used during the main SEM analysis to improve model fit. Correlations between unknown common random errors among some items (only items belonging to the same factor) were estimated, rather than assumed to be fixed at zero, when the residual correlations were deemed large and made theoretical sense. A series of hierarchical and contrasting SEMs were conducted for each of the social cohesion outcomes. Model 1 examined the impact of the six demographic covariates on social cohesion.
Model 2 included the independent variable of ethnic diversity in addition to the covariates in explaining social cohesion. Model 3 further included perceived threat as another explanatory variable, whereas Model 4 examined the mediation effect of threat in the relationship between diversity and social cohesion with all covariates controlled. Model 5 added contact as another explanatory variable of social cohesion, and subsequently Model 6 tested contact theory, which concerned whether contact mediates the impact of diversity on social cohesion when demographic variables are controlled. Model 7 tested mediated contact theory, which required estimating three indirect effects. In addition to the mediation effects of threat and contact, described above, this model tested whether contact mediates the relationship of ethnic diversity to threat and the effect of this mediation on its subsequent relationship to social cohesion. Data screening Prior to the main analysis, variables were screened for missing data, normality, outliers and the assumptions of multivariate normality. For the main SEMs, the maximum likelihood (ML) method of expectation maximization (EM) was used to impute missing data [43]. Compared to traditional regression methods, EM provides a less biased set of imputed values [44]. In order to account for non-normality of variables, the ML estimator with robust standard errors (MLR) was selected in Mplus for estimating the model coefficients [45]. Finally, inspection of bivariate scatterplots confirmed the linearity and homoscedasticity of the data. Table 2 displays the inter-variable correlations for the variables of interest. The inter-scale correlation matrix for all variables is presented in Table 3, including the control variables gender, age, employment, finances, education and SEIFA index. There was no evidence of multicollinearity or singularity, which occurs when variables are correlated too highly (.90 and above) [44].
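For intuition, the serial indirect ("mediated contact") effect estimated in Model 7 is, in essence, the product of the standardized paths along the chain diversity → contact → threat → cohesion. The sketch below illustrates the idea on synthetic data using simple bivariate standardized slopes; it is a deliberate simplification of, not a substitute for, the Mplus SEMs used in the study, and all coefficients and variable names are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1070  # matches the study's sample size

# Synthetic standardized variables following the hypothesized chain:
# diversity -> contact -> threat -> cohesion
diversity = rng.standard_normal(n)
contact = 0.3 * diversity + rng.standard_normal(n)
threat = -0.4 * contact + rng.standard_normal(n)
cohesion = -0.5 * threat + rng.standard_normal(n)

def std_slope(x, y):
    """Standardized bivariate regression slope (equals Pearson's r)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return (x * y).mean()

# Path coefficients along the chain
a = std_slope(diversity, contact)   # diversity -> contact (positive)
b = std_slope(contact, threat)      # contact -> threat (negative)
c = std_slope(threat, cohesion)     # threat -> cohesion (negative)

# The serial indirect effect is the product of the three paths; with the
# signs above, two negatives multiply out to a positive overall effect.
indirect = a * b * c
```

This is why, in the results that follow, a positive "via contact and threat" effect can emerge even when neither single-mediator path is significant on its own.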
Reliability analysis was performed for the measures of perceived threat, intergroup contact and social cohesion. As detailed above, Cronbach's alpha levels ranged from .74 to .81, suggesting acceptable internal consistency [44]. In addition, all corrected item-total correlations were above .30, indicating that items correlated to an acceptable degree with the other items in their scale. Main analysis: Structural Equation Modelling (SEM) The measurement models and path analysis using SEM allowed testing of all of the direct and indirect relationships between ethnic diversity and social cohesion. There were five social cohesion measures, thus six groups of hierarchical models were run in total. As displayed in Tables 3 and 4, in all cases the χ2 statistic was significant, which suggested poor model fit. However, large sample sizes will cause χ2 to be significant even when there is no significant difference between the observed and predicted correlation matrices [46]. For this reason, other fit indices were considered, including relative χ2, RMSEA, CFI and SRMR. Each of the eight models had acceptable model fit indices, suggesting the theorized models fit the observed data. Explained variance (R²) was also calculated for each measure. As can be seen in Tables 3 and 4, Model 7 accounted for around 20% of total variance in the case of neighbourhood social capital, safety and belonging. Also, Model 7 accounted for around 10% of the variance in volunteering and generalized trust. Post-hoc power analysis showed that, with a sample size of 1070, power exceeded .99 for each of these effect sizes. The large sample size gave this study high statistical power; thus, both significance level and effect size should be considered when interpreting results. Generalized trust. Generalized trust was significantly positively related to age (ß = .10, p = .016) and education (ß = .13, p < .001). Table 3 shows Model 7 with all effects examined.
Older people and those who were more highly educated reported higher levels of generalized trust. Ethnic diversity was not directly related to generalized trust (H1). The indirect effect of diversity on generalized trust via contact was not significant, and neither was the indirect effect via threat (H2; H3). However, the indirect effect via contact and threat was significantly positive (H4; ß = .02, p < .001). More diversity was associated with more contact, and more contact was associated with less threat, which led to more generalized trust. The total relationship between ethnic diversity and generalized trust was significantly positive (ß = .08, p = .014), meaning that, when all the effects of contact and threat were taken into account, ethnic diversity was associated with higher levels of generalized trust. Volunteering. Volunteering was significantly negatively related to diversity (H1; ß = -.06, p = .034), meaning that increases in ethnic diversity were associated with decreases in frequency of volunteering. Intergroup contact significantly mediated the relationship between ethnic diversity and volunteering (H2; ß = .02, p = .037). The increase in intergroup contact which accompanied ethnic diversity led to greater frequency of volunteering. No other mediation effects were significant (H3; H4). The total relationship between ethnic diversity and volunteering was not significant. Neighbourhood social capital. Neighbourhood social capital was significantly positively related to age (ß = .15, p < .001), education (ß = .11, p < .001), and income (ß = .20, p < .001). Older, more educated people, and those who were financially secure had more positive views of their communities (see Table 3). Neighbourhood social capital was not significantly related to ethnic diversity (H1). The indirect effect via contact was significantly positive (H3; ß = .07, p < .001), as was the indirect effect via contact and threat (H4; ß = .02, p < .001).
This finding suggests that neighbourhoods higher in ethnic diversity experienced more intergroup contact, which was related to greater neighbourhood social capital, partly because of reduced threat perceptions. No other indirect effects were significant (H2). The total relationship between ethnic diversity and neighbourhood social capital was not significant. Safety. As shown in Table 4, safety was positively related to income (ß = .24, p < .001) and SEIFA score (ß = .09, p = .047), suggesting that people who were more financially secure and lived in areas with high socio-economic status believed their neighbourhoods were safer. Safety was significantly negatively related to gender (ß = -.19, p < .001), suggesting that men were more likely to report living in a safe area. Safety was significantly negatively related to ethnic diversity (H1; ß = -.07, p = .031), meaning that people who lived in highly diverse neighbourhoods reported feeling less safe than people in neighbourhoods with more ethnic homogeneity, even after controlling for socioeconomic factors. Neither intergroup contact nor perceived threat mediated the relationship between ethnic diversity and safety on their own (H2; H3). The indirect effect via contact and threat was found to be significantly positive (H4; ß = .01, p = .010). The total relationship between ethnic diversity and safety was not significant. This suggests that increases in diversity were associated with increases in contact, and increases in contact with lower threat perceptions, such that increases in diversity did not adversely affect perceived safety. Belonging. Older people reported more belonging than young people, and women reported more belonging than men. People who were employed, retired or students reported higher belonging than people whose main activity was home duties or other. Belonging was not related to ethnic diversity (H1).
Intergroup contact positively mediated the relationship between ethnic diversity and belonging (H3; ß = .03, p = .028). No other indirect effects were significant (H2; H4). The total relationship between ethnic diversity and belonging was not significant. This suggests that people living in more ethnically heterogeneous areas did not report a lower sense of belonging. Discussion This study offers a significant advance on previous work, not only because the data could be used to assess the role of threat and intergroup contact in the diversity-social cohesion relationship but also because a more comprehensive range of indicators of social cohesion (i.e., generalized trust, volunteering, neighbourhood social capital, safety, and belonging) is included. Additionally, a total effects analysis was conducted in order to assess whether the diversity and social cohesion relationship is in fact negative when all the constructs (and paths) are included in one model [5]. More specifically, it was hypothesized that the relationship between ethnic diversity and social cohesion would be negative (H1). It was also hypothesized that threat would negatively mediate the relationship between diversity and social cohesion (H2). Drawing more explicitly on recent social psychological theory and research, it was predicted that intergroup contact would positively mediate the relationship either directly (contact theory; H3) or through perceived threat (mediated contact theory; H4). Evaluation of hypotheses Using an Australian sample, there was some evidence to support the notion that ethnic diversity is associated with less social cohesion, but only for certain social cohesion indicators (see Table 5 for a summary). Ethnic diversity was significantly negatively related to volunteering and safety (H1). Perceived threat did not negatively mediate the relationship between ethnic diversity and the social cohesion variables (H2).
Greater ethnic diversity had some negative implications for safety and volunteering, but no other aspect of social cohesion, and this relationship was not related to threat perceptions. Exploring the role of intergroup contact (contact theory), there was significant mediation for the social cohesion indicators of volunteering, neighbourhood social capital and belonging (see Tables 3 and 4). There was also support for mediated contact theory, as there was evidence of a significant positive effect from ethnic diversity to generalized trust, neighbourhood social capital, and safety when intergroup contact was included as a mediator of the relationship between ethnic diversity and perceived threat (H3). Together, this suggested that, for all the social cohesion variables, the increase in intergroup contact which accompanied increased ethnic diversity had beneficial impacts. Looking at the total relationships (the sum of direct and indirect effects), none of the various social cohesion indicators assessed in the Mapping Social Cohesion survey were negatively related to ethnic diversity (H4; see Table 5). These findings are important because they suggest that ethnic diversity does not have an adverse effect on social cohesion in this Australian sample. Moreover, for generalized trust (the most commonly used indicator of social cohesion), ethnic diversity was found to have a beneficial impact. The total relationship between ethnic diversity and generalized trust, which took into account the effects of intergroup contact, was significantly positive. The role of threat and intergroup contact The findings of the current study suggest support for constrict theory [1] with respect to volunteering and safety when contact and threat were not in the model. This conforms to previous research which has shown that these variables are negatively related to neighbourhood ethnic heterogeneity [1,33].
However, it was also found that ethnic diversity was not directly related to generalized trust, neighbourhood social capital, or belonging, which is not in line with some previous research. In Putnam's [1] work, all of these variables were lower in ethnically heterogeneous neighbourhoods. Several other studies have been able to demonstrate a negative relationship between ethnic diversity and generalized trust [3,18]. As such, the results of the current study support past research concerning the relationship between diversity and volunteering and safety, but in a limited manner, because once other mediators are taken into account the relationship becomes non-significant [1,33]. The lack of a direct negative relationship between ethnic diversity and social cohesion contradicts past findings, including the previous research conducted in Australia [3,19]. Role of threat. The present research attempted for the first time to test the perceived threat mediation effect on variables such as generalized trust, belonging, neighbourhood social capital, and safety. Previous research has shown that perceived threat negatively mediated the relationship between ethnic diversity and social cohesion variables, including prejudice and volunteering [24,28,40]. The present research found that this was not the case. More research is needed to establish whether unique aspects of this study, including the use of an Australian sample, as well as the measures of threat and social cohesion that were used, can account for this disparity in results. Contact theory. The current finding that intergroup contact positively mediated the relationship between ethnic diversity and social cohesion is consistent with previous literature [3,30,31]. For instance, past research has shown that the increase in intergroup contact which is associated with greater ethnic diversity has positive implications for generalized trust, volunteering, and prejudice when contact is positive.
These findings are consistent with the current research. Our study showed that intergroup contact positively mediated the relationship of ethnic diversity to neighbourhood social capital and belonging. Overall, the results indicate that the relationship between ethnic diversity and social cohesion is mediated by positive intergroup contact [3,30,31]. Mediated contact theory. Finally, studies have found that the increase in intergroup contact which accompanies ethnic diversity significantly reduces perceptions of threat when the contact is positive. Moreover, this reduction can significantly improve generalized trust and reduce negative outgroup attitudes [15,31,45]. The present study found support for mediated contact theory for additional indicators of social cohesion other than generalized trust. The mediated contact effect had positive implications for safety (that is, the perceived safety of the participants' neighbourhood). Looking at the total effect, with all the core constructs in the model, ethnic diversity was not negatively related to any aspect of social cohesion. This finding contradicts past research [1,14,18]. One previous study which tested total effects found that ethnic diversity was not related to out-group trust and was negatively related to in-group and neighbourhood trust [40]. Extending this work, using an Australian sample and more diverse indicators of social cohesion, the current research found that the relationship between ethnic diversity and most social cohesion variables was significantly positive, not negative. Possible theoretical explanations for this finding are explored below. Implications for theory The results of this study have several theoretical implications. First, the findings suggest that the relationship between ethnic diversity and the various aspects of social cohesion is not well understood. Ethnic diversity was differentially related to the various indicators of social cohesion, meaning that its effects could not be generalized.
Previous research has been limited to a relatively small number of social cohesion indicators, including trust and organizational involvement [30,40]. The results of the present study suggest that this approach has not produced a comprehensive understanding of the relationship between ethnic diversity and social cohesion. The current research has also contributed to theoretical understanding of the ethnic diversity-social cohesion relationship by further exploring the effect of positive intergroup contact. Few studies have incorporated the key social psychological construct of positive contact when examining the diversity and social cohesion relationship. The results indicate that positive contact between groups can lead to respect and liking between members of different ethnic groups, meaning that people are less likely to withdraw from society. Although ethnic diversity can have some detrimental effects, these are mitigated by the increased opportunities such diversity creates for contact between ethnic groups and the associated benefits when intergroup contact is positive. On the basis of this research it is difficult to draw firm conclusions about the relationship between ethnic diversity and social cohesion, mainly because the patterns varied depending on the indicator of social cohesion (safety, neighbourhood social capital, belonging, generalized trust and volunteering). What is clear, however, is that the role of positive contact in helping to explain this relationship has been undervalued by previous work, including Putnam's [1] seminal study, creating a falsely negative view of ethnically heterogeneous neighbourhoods.
This research calls for a more nuanced understanding of the ethnic diversity and social cohesion relationship, such that future research should identify what settings are optimal for creating positive contact experiences and how interventions in heterogeneous communities can be used to create more intergroup contact and, therefore, improve social cohesion outcomes. Limitations and suggestions for future research Despite the important insights provided by this research, there were several limitations to its design which need to be acknowledged. First, factor analysis is weakened when three or fewer items load on a factor [46,47]. In the present study intergroup contact, neighbourhood social capital, safety, and belonging were composed of only a few items each. A smaller number of items per factor (p/f) inflates standard errors. Additionally, when factors include fewer than three items, the increase in efficiency is negligible [46,47]. The current measurement models were weakened by latent variables which only had two or three items. In addition, generalized trust and volunteering were each assessed using one item. Measures of sub-group trust towards one's own ethnic group and different ethnic groups were not included in the original survey. Such measures would have enabled a more direct and systematic analysis of Putnam's [1] main theoretical arguments with respect to constrict theory and conflict theory. Generalized trust was used as an indicator of social withdrawal from others, but more specific measures would have been informative. Future studies need to strengthen measurement of social cohesion by developing more items and valid indicators. The present study may have failed to reproduce the perceived threat mediation effect due to the specific measure employed.
The wording of perceived threat items in past research has directly addressed threat perceptions, for instance "people from ethnic minority backgrounds threaten White British people" or "foreigners living here threaten my way of life" [28,40]. The items used in this study, though, asked how positively or negatively respondents felt towards immigrants, as well as whether they felt the number of immigrants was too high. These items are more indirect measures of threat and do not specify sources of perceived threat. Again, this study calls for future research to test conflict theory with more comprehensive measures of perceived threat. The present study measured a proxy for positive intergroup contact, interethnic friendship, but we did not assess how negative intergroup contact impacts the relationship between ethnic diversity and social cohesion. The conclusions we draw relative to intergroup contact are specific to the role of positive contact and cannot be generalized to contact experiences more broadly. For example, research suggests that negative intergroup contact is associated with prejudicial attitudes and can be detrimental for positive intergroup outcomes [48,49]. The work on negative intergroup contact highlights that contact can be experienced in a variety of ways, which can differ in intensity, frequency, and proximity and can be positive and/or negative [49-52]. Also, recent research suggests that negative contact experiences may increase prejudice to a greater extent than positive contact decreases it [49]. Thus, it is plausible that positive and negative intergroup contact differentially impact the ethnic diversity-social cohesion relationship, but more research is needed to expand upon the measurement of intergroup contact employed in the current study. As yet, no research has compared the mediation effects of positive and negative intergroup contact.
A further limitation of the present study was that ethnicity was not examined with respect to different groups [53-55]. It should be noted that there are marked differences in patterns of contact with, and attitudes towards, different ethnic groups. For instance, previous research in Australia found that participants held more negative attitudes towards immigrants who were Arabic and Lebanese than towards immigrants from other non-English-speaking countries such as Vietnam and China [54]. Future research is needed to establish whether these differences in attitude have a significant impact on levels of intergroup contact or perceived threat, and subsequently on the ethnic diversity and social cohesion relationship. The linguistic measure of ethnic diversity may limit the generalisability of results. In Australia, the formal diversity index produced by the Australian Bureau of Statistics (ABS) is based on country of birth, with English-speaking background as the criterion [40]. In future work, other ways to assess diversity should be considered. Further, the measurement used in this work may not be appropriate for other countries where ethnic or immigrant status may be a more appropriate indicator of diversity. The diversity measure used in the current research was restricted to countries with a colonial history tied to the UK (i.e., UK, Ireland, USA, Canada, New Zealand, and South Africa), yet these countries still have potential cultural differences from Australia which have not been fully explored. Future work could explore foreign country of birth as a diversity measure rather than language. It is also unclear how to assess countries for cultural distance from the host nation (e.g., being born in the UK compared to Egypt), which is itself a variable of interest. Finally, recent data suggest that ethnic segmentation in Australian cities is increasing, partially because people who are not happy with increasing ethnic diversity are moving away [55,56].
This trend and its possible implications were not captured by the present study. As discussed above, previous research in America has found that controlling for segregation accounted for a large part of the negative relationship between ethnic diversity and social cohesion [22,23]. Longitudinal research is needed in order to establish whether this trend will change the nature of the relationship between ethnic diversity and social cohesion in Australian population samples. The results of this study emphasize future directions for refining the measurement of social cohesion. As discussed above, the effects of ethnic diversity did not generalize across measures of social cohesion; therefore, in future work it is necessary to go beyond the conventional variables (which are widely used in national surveys) of trust, organizational involvement, and outgroup attitudes. It is also necessary to consider, through scale development, whether there is a general social cohesion construct that incorporates a range of sub-factors. For this to be achieved, further work will be needed on the operational definition of social cohesion and its measurement. Finally, longitudinal research is necessary in order to address uncertainty about the causal direction of the relationship between ethnic diversity and social cohesion, as well as of that between intergroup contact and perceived threat. It is likely that there is an intersection between positive intergroup contact and social cohesion, where these constructs are reciprocally related. This would also be a fruitful direction for future research. The present study did not allow us to make any causal inferences about the nature of the relationships being tested; a limitation which is shared by past research in this field. Conclusion Given the high levels of immigration in almost all modern societies, research showing that ethnic diversity can negatively impact social cohesion is deeply concerning.
Our research shows that this concern is not well founded. The results of the current research indicate that when all the key constructs are included in the models (i.e., diversity, positive contact, perceived threat and social cohesion indicators), ethnic diversity does not have a detrimental effect on any social cohesion variable. Ethnic diversity was positively related to generalized trust through the mediators of intergroup contact and perceived threat. This current research highlights the importance of accounting for the effects of intergroup contact, including its simple mediation effect and its complex mediation effect through perceived threat. The results suggest that further understanding the intergroup contact experience and how to facilitate positive contact, are important for building stronger cohesive communities.
Selecting time windows of seismic phases and noise for engineering seismology applications: a versatile methodology and algorithm
Seismic signal windowing is the preliminary step for many analysis procedures in engineering seismology (standard spectral ratio, quality factor, general inversion techniques, etc.). Moreover, a noise window is often necessary for data quality control through signal-to-noise verification. Selecting the noise window can be challenging when large heterogeneous datasets are considered, especially when they include short pre-event noise signals. This study proposes a fully automatic and configurable (i.e., with default parameters that can also be user-defined) algorithm for windowing the noise and the P, S, coda and full signals once the P-wave (T_P) and S-wave (T_S) first arrivals are known. An application example is given on a KiK-net dataset. A Matlab implementation of this algorithm is provided as an online resource.
Introduction
Selecting specific signal phases (i.e., P, S, or coda waves) is required for diverse applications in seismology. For instance, the early part of the shear-wave phase is often used for site-effects assessment (e.g., Borcherdt 1970; Jongmans and Campillo 1993; Horike et al. 2001; Satoh et al. 2001) and is the basis of the evaluation of the kappa parameter (Anderson and Hough 1984; Ktenidou et al. 2014), while the quality factor (Q) related to attenuation is regularly estimated from the later coda arrivals (e.g., Aki and Chouet 1975; Mayor et al. 2016). Moreover, to estimate the quality of a signal and its frequency range of validity, the signal-to-noise ratio (SNR) is computed as the ratio between the Fourier amplitude spectrum (FAS) estimated on parts of the signal and the FAS generally evaluated for a noise window of the same duration, often selected before the event.
While numerous studies have proposed automatic picking algorithms to determine the P-wave (T_P) and/or S-wave (T_S) first arrivals (e.g., Baer and Kradolfer 1987; Sleeman and van Eck 1999; Zhao and Takano 1999; Zhang et al. 2003; Stefano et al. 2006; Wong et al. 2009; Küperkoch et al. 2010; Tan and He 2016), only a few have offered a solution for windowing the different phases of the earthquake signal. Most studies have considered a constant window duration for every event, without taking into account the earthquake rupture duration or the expansion of the signal duration as waves propagate to larger distances (Phillips and Aki 1986; Bonilla et al. 1997; Drouet et al. 2010; Douglas et al. 2010). Recently, some studies have proposed more complex approaches based on signal analysis (e.g., Maggi et al. 2009) or on a model using the information extracted from seismic bulletins (e.g., Kishida et al. 2016). When working with a large and heterogeneous dataset, once the T_P and T_S first arrivals have been picked, defining a specific window can be complicated. This complexity increases when a noise window has to be assessed for SNR computation. Indeed, time series extracted from triggered instruments and/or generated automatically from regional or national networks often present short and variable pre-event noise durations. When a large dataset has to be processed, as for generalized inversion techniques (GIT; e.g., Drouet et al. 2008) or ground-motion prediction equations (e.g., Laurendeau et al. 2013), a complex automatic procedure has to be used to avoid introducing poor-quality data into the processing and to minimize the number of data rejected due to difficult window selection. The motivation behind the present study was to provide a suitable windowing tool for spectral estimation on different phases, with due account of signal-to-noise ratio issues.
A method to select the phase windows of any dataset for which the T_P and T_S first arrivals are known is proposed, and a suitable solution to estimate the noise window from heterogeneous datasets with variable noise levels and durations is provided. An automatic Matlab algorithm was developed and tested on a Japanese KiK-net dataset composed of more than 2000 manually picked events with short and variable durations of pre-event noise (Laurendeau et al. 2013). The records are accelerograms from local to regional events that are used mostly between 0.25 and 30 Hz. This study was initially developed for the application of GIT to a specific KiK-net subset (Foundotos et al. 2016, same issue), and for the correction of the KiK-net surface records for local site effects for the prediction of hard-rock reference ground motion (Laurendeau et al. 2016, same issue): more details on the dataset can be found in these papers. After a short reminder on the relationship between the window duration and the associated minimum frequency valid for the FAS (f_min), we present the model proposed for the seismic-phase windowing in a first step, and the methodology used for the noise-window selection in a second step. In a final step, some examples are given to discuss the windowing obtained for local, regional and teleseismic events, as well as for complex examples including after- or fore-shocks.
Spectral resolution and sensitivity to time window duration
Many studies have tested the sensitivity of their data to the signal window duration (e.g., Satoh et al. 2001; Ktenidou 2010; Douglas et al. 2010). These have mostly reported only a limited dependence, provided that the same seismic phase is considered. These observations have sometimes been used to justify the choice of a constant window duration for every event. In addition to the potential differences in the input seismic signal delimited by different windows, the resolution of the FAS differs as well.
Indeed, the longer the selected time window, the higher the number of wavelengths considered for the FAS at each frequency. For instance, a 10-s-long window contains only one wavelength of a 10-s period, but 10 wavelengths of a 1-s period, and 100 wavelengths of a 0.1-s period, while a 1-s-long window contains one tenth of the wavelengths at each of these periods. If N is the number of wavelengths necessary to ensure a good spectral resolution, then the minimal reliable frequency for the FAS is given by f_min = N/D, where D is the duration of the window considered. The higher N is, the better the spectral resolution. Based on our experience, taking N = 10 is enough to ensure a good spectral resolution. However, taking such a high N is not always possible, as this depends on the seismic-phase or noise duration available, and on the minimal frequency necessary for the application. When required, the N-value can be optimized by testing its value on different signals. Figure 1 shows the sensitivity of the FAS to the S-wave window duration (D_S). The colors from blue to yellow show the results for window lengths from 2.5 to 60 s. In this example, and for various other KiK-net signals tested (not represented here), the minimal N-value for this dataset is around three. Indeed, clear discrepancies appear at low frequency for the shortest windows, generally just below the f_min criterion (vertical lines) obtained with N = 3. Because the KiK-net dataset presents very limited noise durations for the analyses, and the tests with N = 3 appear satisfactory, we keep this N-value in the following examples. In agreement with the literature, we find good stability of the FAS over the frequency range where the N-value criterion is satisfied (Fig. 1). Thus, the FAS seems weakly dependent on the duration of the signal considered, provided the most energetic part of the signal is common to every window.
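The f_min = N/D criterion above can be sketched as follows. The paper's implementation is in Matlab; this is a Python illustration and the helper name is ours:

```python
def f_min(duration_s, n_wavelengths=3):
    """Minimal reliable FAS frequency for a window of duration D:
    the window must contain at least N wavelengths of the longest
    period considered, hence f_min = N / D."""
    if duration_s <= 0:
        raise ValueError("window duration must be positive")
    return n_wavelengths / duration_s

# With N = 3, a 10 s window is reliable down to f_min(10.0) = 0.3 Hz.
```

Conversely, reaching a target f_min of 0.3 Hz with N = 3 requires a window of at least N/f_min = 10 s, which is the D_Smin value used later in the text.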
This means that small variations in the duration of the selected phase window lead to negligible changes in the FAS.
Seismic-phase windowing
Phase windowing consists of using the first-arrival times of P waves and S waves to automatically select different windows: for the P, S, coda, and full signals. The nomenclature for the phase intervals and the different times considered is given in Fig. 2. In addition to T_P and T_S, the signal ending time (T_end) can be defined as well. This time is used as an upper limit for the duration of the S phase, the coda phase, and the full signal. We recommend selecting T_end directly from the spectrogram, with precautions in the chosen color scale, to be able to detect the end of the coda waves at low frequency, as well as possible strong noise or aftershocks at each frequency. If the T_end value is not provided, it is automatically taken as the time corresponding to 95% of the cumulative energy evaluated on the three components between T_P and the end of the record. It is particularly useful to pick T_end for low-SNR records and in the case of close aftershocks or strong transient noise, which can be included by the cumulative-energy approach. Moreover, the cumulative-energy approach has the drawback that it depends on the duration and level of the noise included in the record around the signal, especially when the latter is weak. The time of the initial sample of the record is denoted T_i, while the final sample is given by T_f. The time of the earthquake occurrence is noted T_0. The method and its associated algorithm have been developed for FAS processing purposes. The window's edges must be tapered to satisfy the infinite-signal assumption made for the Fourier transform on a finite window. Thus, the rate of tapering (tx) can be specified in the window selection process, to enlarge the windows and apply the tapering outside the accurate delimitation of the phase windows.
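The 95% cumulative-energy fallback for T_end described above can be sketched as below. This is a Python illustration of the rule stated in the text, not the paper's Matlab code; the array layout and function name are our own:

```python
import numpy as np

def t_end95(records, dt, t_p, fraction=0.95):
    """Fallback ending time T_end95: the time at which the energy
    cumulated over the three components, from the P-wave onset T_P
    to the end of the record, reaches 95% of its total.

    records : (3, n) array of the three components
    dt      : sampling step in seconds
    t_p     : P-wave first-arrival time relative to the record start
    """
    x = np.asarray(records, dtype=float)
    i_p = int(round(t_p / dt))
    energy = np.sum(x[:, i_p:] ** 2, axis=0)        # 3-component energy per sample
    cum = np.cumsum(energy)
    idx = np.searchsorted(cum, fraction * cum[-1])  # first sample reaching 95%
    return t_p + idx * dt
```

As the text warns, a strong aftershock inside the record inflates the total energy and pushes this estimate late, which is why picking T_end manually is preferred for such records.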
Fig. 1 a The time histories of the three components and the selected window. The window is a little larger because the cosine taper is applied outside the limit interval of the defined S-wave window. b The corresponding FAS. FAS in gray correspond to the noise spectrum. The vertical lines indicate the minimum frequency associated with the S-wave window and allowing at least three wavelengths (N = 3). In this example, it is necessary to have at least 10 s of signal to obtain a reliable spectrum at 0.3 Hz.
P-wave windowing
The P-wave window is the easiest phase to delimit, as it starts at T_P and ends at T_S. The duration of the P-wave phase (D_P) can be written as D_P = (1 + tx)(T_S − T_P), where tx takes into account the single-edge enlargement into the noise before the P-wave onset, to apply the tapering on this pre-event noise and thus avoid losing part of the P-wave phase in the tapering process for the FAS evaluation. Finally, the P-wave window interval is defined as I_P = [T_P − tx(T_S − T_P), T_S].
Fig. 2 East-West component record from a M_L 2.2 earthquake at 56 km epicentral distance, occurring at T_0. The P, S, coda and full phases are represented by the gray bands indicating I_P, I_S, I_C and I_All, respectively, with their corresponding durations (D_P, D_S, D_C, D_All) and considering a rate of tapering of 5% (tx = 0.05). The first-arrival times (T_P, T_S, T_C), the phase ending (T_end), and the beginning (T_i) and ending (T_f) of the record are also shown. Bottom: spectrogram of the seismic energy as a function of time and frequency.
S-wave windowing
The S-wave window duration is given by a source term, through the inverse of the corner frequency (f_c; Brune 1970), and a propagation term taking into account the time difference between the P-wave and S-wave first arrivals (T_S − T_P): D_S = (1 + 2tx)[1/f_c + (T_S − T_P)]. Here, the window is enlarged by a factor tx on both edges.
Thus, the enlargement of the first edge includes a small portion of P waves in the S window, although P waves are already included in the S-wave phase anyway. A minimal duration (D_Smin) can be defined to address spectral resolution at low frequency. f_c is estimated directly from the Brune (1970) relationship [Eq. (4)] using the seismic moment (M_0), considering a stress drop (Δσ) of 10 bars and a mean crustal shear-wave velocity (β_S) of 3500 m/s: f_c = 0.49 β_S (Δσ/M_0)^(1/3). Δσ and β_S can, however, easily be adapted to the target region if needed. The seismic moment can be deduced from the moment magnitude (M_W) according to Eq. (6) (Hanks and Kanamori 1979): log10(M_0) = 1.5 M_W + 9.1, with M_0 in N m. If the moment magnitude is not available, M_W can be approximated by the local magnitude (M_L) extracted from the seismic catalog. This estimation of the source duration is admittedly approximate, but it is supported by the observed stability of the spectra evaluated from windows with different D_S, as shown in Fig. 1. Kishida et al. (2016) proposed a similar formulation for D_S, with a source-related part whose durations are defined empirically according to the magnitude, and a propagation-related part defined as one tenth of the hypocentral distance (0.1 R_h). First, for the source term, f_c is high for small and moderate earthquakes (M < 5), and thus D_S is controlled only by the propagation term. The source duration can therefore be neglected for M < 5, making Eq. (4) usable without the need for parameters other than T_S and T_P. The use of Eq. (4) for earthquakes with M > 7.5 can lead to very large source durations (see Kishida et al. 2016); a maximal S-wave window duration (D_Smax) can be chosen in this context. Secondly, for the propagation term, (T_S − T_P) is widely accepted to be close to 1/8 of the hypocentral distance given in kilometers, making both expressions similar. However, the formulation in Eq.
(3) has the advantage of being independent of uncertainties in the source localization (especially the depth) given by the seismic catalog. The S-wave interval I_S is finally defined from T_S and D_S. Figure 3 shows the variation of the source and path components of D_S with the magnitude and rupture distance for the KiK-net dataset example. The maximum duration due to the source is around 17 s, and that due to the propagation is around 38 s. The total duration has a minimum of 1.2 s and a maximum of 55 s. In the following, however, we consider a target minimal frequency of 0.3 Hz and a minimum of three wavelengths included inside the window (N = 3). Equation (1) finally gives a minimal duration D_Smin = 10 s for applications on the KiK-net dataset.
Coda-wave windowing
Aki (1969) defined the beginning of the coda phase as twice the S-wave travel time (2(T_S − T_0)) after the earthquake occurrence T_0. To be independent of the information extracted from the seismic bulletin, we propose a formulation equivalent to the Aki (1969) definition, but based only on the T_P and T_S parameters. Using the commonly accepted approximations that R_h ≈ 8(T_S − T_P) and that β_S ≈ 3.5 km/s, and considering R_h/β_S ≈ (T_S − T_0), we easily find 2(T_S − T_0) ≈ 4.6(T_S − T_P). The time of the first 'arrival' of the coda phase (T_C) is finally T_C = 4.6(T_S − T_P) + T_0, which is equivalent to T_C = 2.3(T_S − T_P) + T_S. This formulation is also empirically confirmed by the good coefficient of determination (R² = 0.98) of the linear regression, for the local events (R_h < 500 km), between the Aki (1969) definition and our T_C expression. This definition of T_C has the advantage of being independent of the uncertainty on the time origin of the earthquake. Finally, the coda-wave interval can be written as I_C = [T_C, T_end], and its associated duration is simply D_C = T_end − T_C. A minimal coda-wave window duration (D_Cmin) can be defined.
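The S-wave duration and coda-onset rules above can be sketched as follows. This is a Python illustration (the paper provides a Matlab algorithm), assuming the standard Brune (1970) corner-frequency relation and the Hanks and Kanamori (1979) relation log10 M_0 [N m] = 1.5 M_W + 9.1, with the 10-bar stress drop and β_S = 3500 m/s used in the text; the taper enlargement tx is omitted and the function names and clipping interface are ours:

```python
def brune_fc(mw, stress_drop_pa=1.0e6, beta_ms=3500.0):
    """Brune (1970) corner frequency from the moment magnitude.
    M0 from log10(M0 [N m]) = 1.5*Mw + 9.1; 10 bars = 1 MPa."""
    m0 = 10.0 ** (1.5 * mw + 9.1)
    return 0.49 * beta_ms * (stress_drop_pa / m0) ** (1.0 / 3.0)

def s_window_duration(mw, ts_minus_tp, d_smin=10.0, d_smax=60.0):
    """D_S = source term (1/f_c) + propagation term (T_S - T_P),
    clipped between the minimal and maximal allowed durations."""
    d_s = 1.0 / brune_fc(mw) + ts_minus_tp
    return min(max(d_s, d_smin), d_smax)

def coda_onset(t_s, ts_minus_tp):
    """T_C = 2.3*(T_S - T_P) + T_S: bulletin-free equivalent of the
    Aki (1969) 'twice the S travel time' definition."""
    return t_s + 2.3 * ts_minus_tp
```

For an M_W 5 event these assumptions give f_c of roughly 0.5 Hz, i.e. a source term of about 2 s, which illustrates why the source term is negligible below magnitude 5.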
If D_C < D_Cmin, then no coda-wave window is returned by the algorithm, since the signal is too weak and the amplitude of the coda waves falls rapidly below the noise level. A coda signal is generally available only for events with good SNR (i.e., > 10) over a large frequency range.
Full-signal windowing
A full-signal interval (I_All) is also proposed, which starts at T_P and finishes at T_end, taking into account the enlargement of the first edge for the tapering. This full-signal window is particularly useful when no specific phase is mandatory, and to obtain windows long enough (D_All) to assess spectra with good resolution down to low frequencies.
Noise window selection
The noise window selection generally consists of taking the duration of the target window and reporting it before the first P-wave arrival. Here, a more complex scheme has been developed, to take into account the availability of data with short windows of pre-event noise. Figure 4 shows the pre-event noise durations available for the KiK-net dataset. These durations are generally short and variable, which makes the noise window selection difficult. Only one noise window of duration D_N is selected to assess the SNR of several seismic-phase windows of different durations. The duration of the target noise window (D_t) has to be at least as long as the longest seismic-signal window requested (D_P, D_S, D_C or D_All), or long enough to satisfy the f_min criterion. Thus, the FAS estimated from these windows of different durations have to be normalized by the square root of the duration to obtain the Fourier amplitude spectrum density (FASD = FAS/√D), which is length-independent, for SNR purposes especially.
Fig. 4 The number of records versus the pre-event noise duration available.
In the present example, we consider a minimal noise duration D_min of 10 s, to be consistent with the minimal S-wave window duration taken previously.
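The length normalization just described (FASD = FAS/√D) can be sketched as below. This Python illustration assumes the common discrete convention FAS = |DFT| × dt; the helper name is ours:

```python
import numpy as np

def fasd(signal, dt):
    """Fourier amplitude spectrum density FASD = FAS / sqrt(D).
    Dividing by the square root of the window duration D makes
    spectra of windows of different lengths comparable for SNR."""
    x = np.asarray(signal, dtype=float)
    duration = len(x) * dt
    freqs = np.fft.rfftfreq(len(x), d=dt)
    fas = np.abs(np.fft.rfft(x)) * dt   # discrete approximation of the FAS
    return freqs, fas / np.sqrt(duration)
```

With this normalization, the noise FASD evaluated on a window of one duration can be compared directly with the phase FASD evaluated on another.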
However, 10 s of noise is not available before the event for all of the records of the KiK-net dataset, as shown in Fig. 4. The dataset contains 311 records without 10 s of pre-event noise. Thus, different noise-window definitions are tested, for which the energy is then compared. The idea is to select a noise window with sufficient duration, and also a window with a representative level of energy (without strong seismic signal included). To do this, one pre-event noise window (I_N1) of duration D_N1 is tested, as well as two post-event noise windows: a short window I_N2 of duration D_N2, and a long window I_N3 of duration D_N3. The pre-event window ends 0.1 s before the P-wave onset: I_N1 = [T_i, T_P − 0.1] (11). The pre-event window (I_N1) is the preferred one, as no earthquake signal can be introduced into the noise evaluation. However, if the available pre-event noise is too short, or if there is a fore-shock before the mainshock, then the other windows have to be considered. The post-event window I_N3 is longer than I_N2 only when the target phase duration (D_t) is greater than D_min and D_N1. No S wave can be introduced into the post-event windows, while coda waves can be accepted, to minimize the number of signals rejected due to a lack of available noise. An example is given in Fig. 5, which illustrates the noise-window selection process for a targeted S-wave duration on a record with a limited pre-event noise duration. In this example, I_N3 is preferable to the two other noise windows, because it is the only one that has the same duration as the targeted S-wave window, while it provides a FASD similar to that of the two other noise windows, even if some coda waves may be included inside I_N3. This latter verification is carried out through comparison of the mean energy of the different windows. A minimum pre-event noise duration of 1 s is mandatory for these comparisons.
To compare the representativeness of each noise window, the mean energy (E) is estimated for the three noise windows (E_N1, E_N2, E_N3) as the mean of the FAS computed on the three components over the frequency indices from N_min to Nf, where N_min is the index of the minimum frequency (f_min) and Nf is the number of frequency samples. A scheme then takes into account the length of each window, as well as its mean energy, to determine the best noise window for the noise FASD evaluation, as detailed in Fig. 6. The energies in the three noise-window comparisons are weighted by factors defined by the user (F_1, F_2, F_3, F_4). This allows I_N1, I_N2, or I_N3 to be favored, depending on the duration available for each one, and the number of records rejected due to lack of noise for the SNR evaluation to be minimized. In addition to the noise and the P, S, coda and full-signal windows, the algorithm returns a Flag value that indicates which noise window has been selected, and how. For the KiK-net application, the noise-window selection process is configured with: D_min = 10 s, F_1 = 5, F_2 = 3, F_3 = 2, and F_4 = 0.67.
Fig. 6 Scheme of the noise-window selection methodology. D_N1, D_N2 and D_N3 are the pre-event, short post-event and long post-event noise window durations, respectively. The corresponding spectral energies of these noise windows are given by E_N1, E_N2 and E_N3. A few other parameters can easily be adapted to each dataset: the minimal and target durations (D_min and D_t) and the weighting factors F_1-F_4.
The main idea is that the pre-event window is favored when both the pre-event and post-event windows are longer than D_min and D_t (Flag 1). However, if the pre-event window is shorter than D_t, then the longest post-event window is preferred (Flag 3).
In the same way, if I_N1 is shorter than D_min, then the long post-event window is preferred if not too much signal is included inside it (Flag 3); otherwise, the short post-event window is chosen (Flag 2). These Flags for the noise selection are indicated in Fig. 6 and are given in Table 1, with the corresponding numbers of events selected for the KiK-net dataset example. Most of the noise windows selected were pre-event windows; the post-event windows selected were mostly the long window. Only a few signals are removed from the dataset due to the absence of a noise window for the SNR evaluation. All of the parameters and factors given in this article can be adjusted as inputs of the algorithm.
Application examples and discussion
The advantage of our windowing formulation is illustrated in Fig. 7 by comparison with two simple formulations: 30-s and 10-s constant S-wave windows. Three earthquakes recorded in Provence (Southeastern France) with increasing epicentral distance are presented. The influence of the S-wave window duration is visible on the FASD and the SNR. For the closest earthquake (7a), the 30-s window includes a lot of coda waves and greatly underestimates the FASD obtained with the 10-s window and our formulation. For the intermediate epicentral distance (7b), our formulation lies between the two constant windows, leading to a FASD that is close to the average of the three window definitions. Only the beginning of the S-wave window is included in the two constant windows for the regional earthquake (7c), while our formulation provides a longer duration, leading to a slightly lower FASD amplitude, even if the three curves are very similar. Thus, a constant window duration is not suitable when working with datasets composed of both local and regional earthquakes, while our formulation gives an appropriate solution.
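The Fig. 6 decision scheme described above can be sketched as follows. This is our reading of the text, not a verbatim transcription of the paper's Matlab algorithm: the branch conditions are an approximation, and the energy thresholds below stand in for the user-defined weighting factors (the full scheme also uses F_1 and F_4, which are not reproduced here):

```python
def select_noise_window(d_n1, d_n2, d_n3, e_n1, e_n2, e_n3,
                        d_min=10.0, d_t=10.0, f2=3.0, f3=2.0):
    """Return the Flag of the chosen noise window:
    1 = pre-event (I_N1), 3 = long post-event (I_N3),
    2 = short post-event (I_N2), 0 = record rejected."""
    if d_n1 >= max(d_min, d_t):
        return 1          # pre-event long enough: always preferred
    if d_n3 >= d_t and e_n3 <= f3 * e_n1:
        return 3          # long post-event, energy comparable to pre-event
    if d_n2 >= d_min and e_n2 <= f2 * e_n1:
        return 2          # fall back on the short post-event window
    return 0              # no representative noise window: reject
```

As in the text, a record is rejected (Flag 0) only when neither post-event window is both long enough and energetically comparable to the short pre-event noise.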
Concerning the other phases of the signal, we used the classical P-wave formulation, which seems appropriate, while our coda-wave window begins very close to the one predicted by Aki (1969), as expected. The T_end95 cumulative-energy estimation is always later than the manually picked one, and this duration increase may be accentuated for longer records. The three noise windows have the same duration and present similar FASD; however, long pre-event noise is available here, and this may not be true for each record of every dataset.
Fig. 7 Records of three earthquakes at increasing epicentral distances, up to 458 km (c) from the recording site. The P-wave window (I_P), the S-wave window (I_S), the coda-wave window (I_C) and the full-signal window (I_All) are represented. For comparison between our formulation and simple constant-window formulations, 30-s and 10-s long windows (I_S30 and I_S10) are also represented, as well as their corresponding Fourier amplitude spectral densities (FASD) and associated signal-to-noise ratios (SNR). The picked phase ending (T_end) and the one predicted by the 95% fractile of the cumulative energy (T_end95) are represented, as well as the Aki (1969) coda beginning (T_Caki), which can be compared to our formulation (I_C). As in Fig. 5, the noise-window selection process is also represented, with the pre-event noise window (I_N1) and the short and long post-event noise windows (I_N2 and I_N3) and their associated FASD and SNR. Here, the pre-event noise duration is long enough to target D_S, leading to the same duration for the three noise windows (D_N1 = D_N2 = D_N3 = D_S).
Figure 8 presents two examples, recorded during the 2014 Cephalonia seismic crisis in Greece, for which the selection of noise and phase windows is difficult and might be unreliable. Here, the phase ending time has not been picked and is defined by T_end95. In the first case (a), the target earthquake is followed by a stronger one, which strongly biases the T_end95 estimation and leads to the inclusion of the P-wave and S-wave phases of the second earthquake in the coda and full-signal phases of the first event. However, the duration of the coda wave is not long enough here to satisfy the minimal coda-wave duration criterion (D_C < D_Cmin = 10 s). Moreover, for this particular example, the noise selection leads to the rejection of the record (Flag = 0), since the pre-event noise window is too short and the FASD of the two post-event noise windows are too different from the pre-event one. In the second example, a small fore-shock is present in the pre-event noise, while the record is cut before the actual end of the signal. Here the signal phases are not biased, but the noise selection is complex. Indeed, I_N1 is enriched at high frequency compared to I_N2 and I_N3, while the opposite holds at low frequency, due to the presence of long-period coda waves in the post-event noise windows. When using our parameterization (D_min = 10 s, F_1 = 5, F_2 = 3, F_3 = 2, and F_4 = 0.67), the algorithm selects the pre-event noise window, since it exhibits sufficient duration for good spectral resolution. The SNR is, however, significantly lower at high frequency. To avoid such difficulties, we strongly recommend visualizing and flagging such records when picking the P-wave and S-wave first arrivals. Picking T_end, or testing automatic procedures more accurate than the cumulative-energy one, is also required for an accurate ending of the coda and full-signal phase windows.
Fig. 8 Examples of difficult signal-phase and noise windowing when no phase ending (T_end) has been picked and when either after- (a) or fore-shocks (b) are present in the record. As in Fig. 7, all the signal phases (I_P, I_S, I_C and I_All) and noise windows (I_N1, I_N2 and I_N3) are represented with their corresponding FASD and SNR. Here, the pre-event noise is limited, leading to testing different post-event noise-window durations.
Conclusions
Seismic signal windowing is the preliminary step for many applications in seismology and for SNR verification. While the vast majority of previous studies have used very simple windowing formulations, such as a constant duration, we propose a more complete method that takes into account source and propagation terms. This study provides a suitable solution for heterogeneous datasets for which the P-wave and S-wave first arrivals have been picked beforehand. The earthquake signal phases are selected exclusively from the T_P and T_S parameters for the majority of the events, which makes the windowing independent of the uncertainties present in the information given by the seismic bulletin. For strong earthquakes (M > 6), whose source durations are not negligible, a source term is estimated through the inverse of the corner frequency evaluated from the magnitude. Selecting the noise window can be challenging when large heterogeneous datasets are considered, especially if for some events the duration of the available pre-event noise is short. The noise window has to be as representative as possible of the noise level, and long enough to allow SNR estimation with good resolution at low frequency. To get around this issue, we defined and tested three different windows: one pre-event and two post-event windows. A scheme is proposed for selecting the best noise window in terms of duration (as long as possible) and mean energy (as low as possible), without including undesirable transients. This approach gave good results on the KiK-net dataset, with a very limited number of rejected signals. A Matlab algorithm was developed and can be adapted to each dataset through a few parameters to be defined by the user.
This algorithm is freely available as electronic online supplementary material of this paper and is ready to be used for any windowing application.
v3-fos-license
2023-01-19T21:04:16.095Z
2020-09-22T00:00:00.000
255981778
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1186/s13027-020-00321-8", "pdf_hash": "9db78d51b4f675173b8af7f9816b8eb43ba32a6d", "pdf_src": "SpringerNature", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1701", "s2fieldsofstudy": [ "Biology" ], "sha1": "9db78d51b4f675173b8af7f9816b8eb43ba32a6d", "year": 2020 }
pes2o/s2orc
A study of the mechanism of lncRNA-CR594175 in regulating proliferation and invasion of hepatocellular carcinoma cells in vivo and in vitro
Hepatocellular carcinoma (HCC) is one of the cancers with the highest incidence and mortality worldwide. The proliferation and invasion of tumor cells are the main reasons for poor prognosis after HCC surgery. Long non-coding RNA (lncRNA) has been shown to play a key role in the progression of HCC. LncRNA-CR594175 is one of the lncRNAs highly expressed in HCC tumors and their metastases that we identified by high-throughput screening. To elucidate the role of lncRNA-CR594175 in regulating the proliferation and invasion of the human hepatoma cell line HepG2, we silenced lncRNA-CR594175 to inhibit the progression of HCC, in both in vitro and in vivo experiments. We found that lncRNA-CR594175 was lower in adjacent non-cancerous tissues than in primary HCC, and lower in primary HCC than in its metastases. Silencing of lncRNA-CR594175 inhibited the proliferation and invasion of HepG2 cells and the growth of subcutaneous tumors. The results revealed that lncRNA-CR594175, acting as an RNA sponge, relieved the negative regulation of hsa-miR-142-3p on catenin beta-1 (CTNNB1); once lncRNA-CR594175 was silenced, hsa-miR-142-3p regained its negative regulation of CTNNB1, which can promote HCC progression by activating the Wnt pathway. Our present study demonstrated for the first time that lncRNA-CR594175 silencing suppressed the proliferation and invasion of HCC cells in vivo and in vitro by restoring the negative regulation of hsa-miR-142-3p on CTNNB1, laying a solid theoretical base for using lncRNA-CR594175 as a genetic target for HCC therapy and offering a reasonable explanation for the inactivation of miRNAs in different tumors or in the same tumor at different stages.
Background
Hepatocellular carcinoma (HCC) is one of the cancers with the highest incidence and mortality worldwide [1], featuring a low complete-resection rate and a high postoperative recurrence rate [2,3], which is mainly driven by high invasiveness and intrahepatic and/or extrahepatic metastasis [4]. Therefore, the study of the mechanisms involved in HCC cell proliferation and invasion may be of great significance for identifying prognostic markers in HCC patients. At present, surgery combined with pre- and postoperative chemotherapy is the mainstay of treatment for HCC, but this traditional therapeutic approach does not address postoperative recurrence and metastasis. In recent years, therefore, the search for new therapeutic targets for HCC treatment has never stopped. Both microRNAs (miRNAs) and lncRNAs play important roles in regulating cellular processes [5][6][7]. Among all the lncRNAs identified by screening and sequencing of the transcriptome, lncRNA-CR594175 was the most differentially expressed among adjacent non-cancerous tissues, HCC and metastases. The expression of lncRNA-CR594175 increases from adjacent non-cancerous tissue, to primary HCC, to metastatic tissue, which suggests that lncRNA-CR594175 may be involved in the proliferation and invasion of HCC. According to our screening data, CTNNB1 was highly correlated with the process of HCC development. The pathogenesis of HCC is complex, and the Wnt/CTNNB1 signaling pathway plays a central role in hepatocarcinogenesis. According to basic medical research, the Wnt/CTNNB1 signaling pathway affects the process of HCC mainly through regulation of the expression of downstream genes and proteins. Increasing expression of CTNNB1 leads to cell proliferation through the Wnt/CTNNB1 signaling pathway, which makes CTNNB1 a critical gene in research on HCC regulatory pathways.
The inhibition of the Wnt signaling pathway by nonsteroidal anti-inflammatory drugs and valproic acid has been used as an adjuvant for HCC therapy [8]; CTNNB1 inhibitors have therefore become a new direction for preventing precancerous lesions such as hepatitis and liver cirrhosis from deteriorating. It has already been verified that R-Etodolac, an inhibitor of CTNNB1, can effectively suppress the proliferation of the HCC cell lines HepG2 and Hep3B [9]. In addition, a variety of miRNAs have been confirmed to influence HCC progression through regulation of the Wnt/CTNNB1 signaling pathway [10][11][12], each with different targets. Until now, few miRNAs have been reported to act on the Wnt signaling pathway by targeting CTNNB1 directly, but our study revealed that hsa-miR-142-3p negatively affects HCC progression by acting on CTNNB1. Our data indicated an increase in hsa-miR142-3p expression from HCC metastasis to primary lesion and then to adjacent non-cancerous tissue, yet, puzzlingly, CTNNB1 protein levels remained stable. It is therefore crucial, for both theoretical research and clinical treatment, to find out why antineoplastic miRNAs such as hsa-miR-142-3p are inactivated during HCC progression. The study of interactions between miRNAs and lncRNAs will deepen our knowledge of cellular structural and regulatory networks and carries substantial scientific and clinical value.

Cell culture

Human hepatoma HepG2 cells, obtained from the Cell Bank of the Chinese Academy of Sciences (Shanghai, China), were maintained in RPMI-1640 (Invitrogen, CA, USA) supplemented with 10% fetal bovine serum (FBS, Invitrogen, CA, USA). 293TN cells, purchased from ATCC (MD, USA), were maintained in Dulbecco's minimum essential medium (DMEM, Invitrogen, CA, USA) supplemented with 10% FBS.
All adherent cells were passaged by 0.25% trypsin digestion (Invitrogen, CA, USA) and incubated in an atmosphere of 5% CO2 at 37 °C.

Assessment of lncRNA-CR594175, hsa-miR142-3p, CTNNB1 protein and Wnt pathway-related protein expression levels in HCC tumors and their metastases

Adjacent non-cancerous, HCC and metastatic tissues from 24 patients (diagnosed in the First Affiliated Hospital of Zhengzhou University; detailed patient information is shown in Table 1) were collected, followed by total RNA extraction and quantitative real-time PCR (RT-qPCR) to measure lncRNA-CR594175 and hsa-miR142-3p levels; total proteins were extracted for detection of CTNNB1 and Wnt pathway-related proteins (E-cadherin, C-myc, CyclinD1 and MMP-9) by western blotting.

Lentivirus packaging

A siRNA sequence complementarily binding to lncRNA-CR594175 was chosen. The siRNA target sequence (5′-GAATCCTCGGAGACAGCAG-3′) is homologous to lncRNA-CR594175. The oligonucleotide templates of these shRNAs were chemically synthesized and cloned into the linear pSIH1-H1-copGFP shRNA vector (System Biosciences, CA, USA), which was obtained by digestion with BamHI and EcoRI (Takara, Dalian, China) and purified by agarose gel electrophoresis. An invalid siRNA sequence (5′-AATCGTCGAGGGCCAGACA-3′) was used as a negative control (NC). Sequencing was used to confirm the constructed vectors (pSIH1-shRNA-CR594175 and pSIH1-NC). The CDS of human CTNNB1 (NM_001904.3) was amplified using the primers 5′-GGAATTCGCCACCATGGCTACTCAAGCTGATTTG-3′ and 5′-CGGGATCCTTACAGGTCAGTATCAAACC-3′, which contain an EcoRI cutting site plus a Kozak sequence and a BamHI cutting site, respectively, with cDNA prepared by reverse transcription of RNA isolated from 293TN cells. The PCR product was digested and cloned into the pcDH1-CMV lentiviral expression vector; the recombinant vector was named pcDH1-CTNNB1. The products of the vectors were confirmed by DNA sequencing.
Endotoxin-free DNA was prepared in all cases. One day before transfection, 293TN cells were seeded into 10-cm dishes (Corning, NY, USA). 2 μg of pSIH1-shRNA-CR594175 vector or pSIH1-NC and 10 μg of pPACK Packaging Plasmid Mix (System Biosciences) were co-transfected using Lipofectamine 2000 (Invitrogen) in accordance with the manufacturer's protocol. The medium was replaced with DMEM plus 1% FBS. Forty-eight hours later, the supernatant was harvested, cleared by centrifugation at 5000×g at 4 °C for 5 min, and passed through a 0.45 μm PVDF membrane (Millipore, MI, USA). The virus titer was determined by gradient dilution. The packaged lentiviruses were named Lv-shRNA-CR594175 and Lv-NC. The recombinant lentiviruses Lv-CTNNB1 and Lv-miR142-3p were packaged following the same protocol.

Genetic intervention through a lentiviral approach

Cells were divided into four groups: a control group, an Lv-NC group (infected with Lv-NC), an Lv-shRNA-CR594175 group (infected with Lv-shRNA-CR594175) and an Lv-CTNNB1 group (infected with Lv-CTNNB1). HepG2 cells in logarithmic growth phase were seeded into 6-well plates at 5 × 10^5 cells/well. One day later, viral solution was added at a multiplicity of infection (MOI) of 10. The infection efficiency was evaluated by observing and analyzing the fluorescent marker 72 h after infection. Total RNA and protein were isolated from the cells and subjected to RT-qPCR and western blotting for lncRNA-CR594175 and CTNNB1 protein.

Luciferase experiment

Total RNA was extracted from HepG2 cells, reverse-transcribed into cDNA, and 2 μl of the reaction product was subsequently used as template for PCR. Primers targeting the 3′-UTR of the CTNNB1 gene were designed such that flanking XbaI restriction sites were introduced into the 127 bp (base-pair) PCR product containing the 5′-AACACTA-3′ hsa-miR-142-3p target site. The forward and reverse primer sequences were 5′-GCTCTAGATTAAGAATTGAGTAATGG-3′ and 5′-GCTCTAGAACTAATTGGACCATTTTC-3′, respectively.
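As a quick computational cross-check of the cloning designs described above, the quoted primer sequences can be scanned for the intended restriction sites (EcoRI GAATTC, BamHI GGATCC, XbaI TCTAGA) and the Kozak consensus GCCACC. This sketch is illustrative and not part of the original study's workflow; the primer names are ours.

```python
# Sketch: confirm that each cloning primer quoted in the text carries its
# intended restriction-enzyme recognition site, and that the CTNNB1 CDS
# forward primer also carries the Kozak consensus (GCCACC).
SITES = {"EcoRI": "GAATTC", "BamHI": "GGATCC", "XbaI": "TCTAGA"}

primers = {
    "CTNNB1_CDS_F": "GGAATTCGCCACCATGGCTACTCAAGCTGATTTG",
    "CTNNB1_CDS_R": "CGGGATCCTTACAGGTCAGTATCAAACC",
    "UTR_F": "GCTCTAGATTAAGAATTGAGTAATGG",
    "UTR_R": "GCTCTAGAACTAATTGGACCATTTTC",
}

def found_sites(primer):
    """Names of recognition sites present in a primer sequence."""
    return [name for name, seq in SITES.items() if seq in primer]

for name, seq in primers.items():
    print(name, found_sites(seq))
```

Running this shows EcoRI in the CDS forward primer, BamHI in the CDS reverse primer, and XbaI in both 3′-UTR primers, consistent with the cloning scheme in the text.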
PCR conditions were as follows: 35 cycles of denaturation at 94 °C for 30 s, annealing at 55 °C for 30 s, and elongation at 72 °C for 10 s. The PCR product was digested with XbaI (Takara) and cloned into the pGL3-promoter luciferase reporter vector (Promega, MI, USA) to generate the vector pGL3-wt-CTNNB1. The hsa-miR142-3p target site in the pGL3-wt-CTNNB1 vector was mutated from 5′-AACACTA-3′ to 5′-CATAACA-3′ to construct the mutated reporter vector, pGL3-mt-CTNNB1. The products of all cloning and mutagenesis reactions were confirmed by DNA sequencing. Endotoxin-free DNA was prepared in all cases. The hsa-miR142-3p mimic (5′-UGUAGUGUUUCCUACUUUAUGGAtt-3′), the hsa-miR142-3p inhibitor (5′-UCCAUAAAGUAGGAAACACUACAtt-3′), and the negative control miRNA (NC, 5′-UGUAGUGUUUCCUACUUUAUGGAtt-3′) were all chemically synthesized (Invitrogen). We used TargetScan (http://www.targetscan.org/) to predict whether a hsa-miR142-3p binding site exists within the 3′-UTR of human CTNNB1 mRNA; the results showed that a seven-base hsa-miR142-3p seed sequence is present in the 3′-UTR of CTNNB1 mRNA. The same tool was used to predict the binding sites of hsa-miR142-3p on lncRNA-CR594175. A suspension of 293TN cells in logarithmic growth phase was prepared, and the number of viable cells was counted using a hemocytometer in conjunction with trypan blue staining. The cells were seeded into six-well plates at 2 × 10^5 cells per well and maintained in Dulbecco's Modified Eagle's medium supplemented with 10% FBS at 37 °C for 24 h in a 5% CO2 atmosphere. Transfection of plasmid DNA and RNA was performed using Lipofectamine 2000 (Invitrogen). Transfection of cells with pRL-TK (100 ng) served as a reference for luciferase detection. Luciferase activity was measured using the dual-luciferase reporter assay system (Promega) 48 h after transfection.
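The reporter logic above (wild-type site 5′-AACACTA-3′ mutated to 5′-CATAACA-3′) can be illustrated with a simple string scan. Only the two site strings are taken from the text; the flanking UTR fragment below is hypothetical, so this is a sketch rather than the study's actual sequence.

```python
# Sketch: check that a (hypothetical) wild-type UTR fragment carries the
# hsa-miR142-3p target site quoted in the text, and that the described
# site-directed mutation removes it. Flanking sequence is invented.
WT_SITE = "AACACTA"   # wild-type target site quoted in the text
MT_SITE = "CATAACA"   # mutated site quoted in the text

def has_site(utr, site):
    """Case-insensitive substring scan for a candidate miRNA target site."""
    return site in utr.upper()

# Hypothetical UTR fragment carrying the wild-type site:
wt_utr = "ttaagaattgagtaatgg" + WT_SITE + "gaaaatggtccattttc"
mt_utr = wt_utr.replace(WT_SITE, MT_SITE)  # mimic the mutagenesis step

print(has_site(wt_utr, WT_SITE), has_site(mt_utr, WT_SITE))  # True False
```

This mirrors what the wild-type and mutant reporter constructs test experimentally: the mimic can only act on the vector whose insert still contains the intact site.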
The effect of lncRNA-CR594175 depletion on the inhibition of luciferase by hsa-miR142-3p mimics was examined in 293TN and HepG2 cells; plasmid transfection and the luciferase activity assay were the same as used in the validation of hsa-miR142-3p target sites.

Cellular proliferation assay

HepG2 cells were divided into seven groups: a control group, Lv-NC group, Lv-shRNA-CR594175 group, Lv-miR142-3p group, Lv-shRNA-CR594175 plus Lv-miR142-3p group, Lv-CTNNB1 group, and Lv-shRNA-CR594175 plus Lv-CTNNB1 group. Forty-eight hours after infection, HepG2 cells were trypsinized and seeded into 96-well plates at a density of 1 × 10^4 cells per well. The cells were cultured under normal conditions, and cell viability was examined using CCK-8 at the 24-, 48-, and 72-h time points. Briefly, 10 μL of CCK-8 solution (Dojindo, Japan) was added, and the cells were cultured under normal conditions for an additional 4 h before measurement of absorbance at 490 nm.

Cell invasion assay

Cell invasion experiments were performed using the QCM 24-well Fluorimetric Cell Invasion Assay kit (Chemicon International, MI, USA) according to the manufacturer's instructions. The kit uses an insert with a polycarbonate membrane of 8-μm pore size. The insert was coated with a thin layer of ECMatrix that occluded the membrane pores and blocked migration of non-invasive cells. Culture medium (500 μl) supplemented with 10% FBS was used as the chemoattractant. Cells that migrated and invaded the underside of the membrane were fixed in 4% paraformaldehyde. The invading cells were stained with Calcein-AM, and their number was determined by fluorescence and reported as relative fluorescence units (RFUs).

Effect of lncRNA-CR594175 silencing on the protein levels of CTNNB1, E-cadherin, C-myc, CyclinD1 and MMP-9

HepG2 cells were divided into three groups: a control group, an Lv-NC group and an Lv-shRNA-CR594175 group.
Cells in logarithmic phase were seeded into 6-well plates at 5 × 10^5 cells/well. One day later, viral solution was added, and the infection efficiency was evaluated by observing and analyzing the fluorescent marker 72 h after infection. Proteins were isolated and subjected to western blotting for CTNNB1, E-cadherin, C-myc, CyclinD1 and MMP-9, respectively.

Animal xenografts

Nude mice were purchased from Shanghai SLAC Laboratory Animal Co., Ltd. (Shanghai, China) and housed at the animal experiment center of Zhengzhou University, where the implantation experiment was performed. All protocols were approved in advance by the Zhengzhou University Animal Ethics Committee. HepG2 cells (1 × 10^6) were suspended in 200 μl of medium and injected subcutaneously into the flank regions of 48 female athymic nude mice. Two weeks after inoculation, visible subcutaneous tumors were detected, and the tumors measured approximately 2.5 mm in diameter 3 weeks after inoculation. All animals were randomly divided into 3 groups (8 mice per group): the model group, the NC group, and the lncRNA-CR594175 silencing group. Each animal in the intervention groups received 30 μl of recombinant lentivirus (1 × 10^8 IFU) twice a week for 4 weeks, starting from the second week, while the model group received the same volume of saline instead. Tumor diameter was measured weekly from the second week, and the data were used to plot tumor growth curves. The formula for calculating tumor volume was V = 1/2 × a × b^2, where a and b are the long and short diameters of the tumor, respectively.

Statistical analysis

All data are expressed as mean ± SD and were analyzed by one-way ANOVA. The Least Significant Difference (LSD) test was used for multiple comparisons between any two means. P-values < 0.05 were considered statistically significant. All statistical analyses were performed using SPSS 13.0 software.
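The tumor-volume formula from the Methods, together with the conventional inhibition-rate calculation, can be checked against the group mean volumes reported later in the Results (model 701.21, NC 672.34, silencing 212.31 mm^3). The inhibition-rate definition below is the standard one and is inferred, not quoted, from the paper.

```python
# Sketch: tumor-volume formula from the Methods (V = 1/2 * a * b^2) and the
# conventional tumor inhibition rate, checked against the reported group
# mean volumes.
def tumor_volume(a_mm, b_mm):
    """V = 1/2 * a * b^2, with a and b the long and short diameters (mm)."""
    return 0.5 * a_mm * b_mm ** 2

def inhibition_rate(treated_mean, model_mean):
    """Percent reduction of mean tumor volume relative to the model group."""
    return (model_mean - treated_mean) / model_mean * 100.0

model, nc, silenced = 701.21, 672.34, 212.31
print(round(inhibition_rate(nc, model), 2))        # 4.12, matching the text
print(round(inhibition_rate(silenced, model), 2))  # ~69.7 (text reports 69.73)
```

The NC-group value reproduces the reported 4.12% exactly; the silencing-group value agrees with the reported 69.73% to within rounding.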
Results

Assessment of mRNA and protein levels of CTNNB1, hsa-miR-142-3p and lncRNA-CR594175 by RT-qPCR and western blotting in adjacent non-cancerous, primary HCC and metastatic tissues

The CTNNB1 mRNA and protein data demonstrated that, in comparison with adjacent non-cancerous tissues, CTNNB1 protein was increased in HCC and its metastases (p < 0.01), and more so in HCC metastases than in primary HCC (p < 0.05); however, there were no obvious differences in mRNA levels among the three groups of tissues (p > 0.05). These results suggest that the high expression of CTNNB1 is due to inactivation of post-transcriptional regulation. The levels of lncRNA-CR594175 and hsa-miR142-3p in the adjacent tissues, HCC and its metastases were positively correlated with CTNNB1 protein levels and were higher in HCC and its metastases than in the adjacent tissues (p < 0.05) (Fig. 1a). We also evaluated the expression of the downstream functional proteins of the Wnt pathway, E-cadherin, C-myc, CyclinD1 and MMP-9, in HCC, metastatic and adjacent tissues (p < 0.01), and observed that protein levels were significantly higher in HCC metastases than in primary HCC (p < 0.05). E-cadherin showed the reverse trend to the three proteins mentioned above in these tissues (Fig. 1b).

Effect of lncRNA-CR594175 silencing and CTNNB1 expression via a lentiviral strategy on HCC cells

The recombinant lentiviruses Lv-NC, Lv-shRNA-CR594175 and Lv-CTNNB1 were used to infect HepG2 cells. GFP (green fluorescent protein) was detected in most of the cells 72 h after infection, and the proportion of GFP-expressing cells indicated that gene delivery efficiency was higher than 95% in HepG2 (Fig. 2a). LncRNA-CR594175 was significantly decreased by Lv-shRNA-CR594175 (p < 0.05), with no change observed in cells infected with Lv-CTNNB1 (p > 0.05); the CTNNB1 protein level was significantly increased by Lv-CTNNB1 and decreased by Lv-shRNA-CR594175 (p < 0.05) (Fig. 2b).
These findings suggest that lncRNA-CR594175 silencing down-regulated CTNNB1 expression in HepG2 cells and that overexpression of CTNNB1 had no obvious effect on lncRNA-CR594175.

Luciferase experiments

Our bioinformatics analysis identified a 7-base-pair hsa-miR-142-3p target site in the 3′-UTR of CTNNB1 mRNA. We therefore constructed luciferase reporter vectors to verify whether this site represents a valid hsa-miR142-3p target. Reporter vectors containing the wild-type CTNNB1 3′-UTR or a variant were generated; in the variant vector, the hsa-miR142-3p target site within the 3′-UTR of CTNNB1 was mutated. Both reporter constructs expressed luciferase at a high level. However, the miR142-3p mimic significantly inhibited luciferase activity in cells transfected with the reporter vector encoding the wild-type 3′-UTR (42.15 ± 3.98 vs. 8.07 ± 0.88; p < 0.01), while the miR142-3p inhibitor significantly increased luciferase activity in these cells (42.15 ± 3.98 vs. 52.81 ± 9.04; p < 0.05) (Fig. 3a). Conversely, in cells transfected with the reporter vector encoding the mutated hsa-miR142-3p target site, neither the miR142-3p mimic nor the miR142-3p inhibitor had any significant effect on luciferase activity (p > 0.05). Co-transfection of miR142-3p-NC (non-targeting control) had no effect on the luciferase activity of either vector (p > 0.05). These results verified the presence of a hsa-miR142-3p target site in the 3′-UTR of CTNNB1 mRNA and demonstrated that binding of hsa-miR142-3p to this site down-regulates CTNNB1 expression (Fig. 3a). Interestingly, the miR142-3p mimic lost its inhibition of the activity of luciferase expressed by the wild-type (wt) reporter vector in HepG2 cells, and regained this inhibition after lncRNA-CR594175 silencing (Fig. 3b). Taken together, these data suggest that lncRNA-CR594175 silencing can restore the negative regulation of its target gene CTNNB1 by hsa-miR-142-3p in HepG2 cells.

Fig. 1 Detection of expression levels of lncRNA-CR594175, hsa-miR142-3p, CTNNB1 and proteins related to the Wnt pathway in adjacent normal tissue (ANT), HCC tumors and their metastases. a The levels of lncRNA-CR594175, hsa-miR142-3p and CTNNB1 mRNA were detected by real-time PCR; in the right panel, the CTNNB1 protein evaluated by western blotting shows a higher level in metastatic tissues. b mRNA and protein levels of E-cadherin (95 kDa), C-myc (56 kDa), CyclinD1 (36 kDa) and MMP-9 (102 kDa) in non-tumor adjacent tissues, HCC and metastatic tissues were detected by real-time PCR and western blotting, respectively: representative blots; the optical density of the target band divided by the optical density of the β-actin band. Data are expressed as mean ± SD. *, p < 0.05 and **, p < 0.01, t-test.

Fig. 2 Genetic intervention through a lentiviral approach. a GFP expression 72 h after HepG2 cells were infected with the recombinant viruses Lv-shRNA-CR594175 and Lv-CTNNB1. The infection rate was estimated by dividing the number of cells expressing GFP by the number of all cells in each view; five views were randomly selected, and the mean was calculated. b Evaluation of lncRNA-CR594175 and CTNNB1 levels after infection with lentiviruses. For RT-qPCR and western blotting, β-actin was used as the internal reference. **, p < 0.01, vs. cell group. The tests were carried out on three biological triplicates, and data are expressed as mean ± SD.

Compared with the control group, the invasion ability of HepG2 cells in the lncRNA-CR594175 silencing group was significantly weakened (p < 0.01 vs. control group) and significantly enhanced in the CTNNB1 overexpression group and in the lncRNA-CR594175 silencing combined with CTNNB1 overexpression group (p < 0.05 vs. control group); there was no significant change in the NC group or the miR142-3p overexpression group relative to the control group (p > 0.05 vs. control group).
The invasive ability of HepG2 cells in the lncRNA-CR594175 silencing combined with miR142-3p overexpression group was significantly weaker than that of the control or NC group (p < 0.01 vs. control group or NC group), but not significantly different from that of the lncRNA-CR594175 silencing group (p > 0.05 vs. miR142-3p overexpression group) (Fig. 4b). The in vivo experiment showed that 4 consecutive weeks of treatment with Lv-shRNA-CR594175 significantly reduced tumor volume. After administration for five weeks, the tumor volume was 701.21 ± 54.13 mm^3 in the model group, 672.34 ± 49.06 mm^3 in the NC control group and 212.31 ± 57.71 mm^3 in the lncRNA-CR594175 silencing group. The tumor inhibition rates in the NC group and the lncRNA-CR594175 silencing group were 4.12% and 69.73%, respectively, with a statistically significant difference between the lncRNA-CR594175 silencing group and the other two groups (p < 0.01, vs. model group or NC group) (Fig. 4c).

Effect of lncRNA-CR594175 silencing on expression of downstream functional proteins of the Wnt pathway

We assessed E-cadherin, C-myc, CyclinD1 and MMP-9 in lncRNA-CR594175-silenced HepG2 cells. The results showed that C-myc, CyclinD1 and MMP-9 were significantly decreased and E-cadherin significantly increased by lncRNA-CR594175 silencing (p < 0.01), with no observable change in protein levels after treatment with Lv-NC (p > 0.05) (Fig. 5). These results indicated that lncRNA-CR594175 silencing can regulate the classic Wnt pathway by reducing CTNNB1 protein levels and the other Wnt pathway activation-related proteins that modulate cell proliferation and invasiveness (Fig. 5).

Fig. 3 Hsa-miR142-3p binds to the CTNNB1 3′-UTR, and this binding is interfered with by lncRNA-CR594175. a 293TN cells were transfected with pGL3-wt-CTNNB1 or pGL3-mt-CTNNB1 in the presence or absence of the miR142-3p mimic or inhibitor and subjected to luciferase activity assay 48 h later.
Left, predicted binding site of hsa-miR142-3p in the 3′-UTR of CTNNB1; right, effects of hsa-miR142-3p on the expression of a luciferase cassette encoding the CTNNB1 3′-UTR. The histogram shows the relative firefly luciferase activity for the different experimental groups. *, p < 0.05, and **, p < 0.01, compared with the group transfected with the same vector but without the miR142-3p mimic or inhibitor. b Effect of lncRNA-CR594175 silencing on the regulation of CTNNB1 by hsa-miR142-3p in HepG2 cells. Left, HepG2 cells were transfected with the indicated vectors and subjected to luciferase activity assay 48 h later; the histogram shows the relative firefly luciferase activity for the different experimental groups. Right, prediction of the binding sites of hsa-miR-142-3p in lncRNA-CR594175. *, p < 0.05, compared with the group transfected with pGL3-wt-CTNNB1 and miR142-3p mimics. Data are expressed as mean ± SD of at least three independent experiments.

Discussion

Invasion and metastasis refer to cancer cells breaking away from the primary tumor focus and spreading to other sites, where they proliferate into cancer of the same nature [13]. This process depends on the interaction between cancer cells and the tumor microenvironment, which promotes their survival, growth and angiogenesis, as well as invasion and metastasis [14]; therefore, inhibiting the proliferation and invasion of tumor cells is key to inhibiting tumor metastasis. As an important class of regulators, lncRNAs exert their functions in a variety of ways. Although they were first regarded as by-products of RNA polymerase II, or transcriptional noise, recent studies have shown that lncRNAs are involved in multiple biological processes such as chromosome silencing, chromatin modification and transcriptional regulation [15,16]. The proportion of lncRNAs among the total transcripts of the genome is far larger than that of coding RNAs.
Moreover, lncRNAs play crucial roles in the regulatory network through their interactions with DNA, RNA and proteins. In addition to gene expression regulation, lncRNAs are closely related to species evolution, embryonic development, metabolism and tumorigenesis. Evidence on the involvement of lncRNAs in diseases, including cancers, will provide a basis and targets for diagnosis and treatment. Sun Shu-han et al. found that lncRNA-Dreh can inhibit hepatocellular carcinoma metastasis [17]. We screened for differentially expressed lncRNAs in several pairs of selected HCC and adjacent tissues using lncRNA chips. LncRNA-CR594175 caught our attention because its expression was not only higher in HCC than in adjacent non-cancerous tissues, but also higher in HCC metastases than in primary HCC, indicating that lncRNA-CR594175 may be associated with HCC occurrence and metastasis.

Fig. 4 Effects of lncRNA-CR594175 depletion on proliferation and invasion of HCC cells and in vivo tumor suppression. a Cell proliferation activity assay. HepG2 cells were infected with the indicated lentiviruses, seeded into 96-well plates and used for detection of cell viability at 0, 24, 48 and 72 h. The x-axis represents the different time points and the y-axis represents the absorbance at 490 nm. b Cell invasion assay: HepG2 cells 48 h after infection with the indicated lentiviruses; representative images of cells that were seeded into the upper chamber of a transwell and passed through the basement membrane. c Growth curves of tumors in vivo. The x-axis represents the period of virus injection and the y-axis represents the tumor volume (mm^3). The formula for calculating tumor volume was V = 0.5 × a × b × b, where a and b are the long and short diameters of the tumor. The number of animals in one group was 12 (n = 12). **, p < 0.01; *, p < 0.05; t-test. Data are expressed as mean ± SD.
We knocked down lncRNA-CR594175 in HepG2 cells and found that proliferation and invasion were reduced, as were the Wnt pathway downstream proteins C-myc, CyclinD1 and MMP-9, while E-cadherin showed the opposite behavior. We therefore believe that high lncRNA-CR594175 levels may contribute to metastasis formation and tumor progression by regulating proteins downstream of the Wnt pathway. The Wnt signaling pathway is one of the key signaling pathways for cell proliferation and differentiation. To determine how lncRNA-CR594175 promotes the Wnt pathway in HCC, we analyzed key proteins involved in invasion and migration in lncRNA-CR594175-silenced and control HCC cells, and found that CTNNB1 expression was consistent with lncRNA-CR594175 expression, which was confirmed in primary HCC, metastatic HCC and adjacent tissues. Interestingly, our RIP (RNA-binding protein immunoprecipitation) experiment showed that lncRNA-CR594175 did not bind CTNNB1 directly (data not shown). Next, we quantified the transcript and protein levels of CTNNB1 in lncRNA-CR594175-silenced HCC cells and found that CTNNB1 expression was altered at the post-transcriptional level, suggesting that CTNNB1 expression is regulated post-transcriptionally downstream of high lncRNA-CR594175 expression. As a typical post-transcriptional regulator, miRNA naturally became our entry point for investigating the relationship between lncRNA-CR594175 and CTNNB1. Bioinformatics suggests that there is a 7-base-pair seed region for hsa-miR142-3p in the 3′-UTR of CTNNB1 and two seed regions in the 600-base-pair lncRNA-CR594175. We therefore speculated that elevated lncRNA-CR594175 binds hsa-miR142-3p as a miRNA sponge and disables the negative regulation of CTNNB1 by hsa-miR142-3p, so that CTNNB1 expression increases and drives proliferation and invasion of HCC cells. The interaction between lncRNAs and miRNAs has an important influence on the onset and development of cancer [18]. MiRNAs are able to regulate lncRNAs in a targeted way: one study has shown that miR-21 targets the lncRNA GAS5 in addition to protein-coding genes [19]. LncRNAs can also affect the onset and development of cancer by regulating the expression of miRNAs [20]. According to existing studies, lncRNAs regulate miRNAs in three ways: (1) by binding competitively to the 3′-UTR of mRNAs, thereby inhibiting negative regulation by miRNAs (Faghihi et al., for example, found that an antisense RNA can bind to BACE1 mRNA, competitively inhibiting the negative regulation of BACE1 by miRNA [21]); (2) by regulating target genes through forming pre-miRNAs after RNA splicing and producing specific miRNAs [22,23]; and (3) by acting as endogenous miRNA sponges to suppress miRNA function, thereby affecting the malignant biological behavior of cancer cells [24]. The most important finding of this study is that lncRNA-CR594175 silencing could restore the negative regulation of CTNNB1 by hsa-miR142-3p and thereby inhibit cancer, based directly on the following facts: (1) hsa-miR142-3p negatively regulated CTNNB1 by binding to its 3′-UTR, which was observed in HCC cells with lncRNA-CR594175 silencing but not in those with high lncRNA-CR594175 expression; (2) lncRNA-CR594175 silencing inhibited proliferation and invasion of HCC cells, which was reversed by overexpression of CTNNB1; (3) overexpression of hsa-miR142-3p had no observable effect on proliferation and invasion of HCC cells on its own, but inhibited proliferation and invasion when lncRNA-CR594175 was depleted.

Fig. 5 Effects of lncRNA-CR594175 silencing on E-cadherin, C-myc, CyclinD1 and MMP-9. HepG2 cells were infected with Lv-NC (negative control) or with Lv-shRNA-CR594175 and, 72 h later, subjected to western blotting to test protein levels downstream of the Wnt pathway. β-actin (43 kDa) was used as the loading control. Data are representative of at least three independent experiments. **, p < 0.01; *, p < 0.05; t-test.
Considering that CTNNB1 overexpressed from the lentiviral system lacks the wild-type 3′-UTR, it would not be affected by the miRNA. We therefore conclude that a lncRNA-CR594175/hsa-miR-142-3p/CTNNB1 axis regulates metastasis formation in HCC.

Conclusion

This study demonstrates that lncRNA-CR594175 plays a key role in the process of HCC metastasis and offers a possible explanation for why hsa-miR142-3p loses its basic tumor-resisting function in HCC. In the long run, lncRNAs will not only be direct targets for gene therapy but may also be used together with miRNAs for better effect.
Mott Cell Differentiation in Canine Multicentric B Cell Lymphoma with Cross-Lineage Rearrangement and Lineage Infidelity in a Dog

Simple Summary The scientific literature on Mott cell differentiation in canine lymphoma is scarce. Mott cells are plasma cell-derived cells that are defective in immunoglobulin secretion, and lymphoma is a severe condition characterized by the proliferation of neoplastic lymphoid cells. Lymphoma can be divided into B- or T-cell types according to its origin, which can be confirmed by PCR for antigen receptor rearrangement or by flow cytometry. The phenomenon in which B- and T-cell results are simultaneously identified is called cross-lineage rearrangement (in PCR for antigen receptor rearrangement) or lineage infidelity (in flow cytometry), and it is known to occur occasionally in canine lymphoma. These phenomena have not previously been reported in canine lymphoma with Mott cell differentiation. This study is the first report of Mott cell differentiation in canine B-cell lymphoma with cross-lineage rearrangement and lineage infidelity, describing the clinical features, diagnosis, and treatment of this previously unreported presentation in a 4-year-old female mongrel dog.

Abstract Lymphoma is a severe condition characterized by the proliferation of neoplastic lymphoid cells. A 4-year-old female mongrel dog presented with solitary lymph node enlargement. Significant right prescapular lymphadenopathy and abdominal enlargement were observed during physical examination. A complete blood count revealed lymphocytosis, and a peripheral blood smear revealed lymphoblastosis and Mott cells. Fine needle aspiration cytology (FNAC) of the right prescapular lymph node revealed a predominant population of lymphoblasts and Mott cells. Based on the FNAC and blood smear results, the patient was diagnosed with leukemic-state multicentric B-cell lymphoma with Mott cell differentiation.
Subsequent PCR for antigen receptor rearrangement and flow cytometry revealed that the patient exhibited cross-lineage rearrangement (CLRA) and lineage infidelity (LI), respectively. CHOP-based chemotherapy was initiated; however, the disease was progressive, and the patient died three months after the initial presentation. Mott cell differentiation in canine B-cell lymphoma (MCL) has rarely been reported in the veterinary literature and seems to follow an unusual clinical course. To the best of our knowledge, no reports of MCL with CLRA and LI exist. We report the clinical features, diagnosis, and treatment of MCL with CLRA and LI.

Introduction

Lymphoma (lymphosarcoma, LSA) is the uncontrolled proliferation of neoplastic lymphoid cells arising in lymphoid or other tissues of the body. It is commonly encountered in small animal practice and comprises approximately 83% of canine hematopoietic cancers [1]. No sex predisposition has been reported. LSA can occur at any age but is most common in middle-aged to older dogs. One study reported an age-adjusted overall incidence of 1.5/100,000 for dogs <1 year of age and 84/100,000 for 10-year-old dogs [2]. Although the etiology of canine LSA is poorly understood, it is likely multifactorial (e.g., genetic, molecular, environmental, immunological, and infectious factors) [3]. LSA can be classified based on anatomic location (e.g., multicentric, gastrointestinal, mediastinal, and cutaneous LSA), histopathologic features, immunophenotypic characteristics (i.e., B- and T-cell LSA), and the World Health Organization (WHO) clinical staging system [3][4][5]. The diagnosis of LSA usually depends on morphological characteristics identified using fine-needle aspiration cytology (FNAC) and/or histopathology. Additionally, the presence of more than 50% lymphoblasts is often used as a diagnostic hallmark of LSA [4].
In addition to these tests, additional diagnostics such as immunohistochemistry (IHC), flow cytometry (FC), and PCR for antigen receptor rearrangement (PARR) have been developed to assist in the diagnosis and classification of LSA. As canine LSA is systemic in most patients, chemotherapy is the mainstay of treatment. Despite numerous chemotherapy protocols being available, CHOP-based chemotherapy (i.e., cyclophosphamide [C], doxorubicin [hydroxydaunorubicin], vincristine [oncovin], and prednisone/prednisolone [P]) is considered the most effective treatment protocol for canine LSA [2,6]. LSA prognosis varies depending on numerous factors, such as the stage, location, clinical remission in response to chemotherapy, immunophenotypes, and characteristics of the neoplastic cells [7,8]. Mott cells are defective in immunoglobulin secretion and are derived from plasma cells [9][10][11]. The multiple spherical inclusions of Mott cell cytoplasm represent immunoglobulin (Ig) accumulation in the rough endoplasmic reticulum, called Russell bodies [9][10][11]. Mott cell differentiation is associated with pathological conditions in some diseases, including chronic inflammation, autoimmune diseases, multiple myeloma, plasma cell dyscrasias, and LSA [11]. Since Mott cells are differentiated B cells, they can appear in B-cell LSA [12]. To the best of our knowledge, nine reports of Mott cell differentiation in canine B-cell lymphoma (MCL) have been published to date [13][14][15][16][17][18][19][20][21]. The exact etiology and clinical course of MCL are poorly understood. Since most patients with MCL in previous studies were euthanized at the time of diagnosis or after a short course of chemotherapy, data on MCL regarding treatment, response to chemotherapy, and prognosis are lacking. MCL is rare and reportedly has an unusual clinical course [13].
Therefore, investigating various MCL cases that have studied the clinical effectiveness of chemotherapy and prognosis is important. Furthermore, MCL with cross-lineage rearrangement (CLRA) or lineage infidelity (LI) has not been reported to date, and its response to chemotherapy and clinical course are entirely unknown. Herein, we report a case of MCL with CLRA and LI, along with the clinical features, diagnosis, and treatment thereof. Case Presentation A 4-year-old female mongrel dog presented with solitary lymph node enlargement. There was no historical evidence of toxicant exposure or infection. Physical examination revealed significant right prescapular lymphadenopathy (3.9 × 3.5 cm) and abdominal enlargement. Complete blood count (CBC, ADVIA® 2120, Siemens Healthcare Diagnostics, Deerfield, IL, USA) revealed severe lymphocytic leukocytosis (lymphocytes: 16; Figure 1). The results of serum biochemistry (BS-200 Chemistry Analyzer; MINDRAY™, Shenzhen, China) were unremarkable. Abdominal ultrasonography detected prominent intraabdominal lymphadenopathy, ascites, and a splenic mass with a honeycomb sign (Figure 2). Fine needle aspiration cytology (FNAC) of the right prescapular lymph node revealed predominant lymphoblasts and several Mott cells (Figure 1). The serosanguineous ascites were identified as exudates (total nucleated cell count: 148,300 cells/µL and total protein: 4.1 g/dL) using CBC and a refractometer, and cytology revealed predominant lymphoid cells, consistent with the features of neoplastic effusion (Figure 3).
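The exudate call above follows from the fluid's cell count and protein concentration. As a rough sketch, the cutoffs below are commonly cited textbook approximations for classifying effusions, not criteria stated by the authors:

```python
# Rough effusion classifier. The TP and TNCC cutoffs are illustrative
# textbook approximations, not values stated in this report.
def classify_effusion(tncc_per_ul: float, tp_g_dl: float) -> str:
    """Classify an effusion from total nucleated cell count (cells/uL)
    and total protein (g/dL)."""
    if tp_g_dl < 2.5 and tncc_per_ul < 1500:
        return "transudate"
    if tp_g_dl > 3.0 and tncc_per_ul > 5000:
        return "exudate"
    return "modified transudate"

# This patient's ascites: TNCC 148,300 cells/uL, TP 4.1 g/dL
print(classify_effusion(148_300, 4.1))  # exudate
```

With the patient's measured values, both the protein and cellularity thresholds for an exudate are comfortably exceeded, consistent with the report's interpretation.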
A PARR (IDEXX Laboratories, Westbrook, ME, USA) assay using the right prescapular lymph node (LN) revealed that the patient presented with clonal rearrangement of both Ig and T-cell receptor genes, a phenomenon known as CLRA. Immunophenotyping via FC assay was performed on right prescapular LN aspirates and peripheral blood samples. In order to select the best gating strategy for the determination of the lymphoid cells in the right prescapular LN and peripheral blood, lymphoid cells were identified and gated by forward scatter (FSC) and side scatter (SSC) characteristics. After doublet exclusion of the gated lymphoid cells, the singlets were gated (Figure 4). Then, immunophenotyping was performed on gated singlet lymphoid cells of the LN and peripheral blood using CD3, CD5, CD21, and CD34 antibodies. Lymphoid cells from the LN and peripheral blood showed a homogenous population (approximately 90%) of CD3+/CD5−/CD21+/CD34− and CD3−/CD5−/CD21+/CD34−, respectively (Table 2, Figure 4). Since CD3 represents the T-cell marker and CD21 represents the B-cell marker, a phenomenon known as LI was found in the right prescapular LN. Based on the results of FNAC, PARR, FC, and the presence of Mott cells, the patient was diagnosed with stage Va (i.e., leukemic state) multicentric B-cell LSA with Mott cell differentiation, CLRA, and LI.
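The gating workflow described above (FSC/SSC lymphoid gate, doublet exclusion, then marker positivity on the gated singlets) can be sketched in code. All channel ranges, the doublet ratio, the positivity cutoff, and the synthetic events below are hypothetical illustrations, not the instrument settings used in this case:

```python
import numpy as np

def gate_singlet_lymphoid(fsc_a, fsc_h, ssc_a,
                          fsc_lo=200.0, fsc_hi=600.0, ssc_hi=300.0,
                          doublet_ratio=1.5):
    """Boolean mask of events inside a rectangular FSC/SSC lymphoid gate
    that also pass FSC-A/FSC-H doublet exclusion (illustrative bounds)."""
    in_lymph_gate = (fsc_a >= fsc_lo) & (fsc_a <= fsc_hi) & (ssc_a <= ssc_hi)
    is_singlet = fsc_a / np.maximum(fsc_h, 1e-9) < doublet_ratio
    return in_lymph_gate & is_singlet

def marker_fractions(markers, gate, cutoff=0.5):
    """Fraction of gated singlets positive for each marker (arbitrary cutoff)."""
    return {m: float(np.mean(v[gate] > cutoff)) for m, v in markers.items()}

# Tiny synthetic example: 1000 events, ~90% CD21-positive, mostly singlets.
rng = np.random.default_rng(0)
n = 1000
fsc_a = rng.uniform(150, 650, n)
fsc_h = fsc_a / rng.uniform(1.0, 1.3, n)        # all pass doublet exclusion
ssc_a = rng.uniform(0, 350, n)
cd21 = np.where(rng.random(n) < 0.9, 0.9, 0.1)  # ~90% positive signal
gate = gate_singlet_lymphoid(fsc_a, fsc_h, ssc_a)
print(marker_fractions({"CD21": cd21}, gate))
```

A homogenous ~90% CD3−/CD21+ fraction on such gated singlets is the pattern the report interprets as a B-cell phenotype in peripheral blood.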
CHOP-based chemotherapy using cyclophosphamide (250 mg/m², IV), vincristine (0.7 mg/m², IV), doxorubicin (1 mg/kg, IV), and prednisolone (2.0 mg/kg, PO, q24h for seven days, then slowly tapered until the final visit) was initiated. The patient was monitored weekly for clinical response to chemotherapy throughout the protocol, and abdominal ultrasonography was repeated at the end of each chemotherapy cycle. The initial response to chemotherapy was favorable during the first cycle of CHOP therapy. The enlarged superficial and intra-abdominal LNs decreased in size by approximately 40%, and the lymphocyte count on CBC was within the reference interval (4.05 × 10⁹ cells/L). However, complete remission (CR) was not achieved, and partial remission (PR) was maintained for the first four weeks. In the 6th week of chemotherapy (i.e., on the first day of the second cycle), the patient's superficial and intra-abdominal LNs and spleen were observed to be enlarged, which was considered progressive disease (PD), and was followed by CHOP chemotherapy and L-asparaginase administration (400 U/kg, SC). The response to re-instituted chemotherapy over four weeks was consistently poor, and palliative therapy using chlorambucil (6 mg/m², PO, q24h) and prednisolone (1.0 mg/kg, PO, q24h) was initiated after L-CHOP chemotherapy at the request of the owner.
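The mg/m² doses above are scaled by body surface area; in dogs, BSA is conventionally estimated from body weight with the allometric formula BSA(m²) = 10.1 × (weight in grams)^(2/3) × 10⁻⁴. A minimal dose-arithmetic sketch follows; the 10 kg weight is a made-up example (the dog's weight is not stated in the report), and this is illustrative arithmetic, not clinical guidance:

```python
def canine_bsa_m2(weight_kg: float) -> float:
    """Conventional allometric BSA estimate for dogs:
    10.1 * (body weight in grams) ** (2/3) * 1e-4."""
    return 10.1 * (weight_kg * 1000.0) ** (2.0 / 3.0) * 1e-4

def chop_doses(weight_kg: float) -> dict:
    """Doses from this report's protocol; doxorubicin here is per-kg."""
    bsa = canine_bsa_m2(weight_kg)
    return {
        "cyclophosphamide_mg": 250.0 * bsa,  # 250 mg/m2, IV
        "vincristine_mg": 0.7 * bsa,         # 0.7 mg/m2, IV
        "doxorubicin_mg": 1.0 * weight_kg,   # 1 mg/kg, IV
    }

print(round(canine_bsa_m2(10.0), 3))  # ~0.469 m2 for a hypothetical 10 kg dog
```

For a hypothetical 10 kg dog, the cyclophosphamide dose would come to roughly 117 mg per administration.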
Unfortunately, the response to this treatment was also consistently poor, and the superficial and intra-abdominal LNs and the spleen gradually increased in size until the final visit (Figure 5). The patient eventually became anorexic and lethargic three months after the initial presentation and expired (survival time of 81 days after the initial treatment). Discussion In the present report, we describe the diagnostic and therapeutic trials of a dog with multicentric B-cell LSA, as well as Mott cell differentiation, CLRA, and LI. Although histopathological examination could not be performed via IHC assay due to the owner's refusal, we were finally able to diagnose the patient with MCL based on the results of FNAC (existence of lymphoblastosis and Mott cell differentiation) and FC analysis of peripheral blood (immunophenotyping of CD3−/CD5−/CD21+/CD34−). Mott cells are derived from terminally differentiated B cells, and neoplastic lymphoid differentiation combined with Mott cell differentiation is indicative of B-cell LSA [12]. These results suggest multicentric B-cell LSA with Mott cell differentiation, CLRA, and LI in this patient. In addition, numerous lymphoblasts and Mott cells were found in the patient's peripheral blood smears. Lymphoblasts in the peripheral blood are representative signs of leukemic LSA or leukemia. It is difficult to differentiate between LSA and leukemia based on the presence of lymphoblasts in the blood alone. LSA can be differentiated from leukemia with clinical findings such as the presence of generalized lymphadenopathy and flow cytometric analysis of CD34. CD34 is a transmembrane phosphoglycoprotein encoded by the CD34 gene in various species, including humans and dogs [22]. CD34-expressing cells are normally found in the umbilical cord and bone marrow as hematopoietic cells [23]. Because of these characteristics, CD34 is often used clinically to differentiate leukemic-state LSA from acute lymphocytic leukemia (ALL) and ALL from chronic lymphocytic leukemia (CLL), although it is not always definitive [4].
Since both CD34+ LSA and CD34− leukemia have been reported in canine patients, CD34 alone cannot completely rule out leukemia. However, because generalized lymphadenopathy and predominantly CD34− lymphoid cells were identified in the present patient, the likelihood of leukemic-state lymphoma was considered high. In the present patient, CLRA and LI were found using PARR and FC, respectively. PARR is a methodology used to detect clonality in B-cell and T-cell LSA [24]. Normally, Ig or T-cell receptor (TCR) gene rearrangements are considered to be of B- or T-lineage origin, respectively. However, CLRA, a phenomenon that breaks this view, is a finding comprising cross-lineage expression of lineage-specific immunological markers on B- or T-cells (i.e., both Ig and TCR on PARR) [25][26][27][28][29]. Previous studies reported this uncommon phenomenon in 21% of dogs with marginal zone lymphoma and 5% of dogs with T-cell LSA [27,30]. Unfortunately, the precise mechanism of CLRA in LSA is poorly documented, and its effects on prognosis or response to treatment have not been determined. Further studies on the clinical implications of CLRA are warranted. FC is a useful test for diagnosing the immunophenotype of LSA and has the advantage of providing a larger panel of markers than IHC or ICC [4]. It is highly sensitive and specific for diagnosing LSA, has 94% agreement with IHC, and is superior to the PARR test [31]. In the FC assay of the right prescapular LN, most lymphocytes expressed CD3+/CD5−/CD21+/CD34−. CD3 is a T-cell co-receptor expressed in the membranes of normal and neoplastic T cells [32]. Because CD3 is present at all stages of T-cell development, it is a useful marker for identifying T-cell LSA. Likewise, CD21, a protein expressed on B cells, is used to identify B-cell LSA [33].
However, the patient's lymphocytes in the right prescapular lymph node showed both CD3 and CD21, making it difficult to determine the phenotype. This phenomenon, known as LI, causes chemoresistance in LSA and leukemia in humans and is considered a negative prognostic factor [29,[34][35][36][37][38]. Although the cause of this phenomenon is unclear, chemoresistance may be related to the high prevalence of drug efflux pump expression and the high proportion of cytogenetic abnormalities [36][37][38][39]. Several reports in humans regarding the aberrant expression of CD markers in LSA and leukemia have been published to date. However, the clinical implications of LI in canine LSA are poorly understood. A previous study reported the lineage differentiation of canine LSA using FC [28]. Of the 59 dogs, 13 had LI. In this study, leukemic states occurred in all three phenotypes (i.e., B-, T-cell, and LI); however, LI cases comprised the largest proportion. Similarly, our patient was also in a leukemic state. LI may be related to leukemic LSA or leukemia. Since FC is useful in diagnosing and evaluating LSA prognosis, it should be considered as a front-line test in patients with LSA, and further studies on the clinical implications of LI in canine LSA are needed. Although rarely curable, most LSAs are initially responsive to treatment, and LSA that responds to chemotherapy has a better prognosis than LSA that does not [7,8,40]. In previous studies, most patients with MCL were euthanized at the time of diagnosis, but some received chemotherapy (e.g., COP, CHOP, and prednisone alone) [14][15][16][17][18][20][21]. There are only two cases in which MCL was treated using CHOP [16,17]. In one study, an MCL dog treated with CHOP improved clinically in early induction [16]. However, the patient became ill at week 10 of chemotherapy, and the dog was euthanized three months after the initial presentation [16].
In another study, an MCL dog treated with CHOP also achieved clinical remission initially but recurred 2.5 months after induction [17]. Although rescue chemotherapy was attempted multiple times, the patient showed a partial and temporary response only to dacarbazine [17]. The dog was euthanized due to severe and uncontrolled seizures nine months after the initial diagnosis [17]. Likewise, after the diagnosis, our patient received CHOP-based chemotherapy, including L-asparaginase. However, the response to chemotherapy was transient and poor. The patient died three months after the initial presentation. This is markedly shorter than the 10-14-month mean survival time (MST) of multicentric B-cell LSA [41]. Furthermore, most patients with MCL have extensive nodal and extranodal involvement [13]. Likewise, the current patient showed a poor response to chemotherapy and had extensive nodal and extranodal involvement, including the spleen, superficial and abdominal lymph nodes, and bone marrow at the initial presentation. Although splenectomy, bone marrow examination, and necropsy, which are tests that can confirm LSA infiltration, were not performed due to the owner's refusal, lymphoblastic abdominal effusions around the spleen and lymphoblasts on peripheral blood smears suggested the involvement of the spleen and bone marrow, respectively. Extrapolation of MCL prognosis is difficult due to the small number of reports; however, MCL appears to be a negative prognostic factor in canine LSA. In addition to MCL as an important prognostic factor, CLRA expression in PARR assays and atypical immunophenotypic features of LI should be investigated in canine LSA. However, this is beyond the scope of this report, and further large-scale retrospective reviews are needed.
Moreover, to broaden our knowledge of MCL with CLRA and LI and to establish tailored treatment strategies, a complete characterization of this type of LSA is required, as is further investigation of other case series and studies about the clinical implications of CLRA and LI. Conclusions To our knowledge, this is the first case report of MCL with CLRA and LI. The patient's response to therapy remained consistently poor despite treatment with CHOP-based chemotherapy. Taken together, MCL might be considered a negative prognostic factor and appears to have an unusual clinical course in canine B-cell LSA. A complete characterization of this type of LSA requires further investigation with additional case studies, and further studies on the clinical implications of CLRA and LI are required. Institutional Review Board Statement: Not applicable. Informed Consent Statement: The animal's owner provided informed consent for her details to be published and the study to be conducted. Data Availability Statement: Not applicable.
Functional network connectivity imprint in febrile seizures Complex febrile seizures (CFS), a subset of paediatric febrile seizures (FS), have been studied for their prognosis, epileptogenic potential and neurocognitive outcome. We evaluated their functional connectivity differences with simple febrile seizures (SFS) in children with recent-onset FS. Resting-state fMRI (rs-fMRI) datasets of 24 children with recently diagnosed FS (SFS: n = 11; CFS: n = 13) were analysed. Functional connectivity (FC) was estimated using time series correlation of seed region–to-whole-brain-voxels and network topology was assessed using graph theory measures. Regional connectivity differences were correlated with clinical characteristics (FDR corrected p < 0.05). CFS patients demonstrated increased FC of the bilateral middle temporal pole (MTP) and bilateral thalami when compared to SFS. Network topology study revealed increased clustering coefficient and decreased participation coefficient in the basal ganglia and thalamus, suggesting an inefficient, unbalanced network topology in patients with CFS. The number of seizure recurrences negatively correlated with the integration of the Left Thalamus (r = −0.58) and the FC of the Left MTP to the 'Right Supplementary Motor and left Precentral' gyrus (r = −0.53). The FC of the Right MTP to the Left Amygdala, Putamen, Parahippocampal, and Orbital Frontal Cortex (r = 0.61) and the FC of the Left Thalamus to the left Putamen, Pallidum, Caudate, Thalamus, Hippocampus and Insula (r = 0.55) showed a positive correlation with the duration of the longest seizure. The findings of the current study report altered connectivity in children with CFS proportional to the seizure recurrence and duration. Regardless of the causal/consequential nature, such observations demonstrate the imprint of these disease-defining variables of febrile seizures on the developing brain.
www.nature.com/scientificreports/
Risk-factor analyses have revealed that patients with intractable TLE report a history of prolonged febrile seizures (CFS) at a higher frequency (30-60%) 9,13. CFS is associated with a heightened risk of epilepsy in 4.1-6.0% of the cases 12. Tsai et al., in a study on long-term neurocognitive outcomes in subjects with CFS, noted significantly lower full-scale intelligence quotient (FSIQ), perceptual reasoning index, and working memory index scores than in the control group 14. Dube et al. indicated that hyperthermic seizures in the immature rat model of FS do not cause spontaneous limbic seizures during adulthood 15. However, "prolonged" experimental FS led to later-onset limbic (temporal lobe) epilepsy and interictal epileptiform EEG abnormalities in a significant proportion of rats 16. The literature on the clinical profile of FS notwithstanding, imaging literature in FS is sparse. Theodore et al., in a study of 35 subjects presenting as refractory Complex Partial Seizures [CPS] with video-EEG evidence of temporal lobe onset, found that the 9 patients with a prior presentation of CFS had smaller volumes of the ipsilateral Hippocampal Formation [HF]. FS in the clinical history had a predictive value for the severity of HF atrophy 17. Suffice it to say that the imaging literature in FS has at best summarised the morphological aspects of the brain, with little emphasis on functional connectivity, especially in the case of CFS, which seems to have a different evolution. Imaging studies attempting to understand connectivity differences even in patients with generalised epilepsy are riddled with complexities, as the observed differences between patients and matched controls could be secondary to various disease-defining features like the type of seizures, duration of disease, number of recurrences, familial loading of epilepsy, etc.
Hence there is an increased interest in studying drug-naïve patients with new-onset seizures and the use of disease-matched controls in an attempt to reduce the complexity of the question to be answered 18,19. With a theoretical background of dissonant profiles of SFS and CFS from an experimental, clinical and neurocognitive perspective, we identify a knowledge gap about early brain connectivity changes in children with CFS vis-à-vis SFS. We hypothesize that there could be differences in the connectivity patterns between clinical subgroups. It is likely that the alteration in connectivity may not be limited to a particular brain region, but may manifest as whole-brain connectivity changes. Graph theory is a relevant formalism in this context for studying large-scale brain networks. Here networks are conceptually represented as sets of nodes (vertices exemplified by ROIs in the brain) connected by links (edges illustrated by structural, functional or effective connections) 20. We use seed-region to whole-brain-voxel time-series functional connectivity and graph theoretical analysis on Blood Oxygen Level Dependent (BOLD) resting-state functional magnetic resonance imaging (rs-fMRI) to compare subjects with CFS and SFS. Results 24 subjects were included in the final analysis [Simple FS: 11 and Complex FS: 13] within 12 days [IQR 8.5-13.5] and 10 days [IQR 9-30] after the last seizure, respectively. Among the clinical variables analysed, namely mean age at onset, the number of recurrent febrile seizures, the maximum duration of febrile seizures, and duration of the disease, none showed significant between-group differences. Functional connectivity (FC) was estimated using time series correlation of seed region-to-whole-brain-voxels, and the brain regions showing significant between-group connectivity differences were correlated with disease-defining clinical characteristics.
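Seed-to-voxel FC of this kind reduces to correlating the seed region's time series with every voxel's time series. A minimal numpy sketch on synthetic data follows; the real pipeline additionally involves preprocessing, nuisance regression and Fisher z-transformation before group statistics:

```python
import numpy as np

def seed_to_voxel_fc(seed_ts, voxel_ts):
    """Pearson r between one seed time series (shape (T,)) and many
    voxel time series (shape (T, V)); returns one r per voxel."""
    s = (seed_ts - seed_ts.mean()) / seed_ts.std()
    v = (voxel_ts - voxel_ts.mean(axis=0)) / voxel_ts.std(axis=0)
    return s @ v / len(s)

# Synthetic demo: voxel 0 tracks the seed, voxel 1 is anticorrelated,
# voxel 2 is independent noise.
rng = np.random.default_rng(1)
t = np.arange(200)
seed = np.sin(0.1 * t) + 0.1 * rng.standard_normal(200)
voxels = np.column_stack([seed + 0.1 * rng.standard_normal(200),
                          -seed + 0.1 * rng.standard_normal(200),
                          rng.standard_normal(200)])
r = seed_to_voxel_fc(seed, voxels)
```

Group comparison then tests these per-voxel r values (after Fisher z-transform, `np.arctanh(r)`) between the CFS and SFS groups with FDR correction, as reported above.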
Our results revealed that the CFS group had altered temporal lobe connectivity and altered basal ganglionic integration measures proportional to recurrences and duration of the seizure. Altered brain connectivity in complex febrile seizure. Seed-to-voxel functional connectivity analysis revealed that the patterns of connectivity of multiple seed ROIs in patients in the CFS group were significantly different (FDR-corrected, P < 0.05 for 90 ROIs) from those in the SFS group (Fig. 1). The network topology in complex febrile seizure. The CFS group revealed increased network segregation (i.e., clustering coefficient), decreased network integration (i.e., participation coefficient), and decreased global efficiency (Fig. 2, Table 2, Supplementary Fig. S1l-u). A significantly (FDR-corrected, P < 0.05 for 90 ROIs) increased network segregation was observed in the right inferior frontal gyrus (p = 0.0005) and left caudate (p = 0.001). The bilateral caudate (right hemisphere p = 0.0005, left hemisphere p = 0.0032), pallidum (p = 0.0009), right thalamus (p = 0.0023), amygdala (p = 0.0016), orbitofrontal gyrus (p = 0.0004) and paracentral lobule (p = 0.0027) revealed decreased network integration (Fig. 3; Table 2). Correlation with clinical variables. The duration of the longest febrile seizures was positively correlated with the FC of the right MTP (r = 0.61) and of the left thalamus (r = 0.55). Discussion Febrile seizures are classified as generalised onset motor seizures in the revised ILAE classification of epilepsy 21. The FEBSTAT study remarked that children presenting with febrile status epilepticus are at risk for acute hippocampal injury in the background of structural imaging abnormalities 22. In this study, we demonstrate that children with CFS have altered connectivity in the bilateral temporal lobes and thalami when compared to the SFS group. We also note increased segregation and decreased integration in several subcortical structures and the right frontal lobe in the CFS group.
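The two nodal measures reported above have simple closed forms: the clustering coefficient of node i is the fraction of its neighbour pairs that are themselves connected, and the participation coefficient is P_i = 1 − Σ_s (k_is/k_i)², where k_is is node i's degree into module s. A minimal sketch on a toy binary network follows; the module assignment is assumed given, whereas in practice it would come from a community-detection step:

```python
import numpy as np

def clustering_coefficient(A):
    """Nodal clustering coefficient of a binary undirected adjacency A."""
    deg = A.sum(axis=1)
    triangles = np.diag(A @ A @ A) / 2.0   # triangles through each node
    pairs = deg * (deg - 1) / 2.0          # connectable neighbour pairs
    return np.where(pairs > 0, triangles / np.maximum(pairs, 1), 0.0)

def participation_coefficient(A, modules):
    """P_i = 1 - sum_s (k_is / k_i)^2 over module labels s."""
    modules = np.asarray(modules)
    deg = A.sum(axis=1)
    s = np.zeros(len(A))
    for m in np.unique(modules):
        k_im = A[:, modules == m].sum(axis=1)
        s += (k_im / np.maximum(deg, 1)) ** 2
    return np.where(deg > 0, 1.0 - s, 0.0)

# Toy network: triangle {0,1,2} in module 0; node 3 bridges to module 1.
A = np.zeros((5, 5))
for i, j in [(0, 1), (0, 2), (1, 2), (0, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0
modules = [0, 0, 0, 1, 1]
C = clustering_coefficient(A)
P = participation_coefficient(A, modules)
```

Node 3, with its edges split evenly between the two modules, gets P = 0.5, while node 1, connected only within its own module, gets P = 0: higher clustering with lower participation is exactly the "segregated, poorly integrated" pattern the study reports in the basal ganglia and thalamus of the CFS group.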
The increased connectivity of temporal lobes and decreased integration of the subcortical structures correlated with the number of recurrent febrile seizures and the duration of the longest seizures 23,24. While the causal nature and relevance of structural imaging findings are being contested 25,26, the findings of the current study add to the understanding of rs-fMRI-derived functional connectivity alterations in new-onset FS. We surmise that these changes may be the imaging translation of experimental evidence of alterations of functional neuronal and network properties (increased excitability) in early life epileptogenesis 27,28 without structural alterations such as neuronal death, genesis or altered branching 29. Increased excitability can in part explain reduced FC within and across circuits exhibiting this trait. With intact neurovascular coupling, neurons with a differentially higher likelihood of firing would lead to less coherent "ensemble-type firing"/local field potentials. Such less coherent (viz, noisier) neural activity can culminate in reduced RSFC correlation strength within and across resting-state networks. The functional connectivity imprint of febrile seizures reported in the current study could represent either the cause or the effect of febrile seizures. Hyperconnectivity, a probable marker of increased neural resource use, is a well-documented, ubiquitous response in many neurological disruptions 30. Notwithstanding its implications in terms of processing speed, cognitive fatigue and resource use 31, increased connectivity has nevertheless been noted as a pervasive reaction to many neurologic disturbances. Evidence suggests that this change in connectivity stems from increased spiking output from neuronal ensembles 32. In the pathobiology of complex febrile seizures, the observed hyperconnectivity may have a role that is not necessarily compensatory.
Enhanced EEG connectivity in febrile seizures has been incriminated as a seizure-prone state by Birca et al. 33. Mossy fibre plasticity and enhanced hippocampal excitability, with neither hippocampal cell loss nor altered neurogenesis, have been reported in animal models of prolonged febrile seizures 34. Frequently described in the context of temporal lobe epilepsy, kindling is a self-propagating reduction in the ability of the brain to limit seizures, in which the duration and behavioural involvement of induced seizures increase after repetitive induction of mesial temporal lobe seizures 35. Sequential exaggerated mossy fibre invasion of the molecular and granule cell layers has been associated with the process of "kindling". Murine models of hyperthermic seizures have documented evidence of the kindling-like phenomenon in epileptogenesis 36. We propose that processes like these above-detailed mechanisms may operate in the genesis of hyperconnectivity in CFS, as well as in the observed correlation with febrile seizure recurrence and seizure duration. Seizure circuitry in the context of temporal lobe epilepsy has been classified broadly as (a) the minimal-size initiating circuit(s) involving the trisynaptic circuit of the entorhinal cortex, the dentate gyrus (a "gatekeeper" resisting recruitment) and Ammon's horn of the hippocampus and (b) the pathways of seizure spread by which additional brain circuits are recruited as the seizure continues and spreads 27. In this context, it becomes difficult to ignore the similarity of the regions described in the experimental models with those obtained from the data-driven methodology used in the current study. The hyperconnectivity between allocortical [hippocampus, amygdala, accumbens] and neocortical regions [middle temporal gyrus] of bilateral temporal lobes is similar to the regions described in the seizure-initiating circuit.
There was also positive connectivity of the bilateral thalami to multiple subcortical and cortical structures, probably indicating pathways of spread 37 . Considering the translational value from a clinical-benefit perspective, the network topology of increased segregation and decreased integration found in our study indicates the simplified, regularised nature of the networks in these regions, reiterating the argument for disease-related network alterations (affecting long-range connections and forming self-reinforcing networks) in children with CFS. These observations are in tune with studies on subjects with generalised tonic-clonic epilepsy (affliction of the bilateral temporal poles and thalamus) 38 and benign epilepsy with centrotemporal spikes 39 . These structures have been implicated using other imaging approaches as well 40,41 . That the severity of network disturbances scales with variables such as seizure duration and number of recurrences only affirms the significance of these findings. Since the network alterations described herein are in children with "recent-onset" CFS, not confounded by other nuances of epilepsy (viz., chronic drug intake), the association is likely "causative" and not an epiphenomenon or after-effect. Findings from another study in children with drug-naive, recent-onset generalised epilepsy also supported thalamic atrophy as a cause, and not the effect, of epilepsy 42 . In addition, the nodal topology identified was a simplified network pattern, with increased segregation and decreased integration, similar to the pattern observed in an immature brain 43 . The connectivity evidence from this study, the first of its kind, makes a compelling case for reviewing the existing management pattern of patients with FS and for advancing a body of research towards a well-differentiated treatment and follow-up algorithm. This study is not without its limitations.
It needs to be noted that imaging was performed in natural sleep because of the ethical aspects of sedating children undergoing imaging for febrile seizures. Under these circumstances it was difficult to monitor the stages of sleep, and hence the confounding effects of the various sleep stages on the results cannot be eliminated. It is to be noted that the rs-fMRI pattern in sleeping infants closely resembles the adult sleep state rather than adult wakefulness, and the few existing studies of network topology in sleep also reveal findings similar to the observations in the current study 44 . However, since both groups of children were sleeping, and since the obtained results are in tune with the existing literature on other generalised motor epilepsies, it is more likely that the results are a genuine reflection of epilepsy than of natural sleep. This study was not armed with a control group. For reasons similar to those cited above, the lack of a control group might also seem to be a limitation, but there is evidence for the advantages of disease-matched controls in evaluating heterogeneous diseases like epilepsy 18 .

Figure 3. Glass brain view of the group difference between CFS and SFS for regional network segregation and integration: (a) brain regions showing higher segregation/local connectivity (i.e., nodal clustering coefficient) in CFS compared to SFS and (b) brain regions showing decreased integration (i.e., nodal participation coefficient) in CFS compared to SFS, with multiple-comparisons correction of FDR < 0.05 for the number of ROIs (N = 90). Red to yellow indicates increased network segregation and blue to green indicates decreased network integration in CFS in comparison with SFS. The colour bar indicates the t-values of the statistical difference between CFS and SFS. The glass brain view was constructed using BrainNet Viewer.
The results of our study are based on a group-level analysis between the two groups and might not be relevant to an individual patient. We lost significant data from nine children owing to uncorrectable head motion due to snoring; hence the study sample size is small and generalisability to a larger population becomes difficult. The sampling time of 7 min for the rs-fMRI data is less than ideal for achieving stable RSFC estimates. Larger samples with longitudinal observations and clinical outcomes would add further evidence to the above observations.

Conclusion
Children with recently diagnosed complex febrile seizures reveal altered connectivity with an immature, simplified pattern in several regions, including the temporal lobes and thalami, proportional to the frequency and duration of the seizures. This evidence is in tune with experimental evidence in febrile seizures and with the network topology in other generalised motor epilepsies, and hence more likely represents the cause of the seizures. Regardless of the causal/consequential nature, such observations on altered connectivity demonstrate the imprint of these disease-defining variables of febrile seizures on the developing brain.

Materials and methods
The prospective study was conducted at a tertiary care referral centre for neurologic disorders, in children with recent-onset febrile seizures. Written informed consent was obtained from the caregiver of each participant, and the study was approved by the NIMHANS Human Ethics Committee (Basic and Neurosciences Division) to be performed in children without using sedation. The study was performed in accordance with the relevant guidelines and regulations. Hence all children underwent imaging while naturally sleeping inside the MRI gantry. This was associated with increased scan time, owing to multiple pauses and restarts, and a resultant loss of data in nine subjects. Table 3. Study population.

Image acquisition.
MRI was performed on a 3 Tesla scanner (SKYRA, Siemens, Erlangen, Germany). The child was allowed to fall asleep in the gantry room in the parent's arms, after ensuring that both were metal-free and with the room lights dimmed. Once the child was asleep in the MR environment, a trained technologist transferred the child to the table and positioned him or her, wrapped in a blanket, with minimum disturbance to the sleeping child. If at any point the child woke up, the entire cycle was repeated. The head was well supported with soft pads, and a 32-channel head coil was used. The resting-state fMRI (rs-fMRI) acquisition parameters were as follows: 200 volumes; repetition time 2030 ms; 40 slices; 3 mm slice thickness; FOV 195 × 195 mm; matrix 64 × 64; refocusing pulse 90°; voxel size 3 × 3 × 3 mm. The total acquisition time for rs-fMRI was 6 min 52 s. Anatomic images were acquired using a 3D T1-weighted MPRAGE sequence in 192 sagittal sections with a TR of 1900 ms, TE of 2.5 ms, TI of 900 ms, FOV of 256 × 256 and a section thickness of 1 mm. An oblique coronal T2 fast spin echo (FSE) sequence planned perpendicular to the hippocampus was also acquired to rule out other structural abnormalities. Structural imaging revealed that one patient with complex febrile seizures had a bulky, T2-hyperintense right hippocampus; the rest of the imaging studies did not reveal any abnormality.

Image analysis. Pre-processing. The MRI data were pre-processed using MELODIC (Multivariate Exploratory Linear Optimized Decomposition into Independent Components) version 3.14, which is part of FMRIB's Software Library (FSL, http://fsl.fmrib.ox.ac.uk/fsl).
The pre-processing steps included: discarding the first five functional images, removal of non-brain tissue, motion correction using MCFLIRT, intensity normalisation, temporal band-pass filtering, spatial smoothing using a 5 mm FWHM Gaussian kernel, and rigid-body registration. The functional and structural data were co-registered to the 2-year paediatric and MNI template space (developed at the University of North Carolina [UNC]) using FLIRT (12 DOF) 45 . Finally, single-session ICA with automatic dimensionality estimation was performed to identify noise components for each subject. Each ICA component was evaluated based on its spatial map, time series, and temporal power spectrum 46 . Once the noisy ICA components were marked by manual labelling, FIX was applied with default parameters to remove them and obtain clean functional data. The head movement parameters (three translational and three rotational) were regressed out and a 0.01-0.09 Hz band-pass filter was applied before post-processing.

Anatomic parcellation. The fMRI data were segmented into 90 anatomic ROIs based on the University of North Carolina [UNC] paediatric (two-year) atlas for whole-brain regions, using the anatomically labelled template reported by Shi et al. 47 .

Functional connectivity analysis. A seed-to-voxel-based functional connectivity analysis was performed by computing the temporal correlation between the blood-oxygen-level-dependent (BOLD) signals to create a correlation matrix showing connectivity from a seed region to all other voxels in the brain, using the functional connectivity toolbox (CONN, version 17) implemented in SPM8 (http://www.nitrc.org/projects/conn) 48 . Reduction of WM and CSF-related physiologic noise was carried out before connectivity estimation using the CompCor algorithm 49 . Bivariate correlations were analysed to reflect connections between the seed region and the rest of the brain voxels.
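The nuisance-regression and band-pass steps above can be illustrated with a short Python sketch (a minimal NumPy/SciPy illustration under stated assumptions, not the MELODIC/FIX pipeline itself; the TR is taken from the acquisition protocol):

```python
import numpy as np
from scipy.signal import butter, filtfilt

TR = 2.03  # repetition time in seconds, per the acquisition protocol

def regress_out(ts, confounds):
    """Remove confound time courses (e.g., the six motion parameters)
    from a time series by ordinary least squares; returns the residuals."""
    X = np.column_stack([np.ones(len(ts)), confounds])
    beta, *_ = np.linalg.lstsq(X, ts, rcond=None)
    return ts - X @ beta

def bandpass(ts, low=0.01, high=0.09, tr=TR, order=2):
    """Butterworth band-pass (0.01-0.09 Hz), applied forward and backward
    (filtfilt) so that the filtering introduces no phase distortion."""
    nyq = 0.5 / tr                      # Nyquist frequency for this TR
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, ts, axis=0)
```

Because the residuals of an ordinary-least-squares fit are orthogonal to the regressors, the motion time courses are fully removed from the cleaned series before filtering.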
Fisher's r-to-z transformation was then applied to the connectivity matrix. This was followed by a general linear model designed to determine statistically significant BOLD signal correlations between the mean time series from each seed ROI and that of every other brain voxel at the individual-subject level (first-level analysis) 50,51 . Second-level random-effects analysis was used to create within-group statistical parameter maps for each network and to examine connectivity differences between groups. The group mean effects were estimated for both groups, and seed-to-target connectivity was calculated using second-level covariate analysis in CONN 48 . Pearson linear correlation was performed between the clinical variables (number of recurrent febrile seizures, longest duration of a febrile seizure, duration of disease, age of onset, and time interval between the last seizure and MRI) and the effect sizes of the statistically significant seed-to-target connectivity. The correlation coefficient r (rho) and statistical significance (p values) were calculated for each of these connectivities using MATLAB. Positive correlations were designated with a plus ("+") sign and negative correlations with a minus ("−") sign.

Graph theory analysis. The graph-theoretical metrics were computed from the connectivity matrix. To this end, we extracted the BOLD time series for the 90 parcellated brain regions of interest (ROIs). We followed this with an ROI-to-ROI rs-fMRI time-series correlation (Pearson correlation coefficients, per subject), from which a 90 × 90 (n ROIs = 90) connectivity matrix was constructed. The functional connectivity brain networks were defined as 90 × 90 weighted undirected networks specified by G(N, E), where G is a nonzero subset with nodes (N = ROIs) and edges (E = inter-nodal correlation coefficients, Fisher's z values) serving as the measure of functional connectivity between these nodes.
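The ROI-to-ROI correlation and the Fisher r-to-z step can be sketched in a few lines of NumPy (an illustration of the transform itself, not the CONN implementation):

```python
import numpy as np

def fisher_z(r, eps=1e-7):
    """Fisher r-to-z transformation, z = arctanh(r). Values are clipped
    just inside (-1, 1) so perfect correlations do not map to infinity."""
    r = np.clip(np.asarray(r, dtype=float), -1.0 + eps, 1.0 - eps)
    return np.arctanh(r)

def connectivity_matrix(ts):
    """ROI x ROI functional connectivity from a (time x ROI) BOLD array:
    Pearson correlations, then Fisher z, with self-connections zeroed."""
    z = fisher_z(np.corrcoef(ts, rowvar=False))
    np.fill_diagonal(z, 0.0)  # self-connections are not network edges
    return z
```

The z-transform makes the correlation values approximately normally distributed, which is what justifies the subsequent general linear model and t-tests on the edges.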
We then computed the following graph theory measures: brain network segregation, small-worldness, network integration, and efficiency using the Brain Connectivity Toolbox (http://www.brain-connectivity-toolbox.net) 52,53 . Sparsity-based thresholding was employed to achieve a fixed, desired connection density to enable inter-group comparison.

Brain network segregation. Network segregation was estimated from the clustering coefficient, which measures the strength of localised interconnectivity of a network. For a typical node i, the absolute clustering coefficient C_i is the ratio of the number of its existing connections to all feasible connections in graph G, and C is the average over all nodes:

C_i = 2E_i / (K_i(K_i − 1)),  C = (1/N) Σ_i C_i,

where N is the total number of network nodes, E_i is the count of existing connections among the neighbours of node i, and K_i is the node's degree. The normalised whole-brain clustering coefficient (γ) was calculated as the ratio of the absolute clustering coefficient of the network to that of a random network (C_Rand).

Brain network integration. The participation coefficient measures the breadth or diversity of between-module connections of the individual nodes. Generally, nodes with a higher participation coefficient have stronger connections to multiple modules. The participation coefficient PC_i of a node i relates the number of edges from node i to nodes in a module s to the degree of node i. The absolute participation coefficient (PC_abs) may then be defined as:

PC_i = 1 − Σ_{m∈M} (Q_i(m) / K_i)²,

wherein m is a module among a set of modules M, and Q_i(m) denotes the number of edges between node i and all nodes in module m.

Small-worldness. This is computed from the ratio of network segregation (γ) to the normalised characteristic path length (λ). A network is described as "small-world" if (i) γ > 1 and (ii) λ ≈ 1. Summarising into a single measurement, σ = γ/λ > 1 for networks with a "small-world" organisation 54 .
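For a binary, undirected network these two nodal measures follow directly from the formulas above; a NumPy sketch (an illustration, not the Brain Connectivity Toolbox code):

```python
import numpy as np

def clustering_coefficient(A):
    """Nodal clustering coefficient C_i = 2*E_i / (K_i*(K_i - 1)) for a
    binary undirected adjacency matrix A with no self-loops. E_i is the
    number of edges among the neighbours of node i (triangles through i)."""
    A = np.asarray(A, dtype=float)
    K = A.sum(axis=1)                     # node degrees
    E = np.diagonal(A @ A @ A) / 2.0      # triangles through each node
    C = np.zeros_like(K)
    ok = K > 1                            # undefined for degree < 2
    C[ok] = 2.0 * E[ok] / (K[ok] * (K[ok] - 1.0))
    return C

def participation_coefficient(A, modules):
    """Nodal participation coefficient PC_i = 1 - sum_m (Q_i(m)/K_i)^2,
    where Q_i(m) counts the edges from node i into module m."""
    A = np.asarray(A, dtype=float)
    modules = np.asarray(modules)
    K = A.sum(axis=1)
    pc = np.ones(len(K))
    for m in np.unique(modules):
        Qm = A[:, modules == m].sum(axis=1)
        pc -= (Qm / np.where(K > 0, K, 1.0)) ** 2
    pc[K == 0] = 0.0                      # isolated nodes participate in nothing
    return pc
```

On a fully connected triangle every node has C_i = 1; a node whose edges split evenly across two modules has PC_i = 0.5, while a node whose edges all stay in one module has PC_i = 0.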
Whole-brain efficiency. Network integration was also measured using global efficiency, a measure of the efficacy of information exchange over the network 55 . It may be defined as the average of the inverse shortest path lengths characterising the network 56 . The higher the global efficiency of a network, the greater its topological integration. Finally, the brain regions that showed significant differences on the graph-theoretical measures of segregation and integration were subsequently correlated with the clinical measures. The brain regions that showed significant differences were rendered in a glass brain view using BrainNet Viewer (http://www.nitrc.org/projects/bnv/) 57 .

Statistical analysis. Between-group (CFS vs. SFS) differences for the seed-to-voxel-based functional connectivity and seed-to-target connectivity were considered statistically significant with false discovery rate (FDR) correction (p < 0.05) at the cluster level for brain regions (N = 90) using CONN 48 . Following this, we used a cluster-size threshold of 20 voxels (i.e., connectivity clusters of fewer than 20 voxels were excluded). Between-group (CFS vs. SFS) differences in graph measures were assessed using a two-tailed two-sample t-test. The whole-brain graph-measure statistical analysis was done at each sparsity threshold with FDR correction for the number of sparsity thresholds (N = 17). Regional brain segregation and integration differences were assessed for 90 brain regions using FDR correction with p < 0.05 for the number of brain regions (N = 90). For correlations of clinical measures with seed-to-voxel-based functional connectivity differences, we used FDR correction with p < 0.05 for the total number of significant connectivity clusters across all target ROIs (N = 11). In the case of graph theory, we used FDR correction with p < 0.05 for the total number of brain regions that showed differences in regional integration or segregation (N = 10).
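The FDR correction used throughout is the Benjamini–Hochberg procedure, which can be sketched in a few lines of Python (a generic implementation of the procedure, not the CONN/toolbox code):

```python
def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg FDR control: sort the m p-values, find the
    largest rank k with p_(k) <= k*q/m, and declare the k smallest
    p-values significant. Returns a boolean mask in the original order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k_max = rank          # largest rank passing its threshold
    sig = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            sig[i] = True
    return sig
```

With N = 90 regional tests, for instance, a raw p-value only survives if it clears its rank-scaled threshold k × 0.05 / 90, which is what "FDR correction for the number of brain regions" amounts to.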
Sex and Genotype Modulate the Dendritic Effects of Developmental Exposure to a Human-Relevant Polychlorinated Biphenyls Mixture in the Juvenile Mouse

While many neurodevelopmental disorders (NDDs) are thought to result from interactions between environmental and genetic risk factors, the identification of specific gene-environment interactions that influence NDD risk remains a critical data gap. We tested the hypothesis that polychlorinated biphenyls (PCBs) interact with human mutations that alter the fidelity of neuronal Ca2+ signaling to confer NDD risk. To test this, we used three transgenic mouse lines that expressed human mutations known to alter Ca2+ signals in neurons: (1) gain-of-function mutation in ryanodine receptor-1 (T4826I-RYR1); (2) CGG-repeat expansion in the 5′ non-coding portion of the fragile X mental retardation gene 1 (FMR1); and (3) a double mutant (DM) that expressed both mutations. Transgenic and wildtype (WT) mice were exposed throughout gestation and lactation to the MARBLES PCB mix at 0.1, 1, or 6 mg/kg in the maternal diet. The MARBLES mix simulates the relative proportions of the twelve most abundant PCB congeners found in serum from pregnant women at increased risk for having a child with an NDD. Using Golgi staining, the effect of developmental PCB exposure on dendritic arborization of pyramidal neurons in the CA1 hippocampus and somatosensory cortex of male and female WT mice was compared to pyramidal neurons from transgenic mice. A multilevel linear mixed-effects model identified a main effect of dose driven by increased dendritic arborization of cortical neurons in the 1 mg/kg PCB dose group. Subsequent analyses with genotypes indicated that the MARBLES PCB mixture had no effect on the dendritic arborization of hippocampal neurons in WT mice of either sex, but significantly increased dendritic arborization of cortical neurons of WT males in the 6 mg/kg PCB dose group.
Transgene expression increased sensitivity to the impact of developmental PCB exposure on dendritic arborization in a sex- and brain region-dependent manner. In conclusion, developmental exposure to PCBs present in the gestational environment of at-risk humans interfered with normal dendritic morphogenesis in the developing mouse brain in a sex-, genotype- and brain region-dependent manner. Overall, these observations provide proof-of-principle evidence that PCBs interact with heritable mutations to modulate a neurodevelopmental outcome of relevance to NDDs.

INTRODUCTION

Despite a worldwide ban on the production of polychlorinated biphenyls (PCBs) since the early 2000s, PCBs remain a significant risk to the developing human brain. Pregnant women and children continue to be exposed not only to legacy PCBs released from hazardous waste sites and from PCB-containing equipment and materials manufactured prior to the PCB production ban, but also to contemporary PCBs produced as inadvertent byproducts of pigment and dye production or via environmental degradation of legacy PCBs (Koh et al., 2015; Granillo et al., 2019). Human (Schantz et al., 2003; Berghuis et al., 2015; Pessah et al., 2019) and animal (Sable and Schantz, 2006; Klocke and Lein, 2020) studies provide compelling evidence of PCB developmental neurotoxicity, while recent epidemiologic studies suggest that developmental PCB exposures confer risk for NDDs, including autism spectrum disorder (ASD) and attention-deficit/hyperactivity disorder (ADHD) (Lyall et al., 2017; Pessah et al., 2019; Xi and Wu, 2021).
The size and shape of the neuronal dendritic arbor is a key structural determinant of neuronal connectivity, and changes in dendritic morphology (increased or decreased dendrite number, branching and/or spine density) contribute to the altered patterns of neuronal connectivity observed in many NDDs (Coskun et al., 2013; Keown et al., 2013; Khan et al., 2015; Alaerts et al., 2016; Cooper et al., 2017). The dynamic structural remodeling of dendrites and synapses that occurs during development is driven in large part by Ca2+-dependent signaling that mediates the influence of neural activity and other environmental factors on dendritic morphogenesis and plasticity (Cline, 2001; Konur and Ghosh, 2005; Chen and Nedivi, 2010). Many NDD risk genes encode proteins that regulate intracellular Ca2+ signals, are regulated by local fluctuations in Ca2+ concentrations and/or are involved in regulating dendritic growth and synaptogenesis (Krey and Dolmetsch, 2007; Pessah et al., 2010; Grove et al., 2019). Developmental exposure to Aroclor 1254, a commercial mixture of legacy PCBs, or to PCB 95 has been demonstrated to increase dendritic arborization in the hippocampus, cortex and cerebellum of experimental animal models (Roegge et al., 2006; Lein et al., 2007; Yang et al., 2009; Wayman et al., 2012b). In vitro studies have shown that the ryanodine receptor (RyR)-active PCB congeners PCB 95 and PCB 136 (Wayman et al., 2012b; Yang et al., 2014), and the lower-chlorinated congener PCB 11, promote dendritic growth in primary hippocampal and cortical neurons via activation of Ca2+-dependent signaling pathways (Wayman et al., 2012a; Sethi et al., 2018) that map onto Ca2+-dependent signaling pathways implicated in the etiology of NDDs (Panesar et al., 2020).
These observations suggest the possibility that PCBs amplify the risk and/or severity of NDDs by converging on signaling pathways altered by heritable defects in Ca2+-dependent signaling pathways that regulate dendritic arborization and/or plasticity. To test this hypothesis, we compared the effect of developmental exposure to a human-relevant PCB mixture on the dendritic morphology of pyramidal neurons in the hippocampus and somatosensory cortex of wildtype (WT) vs. transgenic mice that expressed heritable human mutations that modulate the fidelity of neuronal Ca2+ signaling. Specifically, we examined three transgenic lines: (1) mice that carried a human RYR1 gain-of-function mutation (T4826I-RYR1) (Barrientos et al., 2012; Yuen et al., 2012); (2) mice that expressed a CGG repeat expansion in the 5′ non-coding region of the fragile X mental retardation gene 1 (FMR1) in the premutation range (55-200 repeats) (Willemsen et al., 2003); and (3) mice that expressed both mutations (double mutant; DM) (Keil et al., 2019b). RyR Ca2+ ion channels regulate intracellular Ca2+ stores (Pessah et al., 2010), and their activation is required for activity-dependent dendritic growth and synaptogenesis (Wayman et al., 2012b; Lesiak et al., 2014). A genome-wide association study identified RYR1 and RYR2 as ASD candidate genes by using sex as an additional risk factor (Lu and Cantor, 2012). The FMR1 premutation is causally linked to fragile X-associated tremor/ataxia syndrome (FXTAS) and is the most prevalent monogenic NDD risk factor (Krueger and Bear, 2011; Chonchaiya et al., 2012; Leehey and Hagerman, 2012). Unlike FMR1 knockout models, these mice exhibit reduced FMR1 protein (FMRP) expression and elevated Fmr1 mRNA (Berman et al., 2012; Robin et al., 2017). In a study examining GWAS and genetic databases, approximately 10% of FMRP targets in the brain overlapped with ASD candidate genes, many of which regulate neuronal connectivity (Fernandez et al., 2013).
Studies of primary neurons derived from FMR1 premutation knockin mice (referred to hereafter as CGG mice) demonstrate resting intracellular Ca2+ concentrations threefold higher than in neurons derived from WT mice (Robin et al., 2017), and abnormal patterns of intracellular Ca2+ oscillations, including an increased number of spontaneous Ca2+ bursts. iPSC-derived neurons from an FMR1 premutation carrier also exhibited enhanced Ca2+ transients (Liu et al., 2012). Altered dendritic arborization and spine density are linked with these changes in Ca2+ dynamics both in primary neurons from FMR1 premutation mice and in iPSC-derived neurons from humans with the FMR1 premutation (Liu et al., 2012). In addition to the two transgenic lines expressing either the RYR1 gain-of-function mutation or the FMR1 premutation, we examined a transgenic line (DM) that expressed both mutations (Keil et al., 2019b). Expressed variants in RyR1 and FMR1 expansion repeats in the premutation range are relatively common mutations in the human population. Approximately 15% of the human population is estimated to carry one or more RYR1 genetic variants (Kim et al., 2013), whereas the estimated prevalence of the FMR1 premutation in the human population is 1:209 in females and 1:430 in males. Both mutations are phenotypically silent until triggered by halogenated anesthetics (RYR1 gain-of-function) or advancing age (FMR1 premutation). Thus, while we are not aware of any clinical reports of human patients expressing mutations at both loci, there is a reasonable likelihood that there are individuals who carry both mutations. Regardless, these DM mice were not created to mimic a human disease, but rather as an experimental model to investigate whether gene dosage influences the effects of developmental PCB exposures.
In other words, is the phenotypic outcome amplified when two mutations that converge on calcium signaling and the regulation of dendritic growth are expressed, relative to expression of either mutation alone? The RYR1 mutation was chosen as a direct target of PCBs (Ta and Pessah, 2007), whereas the FMR1 premutation was chosen because of its demonstrated role in the translational control of calcium-regulating proteins (Robin et al., 2017). Our earlier characterization of dendritic arborization in juvenile male and female mice from these three transgenic lines revealed significantly increased dendritic arborization of pyramidal neurons in the CA1 hippocampus of male T4826I-RYR1 and, to a lesser extent, male CGG mice relative to male congenic WT mice. Dendritic arborization of pyramidal neurons in the somatosensory cortex was significantly enhanced in male and female CGG and DM mice compared to WT mice, with the most pronounced differences seen in DM females (Keil et al., 2019b). In this study, we exposed WT, T4826I, CGG and DM mice to vehicle or the MARBLES PCB mixture in the maternal diet throughout gestation and lactation. The MARBLES PCB mixture proportionally mimics the top twelve PCB congeners detected in the serum of pregnant women enrolled in the MARBLES cohort (Granillo et al., 2019; Sethi et al., 2019), who are at increased risk of having a child with an NDD (Hertz-Picciotto et al., 2018). We previously demonstrated that the MARBLES PCB mix has RyR activity in vitro at low micromolar concentrations, reflecting the small percentage of PCB congeners with potent RyR activity. This is consistent with epidemiological evidence that RyR-active PCBs are associated with increased risk of ASD (Granillo et al., 2019). Our findings indicate that expression of heritable mutations that alter the fidelity of neuronal Ca2+ signals modulated the impact of PCB exposure on several parameters of dendritic arborization in a sex- and brain region-dependent manner.
Materials
Organic unsalted peanut butter (Trader Joe's, Monrovia, CA, United States) and organic peanut oil (Spectrum Organic Products, LLC, Melville, NY, United States) were purchased from Trader Joe's (Davis, CA, United States). The individual PCB congeners (PCB 11, 28, 52, 84, 95, 101, 118, 135, 138, 149, 153, and 180) used to make the MARBLES PCB mix were synthesized and authenticated as previously described (Li et al., 2018; Sethi et al., 2019). All PCB congeners were >99% pure.

Animals
All procedures involving animals were conducted in accordance with the NIH Guide for the Care and Use of Laboratory Animals, conformed to the ARRIVE guidelines (Kilkenny et al., 2010), and were approved by the University of California, Davis Institutional Animal Care and Use Committee. Male and female mice were derived from transgenic mouse colonies maintained at UC Davis (Keil et al., 2019b), which included the following transgenic strains: (1) mice homozygous for the human gain-of-function mutation in RYR1 (T4826I-RYR1), referred to as T4826I mice; (2) mice homozygous (female) or hemizygous (male) for the X-linked CGG repeat expansion in FMR1 in the premutation range (170-200 repeats), referred to as CGG mice; and (3) DM mice that expressed both mutations (Keil et al., 2019b). C57Bl/6J and SVJ129 WT mice were purchased from Jackson Labs (Sacramento, CA, United States) and crossed to generate a 75% C57Bl/6J / 25% SVJ129 congenic WT line that matched the genetic background of the T4826I, CGG and DM animals, as determined by single-nucleotide polymorphism (SNP) analysis (Keil et al., 2019b). Homo/hemizygous matings were used to generate the juvenile mice used for Golgi analyses, and all animals used in this study were genotyped as previously described (Keil et al., 2019b). All animals were housed in clear plastic shoebox cages containing corn cob bedding and maintained on a 12 h light/dark cycle at 22 ± 2 °C with 40-50% humidity.
Feed (Diet 5058, LabDiet, Saint Louis, MO, United States) and water were available ad libitum. Two weeks prior to mating, nulliparous and previously unmated dams (>6 weeks of age) were singly housed and PCB dosing was initiated. Dams were placed with a genotype-matched male overnight for mating. Males and females were separated the next day and females were checked for the presence of a copulatory plug, which was considered gestational day 0. After mating, dams were housed singly prior to parturition and with their pups after parturition. At postnatal day 2 (P2), pups were culled or cross-fostered within genotype- and dose-matched litters to ensure that all litters consisted of 4-8 pups. After weaning at P21, pups were group-housed with same-sex littermates. Mice underwent self-grooming and social approach behavioral testing as part of a larger study, and were then euthanized on P27-31 to collect brains for Golgi analyses. This study is part of an overall study designed to assess the effects of developmental exposure to the MARBLES PCB mixture on multiple developmental outcomes, including NDD-relevant behavioral phenotypes (data under review), the gut microbiome and intestinal physiology (Rude et al., 2019), and cytokine levels in the serum and hippocampus (Matelski et al., 2020). The data described in this study were collected from animals used for behavioral studies prior to being euthanized to harvest brains for morphometric analyses of dendritic arborization. We previously reported that developmental exposure to the MARBLES PCB mixture had no effect on the length of time from mating to parturition, and pregnancy rates across groups averaged 88% (Matelski et al., 2020). While dam weight at weaning was not altered by PCB exposure, there was a significant main effect of genotype, with DM dams weighing significantly more than WT dams, T4826I dams weighing significantly more than CGG dams, and CGG dams weighing significantly less than DM dams (Matelski et al., 2020).
We also found that there were no effects of developmental PCB exposure or genotype on litter size or sex ratio within the litter (data under review).

Developmental Polychlorinated Biphenyl Exposures

The MARBLES PCB mixture was prepared to proportionally mimic the serum PCB congener profile of the twelve most prevalent PCB congeners detected in serum of pregnant women enrolled in the MARBLES human epidemiological cohort (Granillo et al., 2019; Sethi et al., 2019). These women are at increased risk for having a child with an NDD (Hertz-Picciotto et al., 2018). The PCB congeners included in the MARBLES PCB mixture and their final percentages in the mixture were as follows: PCB 28 (48.2%), PCB 11 (24.3%), PCB 118 (4.9%), PCB 101 (4.5%), PCB 52 (4.5%), PCB 153 (3.1%), PCB 180 (2.8%), PCB 149 (2.1%), PCB 138 (1.7%), PCB 84 (1.5%), PCB 135 (1.3%), and PCB 95 (1.2%). The MARBLES PCB mix was solubilized in peanut oil and homogeneously mixed into peanut butter to achieve concentrations of 0.025, 0.25, and 1.5 mg PCB/g peanut butter. A vehicle control (0 mg/g) was similarly prepared by mixing the equivalent amount of peanut oil needed to solubilize the highest concentration of MARBLES mix into peanut butter. Two weeks prior to mating, nulliparous dams (>6 weeks of age) were randomized to dose groups and PCB exposures were initiated. Dams were fed the MARBLES PCB mix in peanut butter at doses of 0, 0.1, 1, or 6 mg/kg BW/day daily until pups were weaned at P21. Similar doses of Aroclor 1254 were previously shown to result in PCB body burdens comparable to those observed in human tissues (Yang et al., 2009). At each daily dosing, dams were monitored to ensure complete ingestion of each dose of peanut butter.

Golgi Staining

Golgi staining, image acquisition, and analysis were performed as described previously (Keil et al., 2017, 2019b; Wilson et al., 2017).
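The dosing arithmetic described above (a target dose in mg/kg BW/day delivered via peanut butter of fixed PCB concentration) can be sketched as follows. This is an illustrative sketch only; the 25 g dam body weight in the example is a hypothetical value, not a measurement from this study.

```python
# Sketch of the dosing arithmetic; the 25 g body weight is a hypothetical
# example value, not a measurement from this study.

# MARBLES mix congener proportions (%) as listed in the text
MIX_PERCENT = {
    "PCB 28": 48.2, "PCB 11": 24.3, "PCB 118": 4.9, "PCB 101": 4.5,
    "PCB 52": 4.5, "PCB 153": 3.1, "PCB 180": 2.8, "PCB 149": 2.1,
    "PCB 138": 1.7, "PCB 84": 1.5, "PCB 135": 1.3, "PCB 95": 1.2,
}

def peanut_butter_grams(dose_mg_per_kg: float, bw_kg: float,
                        conc_mg_per_g: float) -> float:
    """Grams of dosed peanut butter delivering the target daily dose (mg/kg BW/day)."""
    return dose_mg_per_kg * bw_kg / conc_mg_per_g

# Example: a hypothetical 25 g dam in the 6 mg/kg/day group fed the
# 1.5 mg PCB/g peanut butter preparation
grams = peanut_butter_grams(6.0, 0.025, 1.5)          # 0.1 g peanut butter/day
total_pcb_mg = grams * 1.5                            # 0.15 mg total PCB/day
per_congener_mg = {c: total_pcb_mg * p / 100 for c, p in MIX_PERCENT.items()}
print(f"{grams:.2f} g/day; PCB 28 = {per_congener_mg['PCB 28']:.4f} mg/day")
```

Note that the listed percentages sum to roughly 100% (100.1% due to rounding), so per-congener masses computed this way are approximate.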
Parameters used to assess Golgi staining and criteria for selecting Golgi-stained neurons to trace were described previously (Lein et al., 2007; Keil et al., 2017). Briefly, P27-31 pups were euthanized with CO2. Brains were carefully and quickly extracted from the skulls and processed for Golgi staining using the FD Rapid GolgiStain kit (FD NeuroTechnologies Inc., Columbia, MD, United States) according to the manufacturer's instructions. Brightfield image stacks of pyramidal neurons in the CA1 of the hippocampus and layers IV/V of the somatosensory cortex were captured using an Olympus IX-81 inverted confocal microscope (Olympus, Shinjuku, Japan) at 20X magnification using MetaMorph Advanced image analysis software (version 7.1, Molecular Devices, Sunnyvale, CA, United States). These brain regions were chosen because they contain easily identifiable pyramidal neurons and are implicated in the pathogenesis of neurodevelopmental disorders (Coskun et al., 2013; Khan et al., 2015; Cooper et al., 2017). Neuronal basilar dendritic arbors (N = 39-49 hippocampal neurons per group and N = 44-48 cortical neurons per group, derived from six mice per sex, genotype, and exposure group) were hand-traced by a single individual blinded to experimental group using Neurolucida (version 11, MBF Bioscience, Williston, VT, United States). Basilar dendritic arbors in these regions were chosen for analysis because previous studies of PCB effects on dendritic arborization demonstrated that developmental exposure to Aroclor 1254 or PCB 95 altered basilar dendrites (Lein et al., 2007; Yang et al., 2009; Wayman et al., 2012b). Dendritic arbor complexity was quantified using automated Sholl analysis (Neurolucida Explorer, version 11, MBF Bioscience) with 10-µm Sholl rings centered on the neuronal soma. Neuron tracings are publicly available in the NeuroMorpho.Org database.
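Sholl analysis of the kind described above (counting dendritic intersections with concentric 10-µm rings centered on the soma) can be approximated with a short routine. This is a simplified sketch under stated assumptions (each dendrite is a chain of straight segments with the soma at the origin), not the Neurolucida Explorer implementation:

```python
# Minimal Sholl-analysis sketch: count crossings of traced dendritic segments
# with concentric rings centered on the soma. Assumes straight-line segments
# and a soma at the origin; not the commercial implementation used in the study.
import math

def sholl_counts(segments, ring_step=10.0, max_radius=200.0):
    """Count dendritic intersections with concentric rings.

    segments: list of ((x1, y1), (x2, y2)) endpoints in µm, soma at the origin.
    Returns {radius: intersection_count} for rings at ring_step increments.
    """
    radii = [ring_step * i for i in range(1, int(max_radius / ring_step) + 1)]
    counts = {r: 0 for r in radii}
    for (p1, p2) in segments:
        d1 = math.hypot(*p1)  # radial distance of each endpoint from the soma
        d2 = math.hypot(*p2)
        lo, hi = sorted((d1, d2))
        for r in radii:
            if lo < r <= hi:  # the segment passes through ring r exactly once
                counts[r] += 1
    return counts

# A toy two-branch arbor: one branch reaching 25 µm, one reaching 15 µm
segs = [((0, 0), (12, 0)), ((12, 0), (25, 0)), ((0, 0), (0, 15))]
print(sholl_counts(segs, ring_step=10.0, max_radius=30.0))  # {10.0: 2, 20.0: 1, 30.0: 0}
```

The resulting intersections-vs-radius profile is the Sholl curve summarized per neuron in the statistical analyses below.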
Statistical Analyses

Sholl curves for each neuron were assessed using a multilevel linear mixed-effects model to determine effects of genotype, sex, dose, or interactions on dendritic arborization; these analyses were conducted using SAS software (version 9.4, SAS Institute Inc., Cary, NC, United States) as described previously (Keil et al., 2017, 2019b; Wilson et al., 2017). In the multilevel linear mixed-effects modeling, genotype, sex, and dose were treated as fixed effects. A random intercept was included in the model to control for clustering of observations within a neuron and of neurons within animals. Log transformation was applied when necessary (as indicated in Tables 1, 2). Tables 1, 2 report tests for fixed effects and differences of least squares means for any fixed effects with p ≤ 0.05, as well as for any fixed effects that were approaching significance (p < 0.1) and also had significant effects as identified in the differences of least squares means. Supplementary Data Files report the SAS output, including the solution for fixed effects, fixed effects, least squares means, and differences of least squares means. Area under the curve (AUC), distance from the soma of the peak dendritic intersections (Peak X), and maximum number of dendritic intersections (Peak Y) values were calculated for Sholl profiles using AUC analysis in GraphPad Prism software (versions 6 and 7, San Diego, CA, United States) for each neuron. To allow for comparisons to earlier studies that did not use mixed-effects models (Roegge et al., 2006; Lein et al., 2007; Yang et al., 2009; Wayman et al., 2012b), PCB-induced differences between neurons within sex and genotype were independently examined using GraphPad Prism software. These data were first assessed for normality using the Shapiro-Wilk, Kolmogorov-Smirnov (KS), and D'Agostino-Pearson omnibus normality tests, and for homogeneity of variance using Bartlett's test.
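The per-neuron Sholl summary metrics named above (Peak X, Peak Y, and AUC) can be sketched for a single profile as follows. This illustrates the quantities only, not the GraphPad Prism implementation; the example profile values are hypothetical:

```python
# Sketch of the Sholl-profile summary metrics: Peak X (radius of maximum
# intersections), Peak Y (the maximum count), and trapezoidal AUC.
# The example radii/counts below are hypothetical illustration values.

def sholl_summary(radii, intersections):
    """Return (peak_x, peak_y, auc) for one neuron's Sholl profile."""
    peak_y = max(intersections)                     # maximum intersection count
    peak_x = radii[intersections.index(peak_y)]     # radius where it occurs
    # Trapezoidal area under the intersections-vs-radius curve
    auc = sum((intersections[i] + intersections[i + 1]) / 2 * (radii[i + 1] - radii[i])
              for i in range(len(radii) - 1))
    return peak_x, peak_y, auc

radii = [10, 20, 30, 40, 50]   # µm from the soma (10-µm rings)
counts = [4, 9, 7, 3, 1]       # intersections at each ring
peak_x, peak_y, auc = sholl_summary(radii, counts)
print(peak_x, peak_y, auc)  # 20 9 215.0
```

Proximal and distal AUC, as reported in the Results, follow by applying the same trapezoidal sum to the inner and outer portions of the radius range.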
Within each sex and genotype, significant differences between PCB dose groups were determined using one-way ANOVA followed by Dunnett's or Tukey's multiple comparison test for approximately normal data. If data were normal but had unequal variance, group differences were determined using a one-way ANOVA with Welch's correction followed by Dunnett's T3 multiple comparisons test. For non-normal data, differences were determined using a Kruskal-Wallis test followed by Dunn's multiple comparison test. We first focused on differences from vehicle control; if there were no differences from vehicle control, then differences between PCB groups were examined. P-values ≤ 0.05 were considered statistically significant. In two instances, the p-values of the Kruskal-Wallis tests were 0.0591 and 0.0565, but Dunn's post hoc analysis revealed significant differences (p = 0.04 and p = 0.03), so these are reported in Figures 2E, 3A, respectively.

RESULTS

Pyramidal CA1 hippocampal neurons and layer IV/V pyramidal somatosensory cortical neurons were examined in this study because altered patterns of connectivity and dendritic morphology have been reported in these brain regions in individuals with ASD compared to neurotypical controls (Coskun et al., 2013; Keown et al., 2013; Khan et al., 2015; Cooper et al., 2017). Results from the multilevel, mixed-effects statistical model, which includes interactions and allows for the analysis of the Sholl plot as a whole, are summarized in Tables 1, 2, with biologically relevant comparisons highlighted in bold. Within each subsection of the Results below, these results are discussed first.
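The per-comparison test-selection scheme laid out in the Statistical Analyses above can be sketched as a small decision tree. This captures the selection logic only; the actual tests were run in GraphPad Prism:

```python
# Decision tree mirroring the test-selection scheme described in the
# Statistical Analyses (logic only; the tests themselves were run in Prism).

def choose_omnibus_test(data_normal: bool, variances_equal: bool) -> str:
    """Return the omnibus test + post hoc pairing prescribed for a dose comparison."""
    if not data_normal:
        # Non-normal data: rank-based omnibus test, Dunn's post hoc
        return "Kruskal-Wallis + Dunn's"
    if variances_equal:
        # Approximately normal, homoscedastic: classical one-way ANOVA
        return "one-way ANOVA + Dunnett's/Tukey's"
    # Normal but heteroscedastic: Welch's correction, Dunnett's T3 post hoc
    return "Welch's ANOVA + Dunnett's T3"

print(choose_omnibus_test(True, False))  # Welch's ANOVA + Dunnett's T3
```

In practice, the normality and equal-variance flags would come from the Shapiro-Wilk/KS/D'Agostino-Pearson and Bartlett tests described above.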
Subsequently, we describe PCB effects that are significantly different from vehicle control within each sex and genotype independently to allow for interpretation of PCB effects alone and to allow for comparisons to published studies that did not use mixed-effects models (Roegge et al., 2006; Lein et al., 2007; Yang et al., 2009; Wayman et al., 2012b).

Morphometric Effects of Polychlorinated Biphenyls and Genotype on Pyramidal CA1 Hippocampal Neurons

Sholl plots and representative images of basilar dendritic arbors of Golgi-stained pyramidal CA1 hippocampal neurons from male and female WT, T4826I, CGG, and DM mice at P27-P31 are shown in Figure 1 (see also Table 1). Overall, these results suggest pyramidal CA1 hippocampal neurons from CGG mice are more complex than those of WT mice, while pyramidal CA1 hippocampal neurons from T4826I and DM mice are less complex than their WT counterparts, and hippocampal neurons of DM males are more complex than those of T4826I males. We next asked whether developmental PCB exposure alters dendritic arborization by focusing on PCB dose-response relationships within each sex and genotype independently (Figure 2). We focused on differences from vehicle control; if there were no differences from vehicle control, then differences between PCB groups were analyzed. There were no effects of PCB exposure on the distance from the soma of the maximum number of dendritic intersections (Peak X) in male or female hippocampal neurons of any genotype (Figures 2A,B). The maximum number of dendritic intersections (Peak Y) was increased in the 1 mg/kg PCB group vs. the 0.1 mg/kg PCB group in male CGG hippocampal neurons (Figures 2C,D). Total area under the Sholl curve was increased in the 6 mg/kg PCB dose group vs. vehicle control in male DM hippocampal neurons (Figures 2E,F).
Differences in the proximal AUC of the Sholl plot were limited to female hippocampal neurons, with a significant dose-dependent increase in DM female neurons in all PCB dose groups compared to vehicle controls (Figure 2H). There were no PCB effects on distal area under the Sholl curve for either sex in any genotype (Figures 2I,J). These results suggest that, compared to vehicle control, PCBs increase dendritic complexity in DM mice only, an effect that is sex- and dose-dependent. We also analyzed more detailed measures of dendritic arborization that have previously been shown to be sensitive to PCBs (Lein et al., 2007; Yang et al., 2009; Wayman et al., 2012b). Based on the mixed-effects model analysis, effects were limited to genotype for the total number of basilar dendrites, with pyramidal CA1 hippocampal neurons of all transgenic mice having fewer dendrites compared to WT (Table 1). Examining each sex and genotype independently, there were no effects of developmental PCB exposure on the number of primary dendrites, dendritic tips, or the number of dendritic tips per primary dendrite (Supplementary Figures 3A-F). The sum length of all dendrites was increased in 6 mg/kg male DM neurons vs. sex- and genotype-matched vehicle controls (Figures 3A,B). Mean dendritic length was unchanged (Figures 3C,D). These results suggest that genotype alone decreases dendrite number and that PCBs only increase sum dendritic length in DM males exposed to the highest PCB dose. Soma area was the morphometric parameter most affected by both genotype and developmental PCB exposure in pyramidal CA1 hippocampal neurons. Table 1 illustrates the statistically significant effect of genotype and significant genotype by dose interactions, with DM and T4826I mice displaying a smaller soma size than WT and CGG mice. Genotype by dose interactions were seen in WT neurons, with soma size significantly decreased in the 6 mg/kg group vs. WT vehicle control and other PCB dose groups (Table 1).
T4826I and DM mice had smaller soma size than WT controls regardless of exposure (Table 1). Additionally, in DM mice, soma size was significantly decreased in the 1 mg/kg dose group compared to the 6 mg/kg group (Table 1). Soma size of CGG vehicle controls did not differ from WT vehicle controls, but hippocampal neurons of CGG mice at all PCB concentrations had smaller soma size than WT vehicle controls (Table 1). Overall, developmental PCB exposure or the T4826I and DM genotypes were associated with decreased hippocampal soma area. Examining PCB effects in each sex and genotype independently, PCB exposure altered soma area in WT, CGG, and DM males, as well as in WT and DM females. In WT males, soma area was significantly reduced in the 6 mg/kg PCB dose group compared to WT male vehicle controls. In CGG males, soma area was significantly reduced in the 1 and 6 mg/kg dose groups relative to CGG vehicle controls (Figure 3E). Soma area was also significantly reduced in 1 mg/kg DM males in contrast to DM vehicle control males (Figure 3E). While PCBs generally decreased soma area in males across genotypes, this effect was genotype-dependent in females. Like males, soma area of hippocampal neurons in WT females was significantly reduced in the 6 mg/kg dose group compared to WT female vehicle controls (Figure 3F). However, in DM females, soma area was significantly increased in the 6 mg/kg dose group vs. DM female vehicle controls (Figure 3F).

Morphometric Effects of Polychlorinated Biphenyls and Genotypes on Layer IV/V Pyramidal Somatosensory Cortical Neurons

Polychlorinated biphenyl dose effects were more pronounced in cortical neurons compared to hippocampal neurons. Figure 4 illustrates Sholl plots and representative images of Golgi-stained pyramidal neurons in layer IV/V of the somatosensory cortex from male and female WT, T4826I, CGG, and DM mice.
There was a significant effect of dose on cortical Sholl profiles, with the 1 mg/kg PCB group exhibiting greater dendritic complexity than vehicle controls or the 6 mg/kg dose group ( Table 2). There was also a significant genotype by dose interaction observed in the Sholl profiles (fully summarized in Table 2), identified as: (1) CGG vehicle control neurons were more complex than T4826I, DM, and WT vehicle control neurons; (2) 6 mg/kg CGG neurons were less complex than CGG vehicle control neurons; (3) 1 mg/kg T4826I neurons had greater complexity compared to T4826I and WT vehicle control neurons; and (4) 1 mg/kg DM neurons showed much greater complexity than DM vehicle controls, 0.1, or 6 mg/kg DM neurons, as well as increased complexity relative to WT vehicle controls, WT 1 mg/kg, or T4826I vehicle control groups. Distance from the soma of maximum dendritic intersections (Peak X) did not differ between dose groups, but there was a significant effect of dose on the maximum number of dendritic intersections (Peak Y), with the 1 mg/kg PCB dose group having increased intersections relative to all other dose groups ( Table 2). There was a significant effect of dose on the total area under the Sholl curve, with 1 mg/kg PCB dose groups having increased area vs. vehicle controls or the 6 mg/kg PCB dose group as well as a significant genotype by dose effect driven by the same differences stated above for the Sholl profile analysis model ( Table 2). For proximal area under the Sholl curve, there was a significant effect of dose with 1 mg/kg dose groups having greater area than all other dose groups, as well as a significant genotype by dose interaction driven by all differences listed above for the Sholl profile analysis with the addition of the DM 6 mg/kg PCB dose group having decreased proximal area compared to the WT 6 mg/kg PCB dose group ( Table 2). In contrast, distal AUC showed fewer differences. 
CGG vehicle controls had greater distal AUC than WT, T4826I, or DM vehicle control neurons, and exposure to 6 mg/kg PCB decreased distal AUC in CGG neurons relative to CGG vehicle controls. The DM 1 mg/kg group showed greater distal AUC than DM vehicle controls, all other DM PCB dose groups, 1 mg/kg PCB WT mice, and vehicle controls from T4826I and WT mice ( Table 2). Together, these results indicate a non-monotonic dose response, with exposure to the MARBLES PCB mix at 1 mg/kg promoting dendritic arborization, especially in T4826I and DM neurons. Vehicle-treated CGG animals have the greatest dendritic complexity compared to the other genotypes, and the highest PCB dose (6 mg/kg) decreased dendritic complexity within CGG neurons. We next examined the effects of PCBs on dendritic growth when independently analyzed within sex and genotype (Figure 5). In WT mice, the distance from the soma of the maximum number of intersections (Peak X) was increased in the 6 mg/kg PCB dose group relative to vehicle controls in males, but not females (Figures 5A,B). PCB effects on the maximum number of dendritic intersections (Peak Y) were limited to female animals, where they were decreased in 6 mg/kg CGG neurons vs. CGG vehicle controls and in 6 mg/kg DM females relative to 1 mg/kg DM females ( Figure 5D). PCB effects on the area under the Sholl curve were genotype-and sex-dependent. In WT animals, the total AUC was increased in the male 6 mg/kg dose group vs. WT male vehicle controls ( Figure 5E). In T4826I animals, the total area under the Sholl curve was increased in the male 1 mg/kg dose group vs. T4826I male vehicle control ( Figure 5E). Total area under the Sholl curve was also increased in DM males in the 1 mg/kg dose group vs. the 0.1 or 6 mg/kg dose group (Figure 5E). In contrast to males, total area under the Sholl curve was unchanged in WT female mice and was decreased in CGG 6 mg/kg females compared to CGG vehicle control females (Figure 5F). 
Like males, total area under the Sholl curve was greater in T4826I 1 mg/kg females compared to T4826I vehicle control females (Figure 5F), and total area under the Sholl curve was greater in DM 1 mg/kg females vs. DM 6 mg/kg females (Figure 5F). For both males and females, most of the differences in AUC occurred in the proximal portion. Similar to total area under the Sholl curve, proximal area under the Sholl curve was increased in the 6 mg/kg dose group vs. vehicle control in WT males, and increased in the 1 mg/kg PCB dose group vs. vehicle control in T4826I and DM males (Figure 5G). In CGG males and females, proximal area under the Sholl curve was decreased in the 6 mg/kg PCB dose group vs. vehicle controls, whereas in DM females it was increased in the 1 mg/kg dose group compared to the 0.1 or 6 mg/kg dose groups (Figure 5H). The only significant difference in distal area under the Sholl curve was found in DM males, with a significant increase in the 1 mg/kg PCB dose group relative to the 0.1 mg/kg group (Figures 5I,J). In summary, these results indicate that developmental exposure to the MARBLES PCB mixture increased dendritic complexity in WT male cortical neurons at 6 mg/kg, an effect that was influenced by genotype, as T4826I and DM male cortical neurons had indices of increased dendritic complexity at the 1 mg/kg dose. In contrast, PCB exposure only affected female cortical neurons from the T4826I and CGG genotypes when compared to vehicle controls, with increased complexity in T4826I cortical neurons at the 1 mg/kg dose but decreased complexity in CGG cortical neurons at the 6 mg/kg dose. In other measures of dendritic arborization, effects were driven by PCB dose for the total number of basilar dendrites, terminal dendritic tips, sum dendritic length, and the number of nodes, with the 1 mg/kg PCB dose groups having greater complexity than all other dose groups (Table 2).
In addition, there was a significant genotype by dose interaction for total dendritic length, which was largely driven by the differences highlighted above for the cortical Sholl profile analysis (Table 2). There was an increase in the number of dendritic tips per dendrite in the 1 mg/kg PCB dose group vs. the 0.1 mg/kg or 6 mg/kg PCB dose groups; there were no effects of PCB exposure on mean dendritic length (Table 2). Together, these results indicate a non-monotonic dose response, with the 1 mg/kg PCB dose group having the greatest response and an overall tendency toward increased dendritic complexity, especially in T4826I and DM neurons. In contrast, vehicle-treated CGG neurons were more complex than those of the other genotypes, with the exception of the 6 mg/kg dose group, which exhibited decreased CGG neuron complexity. Examining PCB dose effects in each sex and genotype independently, differences in the number of primary dendrites were limited to CGG females, with the number of primary dendrites decreased in the 6 mg/kg dose group compared to the 1 mg/kg dose group (Supplementary Figures 4A,B). Effects of developmental PCB exposure on the number of dendritic tips were seen in offspring of both sexes, but to a greater extent in females. More specifically, the number of dendritic tips was increased in DM 1 mg/kg males vs. DM vehicle control males (Supplementary Figure 4C). However, in female neurons, the number of dendritic tips was increased in the T4826I 1 mg/kg dose group vs. vehicle control, decreased in the CGG 6 mg/kg dose group vs. vehicle control, and decreased in the DM 6 mg/kg dose group vs. the 1 mg/kg dose group (Supplementary Figure 4D). Dendritic tips in WT female neurons had a Kruskal-Wallis p-value of 0.05; however, post hoc analysis revealed no significant differences compared to vehicle control (Supplementary Figure 4D).
There were no differences in the number of dendritic tips normalized to primary dendrite number (Supplementary Figures 4E,F). Total dendritic length per neuron was increased in the 1 mg/kg dose group vs. vehicle control in T4826I and DM males, and in the 1 mg/kg and 0.1 mg/kg dose groups in T4826I females (Figures 6A,B). Additionally, total dendritic length was decreased in the female CGG 6 mg/kg dose group vs. vehicle control, and in the female DM 6 mg/kg dose group vs. the 1 mg/kg dose group (Figure 6B). Mean dendritic length per neuron was increased in DM males in the 1 mg/kg PCB dose group vs. the 0.1 mg/kg dose group (Figure 6C), but was decreased in CGG females in the 0.1 mg/kg and 6 mg/kg dose groups vs. vehicle control (Figure 6D). In summary, PCBs at 1 mg/kg tended to increase complexity in DM and T4826I males and T4826I females vs. vehicle control, and PCBs at 6 mg/kg decreased dendritic complexity in CGG females. Similar to hippocampal neurons, soma area of cortical neurons was significantly impacted by genotype and PCB exposure. CGG neurons had greater soma area than T4826I and DM neurons, and T4826I neurons had reduced soma area relative to WT neurons (Table 2). There was also a dose effect for soma area, with the 0.1 mg/kg and 1 mg/kg PCB dose groups having greater area than vehicle control neurons (Table 2). Examining each sex and genotype independently, soma area was increased in the 0.1 mg/kg PCB dose group vs. vehicle control in WT male neurons (Figure 6E). There was also a decrease in soma area in the 6 mg/kg PCB dose group vs. the 0.1 mg/kg PCB dose group in male CGG neurons (Figure 6E). A greater number of PCB effects were observed in female neurons. Unlike WT hippocampal neurons, there was a significant increase in soma area in the 0.1, 1, and 6 mg/kg PCB dose groups vs. vehicle control in WT female neurons (Figure 6F).
In CGG female neurons, there was a significant decrease in soma area in the 1 mg/kg and 6 mg/kg dose groups compared to vehicle control. In contrast, in DM female neurons, there was a significant increase in soma area in the 6 mg/kg PCB dose group vs. vehicle control (Figure 6F). In summary, PCBs increased soma area in WT cortical neurons in a sex- and dose-dependent manner, and this effect was modified by genotype, since CGG female neurons had decreased soma area while DM female neurons had increased soma area in the 6 mg/kg PCB dose group relative to vehicle controls. Table 3 summarizes the data presented in Figures 2, 3, 5, 6 and Supplementary Figures 3, 4, namely the PCB dose responses (indicated by arrows) within each sex and genotype relative to the vehicle control for each parameter of dendritic arborization measured in this study.

DISCUSSION

We describe novel data demonstrating that developmental exposure to a human-relevant PCB mixture alters dendritic arborization in the juvenile mouse brain; however, the dendritic outcome and dose-response relationship varied depending on sex, genotype, and brain region. These findings support the hypothesis that PCBs interact with heritable human mutations that alter the fidelity of neuronal Ca2+ signaling to confer NDD risk. This conclusion is based on two lines of evidence. First, dendritic arborization was significantly increased in cortical neurons of WT males in the 6 mg/kg PCB dose group. By comparison, the dendritic complexity of cortical neurons was significantly increased in T4826I and DM males in the 1 mg/kg PCB dose group, suggesting that expression of the T4826I-RYR1 mutation, either alone or in combination with the CGG mutation, increased the sensitivity of male cortical neurons to the dendrite-promoting effects of the MARBLES PCB mixture, evident as a leftward shift of the dose-response relationship.
This is consistent with previous reports that RYR1 gain-of-function mutations confer heightened sensitivity to RyR-active PCBs in vitro (Ta and Pessah, 2007). Second, while developmental exposure of WT mice to the MARBLES PCB mixture had no significant effect on the dendritic morphology of male or female hippocampal neurons or female cortical neurons, it significantly altered the dendritic arborization of these neuronal cell types in mice that expressed one or more transgenes. Specifically, dendritic arborization of hippocampal neurons was significantly increased in DM males in the 6 mg/kg dose group and DM females in the 0.1, 1, and 6 mg/kg dose groups. The dendritic arbors of cortical neurons were more complex in T4826I females in the 1 mg/kg dose group, while dendritic arborization was decreased in CGG females in the 6 mg/kg dose group. Overall, these results add to a growing body of literature indicating that the genetic substrate can modulate the response to neurotoxic environmental chemicals. Several interesting observations emerged from this study, including: (1) cortical neurons were more sensitive than hippocampal neurons to the dendritic effects of the MARBLES PCB mix; and (2) sex strongly influenced dendritic responses to PCB exposure. The observation of the differential sensitivity of cortical and hippocampal neurons is consistent with our earlier studies of dendritic arborization in the hippocampus and cortex of juvenile rats developmentally exposed to the commercial PCB mixture Aroclor 1254 (Lein et al., 2007; Yang et al., 2009). The observation regarding the influence of sex is also consistent with previous studies in which we demonstrated sex-dependent effects of PCB 95 and PCB 11 on the dendritic arborization of primary hippocampal and cortical neurons in vitro (Keil et al., 2019a). The in vivo sex differences we observed in this study varied between genotypes.
Specifically, PCB effects on dendritic arborization of cortical neurons were male-specific in WT and DM mice, but female-specific in CGG mice. Moreover, the direction of the dendritic response of cortical neurons to PCBs varied depending on sex, with male WT and DM cortical neurons exhibiting increased dendritic arborization and female CGG cortical neurons exhibiting decreased dendritic arborization. More subtle sex differences were observed in DM hippocampal neurons and T4826I cortical neurons: (1) female and male DM hippocampal neurons responded similarly to PCBs with increased dendritic arborization, but female neurons were more sensitive, responding to the MARBLES PCB mixture at 0.1, 1, and 6 mg/kg/d while male neurons were affected only by the 6 mg/kg/d dose; and (2) while the direction of the dendritic response in DM hippocampal neurons and T4826I cortical neurons was similar between sexes, the specific parameters of dendritic arborization that were altered by PCBs differed. While it is widely posited that sex differences in dendritic arborization and neuronal connectivity contribute to the sex bias in the prevalence of a number of NDDs (Alaerts et al., 2016; McCarthy, 2016), our findings support an emerging literature suggesting that sex differences in the response to environmental neurotoxicant exposure may contribute to NDD sex bias. The biological basis for the differential susceptibility of females vs. males and hippocampal vs. cortical neurons is not known. One possibility is sex and regional differences in PCB toxicokinetics. PCBs tend to be lipophilic and thus would be predicted to be uniformly distributed throughout the brain in both sexes; however, this has yet to be demonstrated.
Moreover, it is now appreciated that hydroxylated metabolites of PCBs can have neurotoxic properties that differ from those of the parent congener (Klocke and Lein, 2020), and expression of the cytochrome P450 enzymes that metabolize PCBs differs by sex and brain region. Another, non-mutually exclusive possibility is that PCB toxicodynamics vary according to sex and/or brain region. While addressing this possibility will require identification of the mechanism(s) that mediate the effects of the MARBLES PCB mixture on dendritic arborization, if RyR activity is involved, there is significant evidence in the literature that expression of RyRs and the accessory proteins that regulate their gating properties is developmentally regulated and varies across brain regions (Pessah et al., 2010). A novel observation of this study was the effect of the MARBLES PCB mixture on soma size, with PCB effects on this morphometric parameter observed in all but the T4826I genotype. Generally, developmental PCB exposure decreased hippocampal soma size but increased cortical soma size. The two exceptions to this generalization were increased soma size of female DM hippocampal neurons in the 6 mg/kg dose group and decreased soma size of female CGG cortical neurons in the 1 mg/kg and 6 mg/kg dose groups. Interestingly, PCB effects on soma size in hippocampal neurons were phenocopied in the T4826I and DM genotypes, which had hippocampal neurons with smaller soma sizes relative to WT controls. PCB effects on soma size did not necessarily correlate with PCB effects on dendritic complexity. For example, while developmental exposure to the MARBLES PCB mix significantly decreased the soma size of hippocampal neurons in male and female WT mice, male CGG mice, and male DM mice, dendritic arborization in these neuronal cell types was either unaffected (male and female WT mice and male CGG mice) or increased (male DM mice) relative to sex- and genotype-matched controls.
Moreover, PCBs significantly increased dendritic arborization of male and female T4826I cortical neurons, but had no significant effect on soma size in these neurons. These observations suggest that different mechanisms mediate the morphometric effects of PCBs on the soma vs. the dendrites, and that PCB effects on either morphometric parameter do not simply reflect general cellular hypertrophy. Other environmental exposures have been reported to alter soma size. For example, developmental exposure to morphine was found to decrease or increase the soma size of ventral tegmental area dopaminergic neurons depending on the brain region to which the neurons projected (Simmons et al., 2019). Soma size has been linked to cognitive ability, with increased hippocampal soma size in birds hypothesized to enhance spatial memory and survival in changing climate conditions (Freas et al., 2013). In a rat model of autism-like behavior, soma size of hippocampal CA1 pyramidal neurons was reduced in offspring developmentally exposed to valproic acid (Hajisoltani et al., 2019). The effects of reduced soma size on cognitive behavior may extend to humans, as hippocampal soma size is reduced in individuals with schizophrenia (Benes et al., 1991). Human iPSC-derived cells with a knockdown of SHANK3, an autism-related gene, and neurons derived from iPSCs from patients with Rett syndrome also exhibit reduced soma size (Marchetto et al., 2010; Huang et al., 2019). Conversely, there is evidence that increased soma size is associated with altered cognitive ability: mice lacking the FMR protein had increased neuronal somata (Selby et al., 2007). These observations suggest that either abnormally enlarged or reduced neuronal soma size may be detrimental to cognitive function, identifying another NDD-relevant outcome influenced by interactions between PCBs and human mutations associated with altered Ca2+-dependent signaling and/or neuronal connectivity.
A question raised by this study is whether gene dosage affected dendritic arborization in the absence or presence of developmental PCB exposure. Gene dosage seemed to influence dendritic outcome independent of developmental PCB exposure, as evidenced by the observation that male DM hippocampal neurons had significantly more complex dendritic arbors than male T4826I hippocampal neurons (assessed as distal area under the Sholl curve, Table 1). Gene dosage also seemed to influence the sensitivity of hippocampal neurons to the dendritic effects of the MARBLES PCB mixture, since PCB effects on this neuronal cell type were only observed in male and female DM mice. Assessing the influence of gene dosage on the response of cortical neurons to PCBs is more difficult because PCB effects on cortical neurons were more complex. Nonetheless, male DM cortical neurons were more sensitive to the dendrite-promoting activity of PCBs than male WT and CGG neurons. Conversely, the dendritic arborization of female DM cortical neurons was not altered by developmental PCB exposure compared to vehicle control, while female T4826I cortical neurons responded to PCBs with more complex dendritic arbors and female CGG cortical neurons responded with less complex dendritic arbors. Based on these observations, it is difficult to determine whether the T4826I and CGG genotypes contributed equally to the DM phenotype. In male cortical neurons, developmental PCB exposure increased the proximal area under the Sholl curve in both T4826I and DM mice in the 1 mg/kg dose group, but had no effect on or reduced this parameter in male CGG cortical neurons, suggesting that this phenotype in DM males was driven largely by the T4826I-RYR1 mutation. However, in male hippocampal neurons, developmental PCB exposure decreased soma area in CGG and DM neurons of mice in the 1 mg/kg dose group but not in T4826I neurons, suggesting that this PCB response is largely influenced by the CGG mutation.
Yet in other cases, the DM response to PCBs was not phenocopied by either the T4826I or CGG genotype. For example, in hippocampal neurons, PCB responses were only seen in DM mice and not in mice of the other genotypes. Additionally, in female cortical neurons, developmental PCB exposure increased dendritic complexity in T4826I mice in the 1 mg/kg dose group, decreased dendritic arborization in CGG mice in the 6 mg/kg dose group, and had no effect on dendritic arborization in DM mice compared to vehicle controls. This latter scenario may reflect an additive effect of both genotypes. Collectively, these observations suggest that while the T4826I-RYR1 and CGG mutations both alter the fidelity of Ca 2+ signaling in neurons (Barrientos et al., 2012; Cao et al., 2012; Robin et al., 2017), the interactions between these mutations in the DM mice are complex, potentially reflecting mechanism(s) independent of Ca 2+ signaling. Potential mechanisms by which MARBLES PCBs interact with the T4826I-RYR1 and FMR1 CGG repeat expansion mutations to modulate dendritic arborization include (1) PCB-induced changes in the expression of RYR1 and FMR1/FMRP and/or (2) convergence on the same signaling systems dysregulated by these genetic factors at critical times during development. With respect to the former, we have previously demonstrated that gestational and lactational exposure to Aroclor 1254 in the maternal diet at 1 or 6 mg/kg/d dose-dependently increased RyR expression in the cerebellum of weanling pups (Yang et al., 2009). Whether the MARBLES mix similarly increases RyR expression and whether any PCB(s) increase expression of FMRP is not known, but should be the focus of future investigations. Several lines of evidence support a model in which PCBs and genetic factors converge on Ca 2+ -dependent signaling pathways. First, the MARBLES PCB mixture has RyR activity as determined by equilibrium binding of [3H]ryanodine to RyR1-enriched microsomes.
Moreover, two of the MARBLES PCB congeners, PCB 95 and PCB 11, promote dendritic growth in primary hippocampal and cortical neurons via activation of Ca 2+ -dependent signaling pathways involving CREB, Wnt, miR132, and/or mTOR (Yang et al., 2009; Wayman et al., 2012a,b; Lesiak et al., 2014; Keil et al., 2018; Sethi et al., 2018). The signaling pathways activated by PCBs to increase dendritic arborization map onto Ca 2+ -dependent signaling pathways altered in NDDs (Stamou et al., 2013; Panesar et al., 2020). Second, both the T4826I-RYR1 gain-of-function mutation (Barrientos et al., 2012) and the FMR1 CGG repeat expansion mutation (Robin et al., 2017) have been shown to increase resting intracellular Ca 2+ concentrations and spontaneous Ca 2+ oscillations in neuronal cells. Increased intracellular Ca 2+ promotes dendritic growth via a CaMK-CREB-Wnt signaling pathway (Wayman et al., 2006) and dendritic spine formation via a CREB-miR132 pathway (Impey et al., 2010). Intracellular Ca 2+ also regulates mTOR-dependent translational control of dendritic growth (Kumar et al., 2005; Urbanska et al., 2012). The FMR1 CGG repeat expansion mutation results in decreased expression of the translational repressor FMRP (Hagerman and Hagerman, 2013), which effectively alters mTOR signaling (Wang et al., 2012). FMRP also functions as a chaperone for miR132, but the effects of decreased FMRP on miR132 signaling are not known. Expression of either the RYR1 gain-of-function mutation (Pessah, personal communication) or the FMR1 CGG repeat expansion has been shown to alter dendritic growth in primary neurons. We propose that at least a subset of PCB congeners in the MARBLES PCB mixture converges on these signaling pathways to amplify the effects of these gene mutations on dendritic arborization.
While further studies are required to confirm this model, it does provide a potential explanation for the observation that, in contrast to previous studies of rats exposed developmentally to PCB 95 (Wayman et al., 2012b) or Aroclor 1254 (Lein et al., 2007; Yang et al., 2009), the MARBLES mix did not promote dendritic arborization in the CA1 pyramidal neurons in the hippocampus of WT mice. PCB 95 is among the most potent congeners with respect to RyR sensitization (Pessah et al., 2010), and Aroclor 1254 contains a significant percentage of RyR-active PCB congeners, including PCB 95 (Howard et al., 2003). In contrast, PCB 95 comprised only 1.2% of the total mass in the MARBLES mixture. Moreover, a comparative analysis of the in vitro RyR potency of PCB 95 vs. the MARBLES mix showed that the MARBLES mix activates the RyR at micromolar concentrations with a maximal activation of 4-fold, while PCB 95 activated the RyR at nanomolar concentrations with a maximal activation of 12-fold. This earlier in vitro study compared the RyR potency of each of the individual PCB congeners in the MARBLES mix, and the results indicate that the most potent RyR-sensitizing congeners comprise ∼9% of the MARBLES mix. Therefore, if RyR sensitization is the predominant mechanism driving PCB-induced dendritic arborization, it is perhaps not surprising that the MARBLES mix did not promote dendritic arborization in hippocampal neurons of WT animals. However, we did observe increased dendritic arborization of cortical neurons in WT animals, suggesting that the dose-dependency of PCB-induced dendritic growth varies between brain regions. A model in which PCBs and genetic factors interact via convergence on Ca 2+ -dependent signaling pathways also provides a potential explanation for the non-monotonic dose-related effect of the MARBLES PCB mix on dendritic arborization.
Multilevel linear mixed-effects modeling identified a main effect of dose on the dendritic complexity of cortical neurons, with the 1 mg/kg PCB dose group exhibiting significantly increased dendritic arborization compared to vehicle controls or the 0.1 and 6 mg/kg dose groups. A similar non-monotonic dose-response relationship has been reported in previous in vivo and in vitro studies of Aroclor 1254, PCB 95, and PCB 136 (Yang et al., 2009, 2014; Wayman et al., 2012b). The mechanism underlying this dose-response relationship is not known, but a possibility is suggested by in vitro studies demonstrating that moderate increases in Ca 2+ promote dendritic growth whereas large increases cause dendritic retraction (Segal et al., 2000; Lohmann and Wong, 2005). Thus, if PCBs and the T4826I-RYR1 and FMR1 CGG repeat expansion mutations are modulating dendritic growth via increased levels of intracellular Ca 2+ , then higher PCB doses, increased gene dosage, or the combination of PCBs and gene mutations may increase intracellular Ca 2+ above the concentrations that promote dendritic growth to levels that trigger dendritic retraction, perhaps via activation of calpain (Baudry et al., 2013) or preferential activation of CaMKIV (Redmond et al., 2002). This mechanism may also explain the observation that MARBLES PCBs decreased the dendritic complexity of female CGG cortical neurons. Testing this hypothesis is an important area of future study, findings from which will expand our understanding of how environmental and genetic risk factors interact and potentially provide algorithms for predicting specific gene-environment interactions likely to increase the risk of adverse neurodevelopmental outcomes. An outstanding question is whether the effects of the MARBLES PCB mixture on dendritic arborization are linked to changes in behavior. The animals used in this study were assessed in tasks that measured social communication, repetitive behavior, and sociability (data under review).
While aberrant behavior was observed in PCB-exposed animals, there was not a one-to-one correlation between PCB effects on dendritic growth and behavior in terms of dose-response relationships or genotype effects. However, this does not negate the relevance of the dendritic findings, since we may not have captured the neuroanatomic circuits that mediate the behaviors that were assessed. We believe the dendritic findings are relevant to human NDDs for several reasons. First, animals were exposed to a human-relevant PCB mixture that reflected the PCB congener profile in the gestational environment of at-risk individuals, and the PCB concentrations measured in brain tissue of exposed pups were within the range of PCB levels measured in human brain tissue (Sethi et al., under review). Second, both increased and decreased dendritic arborization are thought to contribute to the clinical phenotypes associated with many NDDs (Coskun et al., 2013; Keown et al., 2013; Khan et al., 2015; Alaerts et al., 2016; Cooper et al., 2017). In summary, these studies add to the growing body of literature implicating PCBs as NDD risk factors, and identify genetic mutations that may amplify the effects of neurotoxic PCBs on the developing brain.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The animal study was reviewed and approved by the University of California, Davis Institutional Animal Care and Use Committee.

AUTHOR CONTRIBUTIONS

IP and PL conceptualized the project and obtained funding to support the work. PL supervised all aspects of this study. KK, SS, and PL designed the experiments. KK and SS maintained the mouse colony, dosed the animals, collected tissues for PCB quantitation, and conducted the statistical analysis of the independent PCB dose effects. KK, SS, TR, and CK conducted the Golgi analysis. MW conducted the mixed-effects modeling of the morphometric data.
KK, SS, and CK composed the figures. KK drafted the initial manuscript. CK and PL made significant edits to the early versions of the manuscript. All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

FUNDING

This study was supported by the National Institute of Environmental Health Sciences (grant numbers R01 ES014901 to PL and IP, T32 ES007059 to SS, R00 ES029537 to KK, and P30 ES023513) and by the Eunice Kennedy Shriver National Institute of Child Health and Human Development (grant number F32 HD088016 to KK). This project used core facilities supported by the MIND Institute Intellectual and Developmental Disabilities Research Center (grant number P50 HD103526), by the National Center for Advancing Translational Sciences, National Institutes of Health (grant number UL1 TR001860), and by the UC Davis Environmental Health Sciences Center (grant number P30 ES023513). Synthesis of PCB congeners was supported by the Superfund Research Center at The University of Iowa (grant number P42 ES013661). The contents of this study do not necessarily represent the official views of the NIEHS or NICHD. The NIEHS and NICHD do not endorse the purchase of any commercial products or services mentioned in the publication.
A label-free impedance assay in endothelial cells differentiates the activation and desensitization properties of clinical S1P1 agonists

S1P1 activation maintains endothelial barrier integrity, whereas its desensitization induces lymphopenia. SAR247799, a new G protein-biased S1P1 agonist, was compared to clinical-stage S1P1-desensitizing compounds using a label-free impedance assay assessing endothelial barrier integrity. SAR247799 had the highest activation-to-desensitization ratio (114), compared to ponesimod (7.66), ozanimod (6.35), and siponimod (0.170), and thus demonstrated the best ability to cause sustained S1P1 activation.

Sphingosine-1 phosphate receptor-1 (S1P 1 ) activation maintains endothelial barrier integrity, whereas S1P 1 desensitization induces peripheral blood lymphopenia. The latter is exploited in the approval and/or late-stage development of receptor-desensitizing agents targeting the S1P 1 receptor in multiple sclerosis, such as siponimod, ozanimod, and ponesimod. SAR247799 is a recently described G protein-biased S1P 1 agonist that activates S1P 1 without desensitization and thus has endothelial-protective properties in patients without reducing lymphocytes. As SAR247799 demonstrated endothelial-protective effects at sub-lymphocyte-reducing doses, the possibility exists that other S1P 1 modulators could also exhibit endothelial-protective properties at lower doses. To explore this possibility, we sought to quantitatively compare the biased properties of SAR247799 with the most advanced clinical molecules targeting S1P 1 . In this study, we define the b-arrestin pathway component of the impedance profile following S1P 1 activation in human umbilical vein endothelial cells (HUVECs) and report quantitative indices of the S1P 1 activation-to-desensitization ratio of various clinical molecules.
In a label-free impedance assay assessing endothelial barrier integrity and disruption, the mean estimates (95% confidence interval) of the activation-to-desensitization ratios of SAR247799, ponesimod, ozanimod, and siponimod were 114 (91.1-143), 7.66 (3.41-17.2), 6.35 (3.21-12.5), and 0.170 (0.0523-0.555), respectively. Thus, we show that SAR247799 is the most G protein-biased S1P 1 agonist currently characterized. This rank order of bias among the most clinically advanced S1P 1 modulators provides a new perspective on the relative potential of these clinical molecules for improving endothelial function in patients in relation to their lymphocyte-reducing (desensitization) properties. Sphingosine-1 phosphate receptor-1 (S1P 1 ) is a G protein-coupled receptor of the sphingolipid family [1]. S1P 1 activation causes GTP/GDP exchange in a Gαi-dependent manner, resulting in the inhibition of cyclic adenosine monophosphate (cAMP) generation [2]. S1P 1 can also signal through recruitment of b-arrestin, causing receptor internalization and subsequent desensitization of G protein-mediated responses. Compounds targeting this receptor have been primarily developed as receptor-desensitizing agents with associated peripheral blood lymphopenia, and this has been exploited in the approval of three drugs for multiple sclerosis: fingolimod (a nonselective S1P 1/3/4/5 agonist) and, more recently, siponimod and ozanimod (S1P 1/5 agonists) [3,4,5]. S1P 1 -desensitizing molecules in clinical development include ponesimod (in phase 3 trials for MS) as well as molecules that have shown efficacy in other autoimmune diseases, including inflammatory bowel disease, lupus, and psoriasis [6,7,8]. S1P 1 activation has endothelial barrier-stabilizing effects through the formation of adherens and tight junctions [9,10]. Recently, we reported the discovery of SAR247799, a G protein-biased S1P 1 -selective agonist capable of S1P 1 activation while limiting receptor desensitization [11].
The biased properties of SAR247799 were associated, in rat and pig models of ischemia/reperfusion injury, with endothelial-protective properties at doses that did not show lymphocyte reduction, and lymphopenia was only evident at supratherapeutic doses [11]. Similarly, 5-week sustained activation of S1P 1 in diabetic rats showed improvements in renal function and endothelial function without causing receptor desensitization [12]. Furthermore, these preclinical findings showed translation to human studies, where SAR247799 showed improvement in endothelial function in type-2 diabetes patients, again at sub-lymphocyte-reducing doses [12]. SAR247799 displayed an attractive safety and tolerability profile in humans, and supratherapeutic doses were characterized by dose-dependent lymphocyte reduction, a biphasic effect consistent with that observed in preclinical studies [11,12,13]. As SAR247799 demonstrated endothelial-protective effects at sub-lymphocyte-reducing doses, the possibility exists that other S1P 1 modulators, although developed as S1P 1 -desensitizing molecules, might also exhibit endothelial-protective properties at lower doses. To explore this possibility, we sought to quantitatively compare the biased properties of SAR247799 with the most advanced clinical molecules targeting S1P 1 in a relevant endothelial cell-based assay. We previously reported the biased properties of SAR247799 by measuring the potency and efficacy for activation of G protein pathways (inhibition of forskolin-induced cAMP) relative to b-arrestin recruitment and receptor internalization pathways [11]. SAR247799 displays more G protein-biased S1P 1 agonist properties than siponimod in these receptor overexpression assays. However, it is important to recognize the limitations of various cell-based assays for determining ligand bias [14]. Cell-based assays relying on receptor overexpression may not fully recapitulate the same consequences associated with endogenous receptor signaling. 
The stoichiometry between receptor occupancy and intracellular events is often altered in assays that rely on biosensors, due to signal amplification. Gαi-coupled receptors are particularly challenging because intracellular G protein signaling is measured indirectly by inhibition of forskolin-induced cAMP production, usually in transfected cells. Furthermore, the measurement of bias requires comparison of two separate assays (e.g., cAMP with b-arrestin recruitment or receptor internalization), and differences in timepoints and assay conditions may cause the physicochemical properties of test compounds to influence experimental readouts to different extents. Consequently, such systems can lead to under- or overreporting of receptor bias, and a previous study using various S1P 1 -overexpressing cell assays did not find differences in signaling between molecules [15]. The ideal approach to quantify GPCR bias would be to utilize a single assay capable of measuring activation and desensitization in the same setting, performed in relevant cells without receptor overexpression, and not relying on reporter systems that introduce the possibility of signal amplification. Endothelial barrier function can be measured by the passage of molecules across a cell layer [16,17]. Movement of ions across the endothelial layer occurs mainly by intercellular exchange, and the integrity of cell-cell junctions is the primary resistance to this movement [18]. Trans-endothelial electrical resistance, or impedance, is therefore an index of endothelial barrier integrity. A real-time cellular assay (RTCA) in human umbilical vein endothelial cells (HUVECs), utilizing a label-free electrical impedance measurement, has been shown to produce a Gi-mediated increase in impedance following activation with S1P 1 agonists [19]. We previously reported qualitative differences in the desensitization properties of SAR247799 and siponimod using such a system [11].
We now extend these observations to define the b-arrestin pathway component of the impedance profile following S1P 1 activation in HUVECs, and we report quantitative indices of the S1P 1 activation-to-desensitization ratio of various clinical molecules. We show that SAR247799 is the most G protein-biased S1P 1 agonist currently characterized and provide a rank order of bias among the most clinically advanced S1P 1 modulators.

Impedance protocol

HUVECs from pooled donors (PromoCell GmbH, Heidelberg, Germany, C-12203) were seeded at 10 000 cells per well, in complete medium (C2210, C39210) containing 2% fetal bovine serum (FBS), in 96-well collagen-I-coated E-plates. Cells were allowed to attach and proliferate for 6 h, followed by overnight serum starvation in medium containing 0.1% FBS. Electrical impedance was measured continuously with an RTCA-MP station (xCELLigence RTCA, ACEA Biosciences, San Diego, CA, USA), according to the manufacturer's protocol, and expressed as baseline-normalized cell index (BNCI) using RTCA2.0 software. Impedance measurements were analyzed for 60 min following addition of test compounds or DMSO control, and the early response (peak response at 8-10 min) and the late response (at 60 min) were used for further analysis. Cells were then washed in medium containing 0.1% FBS for 5.5 h. The ability of each test compound to desensitize the response to a second stimulation with the natural ligand S1P (80 nM) was measured in the same wells and expressed as the AUC 0-60 min of the S1P-induced BNCI response. S1P (Avanti Polar Lipids) was prepared from a 125 µM stock solution in 4 mg·mL−1 BSA according to the manufacturer's instructions. The baseline for the BNCI calculation was the respective vehicle response for the first (0.1% DMSO) and second stimulations (2.5 µg·mL−1 BSA). When tested, 10 µM GRK2 inhibitor was preincubated with cells for 2 h prior to addition of test compounds, and its effect on early and late responses was measured as above.
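As a concrete illustration of the readouts described above, the sketch below computes an early response (peak at 8-10 min), a late response (at 60 min), and a trapezoidal AUC 0-60 min from a synthetic impedance trace. The trace and the vehicle-relative normalization are assumptions for illustration only; the RTCA software computes BNCI internally.

```python
import numpy as np

# Synthetic impedance traces sampled every 2 min for 60 min after stimulation
# (hypothetical data; real traces come from the RTCA instrument).
t = np.arange(0, 62, 2, dtype=float)                      # minutes
vehicle = np.ones_like(t)                                 # vehicle-treated cell index
treated = 1.0 + 0.8 * np.exp(-(((t - 9.0) / 20.0) ** 2))  # peak near 8-10 min

# Baseline-normalized cell index (BNCI), here taken relative to vehicle
bnci = treated / vehicle - 1.0

early = bnci[(t >= 8) & (t <= 10)].max()                  # early response: peak at 8-10 min
late = float(bnci[t == 60][0])                            # late response: value at 60 min
auc = float(np.sum(0.5 * (bnci[1:] + bnci[:-1]) * np.diff(t)))  # AUC 0-60 min
```

For the desensitization readout, the same AUC calculation would simply be applied to the BNCI trace recorded after the second (S1P) stimulation.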
All experiments were repeated on at least 3 separate occasions.

GRK assays

The biochemical potency on GRK2 was determined using recombinant human GRK2 (catalogue number PR4694A, Thermo Fisher, Les Ulis, France) in a 33P-ATP flash-plate kinase assay with 3 µM ATP and biotin-RRREEEEESAAA as substrate. The cellular potency of the GRK2 inhibitor was determined using a b-arrestin recruitment assay in PathHunter® cells overexpressing human S1P 1 (Eurofins DiscoverX Corporation, San Diego, CA, USA; catalogue number 93-0207C2) as described [11]. The mean IC 50 from at least 3 separate experiments was reported.

Calculations

The effective concentration corresponding to half of the difference between the maximum and minimum effect (EC 50 ) of agonists was determined with SAS procedure NLIN in SAS system release 9.1 under UNIX via BIOSTAT@T-SPEED-LTS v2.0 internal software using the 4-parameter logistic model. The potency of test compounds to desensitize the S1P-induced BNCI response was determined as the inhibitory concentration corresponding to 50% of the S1P response in the absence of test compound (IC 50 ), also using the 4-parameter logistic model. The activation-to-desensitization ratio was expressed as IC 50 /EC 50 for the early phase. EC 50 , IC 50 , and activation-to-desensitization ratios were reported as geometric means with 95% confidence intervals.

SAR247799 produced a sustained cell impedance response

Electrical impedance was measured as an index of endothelial barrier integrity and expressed as baseline-normalized cell index (BNCI). An overview of the experimental set-up following stimulation of HUVECs with each test compound is illustrated in the schematic (Fig. 1). All compounds produced a rapid and concentration-dependent increase in BNCI with a peak at approximately 8-10 min (Fig. 2A-D). After this peak response, the BNCI declined, and the compounds showed differences in the kinetics of sustaining the BNCI response over the subsequent hour.
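The 4-parameter logistic model used for the EC 50 and IC 50 estimates can be sketched as follows. The parameter values and synthetic concentration-response data are hypothetical, and a simple interpolation on a log-concentration axis stands in for the SAS NLIN fit the authors used:

```python
import numpy as np

def four_pl(c, bottom, top, ec50, hill):
    """4-parameter logistic concentration-response model (c and ec50 in nM)."""
    return bottom + (top - bottom) / (1.0 + (ec50 / c) ** hill)

# Synthetic, noise-free concentration-response curve (hypothetical parameters)
conc = np.logspace(-2, 4, 100)                  # 0.01 nM to 10 uM
resp = four_pl(conc, bottom=0.0, top=1.0, ec50=26.1, hill=1.0)

# Recover EC50 as the concentration giving the half-maximal observed response,
# interpolating on a log-concentration axis (resp is monotonically increasing)
half_max = 0.5 * (resp.min() + resp.max())
ec50_est = 10.0 ** np.interp(half_max, resp, np.log10(conc))
```

With noisy replicate data, a proper nonlinear least-squares fit of all four parameters (as in SAS NLIN or an equivalent routine) would replace the interpolation step.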
This biphasic response was characterized by calculating the peak BNCI response, referred to as the early response, and the BNCI at 60 min, referred to as the late response. For the early response, all compounds showed a concentration-dependent response, with similar E max between the 4 compounds (Fig. 2E-H). The potency of each compound in the early response was expressed as its EC 50 and was between 1 and 30 nM for the 4 compounds (Table 1). For the late response, SAR247799 displayed concentration-dependent increases that paralleled the early response (Fig. 2E). The late response with SAR247799 gave an E max that was 81% of that achieved in the early phase, and the EC 50 values were similar (42.8 nM versus 26.1 nM) (Fig. 2E, Table 1). Siponimod, ponesimod, and ozanimod gave, respectively, maximum BNCI values in the late response that were 34%, 81%, and 51% of the E max reached in the early response (Fig. 2F-H, Table 1). Higher concentrations of siponimod, ponesimod, and ozanimod displayed a concentration-dependent decline in BNCI in the late response. The highest concentrations of siponimod caused endothelial barrier disruption, as the BNCI values in the late response were below the baseline (Fig. 2F).

b-arrestin signaling contributes to the late response and barrier disruption

The BNCI increase following S1P 1 agonist stimulation has previously been shown to be Gi-mediated [19]. Given that impedance responses provide an integrated assessment of ligand activity [20,21], we sought to determine the contribution of b-arrestin pathway activation to maintaining the Gi-mediated BNCI increases. We did this by inhibiting the b-arrestin pathway with a G protein-coupled receptor kinase (GRK) inhibitor. GRKs cause intracellular phosphorylation of GPCRs, a requisite step for b-arrestin binding and subsequent halting of G protein-mediated activation [22].
GRK2-mediated phosphorylation of S1P 1 is also a requisite step for the lymphopenia induced by S1P 1 -desensitizing agents [23]. The GRK2 inhibitor had an IC 50 of 44 nM in the GRK2 kinase assay, and it inhibited b-arrestin recruitment in S1P 1 -overexpressing cells with an IC 50 of 1.1 µM. Thus, 10 µM of the GRK inhibitor was used for assessing the effect of inhibiting the b-arrestin pathway on the impedance response of S1P 1 agonists. For this evaluation, we chose SAR247799 and siponimod because they displayed the most sustained and the most transient BNCI increases, respectively. In the presence of the GRK2 inhibitor, the responses of siponimod and SAR247799 were no longer biphasic but showed a sustained BNCI increase with no signal decline over 60 min (Fig. 3A-E). The GRK inhibitor had little effect on the early response (at 10 min) of either compound (Fig. 3C,F). The late response of SAR247799 showed a concentration-dependent increase in the absence of the GRK inhibitor, and this was increased a further 2-fold in the presence of the GRK inhibitor (Fig. 3A-C). The late response of siponimod showed a concentration-dependent decrease to negative BNCI values in the absence of the GRK inhibitor (Fig. 3D). We showed that these siponimod-induced barrier-disruptive properties were due to b-arrestin activation, because in the presence of the GRK inhibitor the same concentrations of siponimod demonstrated improved barrier integrity (Fig. 3E,F).

Concurrent activation and desensitization measurements reveal differences among compounds

To measure the ability of each compound to desensitize S1P 1 , plates of the same compound-treated cells used for measurement of early and late responses (first stimulation) were washed and then tested for their ability to mount a second BNCI response to a single concentration of the endogenous ligand S1P (second stimulation) (Fig. 1).
As S1P, unlike most synthetic agonists, displays sustained impedance responses through S1P lyase-dependent receptor recycling [19], the S1P-induced impedance response was characterized by the area under the curve (AUC) over 60 min. At the highest concentrations tested, preincubation with all 4 compounds fully desensitized the S1P-induced BNCI response (Fig. 4A-H). The highest concentrations of ponesimod caused, in the second stimulation stage, the S1P-induced BNCI response to fall below the baseline (Fig. 4C,G), indicating that ponesimod not only blocked the S1P-induced barrier-promoting effect, but caused barrier disruption. To enable a comparison of the desensitization effect of compounds that had differing maximal effects, IC 50 s for the desensitization response were calculated as the concentration causing an absolute 50% reduction of the control S1P-induced BNCI response (i.e., in the absence of test compound). The desensitization IC 50 s were compared to the activation EC 50 s in the early response and expressed as an activation-to-desensitization ratio (IC 50 /EC 50 ). Siponimod was more potent in the desensitization assay (IC 50 = 0.167 nM) than in the early-phase activation assay (EC 50 = 0.977 nM), giving an activation-to-desensitization ratio of 0.170 (Fig. 4F, Table 1, Fig. 5). Ponesimod, ozanimod, and SAR247799 were less potent in the desensitization assay than in the early-phase activation assay (Fig. 4G,H, respectively), and the respective activation-to-desensitization ratios were 7.66, 6.35, and 114 (Table 1, Fig. 5).

Discussion

This study describes a quantitative approach to characterize the activation-to-desensitization ratio for S1P 1 modulators using an endothelial electrical impedance assay. The rank order of the 4 clinical compounds evaluated was SAR247799 > ponesimod > ozanimod > siponimod, with activation-to-desensitization ratios of 114, 7.66, 6.35, and 0.170, respectively.
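The ratio calculation used throughout (geometric mean of per-experiment IC 50 /EC 50 values, with a 95% confidence interval computed on the log-transformed data) can be sketched as follows. The replicate values below are hypothetical placeholders, not the study's raw data:

```python
import math

# Hypothetical replicate estimates (nM) from n = 3 independent experiments
ec50 = [0.9, 1.1, 1.0]        # early-phase activation EC50
ic50 = [7.0, 8.5, 7.5]        # desensitization IC50

ratios = [i / e for i, e in zip(ic50, ec50)]   # IC50/EC50 per replicate
logs = [math.log(r) for r in ratios]
n = len(logs)
mean_log = sum(logs) / n
sd_log = math.sqrt(sum((x - mean_log) ** 2 for x in logs) / (n - 1))

t_crit = 4.303                                 # two-sided 95% t value for df = 2
half_width = t_crit * sd_log / math.sqrt(n)

geo_mean = math.exp(mean_log)                  # geometric mean ratio
ci_low = math.exp(mean_log - half_width)
ci_high = math.exp(mean_log + half_width)
```

A ratio well above 1 (as for SAR247799's 114) indicates that desensitization requires concentrations far above those needed for activation; a ratio below 1 (siponimod's 0.170) indicates the reverse.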
Consequently, SAR247799 had the best ability to activate S1P 1 while minimizing S1P 1 desensitization, and siponimod had the best ability to desensitize S1P 1 while minimizing S1P 1 activation. A particular advantage of this method, due to the continuous nature of impedance monitoring, was that the same cell experiment was capable of measuring activation and desensitization properties; the cells were simply washed and re-stimulated between the activation and desensitization parts of the study. It is well recognized that experimental parameters such as cell density, cell passage, and cell viability, as well as compound dilution, can introduce variability to measurements in cell-based assays. The experimental procedure controlled these variables by concurrent measurement of activation and desensitization using the same cells and compound dilutions. As a result, bias ratios were reproducible between replicate experiments. Similar quantitative approaches could be utilized to compare other receptors endogenously expressed in HUVECs or, for that matter, in other cell types.

Fig. 5. Activation-to-desensitization ratios of S1P 1 agonists. Geometric means with 95% confidence intervals (SAR247799 n = 5, siponimod n = 3, ponesimod n = 3, and ozanimod n = 3).

The BNCI increase caused by S1P 1 modulators in HUVECs is pertussis toxin-sensitive and hence Gi-mediated [19], consistent with BNCI increases seen with other Gi-coupled receptor ligands [24,25]. SAR247799 was able to sustain the impedance response over 60 min, consistent with sustained Gi activation. However, the other compounds had markedly lower, and sometimes below-baseline, impedance responses at 60 min, consistent with a study performed with ponesimod [19]. An inhibitor of b-arrestin pathway signaling (GRK inhibitor) modified the BNCI profile to one of sustained activation over 60 min, confirming that b-arrestin activation was responsible for reducing the late-phase response.
This finding is consistent with our previous characterization of SAR247799 relative to siponimod: SAR247799 activated G protein pathways more effectively than β-arrestin or receptor internalization, and SAR247799 was more G protein-biased than siponimod [11]. Consequently, characterizing S1P1 agonists for their ability to sustain impedance responses could be a useful tool to rapidly distinguish the biased nature of ligands. Quantitatively, there was a larger differential between siponimod and SAR247799 in the HUVEC activation-to-desensitization ratios than in the bias ratios determined in S1P1-overexpressing cells (cAMP versus β-arrestin or cAMP versus internalization) [11], emphasizing some of the limitations of recombinant assays. The BNCI response in some cases fell below baseline at the highest concentrations tested. This was particularly evident in the late-phase response for siponimod and in the S1P1 desensitization setting for ponesimod. Positive BNCI values represent a tightening of the endothelial barrier, whereas negative BNCI values represent barrier disruption. Disruption of the endothelial barrier has been reported with high doses of S1P1-desensitizing molecules in cell-based assays as well as in animals, particularly in the lung [11,26,27,28]. It is also evident in clinical trials, where dose-dependent lung dysfunction and macular edema have been noted with various molecules [3,4,5,29,30]. These safety findings are particularly relevant in settings where lung endothelial barrier protection is actually desired (e.g., acute lung injury or systemic lupus erythematosus), and assessing the disruption of barrier integrity through BNCI values falling below baseline could be a potential approach to predict and mitigate this. Lymphocyte reduction through S1P1 desensitization may not be the sole mechanism contributing to the efficacy of S1P1 modulators in multiple sclerosis patients [31].
It has been proposed that an alternative mechanism by which this drug class limits the entry of inflammatory cells into the CNS is improvement of blood-brain barrier (BBB) integrity through activation of S1P1 receptors on endothelial cells and astrocytes, the main cellular constituents of the BBB [32]. The recently approved dose of ozanimod in MS is associated with only 55% lymphocyte reduction, whereas fingolimod and siponimod produce 70-80% lymphocyte reduction at their approved doses. As we showed here that ozanimod is more biased than siponimod toward S1P1 activation, it is possible that a contribution from non-S1P1-desensitizing mechanisms, such as S1P1-mediated endothelial/astrocyte protection, could explain why the efficacious doses were associated with different levels of lymphocyte reduction. Endothelial protection as well as lymphocyte reduction could be desirable mechanisms to target in indications beyond MS. Chronic rheumatic disorders such as systemic lupus erythematosus, psoriasis, and systemic sclerosis are characterized by prominent endothelial dysfunction and marked vascular dysfunction, particularly in the microcirculation, suggesting that drugs targeting the endothelium could find therapeutic utility in these conditions [33,34]. However, lymphocytes clearly play an important role in these diseases, as shown by the success of B cell depletion and IL17/IL23 pathway inhibition in systemic lupus erythematosus and psoriasis, respectively [35,36]. Recently, an S1P1 modulator, cenerimod, demonstrated promising signals in a lupus trial at lymphocyte-reducing doses [7], raising the question of the relative contributions of lymphocyte-reducing and endothelial-protective mechanisms to the effects observed.
Our approach to quantifying activation-to-desensitization ratios adds a new dimension to understanding, rationalizing, and potentially predicting relative differences in the efficacy and suitability of various molecules targeting this pathway in the clinic. In addition to the roles of S1P1 activation and S1P1 desensitization in endothelial barrier integrity and lymphocyte reduction, respectively, S1P1 activation also has heart rate-reducing effects through its action on atrial myocytes. Achieving S1P1-activating effects on the endothelium preferentially over the heart would require compounds with low tissue penetration or a low volume of distribution. For this reason, the activation-to-desensitization ratios need to be considered in the context of tissue distribution properties. Table 2 summarizes both of these dimensions for the 4 compounds evaluated. SAR247799 has the highest S1P1 activation-to-desensitization ratio, as well as the lowest volume of distribution (7-23 L), compared to siponimod (124 L) [4], ponesimod (160 L) [37], and ozanimod (5590 L) [5]. It is noteworthy that ozanimod and ponesimod, although having similar activation-to-desensitization ratios, differ markedly in their tissue distribution properties. In conclusion, this is the first study to compare and distinguish the activation-to-desensitization properties of clinical S1P1 modulators. SAR247799 had the most suitable profile for endothelial protection, whereas siponimod had the best profile for S1P1 desensitization (and the resulting lymphocyte reduction). As there are different therapeutic benefits associated with activating and desensitizing this receptor, these findings have clinical implications for selecting molecules in the class for desired effects on the endothelium versus on lymphocytes, respectively.
HUMANS, NONHUMAN OTHERS, MATTER AND LANGUAGE: A DISCUSSION FROM POSTHUMANIST AND DECOLONIAL PERSPECTIVES 1

HUMANOS/AS, OUTROS/AS NÃO-HUMANOS/AS, MATÉRIA E LÍNGUA/LINGUAGEM: UMA DISCUSSÃO A PARTIR DE PERSPECTIVAS PÓS-HUMANISTAS E DECOLONIAIS

Our objective is to discuss decolonial and, mainly, posthumanist perspectives, as we engage in an inter-epistemic dialogue, which encompasses discussions on matter and language. At first, we address Indigenous thoughts in order to relate them to decolonial and posthumanist worldviews, briefly concentrating our attention on some arguments concerning their relation, and we justify our choices. We draw on critiques of colonial and humanist ideas about humans, nonhuman others and matter, and we then discuss traditional conceptions of language. From a posthumanist framework, we approach understandings of language that directly intertwine it with materiality. Based on the problematizations we present, our aim is to expand understandings of what it means to be human and perceptions of language, as we become involved in a project that seeks to see and go beyond human hubris. Therefore, we encourage an onto-epistemological review of language, based on its entanglement with matter.
PREAMBLE

We have already been working with decolonial thoughts (BORELLI; PESSOA, 2019; MASTRELLA-DE-ANDRADE; PESSOA, 2019; PESSOA, 2019; PESSOA; BORELLI; SILVESTRE, 2018; PESSOA; SILVESTRE; BORELLI, 2019) and posthumanist perspectives (SOUSA, 2017, 2018, 2019a, 2019b, 2019c) for about three years. However, in this paper, our objective is to discuss both praxiologies 2 together, with a special focus on matter and language. By decoloniality, we mean the movements towards delinking ourselves from the modes of living, thinking and being (MIGNOLO, 2007) that were built as a result of the process of colonization and have been maintained even after the end of colonialism in the form of, for example, racial, class, sexual, gender, linguistic, spiritual and epistemic hierarchies, which characterize our Eurocentrist world-system (GROSFOGUEL, 2010). Concerning posthumanism, we understand it as a project that questions what it means to be human, as it engages with the ethico-onto-epistemological (BARAD, 2003, 2007) challenges that arise in contemporary times, since there is a constant attempt to consider and address human and nonhuman entities, language, and space from a flat hierarchy perspective. Posthumanism is thereby connected to critical projects that seek social justice, but it is committed to the search for a kind of justice that goes beyond it. Like Patel (2016), Papadopoulos (2018) and Pennycook (2018b, 2019), we perceive posthumanism and decoloniality as closely related, and we believe they can greatly benefit from each other. These are two perspectives that have been important for the development of our work, especially concerning language education and teacher education. Therefore, from our viewpoint, they are deeply connected in the face of the objectives we pursue and the challenges we meet in our work.
As we see it, one of the aspects posthumanism and decoloniality have in common is that they are both partly influenced by Indigenous knowledges 3 . Nascimento (2017) shows how Indigenous peoples perceive the world in a nondichotomous way when it comes to interactions between humans and nonhumans, mind and spirit, what is tangible and what is intangible, and between these entities and language. As they engage with nature 4 in a holistic and interdependent way, their production of knowledge and perceptions are relational and shared with the earth, animals, and plants, without hierarchies being established between them and people; thus, in this perspective, all materiality (alongside spirituality) plays a role in the processes we experience (NASCIMENTO, 2017; PATEL, 2016). According to Patel (2016) and Nascimento (2017), such an understanding is closely related to decolonial thoughts, and, we add, to posthumanist perspectives as well, as it decenters human beings in the big picture. Indigenous peoples have perceived life and the world without the dualisms and binarisms that the modern/colonial and humanist world has imposed since long before decolonial and posthumanist praxiologies emerged.

Nonetheless, as Pennycook (2018b, p. 141) underscores, in relation to posthumanism, "[t]he West with its talk of the Anthropocene is rediscovering Indigenous knowledges and, as has long been part of that history, claiming them for itself." He emphasizes that, in a way, posthumanism has an interest in prehumanism, "[…] in thinking before the great rise of Western thought and destruction, in turning to alternative ways of thinking about the world and our relationship to it" (PENNYCOOK, 2018b, p. 141). For the author, Indigenous knowledges can help us rethink the divisions that colonialism, its progeny coloniality, and humanism have created between humans and nonhumans. In the same perspective, Martin and Mirraboopa (2003) and Patel (2016) state that Indigenous ways of knowing, being and doing can teach us a lot about our relationship with others and the planet.

We are aware that some decolonial scholars, like Mignolo and Vázquez (2017) and Mignolo (2018a, 2018b), do not think these frameworks are/should be related, let alone that we should engage in a dialogue with them, due to their historical, political, epistemological, and even ontological dissimilarities. For instance, due to the fact that the posthumanist project started in Europe, these authors automatically reject any reflection and contribution coming from this movement, as they seem to understand it as a continuation of humanism. In addition, when they briefly address posthumanism in their works, they overgeneralize it, as if it were a homogenous perspective, although posthumanism is an umbrella term used "to refer to a variety of movements and schools of thought" (FERRANDO, 2013, p. 26), as Pennycook (2018b) also stresses.

4. Although we are aware that nature and culture are western inventions, that is, ontological fictions, insofar as "Indigenous peoples do not make this distinction" (MIGNOLO, 2018b, p. 159), we use the term nature here to refer to life in general.

In spite of the complexity of western epistemology, for Mignolo (2018b), the production of knowledge that comes from Europe cannot hold dialogue with his decolonial project. Furthermore, Vázquez (in conversation with MIGNOLO, 2017, p. 505-506, our translation, quotation marks in original) affirms that

[…] the problem with the 'posthuman' is that it loses sight of relationality, it becomes a projection of human life mediated through communication technologies and biotechnology.
[…] While the thought of the 'posthuman' points to the future of life mediated by digital and genetic technologies, decoloniality points to the liberation of life forms that were eradicated or, at best, denigrated by the great project of 'humanity' of modernity/coloniality. 5

Vázquez's contention is very simplistic and reductionist. It seems that the author disregards the difference between transhumanism and posthumanism. Transhumanism has as its focus "possible biological and technological evolutions" and is rooted in humanist understandings (FERRANDO, 2013, p. 27), while posthumanism is fully engaged with "a critical and historical account of the human" (FERRANDO, 2013, p. 28), grounded in social critiques. Posthumanism does seek to grasp the relations between humans and technology (understood in a broader sense as any object used to achieve a purpose), but it does so in order to understand the ontological significance of technology, that being only one of posthumanism's research interests. Unlike what Vázquez (in conversation with MIGNOLO, 2017) states, decolonial and posthumanist projects have similar agendas (PENNYCOOK, 2019), depending on the locus of enunciation of the ones involved in such projects. As opposed to Mignolo's (2018a) argument that posthumanism asserts universality, we perceive posthumanism, like Pennycook (2019), as a localized viewpoint, whose focus is on understanding the relations between humans, nonhuman others, matter, language, and space in one's own context. In addition, Mignolo's (2018a) critique ignores a multiplicity of places and people, some outside Europe, who are also part of the movement.
In relation to the discussion held here, we highlight, first, that when we mention posthumanist and decolonial perspectives, by no means do we intend to generalize them, as if they were unitary movements; and, second, we underscore that our purpose is to draw on our readings and understandings of both praxiologies, which are evidently based on our own locus of enunciation, in an attempt to encourage knowledge expansion. We rely on the idea that scholars whose work is grounded in ethics and justice should be open to listening to one another, for they all might have something to contribute in order to expand our worldviews. As Nascimento (2017) points out, recognizing other perspectives is a necessary condition for the creation of new epistemologies. In the same vein, Pennycook (2018b, p. 131) affirms that the broad scope of posthumanism draws on "multiple related areas without being reduced to them."

According to Winnubst (2018), despite their differences, both decolonial and posthumanist frameworks aim to deconstruct the idea of the human created by modern/colonial and humanist perspectives. Moreover, concerning their similarities, we understand that both decoloniality and posthumanism see everything as integrated, incorporated, that is, human and nonhuman entities as affecting and being affected by each other. Regarding posthumanism, what it brings to the table is its ethico-onto-epistemological perspective (BARAD, 2003, 2007), meaning that ethics, knowing and being are perceived as intertwined. This framework not only deconstructs what we understand by human, but it also encompasses all nonhuman others in its scope in a nonhierarchical way. Similarly, for Patel (2016, p. 7), who is a decolonial scholar, there is a need to "[s]hift material relations among human beings, including their connections to land (land here meaning land, air, water, and space) and other beings." Accordingly, in Papadopoulos's words (2018, p. 205), "[s]ocial movements start to become more than social, movements of matter and the social simultaneously, movements that change power by creating alternative forms of life."

Based on our experience in applied linguistics, we see the importance of decolonizing ourselves, that is, of unlearning the modern/colonial and humanist structures which have shaped us (as human beings and as teachers/professors), our understandings of language, and our views of language teaching and learning, as well as of teacher education. Such an attitude has the potential to foment new ways, new possibilities of dealing with what we do and with who we become.

Here, we endorse an onto-epistemological review of language, based on its entanglement with matter. Our objective is to encourage reflection on what language is (or, in more posthumanist terms, on what it becomes) and on new ways of understanding language and communication. In order to do that, we rely on inter-epistemic dialogues (NASCIMENTO, 2017) between decolonial and, mainly, posthumanist perspectives, with the aim of offering one possible way forward to think about language and the elements and processes it involves. Following Deleuze and Guattari's suggestion (2005), as we draw on decolonial and posthumanist scholarship here, we choose to take an attitude of and, and, and, rather than instead, so as to try to explore other possibilities for understanding, especially, language and materiality, and their relation.

This paper is divided into five parts. In this first section, we address some general aspects, as we justify our choices for this discussion. In the second section, we provide some critiques of colonial and humanist perspectives regarding understandings of humans, nonhuman others and matter. In the third section, we reflect on traditional conceptions of language. In the fourth section, we discuss language from a posthumanist viewpoint. Finally, we present some final remarks.
DECOLONIAL AND POSTHUMANIST CRITIQUES: HUMANS, NONHUMAN OTHERS AND MATTER

Colonial and humanist perspectives are firmly embedded in modern epistemologies, which emerged from European contexts. Their ideals were (and still are) exclusionary, as, throughout history, the ones considered fully human were Western, white, male, heterosexual, able-bodied and upper-class individuals only; that is, these two frames of reference are eminently hierarchical. Consequently, for a long time in our history, those who did not fit into the predetermined human category were considered less than human, sometimes were compared to animals, and, on some occasions, were even regarded as lesser beings than nonhuman others, on grounds of race, class, gender, sexual orientation, disability etc. (COOK, 2016; FERRANDO, 2013; MIGNOLO, 2000, 2018b; PENNYCOOK, 2018a, 2018b, 2019; PRATT, 2012; QUIJANO, 2005; SOUSA, 2018, 2019b, 2019c; VERONELLI, 2015, 2019).

At another historical moment, with the French Revolution and the Enlightenment, as Pennycook (2018b, p. 76) argues, humanist universalism sought "to bring all humans into the same framework." By directly and indirectly setting an archetype of the human subject (whose characteristics were mentioned in the previous paragraph), based on homogeneity, essentialization and generalizations, humanism promoted ideas of superiority and inferiority among humans, which are elements that contributed to disregarding human differences and, consequently, the inequalities and injustices experienced by minoritized groups throughout history.

Consequently, those considered less human, or even not human at all, also had their forms of communication denied as languages (QUIJANO, 2000; VERONELLI, 2015, 2019). As Veronelli (2015, p. 113) asserts, "[t]o find in colonized peoples the ability to express complex cosmological, social, scientific, erotic, economic meaning is at odds with their reduction to inferior, animal-like beings." Therefore, from this viewpoint, even if colonized people acquired/learned a European language, they could not evolve on the human scale, insofar as they would never be considered legitimate speakers with the same intellectual capacity as Europeans (NASCIMENTO, 2019; VERONELLI, 2015, 2019). Accordingly, the ideas behind the concepts of humanity and language, as traditionally understood, have always referred to particular subjects, with particular bodies and behaviors. Traces of these colonial ideas persist nowadays, if we consider the case of the native speaker episteme, for instance, which is grounded in notions such as that of the nonnative speaker as a deficient speaker. This colonial project occurs through ontological and epistemological processes, which create intertwined colonial discourses about Others.

In addition, since then, and mainly in recent modern times, arguments have been presented to make human beings be perceived as the only ones able to do certain things. Modern history has thereby been constructed on the grounds of an anthropocentric way of understanding the world, based on human exceptionalism, which emphasizes the superiority of humans over others. Inspired by Santos's (2010) and Fabrício's (2017) discussions on coloniality and decoloniality, we argue that we need to unthink humanist perspectives concerning humans, nonhuman others, space, and language, in order to be able to think about them in posthumanist terms. Both authors criticize the dichotomous logic that creates practices of division and classification of things, people, and phenomena, which lead to a binary perception of the world, without consideration of their nuances, imprisoning them in totalizing and essentializing categorizations.
According to Deleuze and Guattari (2005, as cited in TOOHEY, 2018b, p. 3), an assemblage is the entanglement of entities; that is, in accordance with the ontology they propose, "things are as they are (while also constantly changing) because of their interrelations and their entanglements with other things (which are also in other assemblages that are constantly changing)." This view is in keeping with the idea of vincularidad held by some Andean Indigenous thinkers, which, for Walsh and Mignolo (2018, p. 1), "[…] is the awareness of the integral relation and interdependence amongst all living organisms (in which humans are only a part) with territory and land and the cosmos." For Coole and Frost (2010), De Freitas and Curinga (2015) and Toohey (2018a, 2018b), people, animals, objects, nature, discourse, and so on, are always becoming together in relation to and with one another. This ontological perspective focuses on processes of becoming rather than being.

For Barad (2003, 2007), as we are always entangled with other entities, we experience ongoing processes of intra-action. Based on Barad's arguments (2011), Toohey (2018a, p. 34, emphasis added) explains that

Barad's intra-action contrasts with (or builds from) the idea of interaction. She argued that, if two things are in interaction, then they are separate entities with individual characteristics, but if they are what they are in relation to one another, they intra-act and come into being (on their way to becoming something else) through their entanglement.
As Coole and Frost (2010) claim, our everyday life is surrounded by and immersed in matter, and we ourselves are composed of it. Accordingly, Bennett (2010) advocates the need to raise the status of materiality. For Barad (2003, 2007), Coole and Frost (2010), Ferrando (2013) and De Freitas and Curinga (2015), matter should not be perceived as fixed, stable, passive, inert and immutable, but as something that has active participation in the world, as something that has agency (which is understood as distributed). As Pennycook (2019) argues, we need to be able to see that nonhuman others, like objects, play a very big role in relationship to the human. These entities have affordances, that is, specific physical and nonphysical characteristics that invoke ideas, feelings etc., and we thereby act in specific ways also because of them. Moreover, according to Barad (2007, p. 136), posthumanist thought deconstructs the notion of "[…] the body as the natural and fixed dividing line between interiority and exteriority"; in a way, boundaries between inside and outside are softened. We will return to these perspectives, relating them to language, in the following sections.

A REFLECTION ON TRADITIONAL CONCEPTIONS OF LANGUAGE

Although contemporary movements, characterized as critical, have influenced and even changed conceptions in the field of applied linguistics, premises and assumptions grounded in modern/colonial and humanist perspectives still strongly permeate the studies carried out in the area. It is possible to notice how processes of dichotomization, essentialization, totalization, universalism, reductionism and homogenization are still clearly present in applied linguistics (SEVERO, 2017; SOUSA, 2017, 2018, 2019b, 2019c). Severo (2017, p. 40, our translation) inquires

[…] about the relation between the emergence of linguistics as a field of scholarship (FOUCAULT, 1979, 2000) and the colonial project, in which language has been transformed into an object to be scrutinized, classified, named, riven, analyzed and described according to certain rules that define what counts as true within the limits of a given discursive regime. 6

Based on Foucault (1979, 2008), Severo explains that such discursive regimes are not atemporal but are rather conditioned by specific historical and socio-political conditions, and that, therefore, languages are invented by these same regimes. From that perspective, laws and rules were upheld for all languages, as well as specific ways of perceiving them, which led to the creation of ideas of superiority and inferiority among them. Thus, it is important to reflect on how language has been understood throughout time, especially since Saussure (2011 [1916]), as the application of a common epistemological model, grounded in a European viewpoint, was set for all phenomena, disregarding the history of local singularities. For Severo (2017), such a modern/colonial stance is a reductivizing, universalizing, generalizing and essentializing one, which not only overlooks the diversity of language practices, but also downplays it. Consequently, because of its influence, even nowadays, discursive regimes that reinforce such ideas are still being reproduced and certain power relations, influenced by them, are still being endorsed.

6. "[...] a respeito da relação entre a emergência da linguística como campo de saber (FOUCAULT, 1979, 2000) e o projeto colonial, em que a língua foi transformada em um objeto a ser escrutinado, classificado, nomeado, destrinchado, analisado e descrito segundo certas regras que definem o que conta como verdadeiro dentro dos limites de um dado regime discursivo."

Dossiê Trab. Ling. Aplic., Campinas, n. (58.2): 520-543, mai./ago. 2019

Saussure's book entitled Course in General Linguistics, first published in 1916, was the founding work that established linguistics as a modern science. From it, assumptions about language were made, based on specific western perspectives. For Pennycook (2018b), the epitome of this model is the Saussurean talking heads. See figure 1. […] (2017) highlight the absence of specific identities and the display of others: the subjects in the figure are western, white, male, young, and there are no marks of belonging (like class and religion); their appearance is identical and, therefore, there are no relations of power involved; there is no context; and their communication happens in an identical and completely symmetrical way. Pratt (2012) underscores that modern linguistics was grounded in such perspectives, which brought about specific understandings about language and communication. As a result, "[…] conceptions and practices of language were forged as part of the colonial project, having as a privileged perspective the geopolitical position and interests of the colonizer" 7 (NASCIMENTO, 2017, p.
64, our translation). This notion of language thereby created new forms of exclusion, which encompassed not only the processes and elements related to language itself, but also the people involved in them. Fabrício (2017, p. 31, our translation) adds that seeing language as a mediating tool between the internal (individual) and external (the outside world) domains is a reductionist perspective, which encourages ideas such as transmission-reception, linear codification-decodification of meanings and an underlying stability for communication, a "[…] belief that creates a perception of language whose historical and contextual insensitivity constitutes it as operations which are exclusively mental, cognitive, disembodied, and separate from society." 8 The author emphasizes the inadequacy of this model, which is based on colonial modernity and creates linguistic ideologies that need to be questioned, disinvested, deconstructed and reimagined.

According to Braidotti (2016), Cook (2016) and Papadopoulos (2018), posthumanism provides critiques of social aspects with regard to social injustices, as decoloniality does. However, as Patel (2016) and Murris (2016) argue, concerning decoloniality and posthumanism, respectively, both projects also open up a space for a notion of justice that includes but goes beyond social justice. This is related to an emerging understanding of how we are entangled with matter, space and nonhuman others.
Posthumanist applied linguistics criticizes structural linguistics for perceiving language as a system located in individual minds, having one person's brain as the starting point for the speech circuit, sustained by the premise that one human being can perfectly understand another (PENNYCOOK, 2018b; TOOHEY, 2018a). As Pennycook (2018b) underlines, the notion of isolated heads passing messages back and forth, encoding and decoding ideas, based on the presumption of mutual understanding, is woefully inadequate for a comprehension of communication. In his words, […]. In the same vein, Canagarajah (2017, p. 33) states that the Saussurean conception of language tends to favor "[…] homogeneity, normativity, and control. Structures are abstracted from the messiness of material life and social practice. In making structures fundamental and generative, structuralism imposes order and control over material life." However, despite arguments such as Canagarajah's (2017) and Pennycook's (2018b), which date back at least to the 1980s, with Harris's (1981, 1990) work, Saussure's model still has a considerable influence on current understandings of language and communication, firmly grounded in humanist perceptions. This colonial and humanist project focused on the idea of language taking place exclusively between human heads, and that entailed a disregard of people's bodies and senses.

Following this line of thought, Pennycook (2018b, p.
79-80) argues we need to problematize "[…] the constant reiteration of the point that it is language that separates humans from other animals." This is still a very strong assertion reinforced by traditional applied linguistics. According to the author,

[t]his argument has several consequences: it insists on human exceptionality, which then requires an account of an evolutionary leap in human communication (rather than a more gradualist evolution from gesture to language). It therefore focuses on those features of human language which appear distinctive, ignoring the bigger picture of gesture, nonverbal communication or other sensory domains. (PENNYCOOK, 2018b, p. 81)

By considering such arguments, Pennycook (2018b) adds that Chomsky's (1986, 2000) position is one of radical discontinuity, as it considers that only humans have language, based on what Pennycook (2018b) calls saltationary theories (from Latin saltus, jump), which disregard the gradual development from body expressions to what we currently understand as language. As generativists tried to grasp the evolution of the human mind, they associated it with Cartesian rationalism, reinforcing a separation between body and mind, or, as Canagarajah (2017, p. 33) puts it, "the bias of mind over matter". As Hymes (1971, as cited in CANAGARAJAH, 2017, p. 32) observes, "Chomsky [(n.d.)] took structuralism further in a cognitive and individualized direction." According to Pennycook (2018b, p.
81), Berwick and Chomsky's (2016) position promotes the understanding of language as a universal and asocial system that is "internalized, formalized and grammaticalized", and located in the brain. Such comprehension of language and its relation to the world is too narrow, as it completely overlooks language's entangled relations to the material world. Deleuze and Guattari (2005) affirm, and De Freitas and Curinga (2015) reiterate, that Chomskian linguistics fails to locate language in the material context from where it emerges. Pennycook (2018b, p. 104) endorses that "[i]t is these Western humanist assumptions about thought and language that the decolonial option starts to question." Concerning more recent frameworks adopted in the field, Toohey (2018b) addresses the contributions of sociocultural and poststructuralist theories to the analyses of language and identity in applied linguistics. As she highlights, unlike structuralist and generativist perspectives, these frameworks reject binaries such as self/society, and recognize that individuals are always situated in social relations of power with others. However, she states that their primary focus is on human interactions, "[…] with the non-human usually seen as context and/or mediations for human activity" (KIRSCHNER; MARTIN, 2010, as cited in TOOHEY, 2018b, p. 4). From a posthumanist perspective, materiality is seen as more than a stage on which humans perform (MACLURE, 2013; TOOHEY, 2018b), that is, it is more than simply a background, insofar as posthumanism refuses to perceive humans as the center, and rather considers them as part of assemblages, seeking to conceive all (human and nonhuman) entities in a more horizontal plane. In the following section we return to this specific discussion in more detail.
LANGUAGE FROM A POSTHUMANIST PERSPECTIVE
Posthumanism directly challenges traditional theories of language (SMYTHE et al., 2017). For Pennycook (2018b), posthumanism can help us understand humans not as separate from the rest of the world, but rather as a part of it, as intertwined and entangled in it. That said, Latour (1999), MacLure (2013) and Pennycook (2018b) point out that this praxiological framework thereby also questions where thought and language occur and what role a supposedly exterior world plays. As the latter sees it, "[…] there is no longer a world 'out there' separate from humans and represented in language but rather a dynamic interrelationship between different materialities" (PENNYCOOK, 2018a, p. 449, quotation marks in original). In this sense, the material world is constantly intra-acting with our cognition. In the same vein, Clark (2008, p. 219) claims that, according to this perspective, the human mind "[…] emerges as the productive interface of brain, body, and social and material world." In Pennycook's words (2018b, p. 15), [w]hile extended cognition points to the ways we outsource our thinking […] to other systems, such as mobile phones, distributed cognition focuses on the ways in which our thinking involves the environment about us. Questioning the distinction between internal (the brain) and external (our surrounds) domains, this line of thought poses serious questions for the notion of where the human starts and ends. […] [L]anguage can equally be seen as distributed in space. This takes us beyond a discussion of language in the brain, in society or in context and urges us to think about language in alternative spatial and material terms.
As a result, language is decentralized and dethroned from its superior position. The point here is not to undermine the fact that human language is highly complex and multifaceted, but rather to provincialize it: "[…] to recognize its limits, to acknowledge its constructedness, and to open up to a world of communicating and knowing beyond -or beside/s words" (THURLOW, 2016, p. 503). Thus, languages are considered elements that are present in assemblages, in which entities and forces affect and are affected by each other (intra-act), as discursive and material relations are intertwined across space. Therefore, in a few words, language and cognition are "[…] on the one hand embodied, embedded and enacted (far more than representational activity in the mind) and, on the other hand, extended, distributed and situated (involving the world outside the head)" (STEFFENSEN, 2012, as cited in PENNYCOOK, 2018b, p. 52). In addition, taking into account that, from this perspective, there is no inside and outside, and that language and agency are distributed across people, places and artefacts, playing active roles in the world (PENNYCOOK, 2018b, 2019), Canagarajah (2017) claims that we need to also understand the meaning-making ability as distributed, since social networks, things and bodies work in entangled ways. By considering a spatial orientation, Canagarajah (2017) sees space as active, generative and agentive. In this sense, human beings, nonhuman others and space shape and co-construct each other in assemblages. Likewise, language is conceived as part of a broader set of semiotic possibilities, and the material world is perceived as having agency in the process. In their studies regarding spatial repertoires, Canagarajah (2017) and Pennycook (2018a, 2018b) advocate for an understanding of them as distributed in entanglements that encompass social practices and material embodiment; thus, things become materially and discursively intertwined.
Language use is thereby seen as part of assemblages (SMYTHE et al., 2017) -"[…] a coming together of people, languages, places and objects […] [that] provide[s] a particular set of semiotic possibilities" (PENNYCOOK, 2018b, p. 99). In the posthumanist framework of languages as semiotic assemblages, Pennycook (2018b) suggests the ideas of alignment and attunement, instead of mutual intelligibility, rejecting Saussure's ideal model of communication. According to the author, what happens in communication are attempts by the speakers to flexibly adapt to each other's speech, so they can try to reach some form of understanding, which is always constituted by conflicts, negotiations and adjustments. As we see it, we are not fully able to verbalize what happens in our mental processes; consequently, what we say is just a fragmented part of what we mentally experience. The communicative process is therefore much more complex, because if we cannot fully match our own thoughts with our words, it seems naïve to think we can match other people's. In this line of thought, Canagarajah (2017, p. 49) claims that [f]rom a spatial orientation, communicative proficiency involves the ability to align diverse semiotic and spatial resources for successful activity. Along with the flat ontology of assemblage, it holds that […] beyond giving primacy to the mind, it posits that the body and material objects facilitate thinking.
The author shares some examples, from studies he carried out, of how nonverbal resources do not simply aid people with thinking and talking, but rather mediate and shape their mental processes and language use, meanings and understandings. Therefore, as Toohey (2018b) states, in this perspective, tools do not merely mediate interactions and actions, but rather intra-act with humans, as they are also perceived as active and changing parts of assemblages. Grounded in this viewpoint, linguistic and nonlinguistic resources are intertwined in such a way that it is possible to note a kind of distributed agency among them, just as there is an entanglement between human and non-human agencies (APPLEBY; PENNYCOOK, 2017; BENNETT, 2010). Accordingly, based on Lenz Taguchi's (2010) arguments, Murris (2016) claims that there is a performative agency, insofar as nonhuman entities have a certain force and power that can transform our thoughts and our being. Furthermore, by taking discussions such as the ones addressed here into consideration, MacLure (2013, p.
663-664) argues that [we] need to find ways of researching and thinking that are able to engage more fully with the materiality of language itself -the fact that language is in and of the body; always issuing from the body; being impeded by the body; affecting other bodies. Yet also, of course, always leaving the body, becoming immaterial, ideational, representational, a striated, collective, cultural and symbolic resource. Listening is thus a complex material engagement at the molecular level, beneath the level of a sonic unit of meaning, where one absorbs the pauses, accelerations, fallings away and other bodily performances that produce the sounds. The author's example is one of the ways we can perceive the materiality of language, its dynamic entanglement with our bodies and our surroundings. In this sense, things work as extensions of each other, distributed across space. As a result, following this perspective, the dichotomy between discourse and materiality is deconstructed (APPLEBY; PENNYCOOK, 2017; MACLURE, 2013; PENNYCOOK, 2018a, 2018b; TOOHEY, 2018a, 2018b). Thus, as Barad (2003, 2007), MacLure (2013) and Murris (2016) argue, we need to think, instead, in terms of materialist-discursive assemblages/entanglements. For Barad (2003, 2007), our bodies exist through material and discursive relations, and De Freitas and Curinga (2015, p. 263) add that this reconfiguration encourages new understandings "[…] to think past the usual confines of the individual." On the other hand, according to Toohey (2018a, p. 37, quotation marks in original), it is important to underline the concept of agential cuts, introduced by Barad (2007), "[…] as a way to signal that in research (and life) we make boundaries between objects and activities, but that 'cuts' could always be made in other ways." In this sense, Toohey (2018a, p.
39) claims that, as posthumanism adopts a flat ontology, we need to be aware that when we use the linguistic concept of language, we make "[…] an agential cut which abstracts language (and its users) from events, ignoring its becoming and its entanglement with the becomings of its human and non-human companions." This poses methodological and analytical questions to the work we develop. For Canagarajah (2017), these pragmatic boundaries (or cuts) are the options we make when we define our unit and focus of analysis from assemblages, but he states that, although we make specific choices, we cannot disregard their relation to the whole. Last but not least, we add that it is of paramount importance to take into account the ethical demands of the contexts with which we are involved when we make cuts.
FINAL REMARKS
As Pennycook (2018b, 2019) argues, a posthumanist framework encompasses ideas from many different fields, ideas that sometimes are not obviously tied to posthumanism, like Indigenous knowledges and decolonial thoughts. Nonetheless, this praxiological perspective "[…] has made it possible to pull together a range of interconnected ideas under one roof and to explore this emerging landscape that repositions people, places and objects in a new configuration" (PENNYCOOK, 2018b, p. 131). As we state in the first section, we consider that inter-epistemic dialogues (NASCIMENTO, 2017) can substantially contribute to the development of new ethico-onto-epistemological viewpoints beyond human hubris, in order to perceive human beings and nonhuman others in a more horizontal plane.
As both decolonial and posthumanist perspectives embrace social aspects and the materiality that constitutes us, our surroundings and nonhuman others, in entangled ways (CANAGARAJAH, 2017; PATEL, 2016; PENNYCOOK, 2018b), from our viewpoint, they articulate ideas that are meaningful and useful for the work we develop, insofar as they have helped us to better understand and reflect on what we do and on how we could do things differently in our contexts. For instance, we have perceived the need to decenter the subject of language education and teacher education, considering s/he should be seen as one of the elements that integrates (educational) assemblages, in which other entities are present as well, such as the classroom/space, teaching materials, objects, teaching aids, other students, the teacher/professor etc. In this line of thought, the materiality that surrounds us can be conceived as extensions of ourselves in both material and discursive terms, like the board and the textbook for the teacher. Therefore, as applied linguists, inspired by decoloniality and posthumanism, we advocate for a more expansive understanding of language and our relations to it. By comprehending what we do based on other perspectives, we can develop new ways of thinking and becoming which may help us in the process of delinking from exclusionary narratives posed by humanism and coloniality. Following MacLure's ideas (2013), De Freitas and Curinga (2015, p. 249) argue that we need "[…] to study the imbrication (intricate interlocking) of matter and meaning in new ways." Similarly, Deleuze and Guattari (2005) declare that language should be considered a form of material expression. In this line of thought, Rotman (2008, as cited in DE FREITAS; CURINGA, 2015, p. 257) claims that, for example, […] talking involves the curling of a tongue and various minute vibratory actions of the face and body. […] [As] the evolutionary neurologist Terrence Deacon [(n.d.)
argues], […] one listens to the movements of these body parts as one makes sense of what another is saying. Western epistemology, […] in Mignolo [(2018b)], appears as a homogenous black box, a box in which everything works according to the same logic of rationality and universalism. To me, this conception seems questionable -for two reasons. First, it neither acknowledges those currents within Western thought that conceptualize modernity as dialectic or ambivalent -like for instance first generation Frankfurt School critical theory or poststructuralism -nor those currents within Western thought that precisely criticize any favoring of individualism and individualist rationalism; examples for the latter range from feminist care ethics (e.g.
Alterations of transcriptome signatures in head trauma-related neurodegenerative disorders
Chronic traumatic encephalopathy (CTE) is a neurodegenerative disease that is associated with repetitive traumatic brain injury (TBI). CTE is known to share similar neuropathological features with Alzheimer's disease (AD), but little is known about the molecular properties of CTE. To better understand the neuropathological mechanism of TBI-related disorders, we conducted transcriptome sequencing analysis of post-mortem human brain samples with CTE, AD, and CTE with AD (CTE/AD). Through weighted gene co-expression network analysis (WGCNA) and principal component analysis (PCA), we characterized common and unique transcriptome signatures among CTE, CTE/AD, and AD. Interestingly, synapse signaling-associated gene signatures (such as synaptotagmins) were commonly down-regulated in CTE, CTE/AD, and AD. Quantitative real-time PCR (qPCR) and Western blot analyses confirmed that the levels of synaptotagmin 1 (SYT1) were markedly decreased in CTE and AD compared to normal. In addition, calcium/calmodulin-dependent protein kinase II (CaMKII), protein kinase A (PKA), protein kinase C (PKC), and AMPA receptor genes, which play a pivotal role in memory function, were down-regulated in head trauma-related disorders. On the other hand, up-regulation of cell adhesion molecules (CAMs)-associated genes was found only in CTE. Our results indicate that dysregulation of synaptic transmission- and memory function-related genes is closely linked to the pathology of head injury-related disorders and AD. Alterations of CAMs-related genes may be specific pathological markers for CTE pathology.
lobes. In Stage IV, there is serious atrophy of the frontal, temporal, anterior thalami, and white matter tracts. Hyperphosphorylated tau pathology affects most regions of the cerebral cortex. CTE and AD have both distinct and common features in clinical and neuropathological aspects.
Deposition of hyperphosphorylated tau and the presence of NFTs are common neuropathological features of CTE and AD 7. The location of NFTs and the presence of amyloid beta (Aβ) plaques are the differences between CTE and AD. In CTE, NFTs are non-uniform and are predominantly found in the superficial cortical layers. Unlike AD, CTE also shows only a small amount of Aβ plaque deposition. In AD, NFTs are more uniform and are mainly found in the third and fifth layers of the cerebral cortex 8. AD has significant amounts of Aβ plaque deposition. CTE is associated with other neurodegenerative disorders, including AD, motor neuron disease (MND), Parkinson's disease (PD), Lewy body disease (LBD), frontotemporal lobar degeneration (FTLD), and multiple system atrophy (MSA). Previous studies have shown that, of 142 cases with CTE, 37% had CTE with other neuropathology 9. However, the relative contribution of other pathological substrates to clinical symptoms in CTE with other neuropathology is unknown. Although the neuropathological features of head trauma-related disorders are well documented, the definitive diagnosis of TBI-related diseases relies only on medical history, mental status testing, and brain imaging. In addition, the exact gene regulatory mechanisms and molecular pathways are not fully understood. Accordingly, in the present study, we aimed to determine changes in molecular properties and to identify biological markers for TBI-related diseases through transcriptome analysis. We performed genome-wide RNA sequencing analysis of post-mortem human brain tissues with CTE, CTE/AD, and AD. We found that dysregulation of synaptic transmission- and memory function-related genes is closely associated with the pathology of head injury-related disorders and AD. We also discovered that altered expression of CAMs-associated genes may play an important role in CTE pathology.
Results
Transcriptome analysis of CTE, CTE/AD, and AD.
We collected 34 samples from the anterior temporal lobe of the human brain. The samples were taken from 8 individuals with CTE, 10 individuals with AD, 6 individuals with CTE/AD, and 10 normal individuals (Fig. 1A). For each sample, we recorded the diagnosis, gender, stage, and age of symptom onset (Supplementary Table S1). Among the 24 disease samples, most were verified as stage III or IV. Through RNA sequencing, we obtained 9.24 Gb of sequencing throughput on average, and 84.17% of reads aligned to the reference genome (Supplementary Table S2). To compute distances between samples, we conducted principal component analysis (PCA) based on the top 500 genes by variance across all samples. The expression pattern of CTE, CTE/AD, and AD was distinct from normal on the PC1 axis (26% variance) (Fig. 1B). Consistent with the PCA results, unsupervised hierarchical clustering analysis of 10,467 genes showed that there is a significant similarity of gene expression between CTE, CTE/AD, and AD (Fig. 1C and Supplementary Table S3). Differentially expressed genes (DEGs) were determined based on q-value < 0.05 and |fold change| ≥ 1.5 (Supplementary Table S4). We found 3,028 up-regulated and 2,713 down-regulated DEGs in CTE. In CTE/AD, 2,586 DEGs were up-regulated and 2,929 were down-regulated. In AD, 2,576 DEGs were up-regulated and 2,382 were down-regulated. The FPKM levels were calculated to compare the expression levels of each disease (Supplementary Table S5).
Weighted gene co-expression network analysis of CTE, CTE/AD, and AD. To explore the neuropathological features of TBI-related human brain disorders, we performed weighted gene co-expression network analysis (WGCNA). The 10,467 genes shared between CTE, CTE/AD, and AD were used to construct gene co-expression networks. Based on the topological overlap of the genes, modules of co-expressed genes were identified by step-by-step network construction.
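The DEG criterion described above (q-value < 0.05 and |fold change| ≥ 1.5) can be expressed as a simple filter. The sketch below is only an illustration of that rule; the gene names and numbers are made up, not taken from the study's data.

```python
# DEG filter as described in the text: a gene counts as differentially
# expressed when q-value < 0.05 and |fold change| >= 1.5.
# All gene names and values below are illustrative placeholders.

def is_deg(q_value: float, fold_change: float,
           q_cut: float = 0.05, fc_cut: float = 1.5) -> bool:
    """Return True if a gene passes both DEG thresholds."""
    return q_value < q_cut and abs(fold_change) >= fc_cut

genes = {                     # gene -> (q-value, signed fold change)
    "SYT1":  (0.001, -2.1),   # passes: down-regulated DEG
    "HLA-B": (0.010, 1.8),    # passes: up-regulated DEG
    "GAPDH": (0.400, 1.1),    # fails both thresholds
}
degs = sorted(g for g, (q, fc) in genes.items() if is_deg(q, fc))
print(degs)  # ['HLA-B', 'SYT1']
```

The sign of the fold change then splits the DEG list into the up- and down-regulated sets counted in the text.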
The modules, labeled by colors, are depicted in the hierarchical clustering dendrogram (Fig. 2A). We identified 4 modules, ranging in size from 1,716 genes in the blue module to 3,596 genes in the grey module. The grey module was excluded from the analysis because it was a collection of genes that could not be aggregated into other modules. We condensed the gene expression pattern within a module into a module eigengene, which is the first principal component of a module and is representative of the gene expression profiles of that module. To test whether or not module eigengenes are associated with diseases, we defined 3 traits: CTE, CTE/AD, and AD. Based on the PCA and hierarchical clustering analysis, we found a similar expression pattern between CTE, CTE/AD, and AD. As a result, we added a trait named CTE_CTE/AD_AD, which includes CTE, CTE/AD, and AD. We obtained the relationships between the module eigengenes and the 4 traits (Fig. 2B). The results revealed that the brown module was positively correlated with the CTE_CTE/AD_AD trait. The turquoise module had the strongest negative correlation with CTE_CTE/AD_AD (R = −0.8, p < 2 × 10⁻⁸). We plotted a scatter plot of gene significance for CTE_CTE/AD_AD and module membership of genes in the turquoise module (Supplementary Fig. S1). We identified a number of genes of high significance for CTE_CTE/AD_AD as well as high module membership in the module. Through the topological overlap matrix, we also discovered that the turquoise module was the most highly interconnected module (Supplementary Fig. S2). We represented gene symbol, locuslinkID, gene significance (GS) for CTE_CTE/AD_AD, module membership (MM), and p-values of all modules in Supplementary Table S7.
Gene set enrichment analysis in CTE, CTE/AD, and AD. For modules with a strong negative correlation with the trait, the hub genes in the module should have negative GS and high positive MM.
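A module eigengene, as described above, is the first principal component of a module's (samples × genes) expression matrix. The study used the WGCNA R package; the sketch below only illustrates the underlying concept with a plain SVD on random stand-in data (34 samples as in the study; the module size of 50 genes is arbitrary).

```python
import numpy as np

# Illustrative module eigengene: PC1 of a (samples x genes) expression
# matrix for one module. Random data stands in for real expression values.
rng = np.random.default_rng(0)
expr = rng.normal(size=(34, 50))               # 34 samples, 50 module genes

centered = expr - expr.mean(axis=0)            # center each gene
u, s, vt = np.linalg.svd(centered, full_matrices=False)
eigengene = u[:, 0]                            # per-sample eigengene scores

# Fraction of module variance captured by the eigengene (PC1).
var_explained = float(s[0] ** 2 / (s ** 2).sum())
print(eigengene.shape, round(var_explained, 3))
```

Correlating such per-sample eigengene scores with a binary trait vector (e.g. disease vs. normal) gives the module-trait relationships reported in Fig. 2B.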
We defined the 1,603 hub genes of the turquoise module as GS < 0 and MM > 0.5, because the turquoise module was negatively correlated with the CTE_CTE/AD_AD trait. In the case of the brown module, we identified 896 hub genes with positive GS and high positive MM (MM > 0.5), because the brown module was positively correlated with the CTE_CTE/AD_AD trait. To identify the biological functions associated with the modules, we performed GO enrichment analysis using up- and down-regulated genes in each module. In the turquoise module, 622 hub genes were commonly dysregulated in CTE, CTE/AD, and AD. Neuron part, synapse, and synapse part pathways were remarkably enriched in the 622 down-regulated genes (Fig. 2C). Moreover, the number of genes belonging to the neuron pathway was the greatest in the turquoise module. In the brown module, 17 hub genes were commonly up-regulated in CTE, CTE/AD, and AD, and were not enriched in any pathways.
Synaptic transmission-related genes were down-regulated in CTE, CTE/AD, and AD. To visualize gene-gene interactions, we obtained gene-gene connection scores from WGCNA. We selected the 622 commonly down-regulated genes in CTE, CTE/AD, and AD. We filtered gene-gene connection scores with a weight cutoff value > 0.22. We selected the top 10 hub genes that were most highly connected to other genes in the turquoise module, including GABRA1, PTPN5, RAB3A, KIAA1549L, SLC12A5, PLK2, SYT7, AP3B2, STXBP1, and PCSK2 (Supplementary Table S8). We represented the 266 genes that were connected with the top 10 hub genes (Fig. 3A). Among the 266 genes, 67 genes belonged to neuron part, synapse, and synapse part pathways and are shown in red. In particular, SYT7 was among the top 10 hub genes and was significantly down-regulated in CTE, CTE/AD, and AD.
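The two selection rules above (hub genes filtered by GS and MM per module, and gene-gene edges kept only when the connection weight exceeds 0.22) can be sketched as follows. The GS/MM values and edge weights below are invented for illustration; only the thresholds come from the text.

```python
# Hub-gene and edge filters as described in the text.
# GS/MM values and edge weights are illustrative placeholders.

def turquoise_hub(gs: float, mm: float) -> bool:
    """Turquoise module is negatively correlated with the trait,
    so its hubs need GS < 0 and MM > 0.5."""
    return gs < 0 and mm > 0.5

def brown_hub(gs: float, mm: float) -> bool:
    """Brown module is positively correlated: GS > 0 and MM > 0.5."""
    return gs > 0 and mm > 0.5

gene_stats = {              # gene -> (GS, MM), illustrative values
    "SYT7":   (-0.72, 0.88),
    "RAB3A":  (-0.65, 0.81),
    "HLA-E":  (0.40, 0.61),
    "GENE_X": (-0.10, 0.30),
}
turquoise_hubs = sorted(g for g, (gs, mm) in gene_stats.items()
                        if turquoise_hub(gs, mm))

edges = {                   # (gene_a, gene_b) -> connection weight
    ("SYT7", "RAB3A"): 0.31,
    ("SYT7", "GENE_X"): 0.05,
}
kept_edges = {pair for pair, w in edges.items() if w > 0.22}

print(turquoise_hubs, kept_edges)
```

Ranking hubs by how many kept edges they participate in would then yield a "top 10 most connected" list of the kind reported for the turquoise module.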
Other synaptotagmin family genes (SYT1, SYT4, and SYT5) were also involved in neuron, synapse, and synapse part pathways. Synaptotagmin family genes (SYT1, SYT4, SYT5, SYT7, and SYT13) were remarkably down-regulated in CTE, CTE/AD, and AD (Fig. 3B). To verify whether SYT1 immunoreactivity is altered in CTE and AD brains, we performed immunohistochemistry. SYT1 immunoreactivity was intensely found within the cytosolic compartment of neurons in normal subjects. However, SYT1 immunoreactivity was markedly reduced in the neuronal cell bodies and dendrites of CTE and AD patients (Fig. 3C). Densitometry analysis exhibited that SYT1 immunoreactivity is significantly decreased in the neuronal cell bodies of CTE and AD patients compared to normal subjects (Fig. 3D). To validate our transcriptomic results, we performed quantitative real-time PCR (qPCR) analysis on the postmortem brain tissue of CTE, AD, and normal subjects (Supplementary Table S9). In concordance with the transcriptome data, we found that the SYT1 mRNA level was significantly decreased in both CTE and AD patients compared to normal subjects (Fig. 3E).
[Figure 3 caption, excerpt: (C) The SYT1 immunoreactivity was markedly reduced in the cytosolic compartment of cortical neurons in CTE and AD postmortem brain compared to the normal subject. The nuclei were counterstained with hematoxylin (blue). Scale bars: black, 20 µm; white, 10 µm. (D) Densitometry analysis showed that the intensity of SYT1 is significantly decreased in the cortical neurons of CTE and AD postmortem brain compared to normal postmortem brain. The Student's t-test (unpaired) was performed for statistical analysis. **Significantly different at p < 0.001. (E) The mRNA level of SYT1 was significantly reduced in CTE and AD patients compared to normal subjects. (F) Synaptotagmin 1 (SYT1) protein was down-regulated in the cortex …]
(2020) 10:8811 | https://doi.org/10.1038/s41598-020-65916-y
Moreover, Western blot and densitometry analyses showed that the protein level of SYT1 was decreased in both CTE and AD patients compared to normal subjects (Fig. 3F-H and Supplementary Fig. S3).
Memory function-related genes were down-regulated in CTE, CTE/AD, and AD. Among the genes mentioned above, SYT1 and SYT7 have been reported to play a critical role in memory function 10-12. We also looked at the expression of other genes related to memory function. For example, genes that play an important role in the long-term potentiation (LTP) process were prominently dysregulated in CTE, CTE/AD, and AD (Fig. 4). AMPA receptors contain four subunits, designated GluA1 (GRIA1), GluA2 (GRIA2), GluA3 (GRIA3), and GluA4 (GRIA4). Among them, GRIA2, GRIA3, and GRIA4 were strikingly down-regulated in all three disease groups (Supplementary Fig. S4A). CaMKII subfamily genes (CAMK2A and CAMK2D) were remarkably dysregulated in CTE, CTE/AD, and AD (Supplementary Fig. S4B). PKA catalytic subunits (PRKACA and PRKACB) were also down-regulated in CTE, CTE/AD, and AD (Supplementary Fig. S4C). PRKCG, one of the major forms of PKC, was commonly down-regulated in all three disease groups (Supplementary Fig. S4D).
Cell adhesion molecules (CAMs)-associated genes were up-regulated only in CTE. Based on the differentially expressed genes (DEGs) derived from each of the diseases, we identified the number of common and distinct DEGs in CTE, CTE/AD, and AD (Fig. 5A,B). Among them, we focused on the 964 up-regulated and 455 down-regulated genes in CTE for investigating unique transcriptome signatures. In KEGG enrichment analysis of CTE, the cell adhesion molecules (CAMs) pathway was highly enriched in the up-regulated genes (Fig. 5C). The down-regulated genes were not enriched in any pathways. In the CAMs pathway, MHC class I-related genes such as HLA-B, HLA-C, and HLA-E were significantly up-regulated in CTE (Fig. 5D).
Discussion
CTE is a progressive neurodegenerative disorder that leads to behavior, mood, and memory dysfunction. While the neuropathological features of CTE have been demonstrated, the fundamental gene regulatory mechanisms and biological pathways of CTE-related diseases remain unclear. Our previous study revealed the mechanisms of how TBI causes the neuropathological sequelae of tauopathy in CTE 13. In this study, we focused on common and unique transcriptome features of CTE and compared them to those of CTE/AD and AD. CTE, CTE/AD, and AD showed common gene expression changes by cell type. Neuronal genes were down-regulated in CTE, CTE/AD, and AD. Neuron loss is known to be found not only in normal aging, but also in the early stage of disease development 14. Neuronal death correlates with the severity of memory impairments and leads to an inability to restore the neuronal organization of cerebral structures and to add new neurons to them. Therefore, neuronal dysregulation impairs normal memory function and learning processes in CTE, CTE/AD, and AD. Up-regulation of the genes of astrocytes, oligodendrocytes, endothelial cells and microglia was shown in CTE, CTE/AD, and AD. Atrophy of astrocytes causes loss of synaptic connectivity, imbalance of neurotransmitter homeostasis, and neuronal death 15. Oligodendrocytes are necessary for nerve repair after injury, preventing cell death and maintaining myelin restoration 16. Microglia are recognized as essential players in maintaining brain homeostasis and protecting the brain from infections and insults 17,18. Microglia also exert a neuroprotective role by phagocytosing and clearing Aβ aggregates in AD 19,20. Endothelial cells play a pivotal role in maintaining cardiovascular homeostasis 21. Based on these results, we assume that the changed gene expression of astrocytes, oligodendrocytes, microglia, and endothelial cells has a great impact on various biological mechanisms in CTE, CTE/AD, and AD.
Changes in the expression of synapse- and synaptic transmission-related genes represented the common transcriptome features of CTE, CTE/AD, and AD. Synapses are essential for neuronal function and communication. They connect the neurons in the brain by passing an electrical or chemical signal from neuron to neuron. Herein, we found that synaptotagmin genes (SYT1, SYT4, SYT5, SYT7 and SYT13) were significantly down-regulated in CTE, CTE/AD, and AD. Synaptotagmins are Ca²⁺-binding proteins that play a pivotal role in vesicle fusion to the synaptic membrane. SYT1 and SYT7 function as the main Ca²⁺ sensors for fast and slow presynaptic vesicle exocytosis, respectively. Previous studies have shown that SYT1 and SYT7 act as redundant Ca²⁺ sensors for AMPA exocytosis during LTP 10,11. Therefore, we assume that down-regulation of synaptotagmins impacts memory function in CTE, CTE/AD, and AD. Previous studies have shown that synaptic dysfunction results in cognitive impairment in AD and other dementias 22,23. Our results suggest that dysregulation of synaptic transmission correlates with cognitive deficits and memory dysfunction in CTE as well as AD. In the current study, we found down-regulation of the α (alpha) and δ (delta) chains of CaMKII (CAMK2A and CAMK2D) in CTE, CTE/AD, and AD. CaMKII is a major synaptic protein and an important mediator in the LTP process 24.
[Figure 3 caption, continued: … of postmortem brain of CTE and AD patients compared to normal subjects. Western blot data represent three cases of normal subjects, CTE patients, and AD patients, respectively. (G) Densitometry analysis showed that the SYT1 protein level was significantly reduced in the postmortem brain of CTE patients. (H) Densitometry analysis showed that the SYT1 protein level was significantly reduced in the postmortem brain of AD patients. *Significantly different from the normal subject at p < 0.05.]
The LTP process is the persistent strengthening of synaptic transmission, and this process underlies the molecular mechanism of learning and memory. Glutamate is released from the presynaptic terminal and binds to its specific receptors at the post-synapse. Glutamate displaces Mg2+ from the NMDA receptors and Ca2+ flows through the opened NMDA receptors. CaMKII detects the influx of Ca2+ and thus triggers the biochemical cascade that enhances synaptic transmission 25 . CaMKII alters glutamate sensitivity by phosphorylating the AMPA receptor [26][27][28][29] . It has been reported that CAMK2A knockout mice show the LTP process reduced by half 30 . PRKACA and PRKACB, which encode the catalytic subunits of PKA, were down-regulated in CTE, CTE/AD, and AD. PKA (protein kinase A), a cyclic AMP (cAMP)-dependent protein kinase, plays a pivotal role in the LTP process 31 . PKA phosphorylates the GluA4 and GluA1 subunits to regulate the synaptic incorporation of AMPA receptors. PRKCG, one of the isozymes of PKC, was significantly down-regulated in CTE, CTE/AD, and AD. PKC is a family of serine/threonine protein kinases involved in neuronal functions, such as modulation of ion channels and synaptic transmission [32][33][34] . PKC decreases Mg2+ affinity in the NMDA receptor channel to increase channel open time, which enhances the response of NMDA receptors 35,36 . PKC phosphorylates the GluA1 and GluA4 subunits of AMPA receptors to alter glutamate sensitivity 37 . In particular, PRKCG modulates the GluA4 subunit of AMPA receptors by directly binding to GluA4 38 . Previous studies demonstrated that PKCγ (PRKCG) mutant mice show an impaired LTP process 39,40 . On the other hand, dysregulation of GRIA2, GRIA3, and GRIA4 gene expression was also found in CTE, CTE/AD, and AD. AMPA receptors are composed of four subunits (GRIA1-4) and mediate most of the excitatory synaptic transmission 41 .
AMPA receptors are known to be phosphorylated by protein kinases such as PKA, PKC, and CaMKII, and phosphorylation of AMPA receptors potentiates their function. Among the AMPA receptor subunits, GRIA1 (GluA1) and GRIA4 (GluA4) mainly act in the LTP process 42 . Notably, the unique transcriptome signature of CTE was associated with CAMs (cell adhesion molecules). CAMs mediate interactions between cells and the surrounding extracellular matrix that are essential for controlling cell survival, activation, migration, and proliferation 43 . In the brain, CAMs are important for neural network formation, including axon-axon contacts, axon-astrocyte contacts, synapse formation, and regulation of synaptic structure 44,45 . In addition, MHC class I molecules, which are expressed in neurons, showed remarkable expression changes in CTE. MHC class I molecules regulate neuronal differentiation and affect synaptic plasticity, axonal regeneration, and the T cell-mediated response [46][47][48] . Our data imply that MHC class I molecules may contribute to the pathogenesis of CTE, but the exact mechanism remains to be investigated in future studies. In summary, we identified alterations of common and unique transcriptome signatures in head trauma-related diseases (Fig. 6). Deregulation of synaptic transmission- and memory function-associated transcriptomes was common to CTE, CTE/AD, and AD. On the other hand, up-regulation of CAMs-associated transcriptome signatures was unique to CTE. Thus, the altered transcriptome signatures provide insight into the pathogenesis of head trauma-related diseases. Materials and Methods Human tissues. Neuropathological processing of 10 control, 8 CTE, 6 CTE/AD, and 10 AD human brain samples was performed using procedures previously established by the Boston University Alzheimer's Disease Center (BUADC) [20]. Next of kin provided informed consent for participation and brain donation.
Institutional review board approval for ethical permission was obtained through the BUADC and CTE center. This study was reviewed by the Institutional Review Board of the Boston University School of Medicine (Protocol H-28974) and was approved for exemption because it only included tissues collected from post-mortem subjects, which are not classified as human subjects. The study was performed in accordance with institutional regulatory guidelines and the principles of human subject protection in the Declaration of Helsinki. Clinical features including gender, stage, age of symptom onset, and regional pathology are described in Supplementary Table S1. CTE is characterized pathologically by frontal and temporal lobe atrophy. In particular, the temporal lobe, including the hippocampus and its surrounding regions, is critical for memory function. The temporal lobe is also involved in the primary organization of sensory input. The most common symptoms of neurodegenerative diseases are memory problems and sensory processing disorders. Moreover, hyperphosphorylated tau pathology is found in most regions of the cerebral cortex and the temporal lobe in CTE. Thus, temporal lobe dysfunction is highly associated with neurodegenerative processes and the neuropathology of CTE. In this context, we selected the temporal lobe of post-mortem brain for the transcriptome analysis. Transcriptome sequencing and analysis. Total RNA was extracted, sequencing libraries were prepared using the Illumina TruSeq RNA sample preparation kit, and sequencing was performed on the Illumina HiSeq2000 platform. Raw reads were aligned to the human genome (GRCh37.p13) using the STAR 2-pass method. STAR is an accurate aligner for high-throughput RNA-seq data; the 2-pass protocol first generates genome index files and then maps the reads to the genome 49 .
Duplicated reads were removed with Picard MarkDuplicates, and the filtered reads were further processed for variant calling using the GATK, including insertion/deletion realignment and base quality score recalibration. Weighted Gene Co-expression Network Analysis (WGCNA). The normalized read counts were used to construct signed co-expression networks using the WGCNA package in R. We used the step-by-step network construction and module detection method because the automatic construction method is not appropriate for our large data set (10,467 genes). The network was constructed by obtaining a dissimilarity matrix based on the topological overlap. The adjacency matrix was calculated by raising the correlation matrix to a soft-thresholding power of 14, which was chosen to attain scale-free topology. A gene dendrogram was generated and module colors were assigned. We calculated the module eigengene (ME) value, defined as the first principal component of a given module. The dendrogram cut height for module merging was 0.8. Merged module eigengenes were used to test the association of modules with diseases. Module membership (MM) was calculated as the correlation between gene expression levels and the module eigengene. Gene significance (GS) was calculated as the correlation between a gene and external traits. We defined hub genes using MM and GS values: if a module was positively correlated with the trait, we selected hub genes with positive GS and high positive MM (MM > 0.5); if a module was negatively correlated with the trait, we selected hub genes with negative GS and high positive MM (MM > 0.5). To facilitate biological interpretation, we applied the DEGs among hub genes to the Molecular Signatures Database (MSigDB) of Gene Set Enrichment Analysis (GSEA) 53 . The Gene Ontology (GO) gene sets of MSigDB were selected for analysis. For network analysis, we used the WGCNA algorithm to calculate gene-gene interaction levels.
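As an illustration of the network-construction steps just described (signed adjacency at soft-thresholding power 14, topological-overlap dissimilarity, and a module eigengene as the first principal component), a minimal numpy sketch follows. The actual analysis used the WGCNA R package; the function names here are illustrative only.

```python
import numpy as np

def signed_adjacency(expr, beta=14):
    """Signed adjacency: a_ij = ((1 + cor(i, j)) / 2) ** beta.
    expr: genes x samples matrix; beta=14 mirrors the soft power in the text."""
    cor = np.corrcoef(expr)
    return ((1.0 + cor) / 2.0) ** beta

def tom_dissimilarity(adj):
    """Topological-overlap dissimilarity used for hierarchical clustering."""
    a = adj.copy()
    np.fill_diagonal(a, 0.0)
    k = a.sum(axis=1)                      # weighted connectivity
    shared = a @ a                          # weighted shared neighbours
    n = a.shape[0]
    tom = np.ones_like(a)
    for i in range(n):
        for j in range(n):
            if i != j:
                tom[i, j] = (shared[i, j] + a[i, j]) / (min(k[i], k[j]) + 1.0 - a[i, j])
    return 1.0 - tom

def module_eigengene(expr_module):
    """Module eigengene: first principal component of a module's expression."""
    x = expr_module - expr_module.mean(axis=1, keepdims=True)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return vt[0]                            # one value per sample
```

Module membership would then be the correlation of each gene's profile with this eigengene, and hub genes are those with MM > 0.5 and the appropriate sign of gene significance.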
Based on the gene-gene interaction levels, the top 10 hub genes were visualized with VisANT (weight cut-off > 0.22). Differential gene expression analysis. For the gene expression profiling, we normalized read counts using the regularized log transformation method of DESeq2 54 . The calculated p-values were adjusted to q-values for multiple testing using the Benjamini-Hochberg correction. Genes with |fold change| ≥ 1.5 and q-value < 0.05 were classified as significantly differentially regulated. For visualization, PCA was plotted by disease using the plotPCA function in DESeq2. The normalized read counts were also used for hierarchical clustering analysis. Heatmaps were constructed using the dnet R package. FPKM (fragments per kilobase of exon per million mapped reads) was calculated for each gene and used for analyses. To find gene sets significantly enriched in the DEGs of CTE, we applied them to the Molecular Signatures Database (MSigDB) of Gene Set Enrichment Analysis (GSEA) 53 . The Kyoto Encyclopedia of Genes and Genomes (KEGG) gene sets of MSigDB were selected for analysis. Immunohistochemistry analysis. To detect SYT1 in human postmortem brain tissues, we performed immunohistochemistry as described previously 55 . Coronal paraffin-embedded tissue sections (10 μm) were reacted with 3% H2O2 and then incubated with blocking solution for 1 hr. The tissue sections were incubated with anti-synaptotagmin 1 antibody (1:100 dilution; Abcam, ab131551) for 24 hr. After the secondary antibody reaction, the tissue slides were further processed with the Vector ABC Kit (Vector Lab PK-6100). DAB chromogen (Sigma D5637) was used to develop the immunoreactive signals. The nuclei were counterstained with hematoxylin. The tissue slides were examined under a bright-field microscope and the intensity of immunoreactivity was analyzed using Multi-Gauge Software (Fuji Photo Film Co., Ltd., Japan).
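The DEG-calling criteria described in the differential expression section (Benjamini-Hochberg adjustment of p-values to q-values, then |fold change| ≥ 1.5 and q < 0.05) can be sketched as below. DESeq2 computes its own test statistics internally, so this is only an illustration of the thresholding step, with hypothetical function names.

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjustment of p-values to q-values (FDR)."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    ranked = p[order] * n / (np.arange(n) + 1)
    # enforce monotonicity from the largest rank downwards
    q = np.minimum.accumulate(ranked[::-1])[::-1]
    q = np.minimum(q, 1.0)
    out = np.empty(n)
    out[order] = q
    return out

def call_degs(fold_change, pvals, fc_cut=1.5, q_cut=0.05):
    """Apply the text's DEG criteria: |fold change| >= 1.5 and q < 0.05."""
    q = benjamini_hochberg(pvals)
    return (np.abs(np.asarray(fold_change, float)) >= fc_cut) & (q < q_cut)
```

A gene passing only one of the two filters (e.g. a large fold change with a non-significant q-value) is not called, matching the conjunctive criterion in the text.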
Quantitative real-time PCR. Total RNA was extracted from the frozen brain tissues using TRIzol reagent (MRC, TR118) as previously described 13,56 . Fifty nanograms of RNA was used as a template for quantitative RT-PCR amplification, using SYBR Green Real-time PCR Master Mix (Toyobo, QPK-201, Osaka, Japan). Primers were standardized in the linear range of the cycle before the onset of the plateau. The primer sequences are shown in Supplementary Table S9. GAPDH was used as an internal control. Real-time data acquisition was performed using a LightCycler96 Real-Time PCR System (Roche Diagnostics, Indianapolis, IN, USA) under the following cycling conditions: 95 °C for 1 min × 1 cycle, followed by 95 °C for 15 s and 60 °C for 1 min × 45 cycles. The relative gene expression was analyzed using the LightCycler96 software and expressed as Ct, the number of cycles needed to generate a fluorescent signal above a predefined threshold. Data availability. The RNA sequencing data are available under European Nucleotide Archive (ENA) accession no. ERP110728.
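The Ct-based relative quantification with GAPDH as internal control described in the qPCR section can be sketched as below. The 2^(-ΔΔCt) (Livak) formula is an assumption here: the text reports Ct values and an internal control but does not name the exact calculation.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2**(-ddCt) relative quantification (Livak method; assumed, see lead-in).

    ct_target / ct_ref: Ct of the gene of interest and of GAPDH in the disease
    sample; *_ctrl: the same in the control sample. Lower Ct = more template,
    so a positive ddCt means reduced expression relative to control.
    """
    d_ct_sample = ct_target - ct_ref            # normalize to GAPDH, sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # normalize to GAPDH, control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# A ddCt of 2 corresponds to a four-fold reduction versus control:
# relative_expression(26.0, 20.0, 24.0, 20.0) -> 0.25
```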
Assessing the effectiveness and cost effectiveness of adaptive e-Learning to improve dietary behaviour: protocol for a systematic review Background The composition of habitual diets is associated with adverse or protective effects on aspects of health. Consequently, UK public health policy strongly advocates dietary change for the improvement of population health and emphasises the importance of individual empowerment to improve health. A new and evolving area in the promotion of dietary behavioural change is e-Learning, the use of interactive electronic media to facilitate teaching and learning on a range of issues, including diet and health. The aims of this systematic review are to determine the effectiveness and cost-effectiveness of adaptive e-Learning for improving dietary behaviours. Methods/Design The research will consist of a systematic review and a cost-effectiveness analysis. Studies will be considered for the review if they are randomised controlled trials, involving participants aged 13 or over, which evaluate the effectiveness or efficacy of interactive software programmes for improving dietary behaviour. Primary outcome measures will be those related to dietary behaviours, including estimated intakes of energy, nutrients and dietary fibre, or the estimated number of servings per day of foods or food groups. Secondary outcome measures will be objective clinical measures that are likely to respond to changes in dietary behaviours, such as anthropometry or blood biochemistry. Knowledge, self-efficacy, intention and emotion will be examined as mediators of dietary behaviour change in order to explore potential mechanisms of action. Databases will be searched using a comprehensive four-part search strategy, and the results exported to a bibliographic database. 
Two review authors will independently screen results to identify potentially eligible studies, and will independently extract data from included studies, with any discrepancies at each stage settled by a third author. Standardised forms and criteria will be used. A descriptive analysis of included studies will describe study design, participants, the intervention, and outcomes. Statistical analyses appropriate to the data extracted, and an economic evaluation using a cost-utility analysis, will be undertaken if sufficient data exist, and effective components of successful interventions will be investigated. Discussion This review aims to provide comprehensive evidence of the effectiveness and cost-effectiveness of adaptive e-Learning interventions for dietary behaviour change, and explore potential psychological mechanisms of action and the effective components of effective interventions. This can inform policy makers and healthcare commissioners in deciding whether e-Learning should be part of a comprehensive response to the improvement of dietary behaviour for health, and if so which components should be present for interventions to be effective. The need for improved dietary behaviour The composition of habitual diets is associated with adverse or protective effects on health [1][2][3]. Specifically, diets high in saturated fats and sodium have been found to increase risk of cardiovascular diseases, while those high in fruit and vegetables and low in saturated fats have been linked with reductions in a range of diseases including certain cancers, cardiovascular disease and hypertension [4][5][6][7]. The WHO reports that the consumption of up to 600 g per day of fruit and vegetables could reduce the total worldwide burden of disease by 1.8%, and reduce the burden of ischaemic heart disease and ischaemic stroke by 31% and 19% respectively [8]. 
In the UK, the consumption of fruits and vegetables, dietary fibre, iron (premenopausal women only) and calcium is well below recommendations, whereas intakes of saturated fats and sodium exceed recommendations in large sections of the population [9]. Consequently, UK public health policy strongly advocates dietary change for the improvement of population health and emphasises the importance of individual empowerment to improve health [7,10], thereby shifting the focus of the National Health Service from treatment to prevention of illness [11,12]. Adaptive e-Learning via interactive computerised interventions A new and evolving area in the promotion of dietary behavioural change is e-Learning, the use of interactive electronic media to facilitate teaching and learning on a range of issues including health (see Additional file 1 for definitions of terms used in e-Learning). E-Learning has grown out of recent developments in information and communication technology, such as the Internet, interactive computer programs, interactive television, and mobile phones [13][14][15][16][17], technologies which are fast becoming more accessible to the general population. (For example, an estimated 70% of the population in the UK has access to the Internet and this percentage is likely to continue to grow [18].) This high level of accessibility, combined with emerging advances in computer processing power, data transmission and data storage, makes interactive e-Learning a potentially powerful and cost-effective medium for improving dietary behaviour [19][20][21]. It also has a number of distinct advantages compared with traditional approaches for the promotion of dietary behaviour change, such as the possibility of tailoring to individual circumstances [22], translating complex information through video, graphics, and audio systems [23], and potential cost savings on face-to-face interventions involving healthcare practitioners.
The evidence that individualised, tailored e-Learning approaches are more effective than traditional non-tailored interventions [24] has given them a promising lead in health education [25][26][27]. E-Learning interventions have been classified into three generations: 1st generation interventions use computers to tailor print materials; 2nd generation interventions use interactive technology delivered on computers; and 3rd generation interventions use portable devices such as mobile phones, for more immediate interaction and feedback [28]. Exploration of the properties of different e-Learning interventions is now required in order to determine possible effective components (with each component comprising both delivery and content; see fig 1). Potential cognitive and emotional mediators of dietary behaviour change should also be explored, in order to elicit potential mechanisms of action (see fig 2). There is a risk that e-Health and use of new technologies in health care might widen health inequalities on either side of the 'digital divide'. Experience suggests that there are two dimensions to the digital divide and its impact on health inequalities: access (to physical hardware and software) and accessibility (or the ability of people with differing literacy/health literacy/IT literacy to use or apply information and support supplied through e-Learning). It has been shown that it is possible to deliver e-health interventions specifically designed for people with low literacy skills (e.g. Hispanics in the Southern USA [29], homeless drug users [30], and single teenage mothers [31]). What remains less clear is the extent to which people with low literacy skills will feel comfortable using a computer, or will be able to act on information or advice provided over the Internet. Interactive e-Learning programmes to promote positive dietary behavioural changes have the potential to benefit population health.
However, before e-Learning can be hailed as a dietary behaviour change intervention of the future, the effective components and mechanisms of action of e-Learning programmes must be identified, and its cost-effectiveness established in different contexts. Previous reviews Three systematic reviews have examined the effectiveness of e-Learning for dietary behaviour change. The first [32] was restricted to first-generation interventions for dietary change and did not include any web or Internet-based interventions. The second [33] examined a broad range of second-generation interactive interventions for dietary behaviour change. Both of these reviews reported studies published prior to 2006 that were carried out in a variety of settings. The third review [28] was more recent, reviewing second- and third-generation interventions trialled up to 2008, but only in primary prevention in adults (no participants with diagnosed disease). All reviews were restricted to publications in the English language, and limited their searches to relatively few databases, increasing the potential for publication bias. The conclusions drawn from these systematic reviews were that e-Learning shows some promise for dietary behaviour change, although the findings were mixed. Inter-study heterogeneity with respect to study design, participants, measures, and outcomes precluded meta-analysis to estimate pooled intervention effects. Moreover, the cost-effectiveness of e-Learning was not evaluated in any review, nor was there any attempt to identify potential mechanisms of action. The third review assessed internal and external validity of trials, and began to isolate effective components. Our review will provide a comprehensive and up-to-date account of e-Learning technologies in use for promoting dietary behavioural change, and an evaluation of their effectiveness and cost-effectiveness in improving dietary behaviour as well as clinical outcomes.
We will investigate the psychological theories that underlie the process of behaviour change [34][35][36], and look for key behaviour-change techniques that have been shown to be associated with healthy eating behaviours [37]. Where these have been used to inform intervention design in trials, we will explore potential mediators of behaviour such as knowledge, intention, self-efficacy and emotions with a view to understanding mechanisms of action. We will also explore the different components of trialled interventions, in order to find the effective components of successful e-Learning interventions for dietary change. We will use a systematic search strategy (described below) to identify relevant studies and to reduce the potential for reporting biases, and use wider inclusion criteria than in previous reviews to enable a wider range of conclusions to be drawn. Preliminary literature searching, including the NHS's Economic Evaluation Database, suggests that the published evidence on cost-effectiveness is extremely limited. Therefore, we will conduct a de novo economic evaluation of the intervention studies, looking at cost-effectiveness in England and Wales, if the required clinical effectiveness data are available from the primary trials. We will conclude with policy recommendations and recommendations for future primary research. AIMS of the Review The aims of this systematic review are to determine the effectiveness and cost-effectiveness of adaptive e-Learning for improving dietary behaviours. 
The specific objectives are to: • Describe the range of e-Learning technologies in use for promoting dietary behavioural change; • Evaluate interactive e-Learning effectiveness in terms of improvement in dietary behaviour and clinical outcomes; • Explore the properties of different e-Learning interventions in order to determine possible effective components of successful e-Learning interventions for dietary behaviour change; • Investigate potential explanations of dietary behaviour change, and mechanisms of action; • Evaluate cost-effectiveness compared with current standard interventions, and likely budget impact in England & Wales. Final outputs will be a report to the UK National Institute for Health Research (NIHR) Health Technology Assessment (HTA) programme, and a peer-reviewed paper. Methods/Design Design The research will consist of a systematic review and a cost-effectiveness analysis. Types of study We will include randomised controlled trials (RCTs) for evidence of effectiveness, and economic evaluations for evidence of cost-effectiveness. Types of population Adolescents or adults aged 13 years and above who have participated in a study designed to evaluate the effectiveness of e-Learning to promote dietary behavioural change. We shall include all clinical conditions where dietary advice plays a major part in case management. Types of intervention Interventions will be included if they are interactive computer software programmes that tailor output according to user input (second and third generation interventions). These include those where users enter personal data or make choices about information that alter pathways within programmes to produce tailored material and feedback that is personally relevant. Users may interact with the programmes as members of a small group, as well as individually. Programmes should be available directly to users and allow independent access without the need for any expert facilitation.
Interventions will be excluded if they are: First-generation tailored 'information only' (e.g. providing a leaflet or PDF); simple information packages with no interactive elements; non-interactive mass media interventions (such as TV advertisements); interventions designed to be used with others' help (e.g. teacher or health professional); interventions targeted at health professionals or teachers; computer-mediated delivery of individual health-care advice (e.g. online physicians); or electronic history-taking or risk assessment with no health promotion or interactive elements. Outcome measures We anticipate that most interventions will be aimed at dietary behaviours, and that trials are unlikely to have followed participants long enough to observe clinical changes. However, as dietary behaviour tends to be self-reported, it is prone to error (e.g. recall bias). Biological outcomes, on the other hand, are more objective and also more important for modelling purposes. We will therefore use dietary behaviour as our primary outcome, but we will attempt to obtain data that allow us to model the relationship between behaviours and clinical changes. Primary outcome measures The primary outcome variables will be those related to dietary behaviours. They will include estimated intakes or changes in intake of energy, nutrients, dietary fibre, foods or food groups. The dietary assessment tools or techniques used to estimate dietary behaviour will be critically examined in terms of quality. Secondary outcome measures Objective measures that are likely to respond to changes in dietary behaviours and are associated with adverse clinical outcomes will be examined, including measurements of anthropometric status and blood biochemistry. Other data We will also seek data on economic outcomes, including costs of providing the intervention and costs to the individual user; data on unintended adverse consequences of the interventions; and process outcomes (e.g. usage data).
Data relating to potential cognitive and emotional mediators of dietary behaviour will also be extracted. These will only be extracted if primary and/or secondary outcome data are available. Identification of eligible studies and data extraction Search strategy We have designed a four-part search strategy: Firstly, we will search electronic bibliographic databases for published work (see below for databases to be searched). Secondly, we will search the grey literature for unpublished work. Thirdly, we will search trials registers for ongoing and recently completed trials. Finally, we will search reference lists of published studies and contact authors and e-health research groups to check for more trials. All databases will be searched from 1990 (any studies conducted in the 1980s will be identified by searching the reference lists of included studies). There will be no restrictions by language. To ensure the review is reasonably up-to-date at reporting, the searches will be re-run immediately prior to analysis and further studies retrieved for inclusion. The search strategy comprises two concepts: Computer/Internet-based interventions, and dietary behaviour. (See Additional file 2 for full search strategies.) The databases that will be searched are: CINAHL, Cochrane Library, Dissertation Abstracts, EMBASE, ERIC, Global Health, HEED, HMIC, MEDLINE, PsycINFO, and Web of Science. Screening and review process All studies identified through the search process will be exported to a bibliographic database (EndNote version X3) for de-duplication and screening. Two review authors will independently examine the titles, abstracts, and keywords of electronic records for eligibility according to the inclusion criteria above. Results of this initial screening will be cross-referenced between the two review authors, and full-texts obtained for all potentially relevant reports of trials.
Full-texts of potentially eligible trials will go through a secondary screening by each reviewer using a screening form based on the inclusion criteria (see Additional file 3) for final inclusion in the review, with disagreements resolved by discussion with a third author. Reference lists of all eligible trials will be searched for further eligible trials. Data extraction Two review authors will independently extract relevant data using a standardised data extraction form (Additional file 4) in conjunction with a data extraction manual (Additional file 5). Trial managers will be contacted directly if the required data are not reported in the published study. Descriptive analysis We will describe all studies that meet the inclusion criteria, including:

1. Study design
a. Trial design and quality
b. Data collection methods, modes, and techniques; validity of tools
c. Adherence to protocol (we will attempt to retrieve the protocols of eligible studies to examine adherence to initial plans)

2. Outcomes
a. Primary and secondary outcomes measured
b. Information on process (ease of use) and usage (compliance)

Information on how access to the intervention was provided (e.g. free laptops/Internet access); the intended reading age (or other measure of technological literacy/skill required); and the socio-demographic characteristics of the participants will be used to address concerns over the digital divide. Where primary studies have included sub-group analyses of users with low income or low educational status, we will note these. If sufficient data are provided by the primary studies we will consider undertaking sub-group analyses of intervention effects in low-income and low educational status users. Statistical analysis We will use statistical software (Stata version 11) for data synthesis. In the presence of sufficient homogeneity (i.e.
comparable population, interventions and outcomes) we will pool the results of RCTs using a random-effects model, with standardised mean differences (SMDs) for continuous outcomes and odds ratios for binary outcomes, and calculate 95% confidence intervals and two-sided P values for each outcome. In studies where the effects of clustering have not been taken into account, we will adjust the standard deviations by the design effect, using intra-class coefficients if given in papers, or using external estimates obtained from similar studies [38]. In the absence of sufficient homogeneity, we will present tables of the quantitative results. We will assess selection bias using Egger's weighted regression method and Begg's rank correlation test. Heterogeneity among the trials' odds ratios will be assessed using both the χ2 test at a 5% significance level and the I2 statistic, the percentage of among-study variability that is due to true differences between studies (heterogeneity) rather than to sampling error. We will consider an I2 value greater than 50% to reflect substantial heterogeneity. We will conduct sensitivity analyses in order to investigate possible sources of heterogeneity including study quality (adequate vs. inadequate allocation concealment; low vs. high attrition) and socio-demographic factors that could act as effect modifiers (for example age, gender, sexuality and socioeconomic status). Details of each e-Learning programme will be presented in a table of study characteristics, and we will conduct exploratory, descriptive analyses of data available on effective components and mechanisms of action. Economic evaluation A decision-analytic model will be built to assess cost-effectiveness, so that intervention effects identified by the systematic review can be extrapolated beyond the observed trial periods [39].
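The random-effects pooling, I2 heterogeneity assessment, and design-effect adjustment planned in the statistical analysis section might look as follows in code. The review itself will use Stata; this Python sketch with illustrative names (a DerSimonian-Laird estimator is assumed, the standard random-effects method) is only a model of the calculations.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooling (DerSimonian-Laird) with Cochran's Q and I2.
    effects: per-study effect sizes (e.g. SMDs); variances: their sampling
    variances. Returns pooled effect, 95% CI, and I2 in percent."""
    y = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)          # Cochran's Q
    k = y.size
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)          # between-study variance
    w_star = 1.0 / (v + tau2)
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (q - (k - 1)) / q) * 100.0 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

def design_effect(cluster_size, icc):
    """Variance inflation for cluster-randomised trials: DE = 1 + (m - 1) * ICC."""
    return 1.0 + (cluster_size - 1.0) * icc
```

When all studies report identical effects, Q is zero, the between-study variance collapses to zero, and the random-effects result coincides with the fixed-effect one, with I2 = 0%.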
The aim of the evaluation will be to compare the cost-effectiveness of adaptive e-Learning technologies against other dietary interventions available in England and Wales. We will combine the results of the systematic review with expert advice to identify the relevant e-Learning technologies and appropriate comparators (e.g. group learning, individual contact with a dietician) and model the costs associated with each. The primary form of economic evaluation will be a cost-utility analysis, where health outcomes are expressed as quality-adjusted life-years (QALYs). The base case analysis will be performed from an NHS cost perspective. Future costs and health benefits will be discounted at 3.5% per annum. Results will be presented as expected costs, expected QALYs, incremental cost-effectiveness ratios, net benefit statistics and cost-effectiveness acceptability curves. The model structure will be informed by: (i) reviewing previously published decision models where the immediate objective has been to evaluate technologies designed to help people change dietary behaviour and (ii) the results of the systematic review with respect to the recorded outcomes. For example, if the trials report changes in BMI, then a Markov model could be constructed, with the health states defined in terms of BMI groupings. Intervention costs [40] and effects could then be simulated by movements through these health states, with higher BMI being associated with increased health care costs (including costs of health outcomes such as cardiovascular disease, type 2 diabetes and cancer) and increased probabilities of all-cause mortality from sources such as the British Regional Heart Study [41]. Depending on the chosen model structure, other literature reviews will also be performed to identify evidence for other parameters, such as the increased costs and the disutility associated with increasing levels of obesity.
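As a rough sketch of the proposed cost-utility framework (a Markov cohort over health states such as BMI groupings, 3.5% annual discounting, and incremental cost-effectiveness ratios), assuming entirely hypothetical transition, cost and utility inputs:

```python
import numpy as np

def run_markov(trans, state_costs, state_utils, start, years):
    """Propagate a cohort through health states (rows of `trans` sum to 1)
    and collect per-cycle expected costs and QALYs. The BMI-state structure
    is the hypothetical example the text itself floats, not a fixed design."""
    dist = np.asarray(start, float)
    costs, qalys = [], []
    for _ in range(years):
        costs.append(dist @ np.asarray(state_costs, float))
        qalys.append(dist @ np.asarray(state_utils, float))
        dist = dist @ np.asarray(trans, float)
    return np.array(costs), np.array(qalys)

def discounted_totals(costs, qalys, rate=0.035):
    """Discount yearly costs and QALYs at 3.5% per annum (year 0 undiscounted)."""
    costs = np.asarray(costs, float)
    qalys = np.asarray(qalys, float)
    disc = (1.0 + rate) ** -np.arange(costs.size)
    return float(np.sum(costs * disc)), float(np.sum(qalys * disc))

def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)
```

The real model would add state-specific mortality, intervention attrition and probabilistic sensitivity analysis; this sketch only shows how the discounted cost and QALY streams feed the ICER.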
Other variables for which additional searches might be required include evidence linking increases in fruit or vegetable intake with weight loss, and the reduction in the likelihood of cardiovascular disease following weight loss. Other important issues to incorporate in the model structure are likely to include attrition from the intervention, non-compliance, and the need to retain a degree of flexibility, as clinical studies are likely to report different outcomes (e.g. changes in behavioural and clinical outcomes). If the primary systematic review identifies a 'network' of relevant RCTs, consideration will be given to performing formal mixed- or indirect-treatment comparisons to allow cost-effectiveness comparisons to be made across all programmes [42].

Stakeholder involvement

Involvement of non-governmental organisations that represent a range of potential user groups has been an important part of the project development. Jane Landon, Deputy Chief Executive of the National Heart Forum, is a member of the investigative team, attends steering group meetings with the other co-investigators, and contributes to decisions made as the study progresses. The National Heart Forum (NHF) is an alliance of over 60 national organisations representing professional, academic, consumer, charity and public sector organisations throughout the UK, and therefore represents a large population of potential users of e-Learning for dietary behaviour change. In our experience, user input is particularly valuable in considering outcomes of interest to users and methods of disseminating results to user communities, thus contributing to public involvement in science.

Strengths and limitations of the review

Strengths of this review include unambiguous definitions and inclusion criteria, and a clear and systematic approach to searching, screening and reviewing studies and extracting data, using standardised forms and duplicating all stages.
Our search area is large enough and our inclusion criteria broad enough to encompass the widest range of interactive e-Learning interventions and dietary, clinical and behavioural outcomes, and so the review has the best chance of identifying effective components of effective interventions for translation into policy or further research. Our review will also pinpoint potential mechanisms of action in terms of the psychological theories of behaviour change employed in interventions, which will further inform the future development of e-Learning interventions. The final report to the HTA will allow for comprehensive statistical, economic and subgroup analyses, as well as the descriptive analysis not usually possible in the limited space available in academic journals. Although every effort will be made to locate unpublished trials, our findings may still be vulnerable to selective reporting, and despite a pre-defined and systematic approach to screening and reviewing, the study will still involve judgements made by review authors, either of which may lead to bias. This review will not look at cohort or other observational study designs, and therefore may not be able to evaluate the acceptability of, or preference for, e-Learning interventions.

Implications for policy and healthcare commissioning

This review aims to provide comprehensive evidence of the effectiveness and cost-effectiveness of adaptive e-Learning interventions for dietary behaviour change, and to explore potential psychological mechanisms of action and the effective components of effective interventions. This can inform policy makers and healthcare commissioners in deciding whether e-Learning should be part of a comprehensive response to the improvement of dietary behaviour for health, and if so, which components should be present for interventions to be effective.
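The mixed- or indirect-treatment comparisons mentioned in the analysis plan can be illustrated with the standard Bucher adjusted indirect comparison, sketched below. The function name and the example effect sizes are our own illustration, not the protocol's specified method:

```python
import math

def bucher_indirect(d_ab, se_ab, d_cb, se_cb):
    """Adjusted indirect comparison of treatments A vs C through a
    common comparator B (Bucher method): d_AC = d_AB - d_CB, with the
    variances summing. Effects are on the log-odds-ratio scale."""
    d_ac = d_ab - d_cb
    se_ac = math.sqrt(se_ab ** 2 + se_cb ** 2)
    ci = (d_ac - 1.96 * se_ac, d_ac + 1.96 * se_ac)  # 95% CI
    return d_ac, se_ac, ci
```

Note that the indirect estimate is always less precise than either direct comparison, since the two standard errors combine; this is one reason a full network of trials is preferable when available.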
Eurocentrism and the Contribution of Ibn Khaldun to the Growth of Sociology

It is generally believed that sociology originated in Europe in the 19th century, and the paternity of the discipline is commonly attributed to the French sociologist Auguste Comte. However, reflections of a sociological nature can be found in the work of the 14th-century North African historian and philosopher Ibn Khaldun, yet his contribution is little acknowledged by European scholars in their works. This paper therefore attempts to examine how Eurocentrism is embedded in the writing of European scholars and to unpack the contribution of Ibn Khaldun to the growth of sociology. In the first part of the essay, I argue that the perspectives of European scholars are mainly Eurocentric and parochial in their accounts of the culture, language and other aspects of non-European societies. In the second part, I argue that Ibn Khaldun's contribution to the field of sociology is largely ignored, even though his work dealt with society and human character, political organization and government, differences between rural and urban populations, kinship, social solidarity, and the interplay between economic conditions and social organization. Nevertheless, although Ibn Khaldun's ideas greatly impressed some European thinkers in the 19th century, prompting them to regard him as the progenitor of sociology, the question remains as to how his ideas and theories have been appropriated by contemporary social scientists in their works.
Introduction

The historical origins of sociology, as recounted by European scholars, generally reflect Eurocentrism, ignorance and parochialism. Eurocentrism is a deeply rooted belief in the supremacy of Western civilization, and it constitutes an integral part of the Western intellectual tradition. The biased assessment and evaluation of non-Western societies from a European perspective can also be regarded as Eurocentrism (Alatas, 2007). This characteristic appears in the Western view of civilization and modernity, in the history of ideas, and in the histories of science and technology, art and medicine. Eurocentrism, understood as the Western view of non-Western cultures and civilizations, fails to acknowledge or recognize the contributions of non-Western scholars. By and large, the Western attitude towards other cultures and civilizations has been obnoxious and parochial. Eurocentrism is a significant manifestation of the ideology of modern capitalism (Amin, 2009) and shapes the thinking of the general public as well as of scholars in the social sciences.

Sociology as a field of study is generally believed to have originated in Europe in the 19th century, and the paternity of the discipline is commonly attributed to the French sociologist and social philosopher Auguste Comte. However, this view can be largely debunked, given that reflections on various themes that are distinctly sociological are found in the works of the 14th-century North African historian and philosopher Ibn Khaldun. He was a great scholar about whom much has been written, but whose sociological perspective has largely been neglected in the works of European scholars. In particular, his theoretical contribution has remained effectively marginal in the works of contemporary scholars in the Western world. This is what characterizes the modern form of Eurocentrism.
Although a few studies have focused on Eurocentrism and sociology, they have not triggered much academic discourse on the contribution of Ibn Khaldun to the growth of sociology as a field of study within the social sciences. There has thus been a paucity of critical research and scholarship dealing with Eurocentrism and the contribution of Ibn Khaldun to sociology. As such, this paper attempts to examine how Eurocentrism is embedded in the writing of European scholars and to unpack the contribution of Ibn Khaldun to the growth of sociology.

Methodology

This study is mainly descriptive and interpretive in nature and relies on secondary materials such as academic journals, newspapers, research reports and internet sources. These sources record the discourses dealing with Eurocentrism, the contribution of European scholars to sociology, and the yeoman contribution of Ibn Khaldun to the growth of the discipline. The first part of the essay deals with the conceptual framework of Eurocentrism and the Eurocentric thinking of European scholars embedded in their accounts of the culture, language and other aspects of non-European societies. The second part explores the contribution of Ibn Khaldun to the growth of sociology and how far his knowledge and ideas have been appropriated by scholars in European and non-European societies to deal with the socio-economic and political problems and challenges confronting contemporary societies.
Findings and Discussion

This section discusses the key findings concerning the Eurocentrism embedded in the writing of European scholars on the culture, language and other aspects of non-European societies, and their contribution towards the development of sociology as a field of study in the social sciences. More importantly, the crux of the discussion concerns the contribution of Ibn Khaldun to the growth of sociology in the 14th century and the extent to which his knowledge and ideas have been acknowledged and appropriated by European scholars, and by scholars elsewhere in the non-European world, to help solve the problems confronting contemporary societies.

Eurocentrism and the European Scholars

According to Francis Bacon (1621), paper, the magnetic compass, gunpowder and printing were the key inventions that separated the modern (Western) world from the traditional world. Although each of these inventions originated in China, he made little effort to learn where and how they originated. This suggests the superficiality of Western scholars' understanding of inventions made in the traditional world, and their continued effort to undermine such inventions.

In the view of the German historian and archaeologist Johann Winckelmann (1768), the 'true ideal of beauty' is witnessed only in the Greek aesthetic and artistic tradition, whereas Chinese art, in his view, is inferior and decadent. Ironically, debunking this ethnocentric view of aesthetic and artistic tradition, the Chinese exerted a significant influence on European art and decoration in the 18th century.
In the perspective of the Prussian philosopher Wilhelm von Humboldt (1835), the Chinese language seemed inferior to the European languages. The same is the case with the German philosopher Johann Gottfried Herder (1803), who was disdainful of the Chinese national character. These are instances of the Eurocentric view European scholars held of non-European languages and cultures.

The British politician and statesman Lord Thomas Babington Macaulay (1859) categorized the world into two: (i) civilized nations and (ii) barbarians, with Britain representing the zenith of civilization and non-Western peoples representing the barbarians. He proudly declared that 'a single shelf of a good European library was worth the whole native literature of India and Arabia'. This is another instance of the Eurocentrism embedded among European scholars.

Even many well-known European social scientists of the 19th century, including Alexis de Tocqueville (1859), Auguste Comte (1859) and John Stuart Mill (1873), viewed the Chinese as inferior (Goody, 2006). Jack Goody's book The Theft of History (2006) underlines this distasteful feature of Eurocentrism. In his account, the past is presented to the present generation in terms of the historical innovations and inventions made by scholars in Europe, which are then imposed on the rest of the world. He elaborates that European scholars claim that some of the important institutions of contemporary times, such as science, democracy, capitalism and modernity, were the by-products of European invention. Goody's thesis underlines that European scholars have purposefully ignored or downplayed the history of the rest of the world and its inventions. As a result, Europe has misinterpreted much of its own history. Goody contends that the claim that these important institutions originated in Europe is historically flawed, and he is of the view that they are prevalent
among a wide range of human societies (Goody, 2006). In this way, Goody highlights the Eurocentric perspective of Western scholars on the cultures and inventions of non-European countries.

The much-acclaimed 13-part TV documentary 'Civilisation', presented by Kenneth Clark, is apparently Eurocentric, because it elevates the civilization of the West by giving exclusive status to the arts, architecture and philosophy of Europe. Surprisingly, it takes little trouble to explore the impressive civilizational legacy of the non-Western world, particularly Chinese and Islamic civilization. This is something that has not attracted the attention of most European scholars, whose thought has been Eurocentric and parochial.

In her ground-breaking work A Sea of Languages: Rethinking the Arabic Role in Medieval Literary History (2009), Maria Rosa Menocal unpacked the racism and chauvinism embedded in European literary and cultural history that has prevented European scholars from acknowledging the great influence of Arabic, Islamic and Andalusian cultures on the development of European medieval literature. She underlines that the contribution of Muslim scientists, translators and intellectuals helped preserve and disseminate the Greco-Roman heritage, particularly its scientific, cultural and literary advances, after the fall of the Roman Empire. Most importantly, she argues that the influence of the Arabs in medieval Europe was not limited to literature but extended to music, science, philosophy, architecture and the arts (Menocal, 2009). Menocal's argument underlines the contribution of the Arabs in the fields of literature, science, philosophy, architecture and the arts, which some European scholars have selectively failed to acknowledge. This, too, can be couched as the Eurocentric perspective of European scholars.
The Contribution of European Scholars to the Growth of Sociology

The foundations of the social sciences in the West were laid during the Enlightenment in the mid-18th century. The Enlightenment philosophers, such as Voltaire, Montesquieu, Buffon, Denis Diderot, Rousseau, Jacques Turgot, the Abbé de Saint-Pierre and Condorcet, placed great emphasis on reason, freedom, science, education, reform and the reconstruction of society in the light of rational principles. They were greatly influenced by the scientific worldview and sought to apply the principles of Newtonian physics to the study of human nature and society. The Abbé de Saint-Pierre and Turgot developed the idea of societal progress; Turgot argued that progress was an invariant and unique feature of human society (Manuel, 1962; Bury, 1955). Baron de Montesquieu, who was greatly inspired by Newtonian physics, had a keen interest in discovering the order of society, which he sought to demonstrate through a typology of social structures (Aron, 1967: 19). Montesquieu argued that habits and character are greatly influenced by climatic conditions, and he considered this observation equivalent to the law of gravity in physics. Some of the central themes in sociological thought, including the scientific study of society, the idea of social structure, the use of the comparative method, the inter-relationship of social institutions and the classification of societies on the basis of significant criteria, can be found in Montesquieu's work (Evans-Pritchard, 1981).

The ideas of the Enlightenment philosophers had a profound influence on the social philosophers and sociologists of the 19th century, notably Auguste Comte. Comte was influenced by the ideas of Turgot, who argued that the human mind and society have passed through certain universal stages of evolutionary development, from the theological through the metaphysical to the scientific.
The French social philosopher Henri Saint-Simon (1760-1825), who was inspired by the Enlightenment, believed that the law of universal gravity provided a unifying principle for all physical as well as social and moral phenomena. Saint-Simon, deeply anguished by the upheaval and chaos that followed in the wake of the French Revolution, envisioned a science of society which would provide the guiding principles for social reconstruction. He considered this new science of society a branch of physiology, which he called 'social physiology'. Saint-Simon believed that the new order of society should be regulated and governed by professional managers and technocrats. However, the new order he envisioned had little space for individual freedom and autonomy.

As an academic discipline, sociology emerged in Europe in the 19th century. The growth of sociology in Europe was closely linked to the transformation of European societies brought about by industrialization, large-scale migration from rural to urban areas, the collapse of the feudal order, the emergence of new professional and mercantile classes, colonization, secularization, and the emergence of nation-states and democracy. The fundamental ideas of Western sociology were rooted in responses to these massive changes (Nisbet, 1967). A substantial part of the vocabulary of Western sociology is rooted in this specific historical and social context. Key sociological terms like class, democracy, nationalism, community, authority, rationalism, ideology, capitalism, modernity, bureaucracy, collectivism, utilitarianism, liberal and conservative were embedded in the historical experience of Western societies and often have moral and ideological underpinnings (ibid.: 23).

The development of sociology in the United States from the mid-19th century until the outbreak of the First World War was greatly influenced by the massive changes occurring in American society. These included the consequences
of the Civil War, industrialization and urbanization, the assimilation of migrants from Europe and the integration of African-Americans. During its formative period, sociology in the US was substantially influenced by the Christian project of social reform. In the early decades of the 20th century, much of sociological research in the US was concerned with social problems arising out of industrialization, migration and urbanization, including poverty, crime, delinquency and rising divorce rates (Vidich and Lyman, 1985; Turner, 1989; Wells and Picou, 1981).

Eurocentric baggage has been identified as a grave limitation of Western sociology. Sociologists claim that their discipline is pre-eminently comparative and universalist in character. However, this claim is not corroborated by the professional literature and theoretical propositions of mainstream sociology. The standard textbooks in the subject, written mostly by American sociologists, generally focus on themes and issues rooted in the context of Western societies, with only passing references to India, China or Latin America. An estimated three-fourths of sociological writings concentrate on American society.

A specific variant of Eurocentrism, 'Judaeocentrism', is found in the writings of some American, European and Israeli sociologists of Jewish descent. A striking example is provided by the views of the prominent Polish-born Israeli sociologist S.
N. Eisenstadt (1923-2010), winner of several coveted international prizes and honorary member of American and European universities. While writing about the events surrounding the creation of the state of Israel in 1948, Eisenstadt highlighted the death of about 6,000 Israeli soldiers in the 1947-1948 war but did not say a word about the thousands of Palestinians who were killed and displaced from their ancestral villages by Israeli and Zionist fighters (Haller, 2015). This is cited as an instance of parochial, discriminatory and prejudicial sociological analysis of killings, massacres, civil war and conflict.

General macro-structural theories in American sociology, which purport to be comparative and cross-cultural, are in fact largely parochial, generalized and extrapolated from American experience to the rest of the world. One generalization preferred by American gender theorists is that women do not participate in war. Maria Cole, a Polish historian and sociologist who has worked in Poland, England, the USA and Australia and has collected a good deal of historical and empirical evidence from Poland, has shown that this generalization does not hold in the case of Poland and other European societies. She has shown, on the basis of historical evidence, that women have played a substantial role in wars, insurrections and rebellions in Poland over the past two centuries. Cole argues that the theories of gender inequality formulated by American sociologists are inadequate in the context of gender relations in societies other than the US.
There is a surprising ignorance among Western sociologists about multiethnic societies in Africa, Asia and Latin America. In his excellent and widely read textbook Sociology, the British sociologist Anthony Giddens writes that "Jews do not eat pork, Hindus eat pork but avoid beef" (Giddens, 2001). The statement that Hindus eat pork but avoid beef is only half true: generally, Hindu castes and communities avoid both pork and beef.

Contribution of Ibn Khaldun to the Growth of Sociology

As discussed above, Western accounts of the historical origins of sociology are generally coloured by Eurocentrism, ignorance and parochialism. It is generally believed that sociology originated in Europe in the 19th century, and the paternity of the discipline is commonly attributed to the French sociologist and social philosopher Auguste Comte (1798-1857). This is an erroneous view. To be precise, reflections of a sociological nature were found in earlier times and in different parts of the world. George Ritzer has rightly observed that "scholars were doing sociology long ago and in other parts of the world too" (Ritzer, 1992).

Works of a sociological nature are found in the world of the 14th-century North African historian and philosopher Ibn Khaldun, in whose hands sociology emerged as a systematic field of inquiry. Ibn Khaldun was born into an Arab family in Tunis in 1332 CE. His significant contribution on society is reflected in his monumental history of mankind, known as the Kitab al-Ibar (Momin, 2015). Ibn Khaldun's works on distinctly sociological themes are found in the Muqaddimah, or Prolegomena, which constitutes the first volume of that history. They cover a wide range of subjects, including the influence of environmental conditions on society and human character; different forms of political organization and government, particularly the rise and decline of states; differences between rural and urban populations; kinship; social solidarity (asabiyya); and the interplay between economic conditions and social organization. His
thoughts also extend to education and knowledge (Alatas, 2013). His works on the nature of the state, state-society relations, secularism and fundamentalism, and knowledge and education are essential for exploring and analysing the problems confronting contemporary societies. He relied not only on secondary sources for data and information about society, but also on personal observations and experiences, in order to ensure accuracy and reliability in his works. This shows that his writings on human behaviour and society originated in the 14th century, well before Western scholars established a discipline called sociology. Yet his contribution to the development of sociology as a discipline is largely ignored in the writings of popular Western sociologists.

Nevertheless, the pioneering contribution of Ibn Khaldun to the growth of sociology in the 14th century has been duly acknowledged by some European historians and sociologists, who have elevated him to the status of a progenitor of sociology. Ignoring these scholars, or failing to acknowledge their work on Ibn Khaldun, would itself be a gross generalization. His major scholarly work, the Muqaddimah, was translated into French in 1863 by de Slane and into English in 1958 by Franz Rosenthal. A popular scholar like A.J.
Toynbee (1962) referred to the Prolegomena as "undoubtedly the greatest work of its kind that has ever been created by any mind in any time or place" (Toynbee, 1962). In his Sociological Essays, the Austrian sociologist Ludwig Gumplowicz (1909) gave Ibn Khaldun his due by devoting a chapter to him. Among the many other sociologists who regarded Ibn Khaldun as the father of sociology, Pitirim Sorokin, C.C. Zimmerman and C.J. Galpin are significant. The well-known Russian-American sociologist Pitirim Sorokin described the Prolegomena as the pioneering systematic discourse on sociology (Sorokin, 1962). More importantly, in their authoritative history of sociology, H.E. Barnes and Howard Becker regarded Ibn Khaldun as "the greatest among the early modern sociologists" (Barnes and Becker, 1938). Ernest Gellner, a prominent British sociologist who contributed enormously to the growth of sociological theories and concepts, maintained that Ibn Khaldun is a prominent deductive sociologist, a pioneering advocate of the method of ideal types, and the greatest sociologist of Islam (Gellner, 1983). This suggests the degree of recognition Ibn Khaldun has received among European scholars thanks to his pivotal contribution to the growth of sociology in the 14th century. However, the question remains as to how Ibn Khaldun's theories have been appropriated by European scholars in their own works on sociology and the other social sciences.
Conclusion

Ibn Khaldun has been considered one of the pioneering Muslim scholars of the pre-modern period, since he founded what he called the science of human society, or social organization. It offered a new methodology for writing history and understanding the causes of events. Although he laid the foundations for the growth of sociology in the 14th century, his contribution to the field is little acknowledged by many scholars in Europe. Among the many reasons for this tendency, Eurocentrism has been identified as one of the most fundamental.

While the earlier form of Eurocentrism was characterized by racism and the stereotyping of non-Western societies, the modern form is marked by the neglect of thinking, theories and ideas that originate in non-European societies. In the case of Ibn Khaldun, the problem is not a lack of knowledge about him, but the way in which his knowledge and contribution have been appropriated by scholars in the European world in the development of the social sciences. Most importantly, the theoretical knowledge Ibn Khaldun developed in his works has rarely been used in the contemporary historical, sociological and empirical work of scholars in the Western world. This is what characterizes the new form of Eurocentrism, and this is where the problem lies, particularly with regard to Ibn Khaldun as a founder of sociology. It is also poignant to note that his ideas and theories had little impact on the development of Muslim thought for several centuries, for reasons that remain unclear. To be precise, it is remarkable that no indigenous schools or universities in the non-European world, including the Arab world, have ever produced a Khaldunian social science. By contrast, his ideas and theories have greatly impressed some European thinkers from the nineteenth century onwards. This, indeed, prompted them to regard Ibn Khaldun as a progenitor of sociology and
modern historiography, as discussed above. However, the most important question remains as to what extent the knowledge and ideas produced by Ibn Khaldun have been appropriated by scholars in the European world in their accounts of sociology and the other social sciences.
Different types of drifts in two seasonal forecast systems and their dependence on ENSO Seasonal forecasts using coupled ocean–atmosphere climate models are increasingly employed to provide regional climate predictions. For the quality of forecasts to improve, regional biases in climate models must be diagnosed and reduced. The evolution of biases as initialized forecasts drift away from the observations is poorly understood, making it difficult to diagnose the causes of climate model biases. This study uses two seasonal forecast systems to examine drifts in sea surface temperature (SST) and precipitation, and compares them to the long-term bias in the free-running version of each model. Drifts are considered from daily to multi-annual time scales. We define three types of drift according to their relation with the long-term bias in the free-running model: asymptoting, overshooting and inverse drift. We find that precipitation almost always has an asymptoting drift. SST drifts on the other hand, vary between forecasting systems, where one often overshoots and the other often has an inverse drift. We find that some drifts evolve too slowly to have an impact on seasonal forecasts, even though they are important for climate projections. The bias found over the first few days can be very different from that in the free-running model, so although daily weather predictions can sometimes provide useful information on the causes of climate biases, this is not always the case. We also find that the magnitude of equatorial SST drifts, both in the Pacific and other ocean basins, depends on the El Niño Southern Oscillation (ENSO) phase. Averaging over all hindcast years can therefore hide the details of ENSO state dependent drifts and obscure the underlying physical causes. Our results highlight the need to consider biases across a range of timescales in order to understand their causes and develop improved climate models. 
Summary

Seasonal forecasts are increasingly employed to provide regional climate predictions. For the quality of these to improve, regional biases caused by local processes must be reduced. This study uses two seasonal forecast systems to examine drifts in temperature and precipitation and compares them to the bias in the free-running version of each model. Drifts are considered from daily to multi-annual time scales. We find that initialization error and small amounts of initial precipitation mean that the bias found over the first few days can be different from that in the free-running model (not shown on this poster). Some drifts are simply too slow to have a big impact on seasonal forecasts, even though they are important for climate projections. We define three types of drift: asymptoting, overshooting and inverse drift (away from the long-term bias). Precipitation almost always has an asymptoting drift. Temperatures, on the other hand, vary between the two forecasting systems, where one tends to overshoot and the other to have an inverse drift. Finally, we ask whether there are state-dependent drifts between forecasts initialized with different ENSO phases. The magnitude of equatorial sea surface temperature drifts, both in the Pacific and other ocean basins, varies depending on the initial conditions. This is also seen for precipitation, where averaging over all hindcast years when calculating biases can hide details of the response to different ENSO phases.

Models and methods

This study uses the hindcasts from two operational seasonal forecast systems to study the evolution of biases as a function of forecast lead time. The Beijing Climate Center Climate Prediction System (BCC-CPS) is the seasonal prediction system of the Beijing Climate Center (BCC) at the China Meteorological Administration (CMA). BCC-CPS is based on the BCC Climate System Model version 1.1m (BCC CSM1.1m).
Its atmospheric component has a T106 horizontal resolution and the ocean horizontal resolution is 1°×1°, refined to 1/3° in the tropics. The Met Office Unified Model (UM) Global Coupled configuration 2 (GC2) version of HadGEM3 is used in the Global Seasonal forecast system version 5 (GloSea5) at the Met Office. The horizontal resolution in the atmosphere is N216 and in the ocean is 1/4°. We have used spun-up, free-running model versions of the BCC and HadGEM3-GC2 models as controls to determine the long-term biases. To assess model biases and drifts we use 30 years of the Reynolds NOAA OI V2 high-resolution SST. We used this product as it is on a 1/4° grid that can resolve sharp SST gradients. We also did not want to favour any forecast system by using the SST data set it is initialized with. To evaluate precipitation we use the same 30 years of the GPCP V2.2 Combined Precipitation data set. Further details on the method can be found in the caption for Figure 1.

Figure 2 (caption): Types of drift encountered in the two forecast systems (letters) and the time scale of the asymptoting drifts in months (colours) for SST (top) and precipitation (bottom). From the left, the first two boxes within each region refer to BCC-CPS for May and November start dates, respectively. The last two boxes are for GloSea5 for the same months.

Figure 3 (caption): Hovmöller plot of GloSea5 SST bias drift averaged over 5°S-5°N for the Indo-Pacific and Atlantic Oceans. This is the average drift over all hindcasts and ensemble members.

Drifts in SST and precipitation

Overall the long-term biases are similar between the models, but how those biases are reached is different. Figure 2 shows that asymptoting drift is most common for precipitation, but not for SST, where BCC-CPS tends to overshoot and GloSea5 tends to inverse drift. This is true for both the tropics and the extra-tropics. We have not been able to determine why precipitation and SST tend to have different drifts.
It is especially strange in the tropics, where these two variables are often coupled. The difference between models in the SST drift can probably be explained by how they most efficiently gain/lose heat to reach their long-term bias. However, GloSea5 warms in the northern hemisphere and cools in the tropics even though the long-term biases are the opposite. There is only one region where both initial months and both forecast systems have the same drift: an overshoot in the Indian Ocean SST. Most asymptoting drifts reach the long-term mean in 8 months or less (especially for precipitation), but there are exceptions, such as the Southern Ocean SST in November for GloSea5 and precipitation in the Pacific ITCZ in May for BCC-CPS, which take much longer. In addition, the other drift types obviously take longer than the length of a seasonal forecast to reach the long-term bias. This implies that some of the climate model biases are less important for the seasonal forecasts.

ENSO dependence

Both forecast systems have a different bias evolution, in terms of the magnitude of the drift, for NINO3.4 SST for different ENSO initial conditions. The BCC model has a mean state that is biased cold, and BCC-CPS drifts the most when initialized with an El Niño state. In contrast, HadGEM3-GC2 has a mean state that is biased warm and drifts the most when initialized with a La Niña state. Figure 3 shows the average drift in GloSea5 for each ENSO state. The drift is strongest in the western Indian Ocean, East Pacific and the central Atlantic for La Niña years. There also appears to be some propagation towards the maritime continent. The eastward propagation that starts from about 50°E in November has a speed of roughly 1 m/s, consistent with an equatorial Kelvin wave, which could have been caused by a change in the wind forcing from initialization to the free-running forecast. The westward propagation starting at about 160°W is faster, so is not a Rossby wave and could be mediated by the atmosphere.
Another explanation for these drifts is a readjustment of the thermocline in the ocean.

The drift types are defined relative to the long-term bias of the free-running model: asymptoting drift is of the same sign as, and smaller than, the long-term bias; overshooting drift is of the same sign and larger than the long-term bias; inverse drift is of the opposite sign to the long-term bias. Figure 2 shows the drifts we found, marked as (a)symptoting, (o)vershooting and (i)nverse drift.

Figure 3 (caption, continued): The top panel shows the average drift for all hindcast years (1996-2010). The middle panel shows the drifts for only the years with an El Niño in the initial conditions, and the bottom panel is for only La Niña years. There are five El Niño years and six La Niña years in the hindcast set, and the difference between the two ENSO states is significant for the NINO3.4 region (not checked elsewhere).
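The three drift categories can be written as a simple decision rule. The sketch below is our own plain-Python illustration (not code from the poster): it classifies the mean error at a chosen lead time against the free-running model's long-term bias, and computes a lead-time drift curve by averaging forecast-minus-observation errors over hindcast start dates.

```python
def classify_drift(drift, long_term_bias):
    """Classify a lead-time error against the free-running model's
    long-term bias, using the poster's three categories:
      asymptoting  - same sign, magnitude <= the long-term bias
      overshooting - same sign, magnitude >  the long-term bias
      inverse      - opposite sign to the long-term bias
    """
    if drift * long_term_bias < 0:
        return "inverse"
    if abs(drift) <= abs(long_term_bias):
        return "asymptoting"
    return "overshooting"

def lead_time_drift(forecasts, observations):
    """Mean forecast-minus-observation error per lead time, averaged
    over hindcast start dates (ensemble members can be flattened into
    the same list)."""
    n = len(forecasts)
    n_leads = len(forecasts[0])
    return [sum(f[t] - o[t] for f, o in zip(forecasts, observations)) / n
            for t in range(n_leads)]
```

For example, `classify_drift(lead_time_drift(fc, obs)[-1], bias)` would label the drift at the final lead time, where `fc`, `obs` and `bias` are hypothetical hindcast, observation and control-run inputs.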
Limit properties for ratios of order statistics from exponentials

In this paper, we study the limit properties of the ratio for order statistics based on samples from an exponential distribution and obtain the expression of the density functions, the existence of the moments, and the strong law of large numbers for $R_{nij}$ with $1 \le i < j < m_n = m$. We also discuss other limit theorems such as the central limit theorem, the law of the iterated logarithm, the moderate deviation principle and the almost sure central limit theorem for self-normalized sums of $R_{nij}$ with $2 \le i < j < m_n = m$.

Introduction and main results

Throughout this note, let $\{X_{ni}, 1 \le i \le m_n\}$ be a sequence of independent exponential random variables with mean $\lambda_n$, and let $\{X_n, n \ge 1\} =: \{(X_{ni}, 1 \le i \le m_n), n \ge 1\}$ be an independent random sequence, where $m_n \ge 2$ denotes the sample size.
Denote the order statistics by $X_{n(1)} \le X_{n(2)} \le \cdots \le X_{n(m_n)}$, and the ratios of those order statistics by $R_{nij} = X_{n(j)} / X_{n(i)}$, $1 \le i < j \le m_n$. As we know, the exponential distribution can describe the lifetimes of equipment, and the ratios $R_{nij}$ can measure the stability of equipment: they show whether or not our system is stable. Adler [] established the strong law of the ratio $R_{nj}$ for $j \ge 2$ with fixed sample size $m_n = m$, and the strong law of $R_n$ for $m_n \to \infty$, as follows.

Theorem A For fixed sample size $m_n = m$ and all $\alpha > -1$, $2 \le j \le m$, we know

For $m_n \to \infty$ and all $\alpha > -1$,

Later on, Miao et al. [] proved the central limit theorem and the almost sure central limit theorem for $R_n$ with fixed sample size; we state their results as the following theorem.

Theorem B For fixed sample size $m_n = m$,

In this paper, we make a further study of the limit properties of $R_{nij}$. In the next section, we first give the expression of the density functions of $R_{nij}$ for all $1 \le i < j < m_n$. It is interesting that the density function is free of the sample mean $\lambda_n$; this allows us to change the equipment from sample to sample as long as the underlying distribution remains exponential. We also discuss the existence of the moments for fixed sample size $m_n = m$. Secondly, we establish the strong law of large numbers for $R_{nij}$ with $1 = i < j < m$ and $2 \le i < j < m$, respectively. Finally, we give some limit theorems such as the central limit theorem, the law of the iterated logarithm, the moderate deviation principle and the almost sure central limit theorem for self-normalized sums of $R_{nij}$ with $2 \le i < j < m$. In the following, $C$ denotes a positive constant, which may take different values whenever it appears in different expressions. $a_n \sim b_n$ means that $a_n / b_n \to 1$ as $n \to \infty$.

Density functions and moments of $R_{nij}$

The first theorem gives the expression of the density functions.
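The claim that the density of $R_{nij}$ is free of the sample mean $\lambda_n$ can be checked numerically: ratios of order statistics are scale invariant, so simulations with different means should give the same distribution. A small Monte Carlo sketch of ours (not from the paper; sample sizes and ranks chosen arbitrarily):

```python
import random

def ratio_order_stats(n_samples, m, i, j, lam, seed):
    """Simulate R_ij = X_(j) / X_(i) for the order statistics of m i.i.d.
    exponential variables with mean lam (ranks i < j are 1-based)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_samples):
        # expovariate takes the rate, i.e. 1/mean
        xs = sorted(rng.expovariate(1.0 / lam) for _ in range(m))
        out.append(xs[j - 1] / xs[i - 1])
    return out

def median(vals):
    s = sorted(vals)
    return s[len(s) // 2]

# The ratio cancels the scale parameter, so mean 1.0 and mean 5.0 should
# give the same distribution up to Monte Carlo noise:
r1 = ratio_order_stats(20000, 5, 2, 4, lam=1.0, seed=0)
r5 = ratio_order_stats(20000, 5, 2, 4, lam=5.0, seed=1)
```

The two empirical distributions should agree to within sampling error, e.g. their medians differ only by Monte Carlo noise.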
Theorem . For  ≤ i < j ≤ m n , the density function of the ratios R nij is Proof It is easy to check that the joint density function of X n(i) and X n(j) is Let w = x i , r = x j /x i , then the Jacobian is w, so the joint density function of w and r is Therefore the density function of R nij is The next theorem treats the moments of R nij with fixed sample size m n = m. where c m,j is a constant depend only on m and j. Obviously the γ -order moment is finite for  < γ <  and is infinite for γ ≥ . where d m,i,j is a constant depend only on m, i and j, so the γ -order moment is finite for  < γ <  and is infinite for γ ≥ . Furthermore it is not difficult to verify that L  (r) = ER  nij I{|R nij | ≤ r} varies slowly at ∞, then by the fact that if L(x) = E|X|  I{|X| ≤ x} is a slowly varying function at ∞, then L a (x) = E|X -a|  I{|X -a| ≤ x} also varies slowly at ∞ for any a ∈ R, the proof is completed. Remark . Miao et al. [] obtained the density function for R nj for fixed sample size m n = m, they also proved that the expectation of R nj is finite and the truncated second moment is slowly varying at ∞. Adler [] also claimed that all the R nj have infinite expectations for fixed sample size, so our theorems extended their results. Strong law of large numbers of R nij From our assumptions, we know that {R nij , n ≥ } is an independent sequence with the same distribution for fixed sample size m n = m. As Theorem . states that the R nj do not have the expectation, so the strong law of large numbers with them is not typical. Here we give the weighted strong law of large number as follows. At first, we list the following lemma, that is, Theorem . from De la Peña et al. [], which will be used in the proof. Proof By (.) we get c n = b n /a n → ∞, so without loss of generality we assume that c n ≥  for any n ≥ . Notice that N n= a n R nj I{R nj > c n } N n= a n ER nj I{ ≤ R nj ≤ c n } = I  + I  + I  . 
By (.) and (.), it is easy to show then by Lemma ., we have Then by the Borel-Cantelli lemma, we get R nj I{R nj > c n } →  a.s. n → ∞. For I  , by (.) and noting c n → ∞, we get then combining with (.), we show So the proof of (.) is completed by combining (.), (.), (.), and (.). By the same argument as in the proof of (.), we can get (.), so we omit it here. Remark . If we take a n = (log n) α n , b n = (log n) α+ , α > -, it is easy to check that conditions (.) and (.) hold with λ =  α+ , so Theorems . and . and . from Adler [] are special cases of our Theorem .. There are some other sequences satisfying conditions (.) and (.), such as (a) a n = , b n = n β , β > , λ = ; (b) a n = , b n = n(log n) γ , γ > , λ = ; (c) a n = , b n = n(log n)(log log n) δ , δ > , λ = ; (d) a n = (log log n) θ n , b n = (log n)  (log log n) θ , θ ∈ R, λ =   , so the conditions (.) and (.) are mild conditions. At the end of this remark, we point out that only when a n = L(n)/n, where L(n) is a slowly varying function, the limit value λ will be λ > , this is known as an exact strong law, one can refer to Adler [] for more details. For the weak law, i.e., convergence in probability, one can see Feller [] for full details. For R nij , i ≥ , since the expectation is finite, by the classical strong law of large numbers, we have the following. Theorem . For fixed m n = m, we have for (.) Other limit properties for R nij , 2 ≤ i < j ≤ m By the above discussion, we know that, for fixed sample size m n = m and  ≤ i < j ≤ m, {R nij , n ≥ } is a sequence of independent and identically distributed random variables with finite mean, and L(r) = E(R nij -ER nij )  I{|R nij -ER nij | ≤ r} is a slowly varying function at ∞. Therefore the limit properties of R nij for fixed sample size can easily be established by those of the self-normalized sums. 
We list some of them, such as the central limit theorem (CLT), the law of the iterated logarithm (LIL), the moderate deviation principle (MDP) and the almost sure central limit theorem (ASCLT). Denote

where $\Phi(\cdot)$ is the distribution function of the standard normal random variable.
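For $2 \le i < j \le m$, the mean of $R_{nij}$ is finite and the truncated second moment is slowly varying, which is exactly the regime in which self-normalized sums are asymptotically standard normal. A quick simulation of ours (the choices $m = 5$, $i = 2$, $j = 3$ are arbitrary, not from the paper) checks that the self-normalized statistic puts roughly 95% of its mass inside ±1.96:

```python
import random

def ratio(rng, m, i, j):
    """One draw of R_ij = X_(j)/X_(i) from m i.i.d. unit exponentials."""
    xs = sorted(rng.expovariate(1.0) for _ in range(m))
    return xs[j - 1] / xs[i - 1]

def self_normalized_sum(rs, mu):
    """S_n / V_n with S_n = sum(r - mu) and V_n^2 = sum((r - mu)^2)."""
    s = sum(r - mu for r in rs)
    v = sum((r - mu) ** 2 for r in rs) ** 0.5
    return s / v

rng = random.Random(42)
m, i, j = 5, 2, 3  # any 2 <= i < j <= m works here
# crude Monte Carlo estimate of the (finite) mean E R_nij
mu = sum(ratio(rng, m, i, j) for _ in range(200000)) / 200000
# distribution of the self-normalized statistic over 300 replicates
stats = [self_normalized_sum([ratio(rng, m, i, j) for _ in range(200)], mu)
         for _ in range(300)]
coverage = sum(abs(t) <= 1.96 for t in stats) / len(stats)
```

The empirical coverage should be close to the standard normal value of 0.95, up to Monte Carlo noise and finite-sample effects.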
Classification of Mixtures of Odorants from Livestock Buildings by a Sensor Array (an Electronic Tongue)

An electronic tongue comprising different numbers of electrodes was able to classify test mixtures of key odorants characteristic of bioscrubbers of livestock buildings (n-butyrate, iso-valerate, phenolate, p-cresolate, skatole and ammonium). The classification of model solutions indicates that the electronic tongue has a promising potential as an on-line sensor for characterization of odorants in livestock buildings. A back-propagation artificial neural network was used for classification. The average classification rate was above 80% in all cases. A limited, but sufficient, number of electrodes was selected by average classification rate and relative entropy. The sufficient number of electrodes decreased the standard deviation and relative standard deviation compared to the full electrode array.

Introduction

The odour emission from livestock buildings in intensive farming is causing many environmental and health problems [1]. Biological methods, which are environmentally friendly, are the preferred techniques for reducing the emission of odours from livestock buildings. The bioscrubber is one of the biological methods and comprises an absorption column, in which the polluted air stream from the livestock building is washed by water droplets, and a bioreactor, which cleans and recycles the washing water coming from the absorption column [2]. Characterization of odorants, in the absorption column or in the bioreactor, is necessary in the optimization of the bioscrubber. It was recently observed that an electronic tongue (ET) has a high potential as an on-line sensor for odorants [3]. An ET is an analytical instrument containing an array of electrodes with partial specificity for different components in liquids, in addition to an appropriate pattern recognition or multivariate calibration tool for identification and quantification of even complex liquid mixtures [4,5].
Recently, ET was used to classify different types of wine and water [6] and four molds and one yeast [7]. Electronic noses (ENs) and ETs are based on the same concept. However, ENs are used for gas analysis and ETs are used for liquid analysis [8]. In bioscrubbers, odorants are absorbed by water droplets and then sent to bioreactors for removal. Due to this concept, ET was used for characterization of solutions containing odorants [3]. The pH is an important control variable in the bioscrubber for two reasons. pH affects the transfer of odorants from the gas to the liquid phase in the absorption column, and it also affects the microbes in the bioreactor. The optimum pH in the bioreactor is in the range of 4 to 8 [9]. However, most microbial growth occurs near neutral pH [10]. The objective of this communication is to use an ET to classify different test mixtures of key odorants (i.e. model solutions). Our investigation further supports the idea of using ET for other applications, i.e. to replace taste panels for characterization of hazardous solutions (e.g. pharmaceutical applications) [11]. In a previous communication [3] we described the calibration of ET. In livestock buildings, there are huge numbers of odorants [12]. A representative selection of these odorants, called key odorants, was used in this study. The key odorants were selected to represent a variety of chemical groups and were n-butyrate (n-butanoate), iso-valerate, phenolate, p-cresolate, skatole and ammonium. ET was used to classify four test mixtures of key odorants, i.e. two test mixtures of key odorants at two different acidities (i.e. pH 6 and 8). Moreover, ET was used to classify six different test mixtures of key odorants that were prepared to give the maximum representation of a variety of chemical groups at pH 6. The electrodes were numbered in order to identify the individual electrodes that were sufficient for the classification. 
A pH glass electrode and a conventional Ag/AgCl reference electrode were included in the ET. Potentiometric measurements were performed using a high-input-impedance multichannel voltmeter connected to a PC for data acquisition.

Test mixtures of key odorants

The concentrations of odorants in air samples from livestock buildings have been investigated by many researchers. O'Neil and Philips [13] and Schiffman et al. [12] reviewed concentration intervals, which are used here as the main reference for the minimum and maximum concentrations of these odorants. Odorants are transferred to the liquid phase in the bioscrubber. The equivalent equilibrium concentrations of key odorants in water were calculated by using the dimensionless air-water partition coefficient (K_AW) [14]. Stock solutions of different concentrations were prepared separately for each key odorant in the test mixtures. More details can be found in Abu-Khalaf and Iversen [3].

Experimental design

Five groups of experiments were carried out separately. Data from the first four groups of experiments were also used for calibration of the ET [3]. The first test mixture of key odorants contained n-butyrate, iso-valerate, phenolate, skatole and ammonium. In the second test mixture, ammonium was replaced with p-cresolate. Ammonium and p-cresol were chosen because of their importance as part of the odour problems in livestock buildings [15,16]. At pH 6, deionised water was the solvent. At pH 8, a buffer of KH2PO4 (3.7 × 10^-3 M) and Na2HPO4 (78 × 10^-3 M) was the solvent. Each group of experiments comprised 50 measurements in triplicate (i.e. three different measurement cycles for each mixture). The intervals of concentrations of each odorant were subdivided into seven intervals, to get as many combinations as possible in the test mixtures. The total number of measurements was 600. Details of the test mixtures are shown in Table 1.
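The gas-to-liquid conversion mentioned above is a one-line calculation once K_AW is known. The sketch below is our illustration, assuming K_AW is defined as C_air/C_water at equilibrium; the coefficient value and concentration are hypothetical, not the paper's data.

```python
def equilibrium_water_concentration(c_air, k_aw):
    """Equilibrium aqueous concentration for a gas-phase concentration
    c_air, using a dimensionless air-water partition coefficient
    K_AW = C_air / C_water (Henry's law in dimensionless form).
    The result carries the same units as c_air."""
    return c_air / k_aw

# Hypothetical illustration: a strongly water-soluble compound with
# K_AW = 5e-4 at an air concentration of 2.0 mg m^-3 equilibrates at
# 4000 mg m^-3 of water, i.e. 4 mg L^-1:
c_w = equilibrium_water_concentration(2.0, 5e-4)
```

Small K_AW values (highly soluble odorants such as the ionisable acids above) therefore concentrate strongly into the scrubbing water.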
In the fifth experiment, test mixtures of key odorants were prepared to give maximum representation of a variety of chemical groups, i.e. volatile fatty acids (VFAs) mixed with phenols, VFAs mixed with skatole, VFAs mixed with ammonium, etc. The test mixtures were diluted in deionised water, after which the acidity was adjusted to pH 6 with NaOH or HCl. After this adjustment, the pH remained constant throughout the experiment. Each combination of the test mixtures was subjected to 15 measurements in triplicate, a total of 270 measurements (Table 2). The interval of concentrations was divided into five subsets, which were chosen from the seven intervals used in the previous four experiments. In each group of experiments the test mixtures were measured in random order. Microsoft Office Excel 2000 (Microsoft Corporation, USA) software was used to randomize the concentration levels (seven levels in the first four groups of experiments and five levels in the fifth) in each group of experiments, using a randomization and uniform distribution function [3]. The ET was submerged in the test mixture of key odorants in a 100 ml Teflon container with a magnetic stirrer. Five minutes were sufficient for the electrodes to reach a stable potential in all cases. Electrodes were washed with deionised water several times between measurements, until they reached a steady potential. It has been suggested that washing of electrodes is one of the solutions to avoid drift problems of electrodes in an ET [17].

Table 2. Test mixtures of key odorants comprising a variety of chemical groups of selected key odorants at pH 6.

Back propagation artificial neural networks

One of the most widely used artificial neural networks is the back propagation artificial neural network (BPNN), which is also called a feed-forward network. It comprises many processing elements, i.e. nodes, which are arranged in layers: an input layer, an output layer, and one or more layers in between, called hidden layers.
A schematic diagram of a BPNN with one hidden layer is shown in Fig. 1. The neural network software 'Predict' (v. 3.13, NeuralWare, Pittsburgh, USA), which uses BPNN and works in the framework of Microsoft Excel, was used in this study. The models in the program contain one hidden layer with different numbers of nodes, which results in a stable model [18]. Models have direct connections between input and output nodes. This enables the program to evaluate the need for a hidden layer. Moreover, models employ an adaptive gradient learning rule. A weight decay method is employed to reduce overfitting. In classification problems, the software employs hyperbolic tangent and softmax transfer functions in the hidden and output layers, respectively. The use of the default parameters of the 'Predict' software is recommended [19]. The default parameters and the mathematical explanation of the functions are beyond the scope of this communication, but they are described elsewhere [20]. In the present study, classification (supervised networks) of test mixtures of key odorants was carried out. The input (independent variable) was the electrode signals, and the correlated output (dependent variable) was the class of the test mixture. The classification rate for each test mixture of key odorants and the average classification rate (ACR) were found. The average classification rate is the average of the classification rates of all classes. The values of the classification rate and the ACR are shown directly in the software, and there is no need for any calculations. In each case of classification, the data were divided into train, test and validation sets. There is little agreement among researchers about the number of samples in the training set for BPNN analysis. Basheer and Hajmeer [21] concluded that there are no mathematical rules for solving this problem.
However, Despagne and Massart [18] suggested that the number of samples in the training set should be at least twice the total number of weights in the BPNN topography. The latter recommendation was followed in this study. Each measurement in triplicate was treated as one sample. Each triplicate was used either in the train, the test or the validation set. Data were centred and scaled before classification, so that each variable contributes equally to the analysis [22]. A higher ACR and a lower relative entropy are the most important factors for classification problems using the 'Predict' software [23]. The relative entropy is an internal measurement in the 'Predict' classification model. It measures the shared information between probability distributions: the higher this value is, the more similar the probability distributions are. All electrodes were examined for their individual contribution to the classification of test mixtures of key odorants. The goal was to achieve the highest ACR and the lowest relative entropy with the minimum number of electrodes for further classification processes. Initially all electrodes (i.e. 14 electrodes) were investigated for classification, and the ACR and relative entropy were determined. By analysing the outputs of many combinations of a decreased number of electrodes, and after at least 20 trials, it was observed that eight electrodes were sufficient for classifying all test mixtures of key odorants without negatively influencing the ACR and relative entropy. The total number of electrodes in the ET was thus reduced without any loss of analytical information. This has been done before in many applications of ET, e.g. Auger et al. [24] and Soderstrom et al. [7].

Classification of test mixtures of key odorants at pH 6

The data of each test mixture of key odorants were split into train, test and validation sets. The number of different samples was 30, 10 and 10 (i.e. 90, 30 and 30 including triplicates), respectively, for each test mixture of key odorants.
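The electrode-selection procedure described above (score a candidate electrode subset with a classifier, keep the smallest subset whose classification rate holds up) can be sketched with a far simpler stand-in for the proprietary BPNN. Below, a nearest-centroid classifier on centred and scaled data plays that role; this is our simplification, not the 'Predict' software, and relative entropy is left out for brevity.

```python
def standardize(X):
    """Centre and scale each column so every electrode contributes equally."""
    cols = list(zip(*X))
    means = [sum(c) / len(c) for c in cols]
    sds = [((sum((v - mu) ** 2 for v in c) / len(c)) ** 0.5) or 1.0
           for c, mu in zip(cols, means)]
    return [[(v - mu) / s for v, mu, s in zip(row, means, sds)] for row in X]

def subset(X, idx):
    """Keep only the electrodes (columns) listed in idx."""
    return [[row[k] for k in idx] for row in X]

def nearest_centroid_rate(X, y):
    """Training-set classification rate of a nearest-centroid classifier,
    a deliberately simple stand-in for the BPNN used in the paper."""
    classes = sorted(set(y))
    cents = {c: [sum(x[k] for x, t in zip(X, y) if t == c) /
                 sum(1 for t in y if t == c)
                 for k in range(len(X[0]))]
             for c in classes}
    def predict(x):
        return min(classes,
                   key=lambda c: sum((a - b) ** 2
                                     for a, b in zip(x, cents[c])))
    return sum(predict(x) == t for x, t in zip(X, y)) / len(y)
```

A candidate subset is then scored by `nearest_centroid_rate(subset(standardize(X), electrode_ids), y)`, mirroring the ACR-driven trial-and-error search over electrode combinations.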
The BPNN used 8, 4, 2 nodes (i.e. 8 neurons in the input layer, 4 neurons in the hidden layer and 2 neurons in the output layer). The eight neurons in the input layer represented the number of electrodes, and the two neurons in the output layer represented the two classes of the test mixtures. Electrodes no. 1, 2, 5, 6, 7, 8, 9, 11 were sufficient. The classification rate for the validation set of the test mixtures of key odorants containing ammonium and the test mixtures of key odorants containing p-cresolate was 80% and 97%, respectively. The ACR was 88%.

Classification of test mixtures of key odorants at pH 8

The data of each test mixture of key odorants were split into train, test and validation sets. The number of different samples was 30, 10 and 10 (i.e. 90, 30 and 30 including triplicates), respectively, for each test mixture of key odorants. The BPNN used 8, 0, 2 nodes. Electrodes no. 1, 2, 5, 6, 7, 8, 9, 11 were sufficient. The classification rate for the validation set of both test mixtures was 100%, and consequently the ACR was 100%.

Classification of test mixtures of key odorants containing ammonium at pH 6 and pH 8

The data were split into train, test and validation sets as in the previous experiment. The BPNN used 8, 0, 2 nodes. Electrodes no. 1, 2, 5, 6, 7, 8, 9, 11 were sufficient. The classification rate for the validation set of both test mixtures was 100%, and consequently the ACR was 100%.

Classification of test mixtures of key odorants containing p-cresol at pH 6 and pH 8

The data were split into train, test and validation sets as in the previous experiment. The BPNN used 8, 0, 2 nodes. Electrodes no. 1, 2, 5, 6, 7, 8, 9, 11 were sufficient. The classification rate for the validation set of both test mixtures was 100%, and consequently the ACR was 100%. Table 3 shows the classification rates and ACR for the validation sets of the different test mixtures of key odorants. ET signals respond mainly to ions in the test mixtures [7].
The percentage of ionised n-butyric acid, iso-valeric acid, phenol, p-cresol, skatole and ammonium at pH 6 is 94%, 94%, 0.01%, 0.005%, 0% and 100%, respectively. The percentage of ionised n-butyric acid, iso-valeric acid, phenol, p-cresol, skatole and ammonium at pH 8 is 100%, 100%, 1%, 0.5%, 0% and 95%, respectively. The results in Table 3 indicate that the ET has a promising potential as a sensor for odorants. The ET signals contained the fingerprints of each test mixture of key odorants, which explains the successful classification. The total number of samples (comprising triplicates) was 90, which is equivalent to 270 measurements, i.e. 6 test mixtures × 15 samples × 3 (triplicates). The data were split into train, test and validation sets. The number of different samples was 42, 18 and 30 (i.e. 126, 54 and 90 including triplicates), respectively. Train, test and validation samples within each class of test mixtures of key odorants were considered. The number of different samples was 7, 3 and 5 (i.e. 21, 9 and 15 including triplicates), respectively. The BPNN used 8, 4, 6 nodes. Electrodes no. 1, 2, 5, 6, 7, 8, 9, 11 were sufficient. The classification rates are shown in Fig. 2. The two test mixtures of key odorants having a classification rate of 100% contained VFAs and phenols, or phenols and ammonium, i.e. A and F, respectively. The test mixture of key odorants that contained VFAs and ammonium, i.e. C, had the lowest classification rate (67%). The ACR for all test mixtures of key odorants was 81%. Most of the misclassified test mixtures of key odorants were misclassified as test mixture C. However, the objective of the BPNN classification was to get the highest classification rate with the lowest entropy. In the case of misclassifications, the test mixtures of key odorants were misclassified as only one other test mixture of key odorants, e.g. C was misclassified as F, and D was misclassified as E.
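The ionised percentages quoted at the start of this section follow directly from the Henderson-Hasselbalch relation, given each compound's pKa. A sketch of ours (the pKa values below are approximate literature numbers, not given in the paper):

```python
def ionized_acid_fraction(pka, ph):
    """Henderson-Hasselbalch: fraction of an acid present as its anion."""
    return 1.0 / (1.0 + 10 ** (pka - ph))

def ammonium_fraction(pka, ph):
    """Fraction of total ammonia present as the NH4+ cation
    (pka is that of the NH4+/NH3 pair)."""
    return 1.0 / (1.0 + 10 ** (ph - pka))

# Approximate literature pKa values (our assumption, not from the paper):
# n-butyric acid ~4.82, phenol ~9.99, ammonium ~9.25
butyrate_pct = 100 * ionized_acid_fraction(4.82, 6.0)   # ~94% at pH 6
phenolate_pct = 100 * ionized_acid_fraction(9.99, 6.0)  # ~0.01% at pH 6
ammonium_pct = 100 * ammonium_fraction(9.25, 6.0)       # ~100% at pH 6
```

With these pKa values the computed fractions reproduce the percentages quoted above, including near-complete ionisation of the VFAs at pH 8.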
This indicates that the classification model enables us to predict the class of the test mixtures of key odorants with an acceptable inaccuracy, e.g. C is only classified as C or F, and D is only classified as D or E. When we tested numbers of electrodes lower than the sufficient 8 electrodes used for classification, the ACR decreased in comparison with the full array (14 electrodes); e.g. when electrodes no. 2, 5, 6, 7, 8, 9 were used, the ACR decreased from 81% to 70%. If the pH changed when the test mixtures of key odorants were diluted in deionised water, adjustment of the pH to 6 was carried out with NaOH or HCl. After adjustment, the pH stayed constant throughout the measurement period. This is expected, since the VFAs in the test mixtures have buffer capacity. BPNN classification models were superior to linear classification methods, e.g. partial least squares-discriminant analysis (PLS-DA) [11]. This was explained by the non-linear response of the electrodes [25], which results from interferences between ions in the test mixtures [26]. However, PLS-DA showed complete agreement with BPNN in some cases. PLS-DA was carried out for classification of the last three test mixtures of key odorants shown in Table 3. In these cases, the two test mixtures were easily separated in the PLS score plots, as shown in Fig. 3 to Fig. 5; electrodes no. 1, 2, 5, 6, 7, 8, 9, 11 were sufficient. Eight electrodes were sufficient for classification of all test mixtures of key odorants. Models using these eight electrodes resulted in the highest ACR and lowest entropy in comparison to any other number of electrodes. Also, the standard deviation and RSD of triplicate measurements, i.e. repeatability, improved when the number of electrodes was decreased (Table 4). It is noticed that the standard deviation of triplicate measurements in the mixture of key odorants in phosphate buffer at pH 8 was lower than the standard deviation of triplicate measurements in deionised water at pH 6, i.e.
repeatability is higher. This is because the buffered mixture contains higher and stabilized concentrations of ions. Moreover, the standard deviation in the case of the test mixtures of key odorants comprising the maximum number of combinations of a variety of chemical groups at pH 6 was lower than in the other two experiments that were carried out in deionised water (the two test mixtures of odorants containing ammonium or p-cresolate at pH 6 in Table 1). This is because the complexity of the test mixtures, i.e. the number of key odorants, was reduced in the test mixtures of key odorants in this experiment (Table 2). Comparing the standard deviation and RSD of the sufficient number of electrodes used for calibration [3] and classification (this communication), it is obvious that the sufficient number of electrodes in the ET improved the repeatability in comparison with the ET comprising 14 electrodes (Table 4).

Table 4 footnotes: j Potential readings and standard deviation were very small, which results in a high value of RSD. * Data from Abu-Khalaf and Iversen [3].

Nine electrodes in total (no. 1, 2, 4, 5, 6, 7, 8, 9, 11) were sufficient for identification, quantification [3] and classification of all test mixtures of key odorants (this communication).

Conclusion

A calibrated ET, comprising 8 PVC-plasticized cross-sensitive potentiometric electrodes, successfully classified different test mixtures of key odorants. The ET was able to distinguish between two test mixtures of key odorants at the same pH with classification rates in the range of 88-100%. Classification between the same test mixtures of key odorants at different pH was even higher, at 100%. The ET also classified different test mixtures of key odorants comprising a variety of the chemical groups at pH 6. As expected, the repeatability of the electrodes was better in this case, where the complexity of the mixture was decreased. The results presented in this study are promising for any further application of the ET in livestock buildings.
The ability of the ET to classify different test mixtures of key odorants with high performance makes it an obvious candidate as an on-line sensor for characterization of odorants in livestock buildings.
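The classification-rate comparison between the full and reduced electrode arrays can be sketched numerically. The confusion matrices below are invented for illustration (they are not the paper's data), and ACR is assumed here to be the mean of the per-class correct-classification rates:

```python
# Sketch: average classification rate (ACR) from a confusion matrix,
# comparing a full electrode array against a reduced subset.
# Hypothetical numbers; ACR is assumed to be the mean of per-class
# correct-classification rates.

def acr(confusion):
    """Mean of per-class correct-classification rates (diagonal / row sum)."""
    rates = []
    for i, row in enumerate(confusion):
        total = sum(row)
        rates.append(row[i] / total if total else 0.0)
    return sum(rates) / len(rates)

# Rows = true class, columns = predicted class (triplicate measurements).
full_array = [
    [3, 0, 0],   # class C: always classified correctly
    [0, 2, 1],   # class D: once confused with the neighbouring class
    [0, 0, 3],   # class E: always classified correctly
]
reduced_array = [
    [2, 1, 0],
    [0, 2, 1],
    [0, 1, 2],
]

print(f"ACR, full array:    {acr(full_array):.0%}")
print(f"ACR, reduced array: {acr(reduced_array):.0%}")
```

With these made-up matrices, removing electrodes lowers the ACR from roughly 89% to 67%, mirroring the direction of the 81% to 70% drop reported above.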
Real-Time Evaluation of Optic Nerve Sheath Diameter (ONSD) in Awake, Spontaneously Breathing Patients

(1) Background: Reliable ultrasonographic measurement of optic nerve sheath diameter (ONSD) to detect increased intracranial pressure (ICP) has not been established in awake patients with continuous invasive ICP monitoring. Therefore, in this study, we included fully awake patients with and without raised ICP and correlated ONSD with continuously measured ICP values. (2) Methods: In a prospective study, ICP was continuously measured in 25 patients with an intraparenchymal P-tel probe. Ultrasonic measurements were carried out three times for each optic nerve in the vertical and horizontal planes. ONSD measurements and ICP were correlated. Patients with an ICP of 2.0–10.0 mmHg were compared with patients with an ICP of 10.1–24.2 mmHg. (3) Results: In all patients, the vertical and horizontal ONSD measurements for both eyes correlated well with the ICP (Pearson R = 0.68–0.80). Both measurements yielded similar results (Bland-Altman: vertical bias −0.09 mm, precision ±0.66 mm; horizontal bias −0.06 mm, precision ±0.48 mm). For patients with an ICP of 2.0–10.0 mmHg compared to an ICP of 10.1–24.2 mmHg, receiver operating characteristic (ROC) analyses showed that ONSD measurement accurately predicts elevated ICP (optimal cut-off value 5.05 mm, AUC of 0.91, sensitivity 92%, specificity 90%, p < 0.001). (4) Conclusions: Ultrasonographic measurement of ONSD in awake, spontaneously breathing patients provides a valuable method to evaluate patients with suspected increased ICP. Additionally, it provides a potential tool for rapid assessment of ICP at the bedside and for identifying patients at risk of a poor neurological outcome.

Introduction

The elevation of intracranial pressure (ICP), defined as ICP > 20 mmHg, is a common life-threatening condition caused by a variety of traumatic and non-traumatic diseases.
Untreated intracranial hypertension can lead to severe brain damage with a poor neurological outcome, or to the patient's death due to secondary ischemia or brainstem herniation, respectively [1,2]. Invasive intracranial pressure monitoring is currently the gold standard for detecting intracranial hypertension. Indications for the application of these devices are based on clinical observations. For example, invasive monitoring is recommended in patients with: (i) severe traumatic brain injury; (ii) multiple injuries with an altered level of consciousness; (iii) a post-resuscitation Glasgow Coma Scale (GCS) score of 8 or less in the presence of an abnormal cranial CT scan (cCT); (iv) a normal cCT but more than two risk factors (systolic blood pressure (SBP) < 90 mmHg, decorticate or decerebrate posturing); or (v) a reduced GCS subsequent to the removal of an intracranial mass [3,4]. In daily clinical practice, the difficulty lies with patients with a GCS between 9 and 12, who may benefit from aggressive medical therapy that can only be accurately initiated and monitored when ICP is measured invasively. However, this invasive and not always feasible technique can lead to complications such as hemorrhage, malfunction, or infection [3,5], which explains why the decision to undertake these procedures can often be difficult. Unfortunately, neuroimaging such as cranial computed tomography often has limited availability, requires potentially harmful patient transport, and performs poorly in detecting raised ICP [6,7]. Additionally, ICP may be highly dynamic, almost instantly changing from its normal baseline to severely elevated levels [8]. In contrast, point-of-care ultrasonography of the optic nerve sheath diameter (ONSD) has become an alternative bedside tool to reproducibly detect intracranial hypertension [9][10][11][12]. The optic nerve is surrounded by all linings of the brain, forming the optic nerve sheath, which connects it to the intracranial space.
Intracranial pressure is conducted through the nerve sheath, which widens when intracranial pressure rises. In this context, an ONSD of more than 5.0 mm has been associated with increased ICP in patients with severe traumatic and non-traumatic injuries [13][14][15]. Additionally, human studies have shown that widening of the ONSD occurs within minutes of acute changes in ICP [11,12,[16][17][18].

Materials and Methods

After approval by the local ethics committee (Landesärztekammer des Saarlandes; Ref. ID: 151/13), this study was conducted from January 2014 to March 2015 at the Saarland University Medical Center in Homburg, Germany. It was designed as a prospective observational trial, and written informed consent was obtained from all 25 patients included in the study.

Patient Cohort

Patients included in this study had a continuous ICP measuring device implanted (Neurovent P-tel, Raumedic AG, Helmbrechts, Germany) because of suspected or proven intracranial hypertension. All patients were free of major symptoms of elevated intracranial pressure and were hospitalized under normal ward conditions. None of the patients included were admitted under emergency conditions or required immediate treatment. The probe was implanted through a precoronal, parasagittal burr hole (at Kocher's point). After the burr hole was drilled, the dura and pia mater were incised cruciformly. The polyurethane catheter of the probe was carefully advanced through the parenchyma of the frontal lobe; the housing of the telemetric device remained above the skull surface. All probes were implanted in the Department of Neurosurgery. Measurement of ICP can be started directly after implantation by placing the TDTreadP (Raumedic AG, Helmbrechts, Germany), a special reading unit, above the closed wound. All data were saved on a datalogger (Datalogger MPR 1, Raumedic AG, Helmbrechts, Germany).
Data readout was performed with the dedicated Datalog software (Raumedic AG, Helmbrechts, Germany).

ONSD Measurement

The reading unit (TDTreadP) was placed and secured over the implanted P-tel probe at least 24 h before the ONSD measurement procedure and remained there until the ONSD measurements were finished. During ONSD measurement, ICP values were not visible to the examining physician. All measurements were carried out in the afternoon to avoid bias from circadian rhythm changes. For the measurement, patients rested in the supine position for five minutes, during which the presence of increased intracranial pressure was checked by excluding typical symptoms. When patients remained without clinical signs of exacerbating ICP, measurement of ONSD was performed. ICP data were stored by the datalogger every second, and a readout of all data was carried out afterwards. For statistical analysis, we used the mean of the recorded ICP values for each measurement. The optic nerve sheath diameter was measured using a LogicE® system (GE Healthcare, Solingen, Germany) with a high-resolution (7-10 MHz) linear-array ultrasound transducer probe (9L-RS). Penetration depth was optimized to generate an image of the eye filling the screen. The "small parts" preset, provided as a standard preset by the device and usually used for ultrasound of nerve structures, was chosen to avoid excessive tissue and mechanical indices. To ensure an energy level not exceeding current recommendations, we monitored the mechanical and thermal indices (MI < 0.2; TI < 1.0) while adjusting the emitted energy of the ultrasound probe [19,20]. A generous amount of standard water-soluble ultrasound transmission gel (Aquasonic 100®, C+V Pharma Depot GmbH, Versmold, Germany) was applied to the patient's closed eyelid. Before starting the measurement, the awake patient was placed in the supine position and left for 5 min.
During this time, cardiopulmonary monitoring was installed. In accordance with current recommendations, the globe was scanned in both eyes and the ONSD was determined at a predefined point 3 mm posterior to the globe [21]. ONSD was measured three times in the horizontal plane and three times in the vertical plane, one measurement directly after the other. Additionally, non-invasive blood pressure, heart rate, and peripheral oxygen saturation were recorded during the measurement.

Statistical Analysis

All acquired data were anonymized and transferred to our study database (Microsoft Excel 2010, Microsoft Corporation, Redmond, WA, USA). All data were verified for integrity and plausibility on the day of the patient's discharge. Fisher's exact test was performed to compare patients with lower intracranial pressure (≤10 mmHg) and higher intracranial pressure (>10 mmHg) with regard to dichotomous variables. For continuous variables, differences between the two groups were compared using Student's t-tests (Welch's t-tests in the case of inhomogeneous variances). Continuous variables are expressed as mean and standard deviation (SD). Two-sided p-values of less than 0.05 were considered statistically significant. A Bland-Altman diagram was used to compare the horizontal and vertical ONSD measurements, and accuracy and precision were calculated for both. Linearity between the ONSD and the P-tel probe measurement was tested with Pearson correlation coefficients. Receiver operating characteristic (ROC) curves were constructed to evaluate the predictive power of horizontal and vertical ONSD measurements for the occurrence of increased intracranial pressure (>10 mmHg). The Youden index was used to calculate the optimal ONSD cut-off for the prediction of increased intracranial pressure (>10 mmHg).
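The Bland-Altman comparison described above can be sketched numerically. The ONSD measurement pairs below are hypothetical, and the agreement limits are taken as bias ± 2 SD of the differences, matching the accuracy/precision convention the study reports:

```python
# Sketch of a Bland-Altman comparison of horizontal vs. vertical ONSD
# readings. Measurement pairs are invented for illustration; bias is the
# mean of the differences and the limits of agreement are ±2 SD.

from statistics import mean, stdev

# (horizontal_mm, vertical_mm) ONSD pairs -- hypothetical data.
pairs = [(4.8, 4.9), (5.2, 5.3), (5.6, 5.6), (4.5, 4.6), (5.9, 6.0)]

diffs = [h - v for h, v in pairs]
bias = mean(diffs)        # systematic offset between the two planes
loa = 2 * stdev(diffs)    # two sample SDs of the differences

print(f"bias: {bias:+.2f} mm, limits of agreement: ±{loa:.2f} mm")
```

A small bias with narrow limits, as in the study's −0.09 ± 0.66 mm for the left eye, indicates that the two scanning planes can be used interchangeably.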
The positive predictive value is defined as the number of true positives divided by (true positives + false positives); the negative predictive value is defined as the number of true negatives divided by (true negatives + false negatives). All data analyses were performed using SPSS Statistics 19 (IBM, Ehningen, Germany).

Results

Patients' characteristics are shown in Table 1. No statistical differences were found between the two groups.

Correlation of ONSD with Intracranial Pressure

Mean values of the three horizontal and three vertical ONSD measurements of the left eye correlated well with the ICP measured by the P-tel probe (horizontal: Pearson R = 0.74; vertical: Pearson R = 0.68; Figure 1). The corresponding mean ONSD values of the right eye also correlated well with the ICP (horizontal: Pearson R = 0.79; vertical: Pearson R = 0.80; Figure 2).

Accuracy and Precision of ONSD Measurements

For the left eye, ONSD measured in the horizontal view conformed to the vertical view with high accuracy (bias: −0.09 mm) and precision (two standard deviations of the measurement error: ±0.66 mm; Figure 3). For the right eye, the horizontal view conformed to the vertical view with high accuracy (bias: −0.06 mm) and precision (±0.48 mm; Figure 3).

ROC Analysis

ONSD measurement predicted the occurrence of high-normal-range intracranial pressure (>10 mmHg) with an area under the curve (AUC) of 0.91 (95% CI, 0.83-0.99; p < 0.001), a sensitivity of 92%, and a specificity of 90%. The optimal cut-off value for ONSD measurement was 5.1 mm (Figure 4). The positive predictive value of the horizontal and vertical ONSD measurements of both eyes for identifying patients with an increased ICP (10.1-24.2 mmHg) was 0.97; the negative predictive value was 0.73.
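The Youden-index cut-off selection and the predictive-value definitions above can be sketched with a toy data set. The (ONSD, elevated-ICP) pairs below are invented, so the resulting cut-off and rates are illustrative only:

```python
# Sketch: choosing an ONSD cut-off by maximising the Youden index
# J = sensitivity + specificity - 1, then computing PPV and NPV.
# (onsd_mm, elevated) pairs are hypothetical; `elevated` means an
# invasively measured ICP > 10 mmHg.

data = [
    (4.2, False), (4.5, False), (4.8, False), (5.2, False), (5.0, False),
    (4.9, True), (5.3, True), (5.6, True), (5.8, True), (6.1, True),
]

def rates(cutoff):
    """Sensitivity, specificity and confusion counts for ONSD >= cutoff."""
    tp = sum(1 for d, e in data if e and d >= cutoff)
    fn = sum(1 for d, e in data if e and d < cutoff)
    tn = sum(1 for d, e in data if not e and d < cutoff)
    fp = sum(1 for d, e in data if not e and d >= cutoff)
    return tp / (tp + fn), tn / (tn + fp), tp, fp, tn, fn

# Evaluate every observed ONSD value as a candidate cut-off.
best = max((d for d, _ in data), key=lambda c: sum(rates(c)[:2]) - 1)
sens, spec, tp, fp, tn, fn = rates(best)

ppv = tp / (tp + fp)   # true positives / all test-positives
npv = tn / (tn + fn)   # true negatives / all test-negatives
print(f"cut-off {best} mm: sens {sens:.0%}, spec {spec:.0%}, "
      f"PPV {ppv:.2f}, NPV {npv:.2f}")
```

The same procedure applied to the study's data yielded the reported 5.1 mm cut-off with sensitivity 92% and specificity 90%.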
Discussion

In this prospective study, ONSD predicted increased intracranial pressure (>10 mmHg) with high accuracy and precision in awake, spontaneously breathing patients. To the best of our knowledge, this is the first time this has been shown using an intraparenchymal P-tel probe, which allows real-time measurement of intracranial pressure (ICP) in awake patients. Thus, our results indicate that ultrasonographic measurement of ONSD could be a strong predictor of elevated ICP in this patient cohort. Brain damage is related to a direct "primary" lesion, such as intracerebral bleeding, severe cerebral contusion, or a brain tumor, or to indirect "secondary" causes, such as diffuse prolonged cerebral hypoxia. Eventually, these conditions may result in severe brain edema with uncontrollable intracranial hypertension [22]. Importantly, intracranial hypertension can occur suddenly without early warning signs and may be highly dynamic, changing from normal to severely elevated values within minutes [8]. This underlines the need for a bedside monitoring tool to detect raised ICP immediately and to initiate adequate therapy as early as possible. This is especially true in spontaneously breathing patients who suffer from brain injury but did not initially receive an invasive ICP monitoring device. Today, ONSD evaluation with ultrasound is well accepted as a reliable indicator of intracranial hypertension, with high intra- and inter-observer reliability and with reported values ranging from 4.3 to 7.6 mm [23,24]. In patients suffering from severe brain damage, ONSD values of about 5.9 to 6.3 mm have been reported [14], with no values lower than 5.8 mm in cases where ICP exceeded 20 mmHg [25]. Therefore, a cut-off value of 5 mm has been suggested [9,26,27]. Despite these numerous studies, there is no consensus on a definitive threshold for elevated ICP [21,28,29], even for awake, spontaneously breathing patients with cerebral pathologies.
Our findings are in line with the previously mentioned results, showing a slightly wider optic nerve sheath diameter in patients with increased ICP, ranging from 3.8 to 5.6 mm, and an optimal cut-off value for the prediction of increased ICP of 5.1 mm. This was true for both eyes, independent of the implantation side of the pressure probe. Of note, we defined increased ICP as >10 mmHg, pointing out that ultrasonographic measurement of ONSD is a highly reliable tool for detecting ICP even in the upper normal range. This is important because ONSD sonography appears to detect patients at risk of developing severe cerebral complications at a very early stage. Nevertheless, the take-home message of our results is not to promote a single ONSD measurement, or the absolute ONSD value, as an adequate stand-alone monitoring tool in patients with brain injury, but to point out the capability of this non-invasive method as a first-line bedside diagnostic tool that can be performed easily and repeatedly.

Limitations

Our present cohort consisted only of patients with hydrocephalus. The indication for implantation of a P-tel probe is narrow, which led to a low number of patients and a long study period. We therefore deliberately included patients who had received a P-tel probe implant before the study, offering the possibility of measuring ICP online in awake, spontaneously breathing patients. Nevertheless, the population examined was too small and inhomogeneous to strongly support the results. The collected ONSD data were correlated with the mean ICP values of the whole measurement procedure; therefore, an influence of fluctuating ICP values on the statically measured ONSD cannot be ruled out. Nevertheless, we assume that this influence is of minor importance because patients rested in the supine position for five minutes before ONSD evaluation began.
Moreover, any potentially raised ICP should have been recognized by our repeated measurements. We are aware that the underlying pathology differs from that of patients with traumatic or ischemic brain injury, but as all of these pathologies result in the same final course, we believe this is of minor importance, although it needs to be kept in mind when interpreting our data.

Conclusions

Ultrasonographic measurement of ONSD in awake, spontaneously breathing patients provides a valuable method for evaluating patients with suspected increased ICP. As a non-invasive monitoring method, it provides a potential bedside tool for rapid assessment of ICP, even when still in the upper normal range, and for identifying patients at risk of a poor neurological outcome.
A mus-51 RIP allele for transformation of Neurospora crassa

This report describes the construction and characterization of mus-51 RIP70, an allele for high-efficiency targeted integration of transgenes into the genome of the model eukaryote Neurospora crassa. Two of the mus-51 RIP70 strains investigated in this work (RZS27.10 and RZS27.18) can be obtained from the Fungal Genetics Stock Center. The two deposited strains are, to our knowledge, genetically identical, and neither one is preferred over the other for use in Neurospora research.

Introduction

Non-homologous end joining (NHEJ), a DNA repair pathway that joins damaged DNA ends without regard for homology, was a hindrance to targeted transgene integration in N. crassa until Ninomiya et al. (2004) discovered that using an NHEJ mutant as a transformation host greatly increases the efficiency of this process. For example, one can achieve targeted transgene integration levels of nearly 100% by including either mus-51Δ::hph or mus-51Δ::bar in the transformation host's genetic background (Ninomiya et al., 2004; Colot et al., 2006). However, these alleles prevent one from using both hph and bar as selectable markers in consecutive transformations of the same host. Construction of a marker-free mus-51 null allele could eliminate this deficiency. Below, we describe the construction and characterization of a mus-51 null allele called mus-51 RIP70. Additionally, we describe a method to distinguish between mus-51+ and mus-51 RIP70 genotypes by restriction endonuclease-mediated digestion of PCR products.
Strains and growth conditions

Vogel's minimal medium (VMM) (Vogel, 1956) with or without histidine (0.5 g/L) was used for vegetative cultures. Synthetic Crossing Medium (SCM) (Westergaard and Mitchell, 1947) with or without histidine (0.5 g/L) was used for crosses. BDS medium (1% sorbose, 0.05% glucose, and 0.05% fructose) (Brockman and de Serres, 1963) with or without methyl methanesulfonate (MMS, 0.22 µl/ml) was used in a preliminary screen for mus-51 RIP alleles. BDS medium with hygromycin (200 µg/ml) and/or cyclosporin A (5 µg/ml) was used to screen for hygromycin and cyclosporin A resistance. Top and Bottom Agar were used in transformation experiments as previously described (Harvey et al., 2014). The key strains used in this study are listed in Table 1.

Plasmid construction

Plasmid pTH1256.1 was constructed by cloning the hph selectable marker from pCB1004 (Carroll et al., 1994) into the ApaI site of pBM61 (Margolin et al., 1997). Oligonucleotides P518 and P519 (Table 2) were then used to PCR-amplify a 2170 base pair (bp) fragment of mus-51+. This PCR product was cloned into the NotI site of plasmid pTH1256.1 to create plasmid pSS2.12. Plasmid pSS2.12 thus contains the sequences necessary to insert the 2170 bp mus-51+ fragment next to the his-3 locus while converting his-3 to his-3+.

Table 2. Oligonucleotides used in this study.
Isolation of mus-51 RIP70

HSS1.21.4 was crossed with FGSC 9716 by simultaneous inoculation of each strain on opposite sides of a 100 mm petri dish containing SCM plus histidine. The petri dish was incubated on a laboratory bench top for five weeks. Ascospores were collected, soaked in sterile water at 4 °C for over 24 hours, heat-shocked at 60 °C for 30 minutes, and plated onto VMM. Histidine prototrophs were selected and screened for sensitivity to methyl methanesulfonate (MMS). The mus-51 coding region of an MMS-sensitive progeny named RSB1.70 was analyzed by Sanger sequencing, found to be mutated, and named mus-51 RIP70. Next, protoperithecia of F2-26 were fertilized with conidia from RSB1.70. This cross produced progeny RZS27.4, RZS27.6, RZS27.10, and RZS27.18. The mus-51 locus in each of these four strains was PCR-amplified with oligonucleotides P777 and P778, and the PCR products were sequenced by Sanger sequencing with oligonucleotides P699, P974, P975, P1001, P1050, and P1051. Sequences were analyzed with BioEdit 7.2.5 (Hall, 1999). The full sequence of mus-51 RIP70 can be obtained from GenBank under accession number KU860571.

Polymerase chain reaction (PCR)

Genomic DNA was isolated from lyophilized mycelia with the IBI Scientific Plant Genomic DNA Mini Kit. PCR was performed with New England Biolabs Phusion High-Fidelity DNA polymerase or MidSci Bullseye Taq DNA polymerase. When restriction endonuclease-mediated digestion of PCR products was needed to differentiate between two products of similar size, 2.5 µl of a completed PCR reaction was digested with a restriction endonuclease in a 25 µl reaction under standard conditions.
The csr-1+ gene deletion assay

A 3718 bp csr-1+ deletion vector was obtained by PCR-amplifying the csr-1Δ::hph locus from strain P8-65 with oligonucleotides P583 and P584. The PCR product was purified with an IBI Scientific Gel/PCR DNA Fragment Extraction Kit, and 500 ng were electroporated into conidia as previously described (Margolin et al., 1997) with selection for hygromycin resistance. Hygromycin-resistant transformants were then screened for resistance to cyclosporin A.

Results and Discussion

The mus-51 RIP70 allele is 84% identical to wild type

Repeat-induced point mutation (RIP) introduces transition mutations into repeated DNA sequences within the nuclei of sexual cells just prior to meiosis (Cambareri et al., 1989; Selker, 1990). Therefore, we used RIP in an attempt to generate a mus-51 null allele at its native location on chromosome IV by first placing a 2170 bp fragment of mus-51+ next to the his-3+ locus on chromosome I. We then put the resulting his-3+::mus-51 2170-carrying transformant (HSS1.21.4) through a sexual cross with strain FGSC 9716. Strains mutated in mus-51 are sensitive to the DNA-damaging agent methyl methanesulfonate (MMS) (Ninomiya et al., 2004). We thus selected RSB1.70, an MMS-sensitive progeny of HSS1.21.4 × FGSC 9716 (data not shown), for further analysis. Sanger sequencing confirmed that RSB1.70 carries a mutated mus-51 allele, which we named mus-51 RIP70.
The mus-51 RIP70 allele contains 341 transition mutations spread over a 2134 bp region of chromosome IV (Figure 1). The mutations begin 211 bp before the start of the mus-51+ coding sequence and end 255 bp before the end of the coding sequence (Figure 1). If we assume that all RIP mutations result from C-to-T transition events, 228 mutations must have originated on the coding strand and 113 on the template strand (Figure 1, red lines). The high number of mutations suggests that mus-51 RIP70 is a null allele. Moreover, the mus-51 RIP70 allele encodes 30 early stop codons and over 100 amino acid substitutions relative to mus-51+.

The mus-51 RIP70 allele increases the efficiency of targeted transgene integration

The efficiency of targeted transgene integration when using mus-51 RIP70 in the genetic background of a transformation host was measured with a csr-1+ gene deletion assay. Deletion of csr-1+ enhances resistance to cyclosporin A (Bardiya and Shiu, 2007), allowing one to identify csr-1Δ genotypes by screening on medium containing the compound. We transformed four strains, RZS27.4 (mus-51+), RZS27.6 (mus-51+), RZS27.10 (mus-51 RIP70), and RZS27.18 (mus-51 RIP70), with a csr-1Δ::hph deletion vector. While nearly all mus-51 RIP70 transformants (99.5%) were resistant to cyclosporin A, only 37.5% of the mus-51+ transformants were resistant (Figure 2). To confirm that cyclosporin A resistance resulted from replacement of the csr-1+ allele by the csr-1Δ::hph deletion vector, PCR was used to examine the csr-1 locus in one cyclosporin A-resistant and one cyclosporin A-susceptible transformant from each transformation host. While all of the cyclosporin A-resistant isolates carried the csr-1Δ::hph allele at the csr-1 locus, none of the susceptible isolates did (Figure 3). These results confirm that the mus-51 RIP70 allele can be used for high-efficiency targeted integration of transgenes in N.
crassa.

The mus-51 RIP70 allele improves the efficiency of transgene integration by homologous recombination. A csr-1+ deletion assay was performed with two mus-51+ strains (RZS27.4 and RZS27.6) and two mus-51 RIP70 strains (RZS27.10 and RZS27.18). Each strain was transformed with a csr-1+ deletion vector. The csr-1Δ::hph deletion vector replaces the entire coding region, as well as upstream regulatory sequences, with an hph selectable marker (Bardiya and Shiu, 2007). Only transformants resulting from replacement of csr-1+ with csr-1Δ::hph should be resistant to cyclosporin A. A total of 96 transformants were isolated, 24 for each transformation host. Conidial suspensions were prepared from each transformant and tested for growth on minimal BDS medium with or without hygromycin and cyclosporin A (note that hygromycin was an unnecessary addition because all transformants had previously been shown to be resistant to hygromycin). Cyclosporin A resistance is indicated by the formation of large, dense colonies with rough borders, while susceptibility is indicated by small, thin colonies with smooth borders. A) 6 of 24 RZS27.4 transformants were resistant. B) 12 of 24 RZS27.6 transformants were resistant. C) 21 of 23 RZS27.10 transformants were resistant (strain 13 did not grow on minimal medium, so it was not included). D) 24 of 24 RZS27.18 transformants were resistant. Key: i) P6-07; ii) P8-65; iii) P6-07 and P8-65 (i.e., a suspension of conidia from both strains). Numbers 1-24 refer to the 24 transformants from each transformation host. Red font indicates resistance to cyclosporin A.
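The strand assignment of RIP transitions described earlier (a C-to-T event on the template strand appears as G-to-A on the coding strand) can be sketched as a simple sequence comparison. The sequences below are short invented examples, not the actual mus-51 alleles:

```python
# Sketch: counting RIP-style transition mutations between a wild-type and
# a RIPed sequence, split by the strand on which the C-to-T event occurred.
# Toy sequences for illustration; both are written as the coding strand.

def rip_transitions(wild, ripped):
    """Return (coding-strand C->T count, template-strand C->T count).

    A template-strand C->T event shows up as G->A on the coding strand.
    """
    assert len(wild) == len(ripped)
    coding = sum(1 for w, r in zip(wild, ripped) if w == "C" and r == "T")
    template = sum(1 for w, r in zip(wild, ripped) if w == "G" and r == "A")
    return coding, template

wild = "ATGCCAGCTGACGGA"
ripped = "ATGTCAACTAACAGA"
print(rip_transitions(wild, ripped))
```

Run over the real 2134 bp alignment, this kind of tally is what yields the 228 coding-strand and 113 template-strand events reported above.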
The restriction endonuclease RsaI can distinguish between mus-51 RIP70 and mus-51+ alleles

One disadvantage of the mus-51 RIP70 allele is the inability to identify the allele by screening for growth on a common antibiotic. To address this issue, we devised a simple PCR assay to distinguish between the mus-51 RIP70 and mus-51+ alleles. In this assay, the mus-51 locus is PCR-amplified with oligonucleotides P974 and P975, and the resulting 478 bp PCR product is digested with the restriction endonuclease RsaI. The digested product is then analyzed by standard agarose gel electrophoresis. RsaI will only digest the mus-51+ PCR product (Figure 4). This procedure can also be combined with the conidial PCR method of Henderson et al. (2005) for increased efficiency.

Figure 1: The mus-51 RIP70 allele contains 341 transition mutations. Positions 2,520,251 through 2,522,899 on chromosome IV of the N. crassa reference genome (version 12) are depicted in the diagram. The 2170 bp fragment, which was inserted next to the his-3+ locus on chromosome I, is delineated by black vertical lines labeled "transgene start >" and "> transgene end". The coding sequence of mus-51+ is marked with the black vertical lines labeled "ATG>" and ">TGA". The short vertical red lines denote the locations of C-to-T transitions on the coding strand (shown above the chromosome) or the template strand (shown below the chromosome).
Figure 4: The mus-51 RIP70 allele can be distinguished from mus-51+ by RsaI digestion of PCR products. A) Partial DNA sequences of mus-51+ and mus-51 RIP70 are shown. Note that oligonucleotides P974 and P975 anneal perfectly to both alleles (red arrows). Although these primers amplify a PCR product of identical length from mus-51+ and mus-51 RIP70, an RsaI recognition site (5' GTAC 3') that exists in mus-51+ but not in mus-51 RIP70 (red rectangle) can be used to distinguish the PCR products. B) Oligonucleotides P974 and P975 were used to PCR-amplify the mus-51 region from the 11 strains described in Figure 3B. The PCR products were digested with RsaI and analyzed by agarose gel electrophoresis. The mus-51+ allele produces two fragments of 369 and 109 bp, but the 109 bp fragment is often too faint to detect without additional gel staining. The mus-51 RIP70 allele produces a single fragment of 478 bp. Lanes are labeled as in Figure 3B.

Table 1. Strains used in this study.
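The fragment sizes expected from the RsaI genotyping digest can be predicted in silico. The sequences below are toy stand-ins with a single GTAC site placed to reproduce the 369 + 109 bp wild-type pattern; they are not the real mus-51 amplicon, and the GT^AC cut position reflects RsaI's blunt-cutting recognition site:

```python
# Sketch: predicting RsaI digestion fragments of the 478 bp genotyping
# PCR product. RsaI recognises 5'-GTAC-3' and cuts bluntly between T and A
# (GT^AC). Toy amplicons for illustration only.

RSA_I = "GTAC"

def rsai_fragments(seq):
    """Fragment lengths after complete RsaI digestion (cut 2 nt into GTAC)."""
    cuts = []
    start = 0
    while (i := seq.find(RSA_I, start)) != -1:
        cuts.append(i + 2)          # GT^AC: cleavage 2 nt into the site
        start = i + 1
    edges = [0] + cuts + [len(seq)]
    return [b - a for a, b in zip(edges, edges[1:])]

# 478 bp toy amplicons: the wild type carries one GTAC site; in the RIP
# allele the site is destroyed by a transition (GTAC -> ATAC).
wt_product = "A" * 367 + "GTAC" + "A" * 107     # 367 + 4 + 107 = 478 nt
rip_product = "A" * 367 + "ATAC" + "A" * 107

print(rsai_fragments(wt_product))    # wild type: cut into two fragments
print(rsai_fragments(rip_product))   # RIP allele: uncut
```

With the site positioned as above, the wild-type product yields 369 bp and 109 bp fragments while the RIP allele stays intact at 478 bp, matching the gel pattern described in Figure 4B.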
The Ability of a Variety of Polymerases to Synthesize Past Site-specific cis-syn, trans-syn-II, (6-4), and Dewar Photoproducts of Thymidylyl-(3′→5′)-thymidine

The role of photoproduct structure, 3′→5′ exonuclease activity, and processivity on polynucleotide synthesis past photoproducts of thymidylyl-(3′→5′)-thymidine was investigated. Both Moloney murine leukemia virus reverse transcriptase and 3′→5′ exonuclease-deficient (exo−) Vent polymerase were blocked by all photoproducts, whereas Taq polymerase could slowly bypass the cis-syn dimer. T7 RNA polymerase was able to bypass all the photoproducts in the order cis-syn > Dewar > (6-4) > trans-syn-II. Klenow fragment could not bypass any of the photoproducts, but an exo− mutant could bypass the cis-syn dimer to a greater extent than the others. Likewise, T7 DNA polymerase, composed of the T7 gene 5 protein and Escherichia coli thioredoxin, was blocked by all the photoproducts, but the exo− mutant Sequenase 2.0 was able to bypass them all in the order cis-syn > Dewar > trans-syn-II > (6-4). No bypass occurred with an exo− gene 5 protein in the absence of the thioredoxin processivity factor. Bypass of the cis-syn and trans-syn-II products by Sequenase 2.0 was essentially non-mutagenic, whereas about 20% dTMP was inserted opposite the 5′-T of the Dewar photoproduct. A mechanism involving a transient abasic site is proposed to account for the preferential incorporation of dAMP opposite the 3′-T of the photoproducts.

Dipyrimidine sites are the major sites of UV-induced photoproducts and mutations (1)(2)(3)(4)(5)(6)(7). The four main classes of photoproducts formed by ultraviolet light at dipyrimidine sites (shown in Fig. 1 for a TpT site) are the cis-syn and trans-syn (trans-syn-I (8) and trans-syn-II (9)) cyclobutane dimers and the (6-4) pyrimidine-pyrimidone photoproducts and their Dewar valence isomers (10-13).
All of these photoproducts have been found to lead to mutations in Escherichia coli under SOS conditions by use of site-specific photoproduct-containing bacteriophage vectors, but the (6-4) and Dewar photoproducts are far more mutagenic than either the cis-syn or trans-syn isomers (14-18). The extent to which a particular photoproduct contributes to UV-induced mutations at a particular site depends not only on its mutagenicity but also on its rate of induction, repair, and DNA synthesis bypass (7,12). At the moment, the relative contribution of individual DNA photoproducts to UV-induced mutations is not known, nor are the detailed mechanisms by which DNA photoproducts are repaired or bypassed. Recently, we have prepared homogeneous 49-mer oligonucleotides containing the four major photoproduct classes of TpT (19) for use as substrates for the necessary in vitro and in vivo mechanistic studies. Herein, we report the use of these 49-mers, and of 72-mers containing a T7 promoter, to study the role of photoproduct structure and of the 3′→5′ exonuclease activity and processivity of polymerases on DNA and RNA synthesis past these photoproducts.

Experimental Procedures

Enzymes, Reagents, and Equipment: The preparation of the photoproduct-containing 49-mers has been reported elsewhere (19). Other oligonucleotides were purchased at a local facility and purified by ion-exchange high performance liquid chromatography. Oligonucleotide concentrations were measured by absorbance at 260 nm using estimated extinction coefficients (20). T4 polynucleotide kinase and exo− Vent polymerase (21) were purchased from New England Biolabs. Taq DNA polymerase, wild-type T7 DNA polymerase, Sequenase Version 1.0 (22), Sequenase 2.0 (Δ28 K118-R145; Ref. 23), Klenow fragment (KF), exo− KF (D355A/E357A; Ref. 24), T7 RNA polymerase, and deoxynucleotide triphosphates were purchased from U.S. Biochemical Corp.
Concentrations of commercial enzymes were calculated from data obtained from the supplier. The D5A/E7A T7 gene 5 protein (25) and E. coli thioredoxin components of T7 DNA polymerase were a generous gift of Isaac Wong and Kenneth Johnson (University of Pennsylvania). Moloney murine leukemia virus reverse transcriptase (MMLV RT) was purchased from Life Technologies, Inc. dNTPs were from Fisher, and NTPs were from Sigma. [γ-32P]ATP (2 µM, 10 µCi/µl) was purchased from Amersham Pharmacia Biotech. Dideoxy sequencing mixes were prepared in Sequenase buffer (40 mM Tris·HCl, 10 mM MgCl2, and 5 mM DTT) with each nucleotide triphosphate at 300 µM, with the eponymous nucleotide triphosphate in a ratio of 1:3 ddNTP to dNTP. Unless otherwise stated, all electrophoresis was conducted on 0.4-mm-thick, 37.5-cm-long, 7 M urea, 1:19 cross-linked, 15% acrylamide gels at 1800 V. DNA fragments were visualized by autoradiography with Kodak XAR-5 film. Densitometry was performed on a Joyce-Loebl Chromoscan 3 or a Molecular Dynamics computing densitometer model 300A. The percentage of a primer-elongated product is computed as the percentage of the total amount of extended products.

Primer Extension by Taq Polymerase-Primer extensions were conducted by incubating 7 nM 15-mer primer annealed to 70 nM 49-mer template with 100 or 200 µM dNTPs and 0.5 units of enzyme in a total volume of 10 µl (67 mM Tris·HCl, pH 8.8 at 25°C, 17 mM (NH4)2SO4, 6.7 mM MgCl2, 1 mM DTT, and 20 µg/ml BSA) for 30 min at 60°C. The polymerase was added last to the prewarmed solution. The reactions were quenched by the addition of 15 µl of 95% formamide.

Primer Extension by KF and exo− KF-Primer extensions were conducted by incubating 7 nM 15-mer primer annealed to 70 nM 49-mer template with 100 nM enzyme (0.8 units of KF, 1.9 units of exo− KF) and 100 µM dNTPs in a total volume of 10 µl (50 mM Tris·HCl, pH 7.5, 10 mM MgCl2, 1 mM DTT, and 100 µg/ml BSA) for 30 min at 37°C.
The reactions were quenched by addition of 15 µl of 95% formamide.

Primer Extension by Wild-type T7, Sequenase 2.0, and D5A/E7A Gene 5 Protein with and without Thioredoxin-Primer extensions were conducted by incubating 7 nM 15-mer primer annealed to 70 nM 49-mer template with 100 µM dNTPs and 100 nM enzyme (0.1 units of wild-type T7, 1.2 units of Sequenase 2.0), preincubated with or without 2 µM thioredoxin, in a total volume of 10 µl (40 mM Tris·HCl, pH 7.5, 50 mM NaCl, 20 mM MgCl2, and 10 mM DTT) for 30 min at 37°C. The reactions were quenched by the addition of 15 µl of 95% formamide.

Sequencing the Sequenase 2.0 Bypass Products-Thirty pmol of 5′-labeled primer was annealed to 20 pmol of template. To each primed template was added 13 units of Sequenase 2.0 with buffer and dNTPs for a total volume of 5 µl (200 µM dNTPs, 40 mM Tris·HCl, pH 7.6, 10 mM MgCl2, and 5 mM DTT). The reaction was incubated at 37°C for 90 min, quenched with 20 µl of 95% formamide, and electrophoresed. The full-length product was excised, eluted, and dialyzed. An estimated 50 fmol or less of each bypass product was annealed to 25 fmol of 5′-end-labeled 16-mer primer, d(AGCTACCATGCCTGCA). Sequenase Version 1.0 (2.6 units) was added, for a final volume of 8 µl (40 mM Tris·HCl, pH 7.6, 10 mM MgCl2, and 5 mM DTT). To each of four tubes containing 5 µl of dideoxy mix was added 2 µl of the annealed primed template. The reactions were incubated for 5 min at 37°C and quenched by the addition of 9 µl of 95% formamide. The reaction mixtures were heat-denatured, electrophoresed, and visualized by autoradiography.

RESULTS

Substrates-The photoproduct-containing 49-mers (Fig. 1) were designed to be suitable for a variety of repair and replication studies, and their preparation and characterization have been reported previously (19). The substrate for transcription was constructed by primer extension of a hybrid between a 66-mer containing a T7 RNA promoter (26) and the photoproduct-containing 49-mers.
DNA and RNA synthesis reactions opposite the photoproducts with well characterized polymerases were principally undertaken to determine the enzymatic properties and conditions that facilitate bypass of DNA photoproducts. For the experiments described herein, the important design feature of the 49-mers is that the photoproducts are centrally located in a deoxyoligonucleotide long enough to serve as a template for primer extension by polymerases. With the exception of the trans-syn-II-containing 49-mer, these photoproduct-containing 49-mers have also been incorporated into M13 vectors and used to obtain photoproduct mutation spectra in E. coli under SOS conditions (18).

Effect of Polymerase and dNTP Concentration on Photoproduct Bypass-The results of the primer extension reactions as a function of polymerase and dNTP concentration are displayed in Figs. 2 and 3. All polymerases were able to fully extend the primers on the undamaged templates even at the low dNTP concentration, except for MMLV RT, which did not fully extend at 1 µM dNTPs. Not surprisingly, then, MMLV RT was almost completely blocked by all the photoproducts even at 100 µM dNTPs, stopping primarily one nucleotide prior to the 3′-T of the photoproducts, and terminating primarily three nucleotides prior to the photoproducts at low dNTPs (Fig. 2A). The exo− KF stopped primarily one nucleotide prior to and opposite the 3′-T of all the photoproducts at low dNTPs, and primarily opposite the 3′-T at high dNTPs (Fig. 2B). In this experiment, exo− Klenow was also able to bypass the cis-syn dimer in 2, 8, and 19% yields at 1, 10, and 100 µM dNTPs, respectively. The exo− Vent could not bypass any of the lesions, and stopped primarily one nucleotide prior to all the photoproducts at low dNTPs, but primarily opposite the 3′-T at high dNTPs (Fig. 2C). Interestingly, Vent was able to advance the primer opposite the 5′-T for the (6-4) and Dewar products, but not for the cis-syn and trans-syn-II products.
Previous unpublished work in this laboratory found that Taq DNA polymerase was able to bypass the cis-syn dimer, but not the trans-syn-I dimer, at 60°C (27). In this study, Taq polymerase was also found to bypass the cis-syn dimer in about 10% yield at 60°C in the presence of either 100 or 200 µM dNTPs, but could not bypass the trans-syn-II, (6-4), or Dewar products, even at 200 µM dNTPs (data not shown). In all cases, synthesis stopped primarily opposite the 3′-T of all of the photoproducts (>53%), with significant amounts of termination one nucleotide prior to and opposite the 5′-T of the photoproducts (7-15%). Sequenase 2.0 could bypass all the lesions at high dNTP concentrations, and stopped primarily opposite the 3′-T of the photoproducts at 10 µM dNTPs (Fig. 3). Extension opposite the undamaged template was too fast to measure, but the pseudo-first-order rate constant was at least 15 min⁻¹, and should be greater than 500 min⁻¹ based on published kinetic data (25). Of all the photoproducts, the cis-syn dimer was bypassed the fastest by Sequenase 2.0, with a pseudo-first-order rate constant of 0.49 min⁻¹. The Dewar product was bypassed the second fastest at 0.071 min⁻¹, and the trans-syn-II dimer slightly more slowly at 0.040 min⁻¹. The (6-4) product was bypassed the slowest of all, with a rate constant of 0.0063 min⁻¹.

Effect of Exonuclease Activity and Processivity on Bypass-The effect of the 3′ → 5′ exonuclease activity was determined by comparing the action of wild-type KF and T7 polymerase with the corresponding exonuclease-deficient mutants on the photoproduct-containing templates (Figs. 4 and 5). There was a dramatic increase in the ability of both polymerases to extend opposite and past the lesions in the absence of exonucleolytic proofreading ability.
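The pseudo-first-order rate constants reported above for Sequenase 2.0 imply single-exponential time courses for bypass. A minimal sketch (plain Python; the 30-min time point is an arbitrary illustration, not a condition taken from the paper) converts each rate constant into the expected fraction of primer-templates extended past the lesion:

```python
import math

# Pseudo-first-order bypass rate constants (min^-1) for Sequenase 2.0,
# as reported in the text for each photoproduct of TpT.
RATE_CONSTANTS = {
    "cis-syn": 0.49,
    "Dewar": 0.071,
    "trans-syn-II": 0.040,
    "(6-4)": 0.0063,
}

def fraction_bypassed(k: float, t_min: float) -> float:
    """Fraction of primer-templates bypassed after t_min minutes,
    assuming a single-exponential (pseudo-first-order) process."""
    return 1.0 - math.exp(-k * t_min)

# Example: extent of bypass expected after a 30-min incubation.
for lesion, k in RATE_CONSTANTS.items():
    print(f"{lesion}: {fraction_bypassed(k, 30):.3f}")
```

On this simple model the cis-syn dimer is essentially fully bypassed within 30 min, while the (6-4) product reaches only about 17% bypass in the same time, consistent with the roughly 77-fold difference in rate constants.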
Although wild-type KF led to <3% bypass of any of the photoproducts at 100 µM dNTPs, exo− KF led to 47% bypass of the cis-syn dimer, and smaller, but significant, amounts (2-4%) of bypass of the other photoproducts under identical conditions (Fig. 4). Likewise, wild-type T7 DNA polymerase was unable to bypass any of the photoproducts at 100 µM dNTPs, but Sequenase 2.0 led to >23% bypass of all the photoproducts (Fig. 5). The effect of processivity on bypass was determined by comparing the action of the exonuclease-deficient T7 polymerase with and without its processivity cofactor (Fig. 6). T7 polymerase is composed of a 1:1 complex of the T7 gene 5 protein and E. coli thioredoxin (28). Without thioredoxin, a D5A/E7A exonuclease-deficient T7 gene 5 protein was unable to bypass any of the photoproducts and terminated primarily prior to the 3′-T of the photoproducts. Upon the addition of thioredoxin, however, a significant fraction of the cis-syn and Dewar products were bypassed (>23%) and termination occurred primarily opposite the 3′-T of the photoproducts (Fig. 6). Unlike what was observed for the Δ28 exonuclease-deficient Sequenase 2.0, only about 3% of the trans-syn-II and (6-4) products were bypassed by the D5A/E7A exonuclease-deficient mutant in the presence of excess thioredoxin.

Sequence Determination of the Sequenase 2.0 Bypass Products-With the exception of the (6-4) product, the bypass products of the photoproduct-containing 49-mers were obtained in sufficient quantity to be sequenced several times by the dideoxy method (Fig. 7). The bypass products of the undamaged, cis-syn dimer, and trans-syn-II dimer templates did not appear to have any mutations above the background level of about 5% that is produced at all sites during sequencing.
Densitometric analysis of the dideoxy sequencing bands of the bypass product of the Dewar photoproduct, in comparison to that of the undamaged template, indicated that about 20% thymidine had been introduced opposite the 5′-T of the photoproduct in place of deoxyadenosine. To confirm that T could indeed be incorporated by the polymerase opposite the 5′-T of the Dewar product, and to investigate the selectivity of nucleotide incorporation opposite the (6-4) product, the rates of dTMP and dAMP incorporation opposite both products were determined (data not shown). Extension of a 15-nucleotide primer terminating in A opposite the 3′-T of the (6-4) product with 100 µM dATP occurred 25 times faster than with 100 µM dTTP, compared with 7.4 times faster for the Dewar product. Extension of a 16-nucleotide primer terminating in T opposite the 5′-T of the (6-4) product was 26 times faster than that terminating in A, compared with 40 times faster for the Dewar product, demonstrating that the mutagenic products could be further elongated. Because the polymerase is exonuclease-deficient, and both mutagenic and non-mutagenic products can be readily elongated, the selectivity of nucleotide incorporation opposite the 5′-T in the bypass product is solely governed by the selectivity of nucleotide incorporation in the elongation step opposite the 5′-T. Thus, one might expect that dTMP is also incorporated opposite the 5′-T of the (6-4) product in the bypass product, but possibly at a lower frequency than for the Dewar product.

Transcription Past the Photoproducts-RNA synthesis opposite the dimers was also briefly investigated with T7 RNA polymerase and the 72-mer duplex containing the T7 RNA promoter. At 800 µM NTPs, all the photoproducts could be bypassed, with almost the same relative order as observed for the exo− T7 DNA polymerase Sequenase 2.0, except that the trans-syn-II isomer was bypassed the slowest (Fig. 8).
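As an illustrative calculation (an assumption of ours, not a model from the paper), if dATP and dTTP compete at equal concentrations and the insertion step opposite the 5′-T is the sole source of selectivity, the relative insertion rates quoted above predict the dTMP misinsertion frequencies below; the prediction of about 12% for the Dewar product is in rough accord with the roughly 20% observed by sequencing:

```python
def misinsertion_fraction(rate_ratio: float) -> float:
    """Expected fraction of incorrect (dTMP) insertions opposite the 5'-T,
    given the correct:incorrect (dATP:dTTP) insertion rate ratio, with
    both dNTPs competing at equal concentration."""
    return 1.0 / (1.0 + rate_ratio)

# Rate ratios (dATP vs. dTTP insertion at 100 uM each) from the text.
print(f"(6-4): {misinsertion_fraction(25):.3f}")   # ~4% T expected
print(f"Dewar: {misinsertion_fraction(7.4):.3f}")  # ~12% T expected
```

The same arithmetic predicts a several-fold lower T misinsertion frequency for the (6-4) product than for the Dewar product, as the text anticipates.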
In 1 h, the ratio of bypass products to termination products was 4.9, 0.4, 0.72, and 2 for the cis-syn, trans-syn-II, (6-4), and Dewar photoproducts, respectively. Arrest was found to occur at multiple sites surrounding the photoproduct site, but could not be accurately assigned, though it does appear that the T7 RNA polymerase could advance one nucleotide further opposite the cis-syn photoproduct than opposite any of the other photoproducts. In addition to the predominant full-length products, small amounts of +1 and −1 full-length product were observed, which have also been observed by others (29), as well as some nonspecific longer products that may be self-encoded run-offs (30).

DISCUSSION

One of our primary goals was to determine how differences in polymerase and lesion structure and properties would affect extension opposite and past DNA photoproducts. A second was to find a DNA polymerase that could bypass all the lesions with high enough efficiency to allow the effects of 3′ → 5′ exonuclease activity and processivity on the bypass reactions to be examined, and to allow isolation and sequencing of the bypass products and, eventually, detailed study of the bypass mechanisms. Polymerases deficient in 3′ → 5′ exonuclease activity were selected for study first, because 3′ → 5′ exonucleolytic cleavage is known to compete with elongation opposite and past DNA damage and to slow down or completely inhibit DNA damage bypass (31-33). Primer extension opposite the photoproducts was examined as a function of dNTP concentration, as the rate of bypass was expected to increase with increasing dNTP concentration based on early studies on heterogeneous templates (31) and later studies with site-specific cis-syn dimers (34). Of the six 3′ → 5′ exonuclease-deficient polymerases studied (MMLV RT, Taq, exo− Vent, exo− KF, Sequenase 2.0, and T7 RNA polymerase), only four were able to bypass at least one of the photoproducts, and only two were able to bypass all four photoproducts.
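The bypass-to-termination product ratios measured above for T7 RNA polymerase at 1 h convert directly to percent bypass; a small sketch using the ratios from the text:

```python
def percent_bypass(ratio: float) -> float:
    """Convert a bypass:termination product ratio into percent bypass,
    i.e. bypass / (bypass + termination) expressed as a percentage."""
    return 100.0 * ratio / (1.0 + ratio)

# Bypass:termination ratios for T7 RNA polymerase at 1 h, from the text.
ratios = {"cis-syn": 4.9, "trans-syn-II": 0.4, "(6-4)": 0.72, "Dewar": 2.0}
for lesion, r in ratios.items():
    print(f"{lesion}: {percent_bypass(r):.0f}%")
```

Expressed this way, the cis-syn dimer is transcribed past about 83% of the time, versus roughly 29% for the trans-syn-II isomer, the slowest of the four.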
Both T7 RNA polymerase and the exo− T7 DNA polymerase Sequenase 2.0 were able to bypass all the lesions (Figs. 3 and 8), whereas exo− KF and Taq were only able to significantly bypass the cis-syn dimer. When slightly different conditions were used, including a longer reaction time and the addition of 100 µg/ml BSA, exo− KF was able to synthesize past a small fraction of the trans-syn-II, (6-4), and Dewar products (Fig. 4). The ability of Sequenase 2.0 to bypass all four products is not unexpected, as T7 DNA polymerases have been reported to bypass a wide variety of bulky adducts and intrastrand cross-linked species. The exo− T7 polymerases have been reported to bypass the bulky 2-aminofluorene (AF) adduct of guanine (35, 36), and the bulky 7-bromomethylbenz[a]anthracene (37) and styrene oxide (38) adducts of dA. The exo− T7 polymerase also bypasses photochemically cross-linked TA sites (39), and cis-diamminedichloroplatinum(II) cross-linked purines at GG, AG, and GCG sites (40), though no bypass was observed in a different sequence context for the GG and AG sites (41). On the other hand, exo− T7 polymerase has not been found to bypass acetylaminofluorene (AAF) (32, 36) or benzo[a]pyrene diol epoxide (BPDE) (42) adducts. KF has also been found to bypass a variety of bulky lesions, including the AF adduct of guanine (35, 43), the bulkier AAF adduct of guanine (43), and C4′-modified bases (44). It can also bypass model estrogen DNA adducts (45), styrene oxide adducts (38), and certain stereoisomers of BPDE adducts of G (46). In contrast to what we observe for the dipyrimidine photoproducts, the TA* photoproduct is more easily bypassed by exo− KF than by Sequenase 2.0 (39). A similar trend was observed for a cis-platinum adduct of a GG site in one sequence context (41) but not in another (40).
The ability of the thermostable Taq polymerase to bypass some forms of DNA damage is not unprecedented, as it has recently been reported to bypass the cis-syn thymine dimer and a (6-4) product to a small extent (47), as well as 7,8-dihydro-8-oxoadenine, a lesion that causes little distortion to the DNA duplex (48). Because exo− Vent did not bypass any of the photoproducts, it may be a better choice for quantifying these products in genes by methods based on quantifying full-length polymerase chain reaction products (49, 50) or polymerase chain reaction termination products (47, 51, 52). Our finding that all the photoproducts studied, which have often been classified as bulky adducts, can be bypassed by T7 RNA polymerase contrasts with results observed for prokaryotic and eukaryotic RNA polymerases. Cyclobutane dimers have been shown to halt Escherichia coli RNA polymerase both in vitro (53) and in vivo (54). In transcription-coupled repair, RNA polymerase arrest initiates a series of events that involves excision of a small section of DNA containing the damage, followed by new gap-filling DNA synthesis (Ref. 55; reviewed in Ref. 56). Similarly, eukaryotic RNA polymerase II is also fully inhibited by UV-induced adducts (57), and proceeds to initiate what is thought to be a similar series of events as its prokaryotic counterpart. On the other hand, T7 RNA polymerase is often found to be able to transcribe past many types of DNA damage. For example, modified bases such as 8-oxoguanine and an abasic site analog do not block transcription, whereas AF and AAF adducts show an increasing ability to block transcription (58). Cytosine arabinoside (29), as well as single-nucleotide gaps (59-61), also proves unable to arrest T7 RNA polymerase, though these lesions may result in miscoding by the polymerase. The bulky BPDE DNA adducts inhibit transcription by T7 RNA polymerase to varying degrees, which depend on the stereochemistry of the adduct (62).
Polymerases That Were Incapable of Bypassing the Photoproducts-MMLV RT led to very little, if any, bypass of any of the photoproducts, stopping almost exclusively one nucleotide prior to all four photoproducts at 100 µM dNTP concentrations, which may make it useful for mapping the location of photoproducts in irradiated DNA. Although we are not aware of any reports of MMLV RT used in studies of damage bypass, the reverse transcriptase from avian myeloblastosis virus has been found to terminate at DNA photoproducts, but unlike MMLV RT, termination appeared to occur opposite the 3′-T of the dimer (63). Avian myeloblastosis virus RT has also been found to bypass cis-thymine glycol lesions (64) and abasic sites (65). Human immunodeficiency virus RT has been shown to be blocked by all but one of the six stereoisomeric BPDE adducts of G (66). The exo− Vent was also unable to bypass any of the lesions, but stopped at different positions depending on the lesion and dNTP concentration (Fig. 2C). At 100 µM dNTPs, exo− Vent stalled opposite the 3′-base (first base) of the cis-syn and trans-syn-II dimers, but partly extended opposite the 5′-base (second base) of the (6-4) and Dewar products. It is difficult to understand why incorporation opposite the 5′-base of the (6-4) and Dewar products is easier than for the cis-syn and trans-syn-II dimers, and we are unaware of any other reports using exo− Vent on damaged templates with which to compare our results. Effect of Photoproduct Structure on Bypass-Of all the photoproducts, the cis-syn dimer was bypassed most easily by any given polymerase. Sequenase 2.0 bypassed the cis-syn dimer about 7 times faster than the Dewar product, about 11 times faster than the trans-syn-II dimer, and about 77 times faster than the (6-4) product. We interpret this as a consequence of the relatively close resemblance of the cis-syn dimer structure to an undamaged dithymine site (Fig.
1), and of the relatively little distortion that it causes to the normal DNA duplex (67-69). The correspondingly slow bypass of the (6-4) product of TT can likewise be attributed to the fact that it has been found to greatly distort DNA structure and not to base pair with an opposed A, based on an NMR structure (69), or to do so only weakly, based on an unrestrained molecular dynamics calculation (70). Though no structure for the trans-syn-II dimer in a duplex exists, its lower rate of bypass can be attributed to its 3′-T, which is locked into a syn orientation and places the methyl group in the base-pairing region, thereby sterically blocking the addition of nucleotides to the primer terminus (Fig. 1). The finding that the Dewar product is more rapidly bypassed than the (6-4) product was predicted previously on the basis of molecular modeling studies of the dinucleotide products, which indicated that the Dewar product could be fit to a B-DNA structure better than could the (6-4) product (71). Photoisomerization of the pyrimidone ring of the (6-4) product converts it from an extended, flat, planar ring to a more compact, tent-like structure. Effect of Exonuclease Activity and Processivity on Bypass-Two properties that have been suggested as important in the ability of a polymerase to bypass a lesion are the presence and activity of a proofreading (3′ → 5′) exonuclease (31, 64) and the processivity of the polymerase (72). KF and T7 DNA polymerases were chosen to study the effects of exonuclease activity on bypass, because both enzymes could be obtained in exonuclease-proficient and -deficient versions, and both have been the subject of a number of detailed kinetic studies (25, 73, 74). The exo− Klenow was able to extend one nucleotide further than wild-type Klenow on all the lesions before stalling, and was able to bypass the cis-syn dimer in 47% yield, compared with 3% for the wild-type Klenow under otherwise identical conditions (Fig. 4).
Likewise, no bypass was seen with wild-type T7 polymerase, but Sequenase 2.0 gave substantial amounts of bypass of all lesions (Fig. 5). These results are in accord with the observation that synthesis opposite irradiated templates was increased when exo− KF and T7 DNA polymerase were used in place of the wild-type enzymes (75). Recently, elimination of the 3′ → 5′ exonuclease activity of KF has been shown to greatly accelerate bypass of an abasic site analog (33). To examine the effects of processivity on bypass, we again made use of the T7 DNA polymerase system, taking advantage of the 1000-fold increase in processivity conferred on the T7 gene 5 polymerase subunit by thioredoxin (28). Thioredoxin enhances the processivity of the gene 5 protein by increasing the lifetime of the polymerase-DNA complex (76). The difference in bypass ability of an exo− T7 gene 5 protein (25) in the presence or absence of thioredoxin (Fig. 6) was similar to the difference seen in the absence or presence of the exonuclease (Fig. 5). In the presence of thioredoxin, the exo− gene 5 protein was able to bypass all the photoproducts, stalling opposite the first and second bases of the photoproducts. In the absence of thioredoxin, however, the gene 5 protein was not able to bypass any of the photoproducts, and stalled one base prior to every photoproduct site. These results are similar to those that we observed for the bypass of cis-syn and trans-syn-I dimers by calf thymus polymerase δ (pol δ) in the presence and absence of its processivity factor, proliferating cell nuclear antigen (PCNA) (77). Addition of PCNA has also been shown to greatly increase the bypass of abasic sites by pol δ (78).
The A-rule Revisited: Origin of the Incorporation of A Opposite the 3′-T of Photoproducts by Sequenase 2.0-One general hypothesis for the preferential incorporation of A opposite DNA damage is the A-rule (79, 80), which proposes that preferential dATP binding by the polymerase governs nucleotide incorporation when it encounters a non-instructional lesion, typified by an abasic site. Because the mutation spectra of UV-irradiated DNA could be explained by incorporation of A, dipyrimidine photoproducts were originally classified as non-instructional. Recently this classification has been called into question, and hence the use of the A-rule to explain mutations caused by DNA photoproducts. Lawrence and co-workers have argued that the cis-syn dimer must be an instructive lesion by virtue of its high coding specificity in E. coli relative to abasic sites (81), and its ability to engage in near-normal hydrogen bonding to adenine (67-69). Furthermore, G is incorporated opposite the C in a cis-syn dimer of TC in E. coli (82) and opposite the 3′-base in the (6-4) products of both TT and TC (83). In contrast, Sequenase 2.0 puts A opposite the 3′-T of all the photoproducts, irrespective of structure. Why then would one polymerase add G opposite the 3′-T of the (6-4) product and another polymerase add A? The argument that DNA photoproducts are instructional and not subject to the A-rule was based on observations in E. coli under SOS conditions and may in fact not apply to all polymerases. A possible explanation for the incorporation of A opposite the 3′-T of all the photoproducts by Sequenase 2.0 comes from examination of the recent crystal structure of a complex between an exo− T7 polymerase and a template-primer in the presence of a ddNTP. In this structure, the template is forced to take a sharp 90° turn at the polymerase active site following the nucleotide opposite which the dNTP is incorporated (84).
Because all dipyrimidine photoproducts covalently link two nucleotides together, the 3′-pyrimidine of a photoproduct cannot be made to occupy the site opposite which the dNTP resides, and is instead forced out of the active site with the rest of the template (Fig. 9). This would create an empty site, much like an abasic site, which would therefore be non-instructional and lead to the preferential incorporation of A. Bending the template at the active site may be an important and general mechanism for preventing or greatly attenuating translesion synthesis at the sites of intrastrand cross-linked nucleotides, such as dipyrimidine photoproducts and cis-platinum adducts, and has been observed in human pol β (85) and Bacillus polymerase I (86). Once the A is incorporated, the template can move by one nucleotide, and the entire photoproduct can now be bound in the active site. Incorporation of a nucleotide opposite the 5′-pyrimidine will then be mediated by the instructional properties of the 5′-pyrimidine and the fit of the photoproduct in the active site. Origin of the Preference for Incorporation of A Opposite the 5′-T of the Photoproducts-In considering the rate and selectivity of nucleotide incorporation opposite the 5′-pyrimidine, it is useful to examine the results of the primer extension reactions by Sequenase 2.0 opposite the 5′-T of the photoproducts in light of the mechanism by which T7 DNA polymerase maintains high fidelity on undamaged templates (74). The rate-determining step during processive synthesis is a conformational change after dNTP binding and before bond formation. In this induced-fit model, it is speculated that the conformational change selects the correct dNTP by recognizing its correct Watson-Crick geometry. Likewise, 3′-end mismatches drastically slow the conformational change necessary to incorporate the next nucleotide.
By analogy, dNTP incorporation opposite and past lesions will be governed by how closely the nascent base pair resembles the Watson-Crick geometry of normal B-DNA. This model argues against the importance of hydrogen bonding, and for the importance of resemblance to Watson-Crick geometry, as providing the instruction to the polymerase. Indeed, the selection of correct geometry has been suggested by several groups as the primary means by which polymerases maintain high fidelity (74, 87-91). This is also borne out by recent crystal structures of a Bacillus DNA polymerase I complexed to template-primers, and of T7 DNA polymerase and pol β complexed to template-primers and ddNTPs (84, 85). For the dinucleotide photoproducts studied, all have a 5′-T that retains the H-bonding properties of T, though the conformation of the 5′-T depends on the particular photoproduct. Thus, the preferential incorporation of A opposite the 5′-T of all these products is likely to be the result of the ability to form a Watson-Crick-like base pair. The differing rates of bypass of the photoproducts are probably the result of deviations of the geometry from an ideal Watson-Crick base pair, caused by distortions induced by the photoproducts, that slow phosphodiester bond formation during the rate-determining step in bypass. For all the photoproducts, the major termination band corresponds to termination opposite the 3′-T of the dimer, suggesting that the slowest or rate-determining step in bypass involves the extension step opposite the 5′-T (step E2 of Fig. 9). Biological Implications-Of the four major dipyrimidine photoproducts of TT, the cis-syn dimer was found to be the most easily bypassed by the polymerases studied, which correlates with the fact that it is the least disruptive of DNA structure (69).
The cis-syn thymine dimers have also been found to be more easily bypassed than trans-syn-I dimers by pol δ/PCNA (77), and a vector containing a cis-syn thymine dimer was found to be more efficiently replicated by lagging strand synthesis than one containing a (6-4) product in cell-free HeLa extracts (92). The greater efficiency of cis-syn dimer bypass would support the notion that cis-syn dimers have the highest mutagenic potential of the four major photoproducts (12), as they are also the most slowly repaired by excision repair systems (93-95). Although there is no direct evidence at this point, it may be that cis-syn dimers are also the most easily bypassed by transcription systems, and therefore also the least readily repaired by transcription-coupled repair. It is known, however, that cis-syn dimers at TT sites are not very mutagenic when bypassed by KF in vitro (34, 96), in E. coli under SOS conditions, or in yeast (14, 97). Likewise, bypass of a cis-syn thymine dimer in an SV40 vector by HeLa cell-free extracts appears to be non-mutagenic (92). On the other hand, it has been shown that the deamination products of C-containing cis-syn dimers are highly mutagenic, almost exclusively causing C → T mutations, the major mutation induced by UV light (15, 98-100). Thus, cis-syn dimers have four features (7) that make them prime candidates as the principal products involved in mutagenesis at dipyrimidine sites: 1) they are the major photoproducts induced by UV light; 2) they are the least rapidly repaired of the dipyrimidine photoproducts; 3) they are the most easily bypassed; and 4) they can be highly mutagenic. The observation that the Dewar isomer of the (6-4) product is more easily bypassed than the (6-4) product by Sequenase 2.0, as previously predicted based on its structure (71), would also confer a higher mutagenic potential on this product than on its (6-4) isomer.

FIG. 9. Proposed model for elongation opposite dipyrimidine photoproducts by T7 DNA polymerase. A, model for elongation opposite undamaged DNA, based on the crystal structure of a primer-template and ddNTP complex with T7 DNA polymerase, which reveals that the template is forced to make a right-angle turn after the catalytic site for primer extension. B, model for elongation opposite a photoproduct between the two Ts (denoted by an equal sign). Each successive step in elongation opposite a photoproduct of TT is indicated by En. Because the bases in dipyrimidine photoproducts are covalently linked together, the 3′-nucleotide of these photoproducts cannot be accommodated in the active site during primer elongation opposite this site.
Establishment of Dose Reference Levels for Nuclear Medicine in Sudan

In this study, a national survey to establish Nuclear Medicine (NM) Dose Reference Levels (DRLs) for adult patients was carried out. The Administered Activities (AAs) (MBq) were collected from six nuclear medicine departments. Factors influencing image quality were also observed. The established Sudan national DRLs represent the AA value corresponding to the 75th percentile of the AA frequency distribution. Generally, the Sudan national DRLs and average AAs are comparable with those published in the international literature. All Sudanese DRL values were found to lie within the international range. The Sudanese DRLs are higher than the ARSAC values, except for the MIBI radiopharmaceutical used in both parathyroid and myocardial perfusion scans and for 99mTc-DTPA used in dynamic renal studies, for which the DRL values were lower. Compared with the UNSCEAR 2008 data, the Sudanese average administered activity (MBq) for the bone scan falls within the average values, while it is lower for all other scans except the parathyroid scan, for which the average administered activity is more than double. Compared with BSS 1996, the average administered activities varied in both directions. There may be potential for reducing the higher AA values, in co-operation with nuclear medicine staff.

How to cite this paper: Ali, W.M., Elawad, R.M. and Ibrahim, M.A.A. (2016) Establishment of Dose Reference Levels for Nuclear Medicine in Sudan.

Introduction

Diagnostic Reference Levels (DRLs) have been introduced by the International Commission on Radiological Protection (ICRP publications 60 [1] and 37 [2]) and by European Directive 97/43/Euratom [3] to assist the optimization of radiological investigations. The objective of DRLs is to help avoid radiation dose to the patient that does not contribute to the clinical purpose of a medical imaging task. This is accomplished by comparison between the numerical values of the DRLs (derived from relevant regional, national, or local data) and the mean or other appropriate value observed in practice for a suitable reference group of patients or a suitable reference phantom. All DRLs have been given in terms of Administered Activity (AA) in megabecquerels (MBq). There is a large variation between the DRLs given by different countries. DRLs in NM are based on the AAs used for normal-sized patients (typically 70 ± 15 kg). The concept of DRLs is not based on the 75th percentile alone but on the AA necessary for good image quality during a standard procedure. Committee 3 of the ICRP encourages authorized bodies to set DRLs that best meet their specific needs and that are consistent for the regional, national, or local area to which they apply [4]. In nuclear medicine, the effective dose is directly proportional to the AA. Therefore, it is highly important to give guidance on dosage and the resulting effective dose, especially for pediatric patients [5].

Method

The data for this study were collected by completing a checklist concerning the administered activities (AAs) (MBq) given to standard-sized adult patients (i.e., 70 ± 10 kg) for the standard procedure necessary to obtain the optimum diagnostic information, as recommended by a previous study [6]. The data were collected from six nuclear medicine departments (100% of the nuclear medicine departments in Sudan); i.e.,
this study covers all nuclear medicine activities in Sudan without exclusion of any unit or department. Administered-activity frequency distributions were obtained for the nuclear medicine diagnostic scans presented in Table 1 and Table 2 and in Figure 1, with the values compared to ARSAC [7], UNSCEAR 2008 A [8], and BSS 1996 B [9]. For each patient, the dose was calculated according to the patient's weight using the formula: patient dose for the examination = (standard dose of the examination × patient weight)/standard weight (70 kg); this is a faster method for obtaining the patient dose in routine work than the use of the Body Mass Index (BMI). The Sudan National DRLs were established such that each DRL represents the administered-activity value corresponding to the 75th percentile of the distribution. The average AAs (AAAs) and the relevant ranges were also calculated. A = typical AAs. B = guidance levels (maximum usual activities). Result The established Sudan DRLs (i.e. national) are presented in Table 1 and Figure 2. The Sudanese AAAs (i.e. national) are presented in Table 2 and Figure 3. Factors such as poor instrument performance and the procedures followed by staff, which influence image quality and may result in higher AAs, were also investigated during the survey and found satisfactory. Figure 1 presents the AAAs for one example of a nuclear medicine scan commonly performed in Sudan, 99mTc-DTPA, used for evaluation of renal system function. The AAAs for DTPA range from 188.1 to 223.9 MBq, with no matching values between the departments, because the AAAs depend on the department protocol and the average number of patients scanned per unit time.
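The two calculations in the Method section — the weight-corrected administered activity and the 75th-percentile DRL — can be sketched as follows. This is a minimal illustration, not the authors' code; the per-patient AA list is hypothetical except for its endpoints, which match the 188.1–223.9 MBq DTPA range reported above.

```python
import statistics

def weight_corrected_activity(standard_activity_mbq, patient_weight_kg, standard_weight_kg=70.0):
    """Scale the standard administered activity (AA) to the patient's weight,
    per the survey's formula: patient dose = (standard dose x patient weight) / 70 kg."""
    return standard_activity_mbq * patient_weight_kg / standard_weight_kg

def drl_75th_percentile(administered_activities_mbq):
    """DRL = the AA value at the 75th percentile of the observed AA distribution
    (linear interpolation; the 'inclusive' quartile definition)."""
    q1, q2, q3 = statistics.quantiles(administered_activities_mbq, n=4, method="inclusive")
    return q3

# Hypothetical per-patient AAs (MBq) for one scan type; only the endpoints
# (188.1 and 223.9 MBq) come from the paper's reported DTPA range.
aas = [188.1, 195.0, 201.4, 210.2, 215.7, 223.9]
drl = drl_75th_percentile(aas)       # DRL for this (illustrative) distribution
average_aa = statistics.mean(aas)    # the AAA reported alongside each DRL
```

The weight scaling is linear, so a 70 kg patient receives exactly the standard activity; the DRL then summarizes the upper quartile of what was actually administered across patients.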
Sudan DRLs All Sudanese DRL values were found to lie within the international range [5], as shown in Table 1: the Sudan DRLs for the bone scan, thyroid scan, static renal scan, dynamic renal scan, parathyroid scan, and myocardial scan are 777, 185, 173.9, 206.5, 555 and 740 MBq respectively, compared to international ranges of (500 - 1110), (75 - 222), (70 - 183), (150 - 540), (400 - 900), and (300 - 1480) MBq respectively. The Sudanese DRLs are higher than the ARSAC values [7], except for the MIBI pharmaceutical used in both parathyroid and myocardial perfusion scans and for 99mTc-DTPA used in dynamic renal scan studies, for which the DRL values were lower. In contrast to ARSAC, nuclear medicine centres and activities in Sudan were only recently introduced, and for socio-economic reasons the dose was sometimes not optimized even when the final image was diagnosable. Sudan AAAs Compared with UNSCEAR 2008 data [8], the Sudanese average dose (MBq) for the bone scan falls within the average values, while it is lower for all other scans except the parathyroid scan, for which the AAAs increase by more than a factor of two (555 MBq compared to 200 MBq). The technologists used a high dose for optimum detection of pathology in the small-sized parathyroid gland because of the lack, in some centres, of the high-resolution camera that must be used for imaging small organs and glands such as the parathyroid; as a DRL, however, the parathyroid value is still within the acceptable international range [5] (400 - 900 MBq) and is also still lower than that of ARSAC [7] (600 MBq). In routine work in the nuclear medicine departments where parathyroid scans are carried out, the technologists increase the dose by a factor of 1.5 to 2 over the weight-based dose to provide optimum image quality; experiments on these values were carried out earlier with feedback from nuclear medicine specialists about the influence of increasing the radiation dose
on the image quality. When compared to BSS 1996 [9], the AAAs showed variation, both increased and decreased. For the establishment of DRLs the 75th-percentile method is used, while for future re-evaluation an "optimum value" is recommended for use in NM DRLs instead of the 75th percentile [10]. Conclusions In some cases the Sudan DRLs and AAAs appear lower than the values found in the literature, while in other cases they are higher. Meeting the DRLs does not automatically mean that good practice is performed [8]. There is a minimum activity for each radiopharmaceutical, and a baseline activity that is multiplied by a factor given in tables according to the patient's weight [5]. The Sudan DRLs for nuclear medicine diagnostic studies are intended as a national guideline and should not be exceeded except for individual patients who are over the standard weight. Figure 1. AAAs frequency distribution for the dynamic renal scan with 99mTc-DTPA (AAAs = average administered activities). Figure 2. The calculated Sudan DRLs for nuclear medicine examinations. Figure 3. The calculated Sudan AAAs for nuclear medicine examinations. Table 2. Sudan AAAs (values are A in MBq, for adults).
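The claim in the "Sudan DRLs" section — that every national DRL falls within the quoted international range — can be checked mechanically. The sketch below is not part of the original survey; it simply hard-codes the DRL values and ranges transcribed from the text.

```python
# Sudan national DRLs (MBq), transcribed from the paper's "Sudan DRLs" section.
sudan_drls_mbq = {
    "Bone scan": 777.0,
    "Thyroid scan": 185.0,
    "Static renal scan": 173.9,
    "Dynamic renal scan": 206.5,
    "Parathyroid scan": 555.0,
    "Myocardial scan": 740.0,
}

# International ranges (MBq) quoted alongside them, as (low, high) pairs.
international_range_mbq = {
    "Bone scan": (500.0, 1110.0),
    "Thyroid scan": (75.0, 222.0),
    "Static renal scan": (70.0, 183.0),
    "Dynamic renal scan": (150.0, 540.0),
    "Parathyroid scan": (400.0, 900.0),
    "Myocardial scan": (300.0, 1480.0),
}

# True for a scan when its DRL lies within the quoted international range.
within = {
    scan: international_range_mbq[scan][0] <= drl <= international_range_mbq[scan][1]
    for scan, drl in sudan_drls_mbq.items()
}
```

With the values as quoted, every entry of `within` is True, matching the paper's statement; the parathyroid DRL (555 MBq) sits inside its 400–900 MBq range despite the elevated average activities discussed above.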
v3-fos-license
2024-07-04T06:17:41.062Z
2024-07-02T00:00:00.000
270923028
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00429-024-02825-0.pdf", "pdf_hash": "e54adc2ee46b490e093adadfc8be7035bd217edf", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1735", "s2fieldsofstudy": [ "Biology" ], "sha1": "b04ba974f1ba913676734b8b9166b1d6eed63fd8", "year": 2024 }
pes2o/s2orc
Pioneers of cortical cytoarchitectonics: the forgotten contribution of Herbert Major The study of cortical cytoarchitectonics and the histology of the human cerebral cortex was pursued by many investigators in the second half of the nineteenth century, such as Jacob Lockhart Clarke, Theodor Meynert, and Vladimir Betz. Another of these pioneers, whose name has largely been lost to posterity, is considered here: Herbert Coddington Major (1850–1921). Working at the West Riding Asylum in Wakefield, United Kingdom, Major’s thesis of 1875 described and illustrated six-layered cortical structure in both non-human primates and man, as well as “giant nerve cells” which corresponded to those cells previously described, but not illustrated, by Betz. Further journal publications by Major in 1876 and 1877 confirmed his finding of six cortical strata. However, Major’s work was almost entirely neglected by his contemporaries, including his colleague and sometime pupil at the West Riding Asylum, William Bevan-Lewis (1847–1929), who later (1878) reported the presence of both pentalaminar and hexalaminar cortices. Bevan-Lewis’s work was also later credited with the first illustration of Betz cells. 
Introduction The history of the study of cortical cytoarchitectonics dates back to the late eighteenth century and the independent findings of Francesco Gennari, Felix Vicq d'Azyr, and Samuel von Soemmering in the 1780s of a myelinated band in the occipital cortex. With the advent, from the mid-nineteenth century onwards, of new techniques in brain sectioning and staining which permitted microscopical examination of cortical tissue, various descriptions of the laminar structure of cortex appeared. However, consensus as to the six-layered nature of mammalian neocortex was slow to emerge, with various opinions expressed as to the exact number of strata, ranging through four (Kölliker 1854), five (Meynert 1872), six (Baillarger 1840), eight (Clarke 1862-1863), and even nine layers (Ramón y Cajal 1899; DeFelipe and Jones 1988). Studies of brain histology also addressed the various cell types seen within the cortex. In 1874 Vladimir Betz had described, but not illustrated, giant pyramidal cells in the precentral gyrus. Initially published in Russian, his work was later translated into German in 1874 and English in 1875 (Betz 1875). The comparative histology of the brains of human and non-human primates was also topical at this period in the last quarter of the nineteenth century. For example, Thomas Huxley had written a "Note on the resemblances and differences in the structure and the development of the brain in man and apes" for the second edition of Darwin's Descent of man published in 1874 (Darwin 1874).
Of the many researchers contributing to these studies of cortical histology (Hakosalo 2006; Triarhou 2020, 2021), one who remains known to posterity is William Bevan-Lewis (1847-1929). Working at the West Riding Asylum at Wakefield in the north of England, where David Ferrier had undertaken experimental studies in 1873 which characterised the motor areas of the cerebral cortex (Larner 2023a), Bevan-Lewis's publications in 1878 described a five-layered cortex in the motor area (Lewis and Clarke 1878; Lewis 1878), as per the influential model of Theodor Meynert. Bevan-Lewis also illustrated, apparently for the first time, the giant cells of the motor cortex previously described by Vladimir Betz (1874). However notable the studies of Bevan-Lewis were, it transpires that he was not the first pathologist from the West Riding Asylum at Wakefield to investigate, describe and illustrate the strata of the cortex, or comment on and illustrate giant nerve cells. His predecessor, and in some senses his mentor, Herbert Coddington Major (1850-1921), had published on and illustrated six cortical layers in the human and non-human primate brain in his doctoral thesis of 1875 (Major 1875a) and in papers published in 1876 and 1877 (Major 1875-1876a, 1876, 1877a), in contrast to Bevan-Lewis's initial view of a five-layered cortical organisation in the motor area. Moreover, in his 1875 thesis Major described and illustrated giant nerve cells in the cortical layers. Herbert Coddington Major (1850-1921) As, to our knowledge, only one brief biographical article on Major has appeared (Larner 2024), some details of his life are given before proceeding to discuss his work in cortical cytoarchitectonics and histology.
Born in Jersey, in the Channel Islands, on 30th January 1850, he was baptised Herbert Coddington Mauger (pronounced Major). After his school education in Jersey, he went to Edinburgh to study medicine and graduated (MB CM) in 1871. During his time in Edinburgh he attended the class in Medical Psychology set up by Thomas Laycock (1812-1876), Chair of the Practice of Physic at Edinburgh University from 1855. This instruction in medical psychology and mental diseases was novel in British medical schools at this time and served to influence a number of Edinburgh students to take up careers in this discipline, either in asylum medicine (e.g. James Crichton-Browne, Thomas McDowall, Robert Lawson) or in general medicine with an interest in diseases of the brain (David Ferrier, John Milner Fothergill). Prior to his translation to Edinburgh, Laycock had also influenced the young John Hughlings Jackson during his studies at the York Medical School in the 1850s. By the time Major, as he was now known, graduated from Edinburgh, James Crichton-Browne had been the Medical Superintendent at the West Riding Asylum in Wakefield for some years, setting up the facilities and recruiting the personnel required to undertake systematic studies of patients with insanity. These facilities included a dedicated pathological laboratory. Into this environment, which has latterly been characterised as a "research school" (Finn 2012), Major was introduced as a Clinical Clerk (unpaid, but in receipt of board and lodging) in 1871. Having proved himself, in the words of Crichton-Browne, "an indefatigable Clinical Clerk for twelve months", Major was "promoted to the position of Assistant Medical Officer, which he now occupies with credit". This promotion to a salaried position was announced in the medical journals in August 1872.
Major's interest and meticulous work in pathology were already well underway at this time. At the Annual Meeting of the British Medical Association held in Birmingham in August 1872, Crichton-Browne showed "some beautifully prepared sections of Brain-Structure [sic] in Health and Disease, the work of Dr. Herbert C. Major of the West Riding Asylum". Major attended the medical conversazione at the Asylum in October 1872, an annual meeting arranged by Crichton-Browne to showcase the work of the institution, particularly the research projects undertaken by members of the resident junior staff. Major presided over a table "filled with a large number of microscopical preparations from the Asylum collection" (Anon., 1872), no doubt many prepared by Major himself. He repeated this display at the conversazione of 1873, 1874, and 1875. It was in 1872 that Major initiated his series of papers published in the West Riding Lunatic Asylum Medical Reports, the house journal of the Asylum which had been founded by Crichton-Browne to disseminate the findings of research undertaken there (Larner 2023b). Over the next four years, Major published six papers in this journal (Major 1872a, b, 1873, 1874a, 1875b, 1876), more than anyone else save Crichton-Browne, as well as elsewhere (Major 1874b, 1875-1876b). He also completed his thesis, Histology of the brain in apes (Major used the term "apes" in a manner different from current usage), for the MD degree of Edinburgh University, which received the gold medal (Major 1875a). This interest in comparative neurohistology afforded further publications (Major 1875-1876a, 1877a). It is little wonder then that one of his junior colleagues at Wakefield Asylum, John Hunter Arbuckle, described Major at this time as "the first authority on the minute structure of the cerebral cortex of man and monkeys" (Arbuckle 1876).
In 1875, William Bevan-Lewis was appointed as a Clinical Assistant at the Asylum (Larner and Triarhou 2023) where Major, now the Deputy Medical Director (Major 1874a), encouraged him as he began his career in pathological research. Asylum life was not all work. Consistent with practice in other asylums of the time, the patients at Wakefield were provided with entertainment in the form of dances, concerts and amateur theatricals, with Asylum staff often taking roles in the latter. Major was no exception, for example appearing in the farce "The Day after the Wedding" on 17th November 1874 in the role of "Colonel Freelove", and in the comedy "Faint heart never won fair lady" on 12th February 1875 as "GUZMAN (a Gentleman, who by becoming a Page turns over a new leaf, as he usually uses his High Powers on more distinguished parts)" [capitals and italics in original playbill; West Yorkshire Archive Service, C85/1362]. Major was appointed Medical Director of the West Riding Asylum in early 1876 following Crichton-Browne's resignation. The administrative burden of the role undoubtedly took him away from his pathological work, as indicated by the diminution in his published output (Major 1879, 1879-1880, 1882-1883) and a turn towards administrative data (Major 1877b, 1884-1885). He resigned the superintendency in 1884 on the grounds of ill health, to be succeeded by Bevan-Lewis. Major resumed clinical work, as an honorary physician at Bradford Infirmary, in 1885 and became consultant physician in 1898, before moving to Bedford in 1900 as Honorary Pathologist to the Bedford County Hospital. He retired to Jersey in 1907, having married Mary Ann Balleine there in 1906. Major died in 1921, in relative obscurity, having moved in 1920 to Oxford. To our knowledge, only a single obituary was published (Anon., 1921).
Major's key publications on cortical cytoarchitectonics Four of Major's publications relate specifically to cortical cytoarchitectonics and histology (Major 1875a, 1875-1876a, 1876, 1877a). Each of these will be addressed in turn, but it should first be noted that although Major had certainly written on cortical layers in previous publications, in only two instances was a layer qualified numerically, specifically as the "second layer" (Major 1873: p 102; 1875b: p 168). That he recognised there were more than two cortical layers might also be inferred from his account written in 1872: In all my sections of the grey matter in this part [occipital lobes] of the healthy brain, I have found the arrangement of the cell elements to be singularly constant. The large nerve cells form two distinct layers, one of which lies superficial, the other on the deep aspect of another well marked intermediate layer, formed almost entirely of small round or oval nerve cells and nuclei. The latter is situate about midway in the depth of the cortical substance. (Major 1872a: pp 49-50). Nowhere, however, had Major illustrated cortical lamination, his drawings being limited to particular cell types as observed in healthy and diseased brain.
Thus, it was in his thesis, Histology of the brain in apes, that Major first described and illustrated the laminar structure of the brain in humans and non-human primates as being comprised of six layers (Major 1875a). This handwritten work was produced "after a period of four years of almost constant study of the human brain" (Major 1875a: p 4; our transcription). Major used two human brains as "standards of comparison", from men aged 16 and 21 who had both been killed suddenly. The brains from eight different species of non-human primate were available to Major, their sources unspecified. By his own account, the methodology for the preparation of tissues followed that of Lockhart Clarke (Clarke 1862-1863). It was from "Brain D", of the "Macacus radiatus" or Bonnet monkey (now Macaca radiata, Bonnet macaque), that the illustration of the cortical layers was made (Major 1875a: p 30, his Fig. 9). This was contrasted with a section from the healthy human brain (Major 1875a: p 31, his Fig. 10), but it was not specified from which human subject it came. These beautiful drawings leave no room for ambiguity about the hexalaminar structure perceived by Major (Fig. 1). In addition to the lamination, Major's text described the cell types in each layer, along with illustrations thereof (Fig. 1A, B). Towards the end of the thesis, he also mentioned "bodies to which I have given the name of giant nerve cells" (Major 1875a: p 61) which he had observed in the ascending parietal convolution, but neither layer nor cell size was specified. Of these giant nerve cells (Fig.
2), Major commented that "There can be no mistaking them when they have once been seen, their rarity and their great size as compared with the other corpuscles surrounding them at once attracting notice. The branches are very numerous" (Major 1875a: p 63; our transcriptions). No mention was made of the work of Betz. Major had previously noted similar cells in the brain of a patient afflicted with general paralysis (Major 1874a). In the January 1876 issue of the Journal of Mental Science, Major published his histological findings from the brain of a baboon (Major 1875-1876a), called by him a Chacma Baboon or Cynocephalus porcarius (now Papio ursinus). Prefacing his findings, Major stated that: My own work in this direction has till now been limited to the brain in the smaller apes, a study of the cortex in which formed the subject of a graduation Thesis presented to the University of Edinburgh. … so far as I have been able to ascertain, this essay was the first record of systematic comparison (though, of course, limited in extent) between the nerve elements of the cortex in man as compared with the ape, and … forms, I believe, at the present time, the only literature of the subject in this or any other country. (1875-1876a: p 500) With regard to the number and appearance of the cortical strata in the baboon, Major was explicit: I wish to state at once, and very decidedly: -1st, that the number in the Chacma corresponds exactly with that in man, in the frontal and parietal, as well as in the occipital lobe (Major 1875-1876a: p 503) He proceeded to describe and illustrate the cell types in the six layers but, unlike the material in his thesis, there was no drawing here of a section to show the cortical lamination. Writing of the fifth layer, he noted: In this situation, however, more frequently perhaps than in any other, very large nerve cells are found. Usually, these have the characters of the large nucleated, pale bodies, already frequently referred
to, but they sometimes resemble closely the large pyramidal cells before described in connection with the third stratum in the anterior portions of the hemispheres (Major 1875-1876a: p 507) In the context of the third layer, Major said of these bodies that "wherever seen, their peculiar characters enable them to be recognised at once, so different are they from the others". Major attempted to measure these pyramidal cells of very large size, finding them "as much as 10/250 mm long by 5/250 mm broad" (Major 1875-1876a: p 505), hence 40 μm by 20 μm. In the sixth volume of the West Riding Lunatic Asylum Medical Reports, dated 1876 but not actually published until early in 1877, Major reported on "The histology of the island of Reil" (Major 1876). Herein, after noting the findings of Kölliker, Lockhart Clarke, and Meynert, he stated that: In a Thesis presented to the University of Edinburgh (1875), on the 'Histology of the Brain in Apes,' I described six cortical layers as being the usual arrangement in the human brain. In the 'Journal of Mental Science' for January 1876, in a paper on the brain of the Chacma Baboon, I again showed that in the human subject the six-layer type of the cortex was the usual one. (Major 1876: p 5). The hexalaminar cortical appearance was illustrated in this paper with drawings from both healthy and morbid brains, along with the cell types observed (Major 1876: Plate I and Plate II, respectively; reproduced here as Fig. 3). Of the nerve cells, Major commented "I can observe nothing unusual: -nothing that would seem to imply (as in the case of the so-called giant cells of the vertex) any special and peculiar functions" (Major 1876: p 6).
In Major's two-part Lancet paper of July 1877, based on his thesis but using data from only four non-human primate species, rather than the eight reported in the thesis, the six-layered cortex in both human and non-human primates was again illustrated. The drawing of the human brain (Major 1877a: p 46, his Fig. 1) was a reproduction of Plate I from the 1876 paper (Fig. 3A). In this paper, Major gave credit for "a human nerve-cell from the cerebral cortex admirably demonstrated by my colleague, Mr. Bevan Lewis, in which the branches are as many as twelve in number" (Major 1877a: p 86). No mention was made of giant nerve cells. Hence both by description and by illustration, it is clear that Major viewed the cerebral cortex of man and of non-human primates as consisting of six layers, his first illustration of this arrangement dating to his thesis of 1875 and repeated in his papers of 1876 and 1877. He had observed and illustrated "giant nerve cells" in his thesis, but without reference to Betz, and had attempted to measure these cells in the baboon's brain. William Bevan-Lewis (1847-1929) Herbert Major pre-dated Bevan-Lewis as both pathologist and Medical Superintendent at the West Riding Asylum in Wakefield. Although younger than Bevan-Lewis by three years (Bevan-Lewis came to asylum medicine relatively late in his career), the evidence from Bevan-Lewis's earliest publications suggests that Major was helpful, if not a mentor, to him. In his first paper, published in the West Riding Lunatic Asylum Medical Reports, Bevan-Lewis confirmed the opinion of Major concerning morbid changes in the peripheral nerves of patients with general paralysis (Lewis 1875: p 86), and in a paper on microscopical techniques (a subject Bevan-Lewis was later to make his own) he acknowledged "the valuable assistance and encouragement rendered to me by Dr. Herbert Major -a well-known authority in these matters" (Lewis 1876: p 248).
Furthermore, it is certain that Bevan-Lewis knew of Major's thesis, since at the annual medical conversazione held at Wakefield Asylum in November 1875: The table presided over by Dr. Herbert Major and Dr. Bevan Lewis was crowded with a quite unique collection of microscopic preparations from the collection belonging to the asylum, illustrating the histological condition of the convolutions of the human brain in the healthy adult, in the foetus, and in various forms of insanity; and similar series of the medulla oblongata, spinal cord, sciatic nerve, and sympathetic ganglia. Among these preparations were the series illustrating Dr. Major's thesis for the M.D. degree of Edinburgh, which received the gold medal, and his own and Dr. Lewis's papers in the West Riding Reports (Anon., 1875) [our italics]. Bevan-Lewis's first foray into the subject of cortical architectonics appears to be the paper he co-authored with Henry Clarke which was read at the Royal Society on 24th January 1878, communicated by David Ferrier. In this work describing cortical lamination and the giant cells found in the motor area, only a single mention of Major is to be found, to the effect that Major "follows Baillarger in regarding the cortex of the vault and that of the central lobe as consisting of six layers" (Lewis and Clarke 1878: p. 42). In contrast, Bevan-Lewis and Clarke favoured a five-layered model of the motor area, as illustrated in their Plate 1, following the scheme of Meynert. Their only reference to Major's papers on the subject was to his 1876 publication in the West Riding Lunatic Asylum Medical Reports (Major 1876) but not to his other studies which had described cortical lamination in the brains of human and non-human primates (Major 1875a, 1875-1876a, 1877a).
Bevan-Lewis's subsequent single-author paper in the inaugural issue of Brain, published in April 1878, reported the presence of both pentalaminar and hexalaminar cortices, each typical of a certain definite area, but "no abrupt passage from one form of cortical lamination to that of another is ever seen". He again referenced Major's West Riding Lunatic Asylum Medical Reports paper to the effect that he "extends the limits of the six-laminated cortex to the central lobe or insula", but in Bevan-Lewis's view: There is a five and a six-laminated cortex, each typical of a certain definite area: but, whilst the six-layered formation is found extensively spread over the convolutions of the parietal and other regions, the five-laminated type is pre-eminently characteristic of the motor area of the brain. (Lewis 1878: p 80 [italics in original]). Bevan-Lewis did refer in passing (Lewis 1878: p 92) to Major's publication on the Chacma baboon (Major 1875-1876a), although not in relation to cortical lamination, but he did not refer to Major's other studies which had described cortical lamination in the brains of human and non-human primates (Major 1875a, 1877a). As for cell types, Bevan-Lewis noted of the motor cortex that: Another highly important feature of this region is the presence of large ganglionic cells which under the title of "giant cells" were made the subject of special attention by Professor Betz over three years ago (Lewis 1878: p 80). Referencing the German translation of Betz's original paper, he noted that Betz found these cells to range from 40 to 120 μm long, and from 50 to 60 μm broad, whilst his own measurements were from 30 to 96 μm long, and from 12 to 45 μm broad with a maximum size of 126 by 55 μm. No mention was made of Major's observation of "giant nerve cells"; admittedly their reported size (40 μm by 20 μm) was somewhat smaller than that found by Betz but within the range reported by Bevan-Lewis.
This mixture of five- and six-layered cortex remained Bevan-Lewis's position, as evident in the extensive section of 30 pages devoted to cortical lamination in his textbook of mental diseases which first appeared in 1889 (Bevan Lewis 1889: pp 85-114). Here, speaking of the lamination of the motor cortex in man, he stated: It is all the more essential that its structure in man should be clearly defined here, since it has been the subject of dispute between such writers as Meynert, Betz, Baillarger, Mierzejewski, and others, some authorities speaking of it as a five-laminated and others as a six-laminated type. At the outset, therefore, it is well to define our own view of the case, which is briefly as follows: the cortex typical of motor areas is a five-laminated formation, and the more absolutely the granule cell formation (which, when intercalated, gives us the six-laminated type) is excluded, the more highly specialised become those groups of enormous nerve cells which go by the name of the "nests" of Betz. Where, therefore, these cell-clusters are best represented, there we find a five-laminated, not a six-laminated, cortex; in other words, at these sites the granule-cell layer no longer exists. (Bevan Lewis 1889: p 99). There is only a single mention of Major in this section, related to his description of cells at the bottom of the fifth layer being "reclinate", but no reference to Major's publications is given (Bevan Lewis 1889: pp 101-102).
The absence of Major's name in the quoted list of writers on the subject of cortical structure is perhaps surprising. Although his studies had principally been on occipital cortex rather than the motor area per se, Lewis and Clarke (1878: p 42) themselves had noted that Major "follows Baillarger in regarding the cortex of the vault and that of the central lobe as consisting of six layers" [our italics]. Yet when Bevan-Lewis came to discuss "the six-laminated cortex typical of sensory areas" in his textbook (Bevan Lewis 1889: p 101) there was still no mention of Major. The only two papers by Major cited in the entire textbook (viz. Major 1872, at 471, but without specifying which of his two papers in the West Riding Lunatic Asylum Medical Reports of that year; and Major 1879-1880, at 457) do not include his key publications on cortical cytoarchitectonics. Whether these omissions were a mere oversight, astonishing as that possibility now seems, or a wilful exclusion on Bevan-Lewis's part cannot be decided with the currently available evidence, but a general pattern of minimal or non-citation of his colleague appears to emerge. Discussion Whereas Bevan-Lewis's "work was fully appreciated by specialists at home and abroad" (Triarhou 2021: p 59), Francis Walshe calling him "the pioneer of cortical cytoarchitectonics" (Walshe 1948: p 208n2), Major's work was essentially forgotten. True, the Chacma baboon paper was referenced in Henry Maudsley's Physiology of mind (3rd edition) of 1876, but to our knowledge the only discussion of Major's work on the layers of the cerebral cortex and his differences with Bevan-Lewis appeared in a history of the Wakefield Asylum (Todd and Ashworth, n.d.), who also acknowledged that "Major never did receive the acclaim that his research in the sphere of comparative neurohistology so richly deserved" (Todd and Ashworth, n.d.: p 178).
However, there are some notable exceptions to this neglect of Major's work. The Swedish physician Carl Hammarberg (1865-1893) completed one of the earliest comprehensive cytoarchitectonic studies of the human cerebral cortex, concentrating both on normal cortical histology and pathology. He wrote his MD thesis in Swedish (Hammarberg 1893), with a title translated as 'Clinico-pathological studies of intellectual disability along with studies of the normal anatomy of the cerebral cortex'. After his premature death at the age of 28 years, his thesis was translated and published in German (Hammarberg 1895). Hammarberg (1893, 1895) cited Major's paper on the insula, and mentioned Major on three occasions, as follows (our translations):

Gowers follows Bevan Lewis' account. Major and Baillarger mention this [3rd] cell layer, but place it between the 4th and 5th layers. (Hammarberg 1895: p 12)

There is probably a confusion here with the authors' pyramidal cells in (Bevan Lewis) or below (Baillarger, Major) the 4th layer. (Hammarberg 1895: p 13)

Obersteiner emphasizes (after H. Major) that the cortex in the insula does not deviate from the common type. (Hammarberg 1895: p 37)
In the textbook on comparative neuroanatomy coauthored by the Polish neurologist Edward Flatau and the Berlin neuroanatomist Louis Jacobsohn, published in 1899, two of Major's articles are cited: on the white whale (Major 1879) and on the Chacma baboon (Major 1875-1876a). Further, Flatau and Jacobsohn (1899: p 110) reported the brain of the orangutan to weigh 375 g, according to Major. In his landmark monograph on the cerebral cortex, Korbinian Brodmann (1868-1918) cited Major's 1876 papers on the insula and the Chacma baboon. In a footnote to the opening statement ("Since the first pioneering research of Meynert and Betz, a continuous stream of workers has studied the cellular lamination of the cerebral cortex and its specific modifications in man and in individual animals") of the first chapter, Brodmann explains (Brodmann 1909; Garey 2006):

The comprehensive literature is cited in my earlier works; I shall only mention here those authors who have worked independently in this field; they are: Meynert, Betz, Mierzejewsky, Baillarger, Major, Bevan Lewis, Clarke, Arndt, Berliner, Hammarberg, Roncoroni, Nissl, Kolmer, Bolton, Schlapp, Cajal, Farrar, Koppen, Hermanides, Löwenstein, Campbell, O. Vogt, Mott, Watson, Elliot Smith, Rosenberg, Haller etc.
Envoi

In an attempt to "rectify certain undeserved historical neglects" of landmark discoveries in cortical cytoarchitectonics, previous papers have revisited the work of individuals who, prior to Brodmann, investigated the histology of the human cerebral cortex (Triarhou 2020, 2021). Herbert Major should be added to this list since, as shown here, he had described and illustrated hexalaminar cortical structure and noted "giant nerve cells" as early as 1875. A case has also been presented suggesting he described and illustrated what later came to be called von Economo neurones in his thesis (Larner and Triarhou 2024). These findings were almost entirely neglected by the neuroscientific community at the time and thereafter. The reasons for this neglect are unclear, but might perhaps relate in part to Major's key findings being presented in a thesis (Major 1875a) and in a relatively obscure journal (Major 1876) and hence not widely disseminated and available to other researchers. Major's work only came to our attention as part of ongoing studies of the history of the West Riding Asylum (Larner 2023a, b, 2024).

Fig. 2 Fig. 15 (p 62) drawing from Major's thesis, illustrating "giant nerve cells" in the ascending parietal convolution of human brain. Major denoted A as "Large cell" and B as "cell of ordinary size".

Fig. 3 (A) Plate I and (B) Plate II from Major 1876, whose explanation of the Plates is as follows: Plate I.-Section through a gyrus of the Island of Reil, showing the cortex of the summit of the gyrus (healthy). Plate II.-Section through the cortex of the Island of Reil at the bottom of a sulcus (morbid). In both, 1, 2, 3, 4, 5, 6 indicate the six cortical layers.
In situ production of branched glycerol dialkyl glycerol tetraethers in a Great Basin hot spring (USA)

Branched glycerol dialkyl glycerol tetraethers (bGDGTs) are predominantly found in soils and peat bogs. In this study, we analyzed core (C)-bGDGTs after hydrolysis of polar fractions using liquid chromatography-atmospheric pressure chemical ionization-mass spectrometry, and analyzed intact polar (P)-bGDGTs using the total lipid extract (TLE) without hydrolysis by liquid chromatography-electrospray ionization-multiple stage mass spectrometry. Our results show multiple lines of evidence for the production of bGDGTs in sediments and cellulolytic enrichments in a hot spring (62–86°C) in the Great Basin (USA). First, in situ cellulolytic enrichment led to an increase in the relative abundance of hydrolysis-derived P-bGDGTs over their C-bGDGT counterparts. Second, the hydrolysis-derived P- and C-bGDGT profiles in the hot spring were different from those of the surrounding soil samples; in particular, a monoglycosidic bGDGT Ib containing 13,16-dimethyloctacosane and one cyclopentane moiety was detected in the TLE, but it was undetectable in surrounding soil samples even after sample enrichments. Third, previously published 16S rRNA gene pyrotag analysis from the same lignocellulose samples demonstrated the enrichment of thermophiles, rather than mesophiles, and total bGDGT abundance in cellulolytic enrichments correlated with the relative abundance of 16S rRNA gene pyrotags from thermophilic bacteria in the phyla Bacteroidetes, Dictyoglomi, EM3, and OP9 ("Atribacteria"). These observations conclusively demonstrate the production of bGDGTs in this hot spring; however, the identity of the organisms that produce bGDGTs in the geothermal environment remains unclear.

bGDGTs have been reported from terrestrial hot springs in Yellowstone National Park, where the major source of bGDGTs was suggested to be soil runoff (Schouten et al., 2007).
On the other hand, the presence of bGDGTs in mesophilic bacteria is recognized to be possibly a relict feature from thermophilic ancestors, as ether bonds as well as membrane-spanning core lipids have previously been reported in some thermophilic bacteria (Langworthy et al., 1983; DeRosa et al., 1988; Huber et al., 1992, 1996).

The Great Basin in the western United States is an endorheic region with widely distributed geothermal activity. The hot springs of the Great Basin are characterized by low concentrations of inorganic energy-yielding species such as ammonia, hydrogen sulfide, or hydrogen (Zhang et al., 2008), in contrast with more inorganic energy-rich geothermal systems fueled by subsurface volcanism (e.g., Yellowstone, Kamchatka, and Italy). Biological research in Great Basin hot springs has recently yielded important findings in lipid biomarker biogeochemistry and microbial carbon and nitrogen cycling processes (Pearson et al., 2004, 2008; Zhang et al., 2006, 2007; Huang et al., 2007; Costa et al., 2009; Dodsworth et al., 2011; Cole et al., 2013). Here we show multiple lines of evidence that bGDGTs are produced in situ in Great Boiling Spring (GBS) in the Great Basin.

SAMPLING

GBS is a large geothermal spring located in the US Great Basin near the town of Gerlach, Nevada [N40° 39.689, W119° 21.968; 9.15 m deep, 7.6 m diameter; described in Costa et al. (2009) and Dodsworth et al. (2011)]. GBS has a relatively well-mixed, oxic water column and a relatively uniform clay bottom composed primarily of smectite, illite, kaolinite, quartz, and zeolite (Costa et al., 2009). Sediment samples (top ∼1 cm of the sediment/water interface) were collected at five locations (Sites A, B, C, D, and E; Figure 1) in February 2010; at each location, sediment was homogenized on site in a sterile pie tin. Subsamples of the sediment homogenate were separated into a 50-mL polypropylene tube for lipid analysis.
Other subsamples were collected for a variety of other analyses, including identification of predominant minerals and 16S rRNA gene pyrosequencing. Temperature and pH were measured at the precise location of sampling prior to sample collection using a LaMotte pH 5 meter (LaMotte, Chestertown, MD). The current paper focuses on analysis of GDGTs. The details of the mineralogy, 16S rRNA gene pyrosequencing, and field chemistry were reported in detail previously. In addition, eight enrichments designed to stimulate growth of cellulolytic organisms were incubated in situ. Nylon bags (100 micron pore size, 10 × 10 cm) were filled with 20 g of either aspen shavings (AS) or ammonia fiber explosion (AFEX)-treated corn stover (CS). Bags were loaded into 20 × 12 × 5 cm polypropylene boxes punctured with ∼100 0.5-cm holes to allow water exchange, and incubated either suspended in spring water or buried ∼1 cm deep in the sediment. The polypropylene boxes were anchored into the sediment or to structures adjacent to the spring by using stainless steel wire. Lignocellulose materials were incubated for 64 days (Figure 1, Site C; ∼77°C) or 92 days (Figure 1, Site A; ∼85°C). The difference in the incubation times was based on the time required to observe visual changes to the lignocellulose substrates that were consistent with cellulolysis. After incubation, the bags were removed, homogenized, and distributed into a sterile 50 mL polypropylene tube as described above for sediments. Details of lignocellulose degradation and 16S rRNA gene pyrosequencing were reported previously.

FIGURE 1 | Sites in GBS where sediments were collected and in situ cellulolytic enrichments were incubated. The soil transect started at the edge of the hot spring at the lower right corner of the photo.
The sample code for cellulolytic enrichments consists of three parameters: temperature (77°C or 85°C), cellulose substrate (A, aspen shavings; C, corn stover), and incubation environment (W, suspended in water; S, buried in sediment), as described earlier. As described in Peacock et al. (2013), the temperature at each incubation site (within 0.5 meters of each lignocellulose enrichment) was tracked for the majority of the duration of the incubation using high-temperature iButtons (Maxim Integrated, San Jose, CA). During the time the temperature was tracked, temperatures ranged from 68°C to 82°C at Site C (mean 77°C) and 74-88°C at Site A (mean 85°C). Finally, soil samples were collected at 10-, 20-, 30-, 50-, 150-, 200-, 300-, and 500-cm distance from the edge (zero cm) of the spring (Figure 1) in order to provide a contrast in bGDGT profiles between the soil and the hot spring, which serves to evaluate possible soil contamination of the hot spring. This is a potential concern because of a previous report of bGDGTs in soil next to hot springs in the Great Basin (Peterse et al., 2009). The soil temperature was determined by inserting a stainless steel temperature probe ∼3 cm into the soil where the sample was collected. The soil pH was determined in the lab following a previously described procedure. All samples were frozen on dry ice in the field and stored at −80°C before analysis. DNA from each sample was extracted by using the FastDNA Spin Kit for Soil (MP Biomedicals, Solon, OH); raw data were presented in Cole et al. (2013) and Peacock et al. (2013).

LC-MS ANALYSIS OF POLAR bGDGTs (HYDROLYSIS METHOD)

Lipid extraction, fractionation, and separation of core (C)- and hydrolysis-derived polar (P)-bGDGTs followed a sonication method described in Zhang et al. (2012), in which the P-bGDGTs were calculated as the difference between the hydrolyzed and non-hydrolyzed polar fractions.
The GDGTs were analyzed on an Agilent 1200 liquid chromatograph equipped with an automatic injector coupled to a QQQ 6460 MS and Mass Hunter LC-MS manager software, using a procedure modified from Hopmans et al. (2004). Detection was performed using the Agilent 6460 triple-quadrupole MS with an atmospheric pressure chemical ionization (APCI) ion source. Separation of peaks was achieved using a Prevail Cyano column (2.1 mm × 150 mm, 3 μm; Alltech, Deerfield, IL, USA) maintained at a temperature of 40°C. The detection limit of the LC-MS was 0.8 pg.

LC-MS ANALYSIS OF INTACT POLAR bGDGTs (NON-HYDROLYSIS METHOD)

While the hydrolysis method gives the total abundance of all polar bGDGTs, it does not identify the types of polar bGDGTs. To identify specific head groups of the intact polar lipid bGDGTs, total lipid extracts (TLEs) were also analyzed by reverse phase liquid chromatography-electrospray ionization-multiple stage mass spectrometry (RP-ESI-MSn) at the University of Bremen, Germany (Zhu et al., in review). In brief, the analysis of TLEs was performed on a Dionex Ultimate 3000 ultra-high pressure liquid chromatograph (UHPLC) coupled to a Bruker maXis Ultra High Resolution orthogonal accelerated quadrupole-time-of-flight (qTOF) tandem MS/MS, equipped with an electrospray ionization (ESI) source in positive ionization mode (Bruker Daltonik, Bremen, Germany). Ether lipids were eluted through an ACE3 C18 column (3 μm, 2.1 × 150 mm; Advanced Chromatography Technologies Ltd., Aberdeen, Scotland), starting with 100% eluent A isocratically for 10 min, followed by a gradient to 24% eluent B in 5 min, and then to 65% eluent B in 55 min at a flow rate of 0.2 mL/min, where eluent A was 100:0.04:0.10 methanol/formic acid/14.8 M NH3(aq) and eluent B was 100:0.04:0.10 2-propanol/formic acid/14.8 M NH3(aq). The column was washed with 90% eluent B for 10 min and subsequently re-equilibrated with 100% A for another 10 min.
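The elution program above can be written down as a simple piecewise-linear gradient table. The breakpoints below come from the text; the interpolation helper is our own illustrative addition, not part of the instrument method.

```python
# Gradient program as (time_min, percent_B) breakpoints, per the text:
# 100% A for 10 min, ramp to 24% B in 5 min, then to 65% B in 55 min.
gradient = [
    (0.0, 0.0),    # start: 100% eluent A
    (10.0, 0.0),   # isocratic hold
    (15.0, 24.0),  # ramp to 24% B over 5 min
    (70.0, 65.0),  # ramp to 65% B over 55 min
]

def percent_b(t: float) -> float:
    """Return % eluent B at time t (min) by linear interpolation."""
    if t <= gradient[0][0]:
        return gradient[0][1]
    for (t0, b0), (t1, b1) in zip(gradient, gradient[1:]):
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    return gradient[-1][1]  # after the last breakpoint, hold final %B
```

Expressing the method this way makes it easy to check the composition at any retention time when comparing runs.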
Ether lipids were scanned from m/z 100 to 2000 in positive mode at a scan rate of 1 Hz, with automated data-dependent fragmentation of the three most abundant ions. To ensure mass accuracy, an internal lock mass (m/z 922.0077) and a tuning mixture solution (m/z 322.0481, 622.0290, 922.0098, 1221.9906, 1521.9715, and 1821.9523) were infused directly into the ion source throughout a complete run and near the end of the run, respectively. Lipids were detected as protonated [M+H]+, ammoniated [M+NH4]+, and sodiated [M+Na]+ molecular ions and identified by retention time, accurate masses (better than 1 ppm), and diagnostic fragments (Weijers et al., 2006; Liu et al., 2010).

STATISTICAL ANALYSES

Mann-Whitney U tests and Wilcoxon signed rank tests were used as non-parametric alternatives to independent- and paired-samples t-tests to explore relationships between bGDGTs and experimental conditions. Linear regressions were calculated to quantify relationships between temperature and bGDGT fractions from sediment samples. These analyses were all calculated at the 0.05 level of significance. Cluster analysis was performed on C-bGDGTs and hydrolysis-derived P-bGDGTs from soil and hot spring samples using the base program in R 2.12.1. The relative abundances of C-bGDGTs and P-bGDGTs from all samples were imported into R, and the Euclidean method was used to compute the distance matrix and generate a hierarchical clustering tree. Spearman's rho, a non-parametric correlation coefficient, was calculated to identify positive relationships between total bGDGTs (normalized to ng DNA) and the relative abundance of phyla based on quality-filtered pyrotag sequence reads from cellulose enrichments. Analyses were completed for phyla that occurred at ≥1% relative abundance in one or more of the in situ cellulose enrichments.
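The non-parametric tests and clustering described above can be sketched as follows. This is a minimal Python (scipy) stand-in for the paper's R workflow; all numbers are synthetic illustrations, not the study's data.

```python
import numpy as np
from scipy import stats
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(0)

# Hypothetical percent P-bGDGT in enrichments vs. sediments:
# Mann-Whitney U as the non-parametric two-sample test.
enrichments = rng.uniform(30, 60, size=8)
sediments = rng.uniform(5, 25, size=5)
u_stat, u_p = stats.mannwhitneyu(enrichments, sediments, alternative="two-sided")

# Paired comparison (e.g. lower- vs. higher-temperature sites) with the
# Wilcoxon signed-rank test.
low_t = rng.uniform(20, 90, size=4)
high_t = low_t * rng.uniform(0.3, 0.9, size=4)  # lower abundance at higher T
w_stat, w_p = stats.wilcoxon(low_t, high_t)

# Spearman's rho between total bGDGTs and a phylum's pyrotag abundance.
total_bgdgt = rng.uniform(2, 88, size=8)
pyrotag_pct = total_bgdgt * 0.1 + rng.normal(0, 1, size=8)
rho, rho_p = stats.spearmanr(total_bgdgt, pyrotag_pct)

# Hierarchical clustering of relative-abundance profiles using Euclidean
# distance, mirroring the R hclust workflow in the text.
profiles = rng.dirichlet(np.ones(7), size=6) * 100  # 6 samples x 7 bGDGTs (%)
tree = linkage(profiles, method="average", metric="euclidean")
```

The same calls generalize directly: substituting the measured abundance tables for the synthetic arrays reproduces each test reported in the Results.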
Subsequently, the same statistical framework was applied to individual Operational Taxonomic Units (OTUs), defined at 97% sequence identity, within phyla that were positively correlated with bGDGT abundance. Results are reported for 1-tailed significance.

ABUNDANCE OF C- AND HYDROLYSIS-DERIVED P-bGDGTs

Sediment samples from the hot spring had C-bGDGTs ranging from 12 to 280 ng/g dry sediment (Table 1), while P-bGDGTs were about 4-5-fold less abundant than C-bGDGTs (Table 1). Linear regression analyses indicated statistically significant, negative relationships between temperature and all bGDGT fractions (when normalized to gram dry sediment) from the hot spring sediments, with r² values ranging from 0.80 to 0.82 (p < 0.05, Figure 2). The soil samples were also dominated by C-bGDGTs (2-318 ng/g), with P-bGDGTs being less than 10 ng/g in five out of six samples (Table 1). bGDGTs were not detected in cellulosic substrates before incubation (data not shown), and in situ cellulolytic enrichments had C- and P-bGDGTs ranging from 2.0 to 88 ng/g solids. The cellulolytic enrichments had significantly higher percentages of P-bGDGTs than the hot spring sediments (Table 1; Mann-Whitney U test, p = 0.003), indicating enrichment of bGDGT-producing bacteria among cellulolytic consortia. Within the cellulolytic enrichments, lower temperature sites (∼77°C) had higher total bGDGT concentrations (C- plus P-bGDGTs) as compared to their corresponding higher temperature sites (Wilcoxon signed rank test, p = 0.068), which is consistent with the temperature relationships observed in the hot spring sediments (Figure 2).

FIGURE 2 | Regression analyses indicated negative, linear relationships between the absolute abundance of C- and hydrolysis-derived P-bGDGTs in hot spring sediments and temperature (sig. < 0.05). bGDGT abundance was normalized to g dry mass of sample.
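As a toy illustration of the temperature regression, a least-squares fit over invented abundances loosely spanning the reported range shows the kind of negative, high-r² relationship described (the actual values are in Table 1 and Figure 2).

```python
import numpy as np
from scipy import stats

# Hypothetical site temperatures and C-bGDGT abundances (ng/g dry sediment),
# chosen only to mimic the reported 12-280 ng/g range over 62-86 degC.
temp_c = np.array([62.0, 70.0, 77.0, 82.0, 86.0])
c_bgdgt = np.array([280.0, 190.0, 110.0, 45.0, 12.0])

fit = stats.linregress(temp_c, c_bgdgt)
r_squared = fit.rvalue ** 2
# A negative slope with r^2 near the reported 0.80-0.82 indicates that
# bGDGT abundance declines roughly linearly with temperature.
```

With the synthetic numbers above the slope is negative and r² exceeds 0.8, mirroring the qualitative pattern of Figure 2.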
These observations suggest that organisms producing the bGDGTs tend to have higher biomass at lower temperatures (e.g., 60°C) in GBS. Comparisons of paired cellulolytic enrichments showed that incubations within the sediment contained a higher absolute abundance of bGDGTs than their corresponding water column incubations (Wilcoxon test, p = 0.068), suggesting that anaerobic conditions favored bGDGT-producing organisms in this hot spring. Genetic data suggest the enrichments buried in the sediments were anaerobic. For example, the majority of 16S rRNA gene pyrotags described in Peacock et al. (2013) were strict anaerobes (e.g., dominant groups were Archaeoglobales, Thermotogales, Thermofilaceae, etc.). Paired comparisons of cellulosic enrichment substrates revealed that the absolute abundance of bGDGTs was elevated in corn stover as compared to aspen shavings (Wilcoxon test, p = 0.068), indicating a potential preference of bGDGT-producing organisms for corn stover. Schouten et al. (2007) first reported C-bGDGTs in hot springs, which accounted for up to 64% of total GDGTs. However, absolute concentrations of bGDGTs were not reported, precluding any comparisons with our data. Peterse et al. (2009), on the other hand, reported C-bGDGTs from geothermally heated soil near two hot springs in the Great Basin, with the majority of C-bGDGTs (0.9-760 ng/g dry wt) in the same range as what is reported here (Table 1). More recently, two studies have reported the presence of bGDGTs associated with hydrothermal deposits at mid-ocean ridges (Hu et al., 2012; Lincoln et al., 2013). Collectively, these observations suggest that bGDGTs can be produced at elevated temperatures, which may indicate a possible origin of bGDGTs in thermophilic bacteria.

COMPOSITION OF bGDGTs IN SEDIMENTS AND ENRICHMENTS

Hot spring sediment samples contained bGDGTs I, Ib, Ic, II, IIb, IIc, and III (Table 2).
In most sediment samples, bGDGT I was the dominant bGDGT, but bGDGTs Ib, Ic, II, and III were also present in significant amounts (>15% relative abundance) in one or more sediment samples. Cellulose enrichment generally led to simplification of bGDGT profiles, with bGDGT I comprising up to 100% of total bGDGTs in some enrichments with aspen shavings, and bGDGT II being the only other lipid detected in enrichments with aspen shavings. Enrichments with corn stover were more complex. bGDGT I was the dominant lipid in most corn stover enrichments, but bGDGTs Ib, Ic, II, IIc, and III were also present in significant amounts (>15%) in one or more corn stover enrichments. The dominance of bGDGT I is consistent with the high relative abundance of bGDGT I in other hot spring environments (Schouten et al., 2007), in soils of warmer climates (Weijers et al., 2007a), and in some geothermally heated soils, although bGDGTs I and II, with various degrees of cyclization, were present in roughly equal amounts in others (Peterse et al., 2009). The different composition of bGDGTs in the cellulolytic enrichments compared with those of the hot spring sediment samples (Sites A-E, Figures 1, 3), along with the increase in P-bGDGTs over C-bGDGTs, suggests that the enrichments stimulated growth of a distinct population of bGDGT-producing thermophiles. This is also supported by cluster analysis based on the relative abundance of C- and P-bGDGTs in soil samples, as the majority of them were distinct from those collected from the hot spring environment (Figure 3). We also calculated the methylation index of branched tetraethers (MBT) and cyclization ratio of branched tetraethers (CBT), and the temperature and pH estimates derived from them, according to Weijers et al. (2007a) (Table 2).
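The MBT/CBT calculation referenced above can be sketched as follows. The index definitions and calibration coefficients follow the Weijers et al. (2007a) soil calibration as commonly applied; the fractional abundances below are illustrative, not values from Table 2.

```python
import math

# Illustrative relative abundances of the measured bGDGTs (sum to 1).
ab = {"I": 0.55, "Ib": 0.08, "Ic": 0.02,
      "II": 0.20, "IIb": 0.05, "IIc": 0.01,
      "III": 0.09}

# Methylation index of branched tetraethers (MBT).
mbt = (ab["I"] + ab["Ib"] + ab["Ic"]) / sum(ab.values())

# Cyclization ratio of branched tetraethers (CBT).
cbt = -math.log10((ab["Ib"] + ab["IIb"]) / (ab["I"] + ab["II"]))

# Invert the soil calibrations (CBT = 3.33 - 0.38*pH;
# MBT = 0.122 + 0.187*CBT + 0.020*MAAT) to estimate pH and
# mean annual air temperature (MAAT, in degC).
ph_est = (3.33 - cbt) / 0.38
maat_est = (mbt - 0.122 - 0.187 * cbt) / 0.020
```

The mismatch discussed in the text arises exactly here: applying these soil-derived calibrations to hot spring samples yields MAAT-like estimates far below the actual spring temperatures.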
In the hot spring sediment samples, the calculated pH values averaged 7.2 ± 0.4 (n = 5) for C-bGDGTs and 7.6 ± 0.4 (n = 5) for P-bGDGTs; the former was significantly (P < 0.05) lower than the average pH value (8.0 ± 0.6, n = 6) calculated from soil C-bGDGTs (Table A1), which is close to the average measured pH value of the soil samples (7.8 ± 0.9, n = 6) (Table 1). The calculated MBT-CBT temperatures did not correspond to the mean annual air temperature (MAAT) for the region (10.7°C; Table 1). Schouten et al. (2007) showed that one of the Yellowstone hot springs had MBT and CBT temperatures very close to the local MAAT and suggested that the bGDGTs derived from the surrounding soil area. While we cannot unequivocally exclude the possibility of soil contamination in the hot spring sediment samples in GBS, the MBT-CBT temperatures calculated from the enrichments conducted at high temperature are also in the range of 15-32°C (Table 2), which cannot be explained by soil contamination. Overall, these results suggest that the MBT-CBT proxies derived from soil environments may not be applicable in geothermal environments, and that factors other than pH or temperature may control the relative abundance of bGDGTs in geothermal systems.

FIGURE 3 | Cluster analysis based on the relative abundance of C-bGDGTs (A) and hydrolysis-derived P-bGDGTs (B) from hot spring and soil samples, which distinguish the lipid profiles in hot spring sediments and cellulolytic enrichments from the majority of surrounding soil samples.

INTACT POLAR LIPID bGDGTs ANALYZED BY RP-ESI-MSn

Intact glycosidic bGDGTs without hydrolysis have previously been detected in a peat bog where bGDGT-producing bacteria thrive (Liu et al., 2010). In GBS, activity of bGDGT-producing bacteria was evidenced by the occurrence of monoglycosidic bGDGT Ib detected using the total lipid extract (Figure A2), which accounted for ca. 6.5% of total C-bGDGTs (in terms of total peak areas of [M+Na]+, [M+H]+, and [M+NH4]+).
In contrast, intact polar bGDGTs were undetectable in the adjacent soil samples by RP-ESI-MSn, even after purification of soil extracts by preparative HPLC (data not shown), which is consistent with the low P-bGDGT levels obtained by the hydrolysis method (above). However, the reverse phase LC-MS method failed to detect other polar bGDGTs obtained by the hydrolysis method (Table 2).

16S rRNA GENE PYROSEQUENCES

Previously published pyrosequencing data further supported the in situ production of bGDGTs in the hot spring environment. Peacock et al. (2013) demonstrated that each of the cellulolytic substrates led to enrichment of thermophilic, cellulolytic consortia, as documented by (1) a significant increase in DNA yield over un-incubated cellulose substrates, (2) changes in lignocellulose substrate composition consistent with cellulolysis, and (3) dramatic changes in microbial community composition, with dominant community members closely related to known cellulolytic and hemicellulolytic Thermotoga and Dictyoglomus, cellulolytic and sugar-fermenting Desulfurococcales, and sugar-fermenting and hydrogenotrophic Archaeoglobales. The relative abundance of several bacterial phyla in the pyrotag datasets was significantly correlated with bGDGT abundance: Bacteroidetes, Dictyoglomi, and the candidate phyla EM3 and OP9 ("Atribacteria") (Figure 4; Table A2). Within the Bacteroidetes, only a single species-level OTU was positively correlated with bGDGT abundance (Table A3); however, that pyrotag could not be assigned to a class or any lower taxonomic level. Similarly, EM3 and OP9 were represented by one and two species-level OTUs that correlated with bGDGT abundance, respectively, but no cultures are available for those candidate phyla (Table A3). Although these groups are plausible sources of bGDGTs, definitive experiments to test whether they are sources of bGDGTs may await isolation and study of representative strains.
The phylum Dictyoglomi was represented by two OTUs that correlated with bGDGT abundance and could be ascribed to the genus Dictyoglomus (Table A3). That genus currently comprises only two closely related species, Dictyoglomus thermophilum and D. turgidum, both of which are obligate fermenters known to decompose components of hemicellulose and cellulose (Saiki et al., 1985; Patel et al., 1987). Dictyoglomus sp. Rt46-B1 is known to produce predominantly C16:0 phospholipid-derived fatty acids and traces (<10%) of C14:0, C15:0, C17:0, C18:0, C20:0, aC17:0, C16:1ω11c, and C18:1ω9c (Patel et al., 1991). Our own analysis of lipid extracts from pure cultures of Dictyoglomus thermophilum DSM 3960T and D. turgidum DSM 6724T, both grown on DSMZ 388 medium with gentle agitation in the dark at 70°C for 2 days, failed to reveal bGDGTs (data not shown). This argues against Dictyoglomi as a source of bGDGTs, although it is possible that such organisms may produce bGDGTs in the natural environment but not under laboratory conditions. Lastly, Acidobacteria, which are commonly believed to be sources of soil/peat bog bGDGTs (Weijers et al., 2006), constituted less than 0.01% of total pyrotags and did not show a significant correlation with bGDGTs in the enrichments, even though the primers used for PCR each match >95% of Acidobacteria 16S rRNA gene sequences. In summary, we show three lines of evidence supporting the in situ production of bGDGTs at high temperature in a Great Basin hot spring: (1) the greater abundance of hydrolysis-derived P-bGDGTs over C-bGDGTs in enrichments vs.
the opposite in the soil samples, and the distinct composition of bGDGTs in the enrichments, (2) the presence of the intact monoglycosidic bGDGT Ib identified by RP-ESI-MSn in the hot spring and its absence in the surrounding soil, and (3) the significant correlations between bGDGTs and certain groups of thermophilic bacteria, combined with the low abundance of Acidobacteria, as inferred from pyrotag data, in all samples.

ACKNOWLEDGMENTS

This work was funded by the U.S. Department of Energy, DOE grant DE-EE-0000716, and the Joint Genome Institute at the DOE (CSP-182). LC-MS analysis was performed at the State Key Laboratory of Marine Geology at Tongji University through a National "Thousand Talents Program" of China and at MARUM, University of Bremen. Brian P. Hedlund is grateful for support from Greg Fullmer through the UNLV Foundation. Chun Zhu and Kai-Uwe Hinrichs acknowledge funding from the Deutsche Forschungsgemeinschaft in the form of a postdoctoral fellowship granted to Chun Zhu by the DFG-Research Center and Excellence Cluster "The Ocean in the Earth System" and an instrument grant for the acquisition of the UPLC-qTOF system (Inst 144/300-1).
Immunogenetics of Small Ruminant Lentiviral Infections

The small ruminant lentiviruses (SRLV) include the caprine arthritis encephalitis virus (CAEV) and the Maedi-Visna virus (MVV). Both of these viruses limit production and can be a major source of economic loss to producers. Little is known about how the immune system recognizes and responds to SRLVs, but due to similarities with the human immunodeficiency virus (HIV), HIV research can shed light on the possible immune mechanisms that control or lead to disease progression. This review will focus on the host immune response to HIV-1 and SRLV, and will discuss the possibility of breeding for enhanced SRLV disease resistance.

Introduction

The caprine arthritis encephalitis virus (CAEV) and the Maedi-Visna virus (MVV) are enveloped RNA viruses in the lentivirus genus of the Retroviridae family [1,2]. While small ruminant lentiviruses (SRLVs) were once considered to be species-specific, recent studies suggest that they can be transmitted between sheep and goats [3], and can recombine to form new CAEV-MVV strains [4]. These viruses primarily infect monocytes, macrophages, and dendritic cells [5], and like the human immunodeficiency virus (HIV), infection is lifelong and can persist for months or years in a latent or subclinical state. Transcription of the provirus is driven by host transcription factors such as NF-κB and Sp1 [27], important immune-related transcription factors whose activation leads to transcriptional activation and virus replication.

Host Immunity to Lentiviral Infections

There are substantial gaps in our knowledge of host innate and acquired immune responses to SRLV. Due to similarities between HIV-1 and SRLV, HIV-1 research can improve our understanding of SRLV immune responses. This section will provide an overview of the immune response to HIV-1, and discuss the current knowledge of the immune response to SRLV infection.
Toll-like Receptors and Antiviral Peptides

Toll-like receptors (TLRs) are host pattern recognition receptors (PRRs) that play a key role in innate recognition of a variety of conserved pathogen-associated molecular patterns (PAMPs). Ligation of PAMPs with PRRs induces intracellular signaling pathways that involve a series of phosphorylation events mediated through the common adaptor molecule MyD88 (reviewed by [28]). Downstream effects of these signaling pathways result in the activation and translocation of NF-κB or AP-1 transcription factors to the nucleus, and subsequent pro-inflammatory cytokine production that facilitates the recruitment of innate effector cells such as neutrophils and macrophages to clear the pathogen [29]. During HIV-1 infection, the roles of TLR7 and TLR8 have been investigated. Both TLR7 and TLR8 recognize single-stranded viral RNA (ssRNA), and this results in the induction of intracellular signaling cascades that involve NF-κB transcriptional activation, which ultimately results in the production of a variety of pro-inflammatory cytokines such as the type 1 interferons (IFN-α and -β), IL-6, and TNF-α [30,31]. Although this serves as an anti-viral defense mechanism through the induction of antiviral peptides such as APOBEC3G (A3G), tripartite motif 5-alpha (TRIM5α), and tetherin, persistent immune activation also induces viral replication through activation of the NF-κB response element located within the HIV-1 LTR [27]. HIV-1 replication can also be induced through the activation of other TLR signaling pathways. For example, Ranjbar et al. [32] observed that HIV-1 replication can also be induced through activation of TLR2 during Mycobacterium tuberculosis infection. Since the recruitment of MyD88 and activation of either NF-κB or AP-1 is conserved across TLRs 1, 2, 4, 5, 6, 7, 8, and 9 [28], recognition of a variety of bacterial and viral PAMPs may all contribute to HIV-1 transcriptional activation and disease progression.
Despite the induction of HIV-1 replication mediated through TLR signaling, there are host-adapted antiviral proteins that serve to control HIV-1 replication. Expression of these antiviral peptides is upregulated by type 1 interferons that are induced in response to TLR activation [33]. TRIM5α, for example, is a lentivirus restriction protein that mediates cellular restriction against retroviruses in a species-specific manner [34]. Although human TRIM5α poorly restricts HIV-1, polymorphisms within the human TRIM5α gene have allowed for the development of human TRIM5α variants with high HIV-1 restriction activity [34]. The exact mechanism by which TRIM5α works is not well understood; however, it does concentrate around the viral core, where it recognizes and binds the viral capsid to facilitate rapid uncoating [35]. This disrupts reverse transcription since uncoating is a carefully regulated process, and early or delayed uncoating may negatively affect virus infectivity [36]. Once the capsid has been targeted by TRIM5α, the viral core is disassembled, which is likely mediated by proteasomal association with TRIM5α-virus complexes [36]. A3G is another antiviral protein expressed by lymphocytes, macrophages, and dendritic cells following cellular stimulation by IFN-α and IL-2 [37]. A3G restricts HIV-1 infection by two separate mechanisms. The first involves the packaging of A3G directly into viral capsids during viral assembly in the absence of vif, and this occurs through interaction with the viral capsid proteins [38]. This provides A3G with direct access to the HIV-1 genome, where it induces mutations during the reverse transcription process by editing cytosine residues to uracil residues in the proviral minus strand [37-39]. The mutated viral DNA is then degraded due to reduced stability or the inability to be incorporated into the host genome [40].
However, these mutations are not always sufficient to prevent proviral integration; if the viral DNA still integrates successfully, the mutations often alter viral open reading frames or introduce premature stop codons, resulting in misfolded or truncated viral proteins that are unable to produce infectious particles [37,40]. These misfolded viral proteins provide host cells with a pool of viral peptides that can be presented on major histocompatibility complex (MHC) class I molecules for antigen presentation. Expression of A3G may also enhance MHC class I presentation and promote the activation of cytotoxic T lymphocytes (CTLs) [37]. The other mechanism by which A3G limits HIV-1 infectivity is by directly interfering with the reverse transcription process, which may occur through the direct interaction between A3G and the viral reverse transcriptase enzyme [41]. Although A3G is highly effective at controlling virus replication, HIV-1 has adapted mechanisms to counteract the effects of A3G [37,40]. The HIV-1 accessory gene, vif, is essential for virus replication, but also has a role in promoting A3G degradation [25,37,40]. Vif binds A3G and mediates its polyubiquitination, tagging it for proteasomal degradation [39], but may also inhibit A3G mRNA translation by directly binding A3G mRNA to alter its stability [39]. Tetherin is also induced by type 1 IFNs and restricts HIV-1 by preventing virion release from the cell surface [42]. Since tetherin is an integral membrane protein with cytoplasmic, transmembrane, and extracellular domains, it can be incorporated into the membrane of virion particles as they bud from the cell surface [43]. Consequently, this serves to anchor virion particles to both the cell surface and to other virion particles as they bud from the host cell [42]. Once anchored to the cell membrane, virion particles can be endocytosed and degraded in lysosomal compartments [44]. Interestingly, HIV-1 has adapted a mechanism to prevent viral tethering.
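The hypermutation mechanism described above can be illustrated with a short toy simulation. This sketch is not from any cited study; the sequence, editing rate, and function names are invented for illustration. It models A3G editing of minus-strand cytosines to uracil during reverse transcription, which reads out as G-to-A changes on the plus strand and can convert tryptophan codons (TGG) into premature stop codons (TAG, TGA, or TAA).

```python
# Toy model of APOBEC3G-mediated hypermutation (illustrative only).
# A3G deaminates cytosine to uracil on the proviral minus strand;
# after second-strand synthesis this appears as a G -> A mutation on
# the plus strand, which can create premature stop codons.
import random

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand):
    """Return the complementary strand, 5' to 3'."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

def deaminate(minus_strand, rate, rng):
    """Edit each minus-strand C to U (read as T) with probability `rate`."""
    return "".join(
        "T" if base == "C" and rng.random() < rate else base
        for base in minus_strand
    )

def replicate_with_a3g(plus_genome, rate=0.3, seed=1):
    """One round of reverse transcription with A3G editing of the minus strand."""
    rng = random.Random(seed)
    edited_minus = deaminate(reverse_complement(plus_genome), rate, rng)
    # Second-strand synthesis copies the edited minus strand back to plus sense.
    return reverse_complement(edited_minus)

genome = "ATGTGGTGGTGGTGA"   # toy open reading frame rich in TGG (Trp) codons
mutant = replicate_with_a3g(genome)
# Every change in `mutant` is G -> A; TGG codons can become TAG/TGA/TAA stops.
```

With the editing rate set to 1.0 every plus-strand guanine becomes adenine, the fully hypermutated limit; at intermediate rates only a random subset of guanines is hit, mirroring the partially edited, often non-infectious genomes described in the text.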
The HIV-1 vpu gene, for example, encodes an integral membrane protein that interacts with tetherin transmembrane domains [45]. The vpu protein prevents the incorporation of tetherin into the envelope of virion particles and downregulates tetherin expression at the cell surface by trafficking tetherin to the trans-Golgi network and away from the sites of virion assembly prior to lysosomal degradation [45]. Other antiviral proteins include SAMHD1 and the zinc finger antiviral protein (ZAP). SAMHD1 is a host protein found in resting macrophages, dendritic cells, and CD4+ T cells that cleaves deoxynucleoside triphosphates (dNTPs) into deoxynucleosides and inorganic triphosphates, which depletes the dNTP pool required for HIV-1 reverse transcription [46]. This prevents the synthesis of full-length double-stranded viral DNA and therefore prevents integration of proviral DNA [46]. The vpx gene of HIV-2 and the simian immunodeficiency virus (SIV) has the ability to disrupt SAMHD1 by interacting with its C-terminus and promoting proteasomal degradation [46]. HIV-1, however, does not contain the vpx gene, so HIV-1 replication is actively suppressed in resting CD4+ T cells [47]. ZAP has been identified for its role in restricting the murine leukemia virus (MLV) along with HIV-1; however, some viruses can replicate normally in ZAP-expressing cells [48]. ZAP restricts HIV-1 by depleting multiply spliced mRNA, recruiting poly(A)-specific ribonucleases that shorten the poly(A) tail and direct the mRNA to exosomes for degradation [48]. Natural Killer (NK) Cells Further research investigating the innate control of HIV-1 has focused on identifying the roles of natural killer (NK) cells in controlling viral replication. The exact role of NK cells during HIV-1 infection is not well understood; however, it has been suggested that NK cells serve as a means of controlling viral replication prior to the induction of HIV-1-specific CD8+ T cell responses [49].
NK cells target HIV-1 infection by directly killing infected cells through killer immunoglobulin-like receptor (KIR)-mediated recognition of target cells, degranulation resulting in granzyme and perforin release, the Fas-Fas ligand pathway, and antibody-dependent cell-mediated cytotoxicity (ADCC), and by modulating adaptive immune responses with IFN-γ production [50]. During acute HIV-1 infection, NK cells are known to rapidly proliferate [49]; however, these cells may not be fully functional [48]. For example, Naranbhai et al. [51] observed reduced cytotoxic NK cell responses during acute HIV-1 infection. Similarly, ADCC, the process by which antibodies bind to a specific pathogen and subsequently crosslink with Fc receptors on NK cells, macrophages, neutrophils, and mast cells, is also impaired during acute HIV-1 infection [50]. However, this may be attributed to the time required to generate HIV-1-specific antibody levels [50]. Once sufficient HIV-1-specific antibody levels were generated, strong NK cell responses were observed in chronically HIV-1-infected patients [52]. Although cytotoxic responses and ADCC may be impaired during acute infection, the proportion of activating and inhibitory receptors on NK cells appears to be elevated [50]. As the disease progresses, a lower ratio of inhibitory to activating receptors has also been observed [50]. NK target cell lysis involves a balance between activating and inhibitory signals mediated through the ligation of MHC class I molecules. An increase in activating receptors during chronic HIV-1 infection implicates NK cells as important effector cells for controlling disease progression; however, further investigation into the diverse role of NK cells during HIV-1 infection is warranted. γδ T Cells γδ T cells are a unique subset of innate immune effector cells that possess the γδ T cell receptor.
Unlike CD4+ and CD8+ T cell subsets, γδ T cells do not require antigen presentation to become activated. The γδ T cell subset can be further divided into Vδ1 and Vδ2 cells, which are localized to mucosal surfaces and peripheral blood, respectively [53]. Little is known about the role of γδ T cells in HIV-1 infection; however, they do appear to play an important role in the control of HIV-1 viral replication. For example, Fenoglio et al. [53] observed an expansion of Vδ1 cells during HIV-1 infection, and these cells contained elevated gene expression levels for IFN-γ and IL-17. Similarly, in African patients, Vδ1 cells appeared to be expanded during HIV-1 and HIV-2 infection, and Vδ1 T cell counts appeared to be positively correlated with CD4+ T cell counts [54]. In addition to their cytotoxic activity and pro-inflammatory cytokine production, γδ T cells have also been identified for their role in ADCC. In vitro studies have suggested that Vδ2 cells from HIV-1 patients are expanded and have potent ADCC activity [55]. Given that Vδ2 cells are present in the circulation, they have direct access to both circulating antibodies and circulating HIV-1 virion particles and may prove effective at controlling HIV-1 dissemination throughout the body. Acquired Immunity to HIV-1 The acquired immune response involves cell-mediated (CMIR) and antibody-mediated immune responses (AbMIR). A balance between both responses is essential to maintaining overall health. The CMIR is primarily designed to combat intracellular pathogens such as viruses and intracellular bacteria and parasites, whereas the AbMIR has developed to combat extracellular pathogens such as extracellular bacteria and parasites. The CMIR involves antigen uptake and presentation by professional antigen-presenting cells (APCs) such as dendritic cells (DCs) and macrophages that reside within epithelial cell layers.
Following antigen uptake, the DCs migrate to a draining lymph node where they produce IL-12 to promote Th1 cell differentiation, and present antigens via MHC class II molecules to CD4+ T helper cells. This allows CD4+ T cells to become activated and produce the Th1 subset of cytokines, IFN-γ and IL-2. Production of IL-2 by CD4+ T cells is particularly important as it is necessary for the development and expansion of antigen-specific CD8+ T cells that recognize foreign antigen in the context of MHC class I molecules as a means of killing infected cells. The Th1 cytokine profile also induces immunoglobulin (Ig) G2 production by B cells, which promotes complement activation and opsonization [56]. The AbMIR also involves antigen processing and presentation by DCs to CD4+ T cells; however, different cytokine profiles allow for the differentiation of Th2 cell subsets. These key cytokines include IL-3, 4, 5, 9, 10, and 13, which serve to suppress Th1 immune responses. In order to produce high levels of antigen-specific antibodies, B cells must first be primed during initial antigen exposure. During this primary response, B cells differentiate and expand into memory or plasma cells. The plasma cells initially secrete IgM and small amounts of IgG. Upon subsequent antigen exposure, memory B cells become activated to rapidly produce high levels of IgG1 and other antibody isotypes. Immune responses to HIV-1 infection tend to vary greatly from individual to individual; therefore, HIV-1 patients are generally classified based on their ability to control infection [57]. Although there is no standardized viral set point or CD4+ T cell level that defines an HIV-1 controller or progressor, in general, a controller is able to maintain a low viral load and stable CD4+ T cell levels in the absence of highly active antiretroviral therapy (HAART), whereas HIV-1 progressors have high viral loads and low CD4+ T cell counts [58].
However, many of the factors that differentiate a controller from a progressor are unclear, and despite over 30 years of HIV-1 research, there is still a great deal that is largely unknown. Cell-Mediated Immune Response to HIV-1 The CMIR to HIV-1 is unique in that HIV-1 preferentially infects CD4+ CCR5+ T helper cells. During acute HIV-1 infection, the virus rapidly replicates and CD4+ T cell population numbers decline [59]. This decline in CD4+ T cells is attributed to both direct killing by the virus and targeted CD8+ cytotoxic T lymphocyte (CTL) responses [59]. Maintenance of CD4+ T cell levels is essential to maintain HIV-1-specific CTL responses, which are associated with control of the infection [60]. During acute HIV-1 infection, HIV-1-specific CTLs emerge prior to neutralizing antibodies [61]. However, robust CTL responses to HIV-1 may not necessarily control the infection, as the viral nef protein is known to down-regulate MHC class I molecules to evade CTL responses [62]. It has also been suggested that CTL-mediated control of HIV-1 is specific to certain HIV-1 peptides. CTLs have been observed to mount the strongest cytotoxic responses to the gag and nef peptides [63,64]. However, this also places selective pressure on HIV-1 to mutate these regions, creating viral escape mutants [65]. It has been suggested that these HIV-1 escape mutants may be associated with the inability to control infection; however, this association is not always clear [66]. Some studies have suggested that these mutations come at a fitness cost to the virus [61,67,68], and escape tends to occur rapidly during acute infection and declines as the infection reaches the chronic state [61,69]. In some instances, the rate of mutation is so rapid that after the initial infection, the transmitted or founder virus is completely lost [61]. Therefore, CTL responses that occur during acute infection must continuously adapt to changing HIV-1 peptides [61,70].
As the infection becomes chronic, immunological exhaustion becomes apparent due to continuous immune activation in response to viral replication [71,72]. Consequently, CD8+ CTLs often exhibit an exhausted phenotype with increased CTLA-4 and PD-1 receptor expression and reduced cellular function [71,73]. HAART can help prevent this by limiting viral replication; however, if left untreated, the disease will continue to progress [73]. Antibody-Mediated Immune Response to HIV-1 Antibody responses to HIV-1 infection do not emerge until approximately 13 days post infection [74]. These antibodies include gp41-specific IgM and IgG, which are non-neutralizing [74]. As the infection progresses, gp120-specific IgG1 antibodies are produced [75]. These antibodies are specific to a variety of epitopes on gp120, including the CD4 binding site, glycan-containing regions, and the V3 loop [75]. The early production of non-neutralizing antibodies has little effect on viral infectivity and control of viral load; however, they do play a role in ADCC and viral opsonization [76]. During acute HIV-1 infection, IgG1-virus immune complexes are predominantly comprised of gp41 antibodies and are present at relatively low levels compared to the viral load [76]. In contrast, during chronic HIV-1 infection, these virus-immune complexes are more abundant and are comprised of gp120-specific IgG1 [76,77]. This suggests that although HIV-1-specific antibodies emerge early after infection, they are not effective at controlling the viral load [76]. Neutralizing antibodies begin to emerge as early as 16 weeks post infection [75]. However, these early neutralizing antibodies may not be fully functional, as broad neutralizing ability does not emerge in HIV-1 patients until 2-3 years post infection [75]. Some HIV-1-infected patients, in contrast, never develop neutralizing antibodies, and it is unclear what factors contribute to these differences [78].
Additionally, the emergence of neutralizing antibodies may not necessarily control the infection. For example, Mikell et al. [78] observed a positive correlation between neutralizing ability and viral load, and Euler et al. [79] observed lower percentages of CD4+ T cells in patients with strong neutralizing activity, suggesting that HIV-1 may mutate env epitopes to escape virus neutralization. Other T Cell Subsets Recent studies have implicated roles for Th17 and Treg cells in the pathogenesis of lentiviral infections. Research investigating the role of Th17 cells in HIV-1, for example, found that long-term non-progressors had higher Th17 cell numbers compared to disease progressors, and higher Th17 cell numbers were associated with a lower viral load [80]. Additionally, it has been established that Th17 cells and Treg cells interact during HIV-1 infection to regulate immune responses [81]. Long-term non-progressors tended to have Th17/Treg ratios similar to uninfected controls, whereas individuals that did not easily control the rate of viral replication had depleted Th17 numbers and increased Treg numbers, suggesting a switch towards an anti-inflammatory immune response [81]. However, it is unclear if maintenance of Treg cell populations during HIV-1 infection is associated with rapid or delayed disease progression. It has been suggested that Treg-mediated IL-10 production is also associated with HIV-1 disease progression as it suppresses specific CD4+ T cell responses [82]. However, other studies have suggested that high levels of Treg cells can help prevent disease progression by limiting CD4+ T cell and CTL responses, thus limiting continuous immune activation and preventing immunological exhaustion [83,84]. Therefore, further research is required to understand the role of Tregs in HIV-1 disease progression.
Toll-Like Receptors and Antiviral Peptides The role of viral-induced TLR signaling has not been widely studied in sheep and goats; however, during SRLV infection, TLR7 and TLR8 become activated, inducing IFN-α, IL-6, and TNF-α production and subsequent antiviral protein expression [85]. It is unclear if TLR signaling pathways induce SRLV replication, or if the SRLV genome has an NF-κB transcriptional binding site in the promoter. However, given the importance of macrophages as innate immune effector cells, macrophage maturation and activation can induce SRLV replication. There is considerably less research investigating the roles of the intrinsic restriction factors TRIM5α, A3G, and tetherin in SRLV infection; however, TRIM5α has recently been identified in sheep and goats, and has been found to be effective at restricting SRLV [86]. An A3G-like protein has also been identified in sheep, and has shown cytidine deaminase activity [87]. Like HIV-1, SRLV contains the accessory vif gene to combat the restrictive activity of A3G, and SRLV vif appears to restrict A3G across species [87]. Tetherin has been investigated in sheep due to its role in restricting endogenous retroviruses [88]. Since the SRLV genomes lack the accessory gene vpu, tetherin likely has high SRLV restriction activity. However, further investigation into the roles of TLR activation and intrinsic restriction factors in limiting SRLV infection is necessary. NK Cells The role of NK cells in SRLV infection has not been investigated; however, given the importance of NK cells in HIV-1 infection, it is likely that they play an important role in the control of SRLV.
It is unclear how NK cells target SRLV-infected macrophages; however, we may speculate that they recognize and bind infected cells and virion particles through a number of mechanisms, including KIR-mediated recognition, degranulation, complement activation, ADCC, and the production of IFN-γ, which serve to either kill infected cells or modulate virus-specific immune responses. ADCC has been investigated as a possible control mechanism in MVV-infected sheep [89]. Sheep vaccinated with a recombinant env protein had higher IgG2 antibody titers than IgG1, and their polyclonal serum had ADCC activity compared to serum from MVV-infected non-vaccinated animals, suggesting that MVV-specific IgG2 may have strong ADCC activity early in infection [89]. γδ T Cells In ruminants, γδ T cells comprise approximately 70% of all lymphocytes in young animals, and are an important part of the innate immune system [90]. CAEV-infected goats have a significantly higher proportion of γδ T cells compared to healthy goats, which suggests that these cells may be important for controlling SRLV infection [90-92]. Given that γδ T cells tend to localize to mucosal surfaces, it is possible that this cell type plays a crucial role in limiting SRLV entry and mediating early immune responses against these viruses. Acquired Immunity to SRLV During SRLV infection, both branches of the acquired immune system are activated, though it remains unclear how each relates to either host protection or disease progression [93]. Like HIV-1, the degree of the immune response influences the viral load, which is correlated with the severity and presence of clinical disease symptoms [94,95]. Animals that respond with a CMIR are often referred to as long-term non-progressors because they exhibit a persistent viral infection but lack clinical symptoms and have a low viral load.
These animals produce high levels of IgG2 antibodies specific to gp135, and a dominant subset of gp135-responsive Th cells displaying high levels of IFN-γ gene expression [94,95]. In contrast, arthritic animals tend to mount a type 2 or AbMIR, characterized by high polyclonal SRLV-reactive IgG1 antibody titers and a dominant subset of Th2 cells with low proliferation and enhanced IL-4 gene expression [84,95]. Cell-Mediated Immune Response to SRLV The CMIR is likely the most efficient response for controlling viral load. Since SRLV, unlike HIV-1, does not infect CD4+ T cells, the maintenance of CD4+ T cell populations during SRLV infection allows for the development and maintenance of SRLV-specific CTLs. However, SRLV may interfere with CD4+ T cell proliferation, as CAEV-infected arthritic goats had reduced CD4+ T cell proliferation compared to CAEV-infected asymptomatic goats [96]. Reduced lymphocyte proliferation was also observed in clinically affected sheep compared to MVV-infected asymptomatic animals [97]. Since the SRLV genome does not contain the viral nef gene, down-regulation of MHC class I by SRLV likely does not occur. However, SRLV may down-regulate MHC class II molecule expression on SRLV-infected macrophages [91], and down-regulation of CD80 co-stimulatory molecules has been observed in sheep with clinical disease symptoms [97]. Overall, this suggests that SRLV infection interferes with antigen processing and presentation, and thus limits the ability of antigen-presenting cells to activate CD4+ T cells and induce CTL responses. Although the CMIR is the preferential response for maintaining a low viral load, the presence of type 1 cytokines is not sufficient to control viral replication [94]. In fact, some of the Th1 cytokines, including IFN-γ, TNF-α, and GM-CSF, activate the SRLV promoter and induce viral replication [98].
These cytokines activate the 70 base pair repeat in the U3 region of the viral promoter, and this is mediated through the STAT1 pathway [99]. This also requires at least one gamma-activating site (GAS) within the viral promoter [100], indicating that monocyte differentiation and macrophage activation can induce viral transcription. Not all SRLV strains contain a GAS, however, and the presence of a GAS in the SRLV LTR is not necessary for viral replication [101]. However, high levels of viral replication in response to Th1 cytokines may lead to a cycle of continuous immune activation and to eventual immunological exhaustion and disease progression. Antibody-Mediated Immune Response to SRLV The antibody response to SRLV generally targets epitopes on the gp135, gp38, and capsid proteins [102]. Antibody responses can emerge as early as 2-4 weeks post infection, and tend to fluctuate during the first 6 months of infection [103]. Additionally, like HIV-1, these early antibodies are specific to linear epitopes and are thus non-neutralizing [94,96]. However, as discussed previously, these early antibodies may play an important role in ADCC [89]. Neutralizing antibodies can take as long as 2 years to emerge and can control virus infection [104]. However, like HIV-1, SRLV virus epitopes may mutate in response to selection pressure imposed by host immunoglobulins [104,105]. These mutations tend to occur in the fourth variable domain of gp135, and the mutation of a conserved cysteine was shown to change the neutralization epitope [106]; these mutations likely contribute to disease progression. Antibody responses, in general, may contribute to SRLV disease progression, since asymptomatic animals tend to mount a CMIR with a low titer of gp135-specific IgG2, while arthritic animals exhibit very high levels of IgG1 and a higher IgG1/IgG2 ratio than asymptomatic animals [107].
Although further research is necessary to better understand the roles of neutralizing antibodies in the control of SRLV infection, it is evident that an AbMIR is not sufficient to control infection [108], and readily contributes to disease progression. Immune Dysregulation The dynamics of the immune evasion strategies of SRLV are unclear; however, SRLV infection can alter macrophage function, and immune dysregulation is apparent. This is particularly evident in animals that exhibit clinical signs of arthritis or mastitis, as these inflammatory conditions are characterized by dense mononuclear cell infiltration accompanied by necrosis and edema [108]. Arthritic animals also exhibit thickening and fibrosis of the articular capsule, and erosion and ulcer formation of the articular cartilage occurs in severe cases [109]. Consequently, lesions form in the joint synovial membranes as well as the mammary gland [5]. Histological analysis of these lesions revealed large numbers of macrophages, CD8+ T cells, and B cells, and the proportion of B cells present in the lesions increases as the infection persists [110]. To investigate the immune dysregulation that occurs during SRLV infections, Lechner et al. [111] examined the cytokine profiles produced by CAEV-infected macrophages in culture. This study revealed that infected macrophages produced elevated IL-8 and monocyte chemotactic protein-1 (MCP-1), but had reduced levels of TGF-β mRNA [111]. In addition, reduced levels of TNF-α, IL-1β, IL-6, and IL-12 mRNA, and increased levels of GM-CSF mRNA, were observed in lipopolysaccharide (LPS)-stimulated CAEV-infected macrophages [111]. Higher levels of pro-inflammatory cytokine gene expression, such as IL-1β, IL-6, IL-12, and TNF-α, likely contribute to increased immune activation and trafficking of effector cells to the infection site, causing inflammation and lesion formation.
Elevated levels of GM-CSF have also been observed in alveolar macrophages of MVV-infected sheep [112], and SRLV infection can induce a phenotypic shift in macrophages from pro-inflammatory M1 to anti-inflammatory M2 macrophages that favoured viral replication [113]. Interestingly, several reports have suggested that SRLV-infected goats are not immunocompromised [5,6,111,114], as is the case for HIV-1 patients. However, the altered cytokine profiles observed in CAEV- and MVV-infected macrophages, along with the reduced delayed-type hypersensitivity (DTH) response to mycobacterial antigens in MVV-infected sheep, suggest that altered cellular functions may directly affect immunity [115]. Genetics of Lentiviral Resistance There is little research investigating the genetic parameters that affect resistance or susceptibility to SRLV; however, research with HIV-1 patients has led to the discovery of a variety of genetic polymorphisms that appear to be associated with disease resistance or slower disease progression. Most of the genes associated with HIV-1 resistance encode a variety of immune molecules such as MHC class I, chemokine receptor (CCR) 5, KIR, and TLRs. For example, the HLA-B*27, B*57, and Bw4 alleles, as well as heterozygosity at these class I loci, are associated with slower disease progression [116]. However, the effect of these alleles on HIV-1 disease progression is limited to Caucasian and African populations [117]. In the Japanese population, HLA-B*52, B*67, and C*12 have been associated with a lower viral load [117]. East Asian populations also have a high proportion of individuals carrying a deletion in the A3B coding region, which increases susceptibility to HIV-1 infection [118]. There are also significant population differences in the CCR5 Δ32 mutation that is associated with resistance to HIV-1 [119]. The Δ32 mutation, for example, is only found in European, West Asian, and North African populations and shows a north-to-south decline [120].
Individuals homozygous for this mutation have a non-functional CCR5 receptor, and protection has been observed in both Δ32 homozygous and heterozygous individuals [120]. Similarly, breed differences in resistance to SRLV, as well as polymorphisms in ovine TLR7, TLR8, CCR5, and MHC genes, have been found to be associated with resistance to SRLV [114,121-124]. For example, a higher proportion of polymorphisms in the ovine TLR7 and TLR8 leucine-rich repeat regions have been identified in MVV-infected sheep [121], and a deletion in the ovine CCR5 gene was associated with a reduced proviral load in Rambouillet, Polypay, and Columbia breeds [124]. Given these genetic parameters, it may be possible to selectively breed sheep and goats to have enhanced SRLV resistance. Many of the genetic polymorphisms associated with HIV-1 resistance were identified using genome-wide association studies (GWAS). To date, few sheep GWASs, and no goat GWASs, have been carried out to identify genes associated with SRLV resistance. One possible reason for this may be the difficulty in reliably identifying the phenotype using current diagnostic methods. For example, the agar gel immunodiffusion (AGID) test and the enzyme-linked immunosorbent assay (ELISA) are widely used to detect SRLV infection [125]. Both of these tests, however, depend on the presence of host antibodies that are specific to SRLV, which are influenced by variables such as the age and health status of the animal [126]. It is also possible that the AGID and ELISA tests may be influenced by genetic differences among SRLV strains [7]. Polymerase chain reaction (PCR) is also being used to test for the presence of viral RNA and proviral DNA in different tissues. This test is more sensitive than serological testing; however, problems with primer binding heterogeneity can make detection difficult [127], and the increased cost of PCR makes it relatively inaccessible to most producers [11].
Despite these challenges, a recent ovine GWAS identified a transmembrane protein gene, TMEM154, as a candidate gene for SRLV resistance [128]. The role of this protein is currently unknown, but it is expressed at high levels in B cells and monocytes, which suggests that it may be of immunological importance [128]. Breed differences have also been identified for the TMEM154 gene: the Dalesbred, Herdwick, and Rough Fell breeds had a higher allele frequency for the TMEM154 mutation that is associated with SRLV resistance [123]. An additional ovine GWAS identified another transmembrane protein, TMEM38A, as a possible gene associated with SRLV resistance, whereas the DPPA2 gene was associated with susceptibility [129]. The DPPA2 gene is involved in embryonic lung development, which suggests that altered lung development in sheep may increase susceptibility to SRLV infections [129]. Although breeding livestock for enhanced disease resistance is becoming an increasingly popular means of improving animal health, in the context of SRLV infections, using genomic selection to breed for SRLV resistance may not be practical. As observed in HIV-1 studies, resistance or slower disease progression is a polygenic trait, involving a complex interaction between a variety of different innate and acquired immune genes [130,131]. It may, however, be possible to breed for resistance using phenotypic rather than genotypic selection. In sheep, helminth resistance is also a polygenic trait, and resistant sheep have been bred based on phenotypic parameters such as fecal egg count [132]. However, it is unclear if breeding for resistance to one disease will increase susceptibility to others. Also, if SRLV resistance were introduced based on phenotypic selection, it is unclear how the virus would adapt.
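The single-SNP association test that underlies GWAS hits of this kind can be sketched as a 2x2 chi-square comparison of allele counts between infected and uninfected animals. The counts and the `chi_square_2x2` helper below are invented for illustration; they are not data from the TMEM154 studies cited above.

```python
# Sketch of a single-SNP allelic association test (hypothetical data).
# Rows of the table: infected / uninfected animals;
# columns: counts of allele 1 / allele 2 (two alleles per animal).

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Invented allele counts for one SNP.
infected = (160, 40)     # allele 1 / allele 2 counts among SRLV-infected sheep
uninfected = (90, 110)   # allele 1 / allele 2 counts among uninfected sheep

chi2 = chi_square_2x2(*infected, *uninfected)
# With 1 degree of freedom, chi2 > 3.84 corresponds to p < 0.05
# (before any multiple-testing correction across the genome).
```

In a real GWAS the same test (or a logistic regression) is applied at hundreds of thousands of SNPs, so the significance threshold must be corrected for multiple testing; and, as the text notes, the reliability of the infected/uninfected phenotype from AGID or ELISA testing directly limits the power of the scan.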
Since SRLV mutation rates are high and the virus already mutates in response to immune-based selection pressure, it is possible that the virus would quickly adapt, rendering the breeding program obsolete. An additional approach is to breed for overall enhanced immune responses (EIR). This approach involves measuring CMIR and AbMIR in response to various antigens. A recent GWAS in dairy cattle found several single-nucleotide polymorphisms (SNPs) in the MHC locus that were associated with both AbMIR and CMIR [133]. In goats, the MHC haplotype Be10-D2 was associated with rapid seroconversion and higher antibody titers compared to the Bel-D5 haplotype [114]. Given this association, further investigation and SNP discovery may allow for the identification of EIR sheep and goats that readily control SRLV disease progression.

Conclusions

The immune response to lentiviral infections is complex and dynamic, and to date very little is known about how the immune system responds to these infections. Extensive research on HIV-1 has greatly contributed to our knowledge of the dynamics of the host-virus interaction; however, HIV and SRLV, despite their similarities, are two distinct viruses, and extrapolating knowledge from HIV research must be approached with caution. It is evident from the research presented here that gaps exist in our knowledge of SRLV, and before exploring the possibility of breeding for resistance, extensive research is needed to better understand SRLV immune responses. These knowledge gaps lie primarily in how SRLVs modulate the host immune response and in the dynamics of early infection.
For example, as previously discussed, the Th1/Th2 paradigm appears to apply to SRLV infection; however, if SRLV infection indeed steers macrophage polarization towards an M2 phenotype favoring the Th2 immune response, one would expect the disease to progress rapidly, with clinical disease apparent in a large proportion of animals. In practice, it often takes years before clinical SRLV infection becomes apparent. Future studies should therefore first investigate how SRLV infection alters macrophage function as a whole, starting with cytokine production and intracellular signaling. It would also be beneficial to investigate how endogenous stress hormone levels affect disease resistance and progression, and how host macrophages respond to SRLV during co-infection with other pathogens. Since macrophages play an important role as both innate effector cells and antigen-presenting cells, an improved understanding of SRLV-infected macrophage function will improve our understanding of how SRLV is controlled during early infection, how the acquired immune response is induced, and how SRLVs modulate the host immune response.

Conflicts of Interest

The authors declare no conflict of interest.
Altered Regulation of Cell Cycle Machinery Involved in Interleukin-1-induced G1 and G2 Phase Growth Arrest of A375S2 Human Melanoma Cells

Interleukin-1 (IL-1) inhibits the growth of A375S2 human melanoma cells by arresting them at G1 and G2 phases of the cell cycle. The arrests are preceded by a rapid decrease in kinase activities of cyclin E-Cdk2 and cyclin B1-Cdc2, which are critical for G1-S and G2-M progression, respectively. IL-1 quickly enhances the protein expression of the CDK inhibitor p21Cip1. The induced p21 binds preferentially to cyclin E-Cdk2, and the increase in p21 binding parallels the decrease in cyclin E-Cdk2 activity. Thus, p21 is likely to be responsible for the inhibition of cyclin E-Cdk2 activity and G1 arrest. Coinciding with the decrease in cyclin B1-Cdc2 activity, there is an increase in tyrosine phosphorylation of Cdc2, suggesting that an increase in the inactive Tyr-15-phosphorylated form of Cdc2 is involved in the decrease in cyclin B1-Cdc2 activity and G2 arrest. Furthermore, we found that IL-1 causes rapid dephosphorylation of p107, but not of pRb or p130, while the total protein levels of p130 are increased. Thus, IL-1 may exert its growth-arresting effects via p107 and p130 pathways rather than through pRb.

Interleukin-1β (IL-1), originally defined as a monocyte-derived factor mitogenic for thymocytes, is now known to affect many biological activities, including the ability to alter immunologic, inflammatory, hematopoietic, and homeostatic responses in the host system. In vitro, IL-1 has been shown to inhibit the growth of certain tumor cells (1-3). A role for IL-1 in host defense against tumors has been further suggested by its ability to augment natural killer cell activity (4), monocyte-mediated tumor cytotoxicity (5), and T- and B-cell responses (6) and to induce tumor regression in mice (7).
These properties have made IL-1 an attractive candidate for potential application in certain solid tumors (8-12), although several problems, including adverse effects such as fever and hypotension, remain unresolved. Elucidation of the molecular mechanisms that mediate the antiproliferative effect of IL-1 on tumor cells would not only help resolve these problems; it would also provide valuable information on how cell proliferation is regulated negatively by extracellular signals. In cell culture, various types of tumor cells, such as melanoma (4, 13-19), breast carcinoma (20), myeloid leukemia (21, 22), ovarian carcinoma (23), and lung adenocarcinoma (24) cells, have been shown to be susceptible to the antiproliferative action of IL-1. A highly susceptible human melanoma cell line, A375-C6, is commonly used to study the mechanism of the IL-1-mediated growth arrest. The action of IL-1 in A375-C6 cells has been documented to be mediated through specific cell surface receptors (25). The binding of IL-1 to its receptor results in activation of a variety of second-messenger signaling pathways (reviewed in Ref. 13) and a unique primary gene expression program characterized by the induction of a composite set of immediate early genes such as gro-α, gro-β, c-jun, nur77/NGF1-B/NAK1, IRG-9/TIS11, and Egr-1 (16, 17, 19). IL-1 action in A375-C6 cells is also characterized by inhibition of ornithine decarboxylase, a rate-limiting enzyme in polyamine synthesis, leading to inhibition of DNA synthesis (18, 26). Although these events are thought to mediate the antiproliferative effect of IL-1, further investigation is required to elucidate the direct mechanisms responsible for initiation and/or maintenance of the growth-arrested state.
Since cell growth and proliferation are ultimately regulated by a highly conserved set of cell cycle-regulatory proteins, a fruitful approach to the study of the antiproliferative mode of action of IL-1 is to examine the influence of IL-1 on the regulation of the cell cycle-regulating machinery. The eukaryotic cell cycle is regulated by the action of the cyclin-dependent kinases (CDKs) and their activating subunits, the cyclins (27, 28). In mammalian cells, Cdk6 and Cdk4 are associated with the D-type cyclins and regulate G1 progression. Cdk2 is associated with E- and A-type cyclins, and the respective complexes are believed to control G1-S transition and S phase progression, respectively. Cdc2 is associated with B-type cyclins and regulates G2-M phase. The best studied substrates of the CDKs operating during G1 and S phases are the retinoblastoma family of proteins (29, 30). This family consists of pRb and the related proteins p107 and p130, collectively termed the pocket proteins. Progression from G1 to S requires inactivation of the retinoblastoma family proteins by phosphorylation and the consequent release of a number of factors including the E2F family of transcription factors. These transcription factors then activate transcription of various genes that promote cell cycle progression. Thus, the phosphorylation state of retinoblastoma family proteins is a critical determinant in the execution of the progression from G1 to S. Recently, Muthukkumar et al. (15) documented that growth arrest by IL-1 was linked to suppression of pRb phosphorylation and suggested that hypophosphorylated pRb may mediate the action of IL-1. Since pRb is phosphorylated by the action of the CDKs, it is conceivable that the IL-1-induced pRb hypophosphorylation is the result of negative regulation of the CDKs. However, the effect of IL-1 on the activity or the regulation of the CDKs has not yet been defined.
In mammalian cells, regulation of CDKs is achieved by several mechanisms, including alteration of CDK levels; changes in the expression of the cyclins with which CDKs interact; activation and inactivation of CDKs by phosphorylation/dephosphorylation events; and the abundance and action of two families of CDK inhibitors, the Cip/Kip family (p21Cip1, p27Kip1, and p57Kip2) and the Ink4 family (p16Ink4a, p15Ink4b, p18Ink4c, and p19Ink4d) (reviewed in Refs. 27, 31, and 32). Modulation at any of these levels of regulation could regulate pRb phosphorylation. The current study was undertaken to investigate the molecular mechanisms that mediate the antiproliferative effect of IL-1, through a detailed analysis of the IL-1 effects on the cell cycle-regulating molecules, such as CDKs, cyclins, CDK inhibitors, and pRb family proteins. Using A375S2 human melanoma cells (33), which are as highly sensitive to the antiproliferative effect of IL-1 as A375-C6 cells, we have systematically analyzed the changes that occur in kinase activities, expression levels, interactions, and phosphorylation status of the cell cycle-regulating molecules and compared them to the timing of onset and completion of the growth-arresting process. We conclude from these experiments that IL-1 arrests A375S2 cells at G1 and G2 phases of the cell cycle by inhibiting kinase activities of cyclin E-Cdk2 and cyclin B1-Cdc2 complexes, and that the inhibitory mechanism for each complex is different. Furthermore, the data presented here suggest that IL-1 exerts its growth-arresting effects via p107 and p130 pathways rather than through pRb.

EXPERIMENTAL PROCEDURES

Cell Culture-A375S2 human melanoma cells (obtained from Otsuka Pharmaceutical Co., Ltd., Cellular Technology Institute, Tokushima, Japan) were cultured in minimum essential medium (Life Technologies, Inc.
Oriental Co., Tokyo, Japan) supplemented with 10% heat-inactivated fetal calf serum (Life Technologies) in a humidified incubator at 37°C in 5% CO2. Human recombinant IL-1 was purchased from R & D Systems (Minneapolis, MN). To assure exponential cell growth, A375S2 cell cultures were set up 48 h prior to the addition of IL-1; 0.6 × 10⁶ cells were plated in 100-mm plastic dishes and incubated for 24 h, the culture medium was then replaced, and the incubation was continued for another 24 h before the addition of IL-1 (1.0 ng/ml, unless otherwise indicated). Cells were harvested at different times after the addition of IL-1 and processed for cyclin-dependent kinase assays, Western blotting, immunoprecipitation, or Northern blotting, as described below. For the cell proliferation assay, cells at 1.2 × 10⁵ per 60-mm dish were treated with IL-1 at different concentrations. At various experimental intervals, cells were trypsinized and counted using a cell counter (Sysmex model F-500, Sysmex Co., Kobe, Japan). In a parallel experiment, the number of viable cells was also determined by the trypan blue dye exclusion test. For cell cycle analysis, trypsinized cells were stained with propidium iodide using the CycleTEST PLUS kit (Nippon Becton Dickinson Co., Ltd., Tokyo, Japan) and analyzed for DNA content using the Becton Dickinson fluorescence-activated cell sorting system (FACSCalibur). Cell cycle distribution was determined using ModFit LT software (Verity Software House, Popsham, ME). All experiments were repeated at least twice. Immunoprecipitation and Kinase Assay-Cell monolayers were washed twice with ice-cold phosphate-buffered saline and then scraped into ice-cold Nonidet P-40 lysis buffer (50 mM HEPES (pH 7.5), 250 mM NaCl, 0.1% Nonidet P-40, with 0.5 mM phenylmethylsulfonyl fluoride, 1 mM dithiothreitol, 10 mM β-glycerophosphate, 1 mM NaF, 0.1 mM Na3VO4, 10 μg/ml aprotinin, 10 μg/ml leupeptin, and 5 μg/ml pepstatin A added just before use).
Cell lysates were sonicated and clarified by centrifugation at 19,000 × g for 15 min at 4°C, aliquots were removed for analysis of protein concentration (measured using the DC Protein Assay kit; Nippon Bio-Rad Laboratories, Osaka, Japan), and samples were adjusted to equivalent protein concentrations. After preclearing with protein G-Sepharose beads (Sigma-Aldrich Japan, Tokyo, Japan), equal amounts of protein (150 μg) were incubated with the appropriate antibody for 2 h at 4°C. Antibody complexes were recovered on protein G-Sepharose beads and washed four times with Nonidet P-40 lysis buffer and twice with 50 mM HEPES buffer (pH 7.5) containing 1 mM dithiothreitol. The beads were then incubated at 30°C for 20 min in 25 μl of reaction buffer (50 mM HEPES (pH 7.5), 10 mM MgCl2, 20 μM ATP, 1 mM dithiothreitol, 2 mM glutathione, 10 mM β-glycerophosphate, 1 mM NaF, 0.1 mM Na3VO4) in the presence of either 2.5 μg of histone H1 (Roche Molecular Biochemicals) or 1 μg of GST-pRb substrate (Santa Cruz Biotechnology) and 5 μCi of [γ-32P]ATP (Amersham Pharmacia Biotech) per reaction. The reactions were stopped by the addition of 2× SDS sample buffer, and samples were analyzed by 12.5% SDS-PAGE followed by autoradiography. Western Blot Analysis-Lysis buffer and immunoprecipitation were as described above. Whole cell lysates or immunoprecipitates were resolved by 12.5% SDS-PAGE (7.5% in the case of pRb family proteins), the resolved proteins were transferred to nitrocellulose membranes (Nippon Bio-Rad Laboratories) using a semidry transfer apparatus (Nippon Bio-Rad Laboratories), and the membranes were blocked by incubating them overnight in TBST (Tris-buffered saline (pH 7.6) containing 5% nonfat dry milk and 0.05% Tween 20). The blots were then rinsed three times in TBST and incubated with primary antibody for 2 h. After a wash, the nitrocellulose was incubated with a secondary antibody conjugated to horseradish peroxidase (Amersham Pharmacia Biotech) for 30 min.
Finally, the protein was visualized by ECL-based autoradiography as recommended by the manufacturer (Amersham Pharmacia Biotech). Immunodepletion-Whole cell extracts were prepared with Nonidet P-40 lysis buffer and subjected to four rounds of immunodepletion, using either normal rabbit serum or a rabbit polyclonal antibody raised against p21 (15431E; PharMingen) immobilized on protein G-Sepharose beads. Following depletion, supernatants were subjected to Western blot analysis as described above to show the presence of remaining proteins. Northern Blot Analysis-Total cellular RNA was isolated using RNAzol (Biotecx Laboratories, Inc., Houston, TX), and RNA samples (10 μg) were fractionated by electrophoresis through 1.3% agarose gels containing 6% formaldehyde, 20 mM MOPS, 5 mM sodium acetate, and 1 mM EDTA and blotted onto a synthetic nylon membrane (Biodyne A; Pall BioSupport Corp., East Hills, NY) in 20× SSC (1× SSC: 150 mM NaCl, 15 mM sodium citrate) and baked at 80°C for 2 h. The blots were prehybridized and then hybridized to a 32P-labeled antisense RNA probe for human p21 at 65°C. The probe was synthesized in the presence of [α-32P]UTP (Amersham Pharmacia Biotech) using a Riboprobe Gemini II system (Promega KK, Tokyo, Japan). As a template for the antisense RNA probe, the full-length cDNA of human p21 (kindly provided by Dr. A. Noda, Kobe University, Kobe, Japan) was used. The membrane was washed twice in 2× SSC plus 0.1% SDS for 5 min and then twice in 0.1× SSC plus 0.1% SDS for 30 min at 65°C and exposed to x-ray film at −80°C using an intensifying screen. After detection of p21 mRNA, the membranes were rehybridized with a 32P-labeled antisense RNA probe for human glyceraldehyde-3-phosphate dehydrogenase (Ambion Inc., Austin, TX) to verify equal loading of RNA.
RESULTS

Initial Characterization of the Antiproliferative Action of IL-1 on A375S2 Cells-To determine the optimal IL-1 concentration to inhibit A375S2 cell proliferation completely, exponentially growing cells were treated with various concentrations of IL-1 for 96 h, and the total number of live cells was counted. As shown in Fig. 1A, more than 1 ng/ml of IL-1 inhibited cell proliferation almost completely. A trypan blue dye exclusion test indicated little toxicity associated with this dose of IL-1 (>90% viability after 96 h). We therefore used 1 ng/ml IL-1 for subsequent experiments. Fig. 1B shows the time course of the antiproliferative effect of IL-1 on A375S2 cells. A slight inhibition of cellular proliferation was apparent as early as 24 h after IL-1 treatment, and thereafter proliferation was inhibited almost completely, indicating that the action of IL-1 was exerted rapidly. To further characterize the antiproliferative action of IL-1, the distribution of cells in the various phases of the cell cycle was analyzed by flow cytometry as a function of time following IL-1 treatment (Fig. 2). No change in cell cycle distribution was apparent during the first 4 h following exposure to IL-1. A decrease in the percentage of cells in S phase was noticeable between 6 and 8 h. There was a remarkable decrease in the S-phase fraction after 24 h (to 3% or less) accompanied by an increase in the percentage of cells in both G1 and G2/M. By 48 h, the percentage of cells in S phase had decreased to <1%. In microscopic analysis of Wright-Giemsa-stained cell preparations, no mitotic cells were observed after 48 h of IL-1 treatment (data not shown). These results showed that IL-1 arrested A375S2 cells at G1 and G2 phases of the cell cycle. They also showed that the process resulting in blockage at two different cell cycle points began to take effect within the first 6-8 h and was nearly completed by 24 h following IL-1 treatment.
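The flow-cytometric readout above reduces each cell to a DNA-content value that is then binned into G1, S, and G2/M fractions. The study used ModFit LT for this; the crude threshold gate below is only an illustration of the underlying idea, with invented numbers, not the actual model-fitting procedure.

```python
def phase_fractions(dna_content, g1_peak=1.0, g2_peak=2.0, tol=0.15):
    """Crudely assign cells to G1, S, or G2/M from normalized DNA content.
    Cells near 1x DNA are G1, near 2x are G2/M, and in between are S.
    (Real analyses fit overlapping distributions, e.g. with ModFit LT.)"""
    g1 = sum(1 for x in dna_content if x <= g1_peak + tol)
    g2m = sum(1 for x in dna_content if x >= g2_peak - tol)
    s = len(dna_content) - g1 - g2m
    n = len(dna_content)
    return {"G1": g1 / n, "S": s / n, "G2/M": g2m / n}

# Hypothetical normalized DNA-content readings for ten cells
cells = [1.0, 1.05, 1.1, 1.4, 1.6, 1.9, 2.0, 1.0, 0.95, 1.5]
print(phase_fractions(cells))  # → {'G1': 0.5, 'S': 0.3, 'G2/M': 0.2}
```

A G1 + G2/M arrest like the one reported here shows up as the S fraction collapsing while both boundary bins grow.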
We therefore focused on the mechanisms inhibiting cell cycle progression during the early phase (<6-8 h) of IL-1 treatment. Effects of IL-1 on Cyclin-CDK Kinase Activities-Since CDKs complexed with their activating cyclin subunits regulate cell cycle progression in different phases, we first investigated whether the IL-1-induced blockage at two different cell cycle points was associated with changes in the kinase activities of various cyclin-CDK complexes. Results of the in vitro kinase assays using appropriate immunoprecipitates and substrates (GST-pRb or histone H1) are shown in Fig. 3. Cdk6- and cyclin D-associated kinase activities both increased from 2 to 4 h following IL-1 treatment, peaked at 6-8 h, and thereafter declined, falling below control levels at 24-48 h. Cdk4 was expressed at very low levels in all control and IL-1-treated A375S2 cells (see below), and no kinase activities could be detected in Cdk4 immunoprecipitates (data not shown). The levels of cyclin A-associated kinase activity did not change significantly during the first 8 h of IL-1 treatment, but most of the activity had disappeared after 24 h of treatment. The kinase activities associated with the other CDKs and cyclins we tested, namely Cdk2, Cdc2, cyclin E, and cyclin B, all decreased gradually from 4 to 8 h and had almost disappeared by 24 h after IL-1 treatment. These results demonstrate that IL-1 treatment results in a temporary increase in cyclin D-Cdk6 activity and consistent decreases in cyclin E-Cdk2 and cyclin B1-Cdc2 activities. It should be noted that the decreases in cyclin E-Cdk2 and cyclin B1-Cdc2 activities were both already detectable after 4 h of IL-1 treatment, before any changes in cell cycle distribution could be detected (Fig. 2). The kinase activities of cyclin E-Cdk2 and cyclin B-Cdc2 are known to be critical for G1-S and G2-M transition, respectively.
Hence, it is likely that down-regulation of cyclin E-Cdk2 and cyclin B1-Cdc2 activities from 4 h onward is a mechanism of braking the cell cycle at the respective points during the early phase of IL-1 treatment. Effects of IL-1 on the Protein Levels of Cyclins and CDKs-We next examined whether the early changes in cyclin-CDK kinase activities were attributable to changes in the protein levels of cyclins and CDKs. Results of Western analysis are shown in Fig. 4. IL-1 did not reduce the expression of any of the cyclins or CDKs for at least 8 h, indicating that the changes in cyclin D-Cdk6, cyclin E-Cdk2, or cyclin B1-Cdc2 kinase activities observed in the early phase of IL-1 treatment (Fig. 3) were not due to changes in protein expression levels of the respective components. At 24 h, the expression of Cdk6 and cyclin D was decreased significantly, and it decreased further at 48 h. The expression of cyclin A, cyclin B1, and Cdc2 was also decreased markedly at 24 h, and these components had mostly disappeared by 48 h. The decreased expression of these components after 24 h may correspond to the decrease in the respective cyclin-CDK activities during the late phase of IL-1 treatment (Fig. 3). In contrast, the protein expression of Cdk2 and cyclin E remained unchanged up to 48 h despite the complete loss of cyclin E-Cdk2 kinase activity after 24 h of IL-1 treatment (Fig. 3). For Cdk2, however, a slower migrating form was observed to increase at 24 and 48 h. Since this form represents the Thr-160-dephosphorylated inactive form of Cdk2 (34), the increased dephosphorylation of Cdk2 likely contributed to the down-regulation of Cdk2-associated kinase activities after 24 h of IL-1 treatment. Cdk4 was expressed at very low levels in these cells, with no detectable change in expression following IL-1 treatment.
Further experiments examined the possibility that altered association of cyclins with their CDK partners might contribute to the changes in cyclin-CDK kinase activities during the early phase of IL-1 treatment. In these experiments, cyclins were immunoprecipitated, and the immunoprecipitates were then immunoblotted for the respective CDK partners. As shown in Fig. 5, levels of cyclin D-associated Cdk6 increased from 2 to 4 h following IL-1 treatment, peaked at 6-8 h, and thereafter declined. The changes in cyclin D-Cdk6 association were similar in timing and magnitude to the changes in cyclin D-Cdk6 kinase activity (Fig. 3), suggesting that the temporary increase in cyclin D-Cdk6 activity between 2 and 8 h reflected increased formation of the cyclin D-Cdk6 complex. On the other hand, no change was observed in levels of cyclin E-associated Cdk2 over the 48 h of treatment. Levels of cyclin A-associated Cdk2 and cyclin B1-associated Cdc2 were unaltered for the first 8 h following IL-1 treatment, but thereafter both declined to low levels at 24 h, paralleling the temporal changes in total protein levels of cyclin A, cyclin B1, and Cdc2 (Fig. 4). Effects of IL-1 on the Expression of CDK Inhibitors-As described above, the early decreases in cyclin E-Cdk2 and cyclin B1-Cdc2 activities were not accompanied by decreases in the expression of the respective cyclins or CDKs or by decreases in their complex formation (Figs. 3-5). To investigate other possible causes of the decrease in kinase activities, the expression of various CDK inhibitors was examined by Western analysis. Data in Fig. 6 show that IL-1 treatment resulted in a rapid induction of p21. The protein levels of p21 began to increase within the first 2 h of treatment and continued to increase up to 48 h.
On the other hand, the protein levels of p27, which was expressed at very low levels in untreated A375S2 cells, remained unchanged during the early phase of IL-1 treatment, and an obvious increase was observed only after 24 h of treatment. The expression of the other CDK inhibitors, p57, p15, p16, p18, and p19, was detected only at very low levels in cells that were untreated or treated with IL-1 (Fig. 6). To determine whether the increase in p21 protein expression is regulated at the transcriptional level, p21 mRNA was quantitated by Northern blot analysis using a full-length antisense RNA probe for human p21. As shown in Fig. 7, a small increase in p21 mRNA levels was detectable within 30 min of IL-1 treatment, and the levels continued to increase up to 48 h, paralleling the changes in p21 protein levels. These results indicated that the IL-1-induced increase in p21 protein levels is regulated transcriptionally at least in part. p21 expression is known to be under the control of both p53-dependent and p53-independent mechanisms. To examine whether p53 protein contributes to the IL-1-mediated induction of p21, the protein levels of p53 were analyzed after treatment of cells with IL-1. As shown in Fig. 7, p53 protein was barely detectable in normally growing A375S2 cells, and no change was observed in its expression levels following IL-1 treatment. On the other hand, when we treated A375S2 cells with camptothecin, a topoisomerase inhibitor that induces double-stranded DNA breaks, we observed induction of both p53 and p21 proteins, indicating that the function of p53 required for induction of p21 by DNA damage is retained in these cells (data not shown). Thus, the present results suggested that IL-1 induces p21 independently of p53.
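Quantitative statements like "p21 mRNA continued to increase up to 48 h" rest on densitometry of the blot bands, normalized to the loading control (here, the GAPDH rehybridization) and expressed relative to the untreated sample. A minimal sketch of that normalization, using hypothetical band intensities rather than the paper's data:

```python
def fold_induction(target, loading, t0=0):
    """Normalize target-band intensities to a loading control (e.g. GAPDH)
    and express each time point as fold change over sample index t0."""
    norm = [t / l for t, l in zip(target, loading)]
    return [round(x / norm[t0], 2) for x in norm]

# Hypothetical densitometry values for p21 mRNA at 0, 0.5, 8, and 48 h
p21 = [100, 130, 400, 900]
gapdh = [200, 210, 205, 195]
print(fold_induction(p21, gapdh))  # → [1.0, 1.24, 3.9, 9.23]
```

Normalizing to the loading control first means that small differences in the amount of RNA loaded per lane do not masquerade as induction.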
Association of p21 with Cyclin-CDK Complexes-To determine whether p21 is responsible for the inhibition of cyclin E-Cdk2 and cyclin B-Cdc2 complexes during the early phase of IL-1 treatment, we next investigated whether there was any change in the association of p21 with cyclin-CDK complexes after IL-1 treatment. Cell extracts were immunoprecipitated with antibodies against various cyclins or CDKs and Western blotted for p21 (Fig. 8). We found that a certain amount of p21 was coimmunoprecipitated with anti-Cdk6, -Cdk2, -cyclin D, -cyclin E, and -cyclin A antibodies even from extracts of untreated growing A375S2 cells. The relative amount of p21 associated with cyclin E increased gradually from 4 to 48 h following treatment with IL-1. Cyclin A-associated p21 increased slightly from 4 to 8 h, although cyclin A-Cdk2 kinase activity was not changed during the early phase of IL-1 treatment (Fig. 3). Cdk2-associated p21 increased up to 8 h and thereafter decreased gradually, consistent with the observed changes in the p21 association with cyclins E and A, the partners of Cdk2. For Cdc2 and cyclin B1, a p21 association was not detectable in untreated growing cells but became faintly detectable at 2-4 h following IL-1 treatment and thereafter tended to increase slightly up to 24 h. These findings indicate that the level of p21 associated with cyclin E-Cdk2, cyclin A-Cdk2, and cyclin B1-Cdc2 increased during the early phase of IL-1 treatment. We next asked whether the increased level of p21 associated with these cyclin-CDK complexes was sufficient to inhibit their kinase activities. To answer this question, we determined what fraction of each cyclin-CDK complex was associated with p21 by immunodepletion experiments. Cell extracts from untreated cells or from cells treated with IL-1 for 8 h were immunodepleted by three successive rounds of immunoprecipitation with either anti-p21 antiserum or normal rabbit serum as a control.
What remained in the depleted extracts was then examined by immunoblotting with specific antibodies against cyclins and CDKs (Fig. 9). Quantitation of p21 levels indicated that anti-p21 antiserum successfully removed p21 from control or IL-1-treated extracts. We found that nearly all of the cyclin E was depleted with anti-p21 antiserum not only from IL-1-treated extracts but also from control extracts, indicating that all of the cyclin E-Cdk2 complex was already bound by p21 before treatment with IL-1 in A375S2 cells. This finding was surprising in light of the fact that the cyclin E-Cdk2 complex is active in growing cells and that the amount of p21 bound to cyclin E-Cdk2 increased still further following IL-1 treatment (Fig. 8). A possible interpretation of these findings is that p21 can associate with the cyclin E-Cdk2 complex in a noninhibitory mode and that multiple molecules of p21 can associate with the complex. Indeed, it has been proposed that the first molecule of p21 that associates with a cyclin-CDK complex does not inhibit its activity and that the binding of a second p21 molecule is required to inhibit its kinase activity (35, 36). For Cdk2, only a small proportion of the protein was depleted from both control and IL-1-treated extracts, consistent with the fact that Cdk2 is expressed in excess over cyclins E and A in the cell and that the binding affinity of p21 for free Cdk2 is much lower than for the cyclin-Cdk2 complex (35, 37-39). In contrast with the case of cyclin E, no depletion of cyclin A, cyclin B1, or Cdc2 was observed either before or after treatment with IL-1, suggesting that only a minor portion of the cyclin A-Cdk2 or cyclin B-Cdc2 complexes was associated with p21 even after IL-1 treatment. On the basis of these results, we conclude that p21 binds preferentially to the cyclin E-Cdk2 complex and that the increased level of p21 after IL-1 treatment is effective for the inhibition of cyclin E-Cdk2 but has little or no effect on cyclin A-Cdk2 and cyclin B1-Cdc2.
We also found that the relative amount of Cdk6- and cyclin D-associated p21 increased 2-4 h following IL-1 treatment, peaked at 6-8 h, and thereafter declined (Fig. 8). Immunodepletion experiments performed with anti-p21 antiserum verified that a certain proportion of cyclin D, possibly complexed with Cdk6, was indeed associated with p21 in cells treated with IL-1 for 8 h. These findings were surprising since, as seen in Fig. 3, the kinase activity of cyclin D-Cdk6 increased during the early phase of IL-1 treatment. It is noteworthy that the time kinetics of the increase in the association of p21 with cyclin D-Cdk6 complexes (Fig. 8) coincided with the increase in kinase activity (Fig. 3), probably resulting from the increased complex formation between cyclin D and Cdk6 (Fig. 5). These findings may be explained by the recent suggestion that p21 acts not only as a CDK inhibitor but also as an assembly factor for cyclin-CDK complex formation (36, 40). IL-1 Induces Tyrosine Phosphorylation of Cdc2-Another important mechanism for controlling CDK activity is phosphorylation of conserved threonine and tyrosine residues of CDKs. The kinase activity of both Cdk2 and Cdc2 is regulated negatively by phosphorylation of Tyr-15 and Thr-14 (34, 41). To examine the possibility that such phosphorylation might contribute to the inhibition of Cdk2 and Cdc2 during the early phase of IL-1 treatment, we analyzed the relative content of phosphotyrosine in these two CDKs by immunoblotting with an anti-phosphotyrosine antibody. As shown in Fig. 10, no tyrosine phosphorylation was detected in Cdk2 immunoprecipitated with anti-cyclin E antibodies from either control or IL-1-treated cell extracts. On the other hand, phosphotyrosine was detected at low levels in cyclin B1-associated Cdc2 before treatment with IL-1, and the levels increased gradually from 4 to 8 h following IL-1 treatment (Fig. 10), coincident with the observed decrease in cyclin B1-Cdc2 kinase activity (Fig. 3).
These findings suggest that Tyr-15 phosphorylation is a mechanism that inhibits the activity of cyclin B1-Cdc2 but is not involved in the inhibition of cyclin E-Cdk2 during the early phase of IL-1 treatment. Analysis of Phosphorylation Status of pRb Family Proteins-As shown above, the kinase activity of cyclin E-Cdk2 started to decrease 4 h following IL-1 treatment (Fig. 3). The best studied G1 cyclin-CDK substrate is the product of the retinoblastoma tumor suppressor gene, pRb. It has become clear that pRb is a negative regulator that acts in the G1 phase of the cell cycle, and its activity appears to be modulated by phosphorylation. We therefore determined whether pRb underwent dephosphorylation at times compatible with the decrease in cyclin E-Cdk2 activity. The reduction in mobility of pRb on SDS-PAGE and Western blot analysis is a widely accepted indicator of pRb phosphorylation. In asynchronously growing A375S2 cells, pRb was found in both hyper- and hypophosphorylated forms, which migrated as multiple bands in the molecular mass range of 110-115 kDa (Fig. 11; detected by an antibody recognizing a pRb C-terminal region). Contrary to our expectation, no change in the proportions of hyper- and hypophosphorylated pRb was recognized in the early phase (up to 8 h) of IL-1 treatment, and disappearance of the slow-migrating hyperphosphorylated forms of pRb was observed only 24 h after treatment (Fig. 11). We next investigated whether IL-1 affected the phosphorylation state of the two known pRb-related proteins, p107 and p130. For the p107 protein, a slight downward mobility shift, which seemed to represent dephosphorylation of the protein, was first seen at 4 h of IL-1 treatment, and a more marked shift was evident at 8 h (Fig. 11). The protein expression level of p107 was not affected during the first 8 h of treatment, but after 24 h, when the vast majority of cells had exited the cell cycle, the p107 protein had almost disappeared.
On the other hand, p130 was present as both hyper-and hypophosphorylated forms in asynchronously growing control cells. In contrast with p107, the total protein level of p130 was increased by treatment with IL-1. The increase was first seen at 6 h and peaked at 24 h. Interestingly, independent of the increase in the total protein levels, the ratio between the hyper-and hypophosphorylated forms of p130 was constant during the first 8 h, and a decrease in the hyperphosphorylated form with an accumulation of the hypophosphorylated form was observed only at the 24-and 48-h time points. Taken together, these results suggest that IL-1 exerts its growth-inhibitory effects not through pRb but by regulating the phosphorylation state of p107 and the abundance of p130. DISCUSSION This study represents an extensive analysis of the potential roles of the cell cycle machinery in IL-1-induced growth arrest of A375S2 human melanoma cells. The first important observation in this study is that IL-1 arrested A375S2 cells at both G 1 and G 2 phases of the cell cycle, suggesting that IL-1 targets a molecule(s) critical for progression through G 1 (or G 1 -S) and G 2 (or G 2 -M). In the previous study (14), in some contrast with our data, A375 cells, which are less sensitive to IL-1 than A375S2 cells (33), were shown to be arrested by IL-1 at G 1 phase but not at G 2 phase, although IL-1 did retard progression of A375 cells through G 2 -M. Another A375-derived IL-1-sensitive clone named A375-C6 has been reported to be arrested by IL-1 at G 0 /G 1 phase, but no evidence for blockage or retardation at G 2 phase was obtained (15). It therefore appears likely that the type of arrest induced by IL-1 varies with the clone of the A375 cell line. The cell cycle analysis of A375S2 cells also showed that IL-1 started to brake cell cycle progression at G 1 and G 2 phases within the first 6 -8 h of treatment, and the blockage at the two phases was nearly completed by 24 h. 
To elucidate the molecular mechanism of the IL-1 action, it is of importance to distinguish the induced changes in cell cycle-regulating molecules causing growth arrest from those that are a consequence of the cell cycle arrest. We therefore focused on the changes and/or modulations of cell cycle-regulating molecules observed during the early phase (<6-8 h) of IL-1 treatment, before any changes in cell cycle distribution could be detected. We found that the kinase activities of cyclin E-Cdk2 and cyclin B-Cdc2 complexes both started to decrease as early as 4 h after treatment with IL-1, suggesting that these changes were causes rather than consequences of the inhibition of cell cycle progression. Since cyclin E-Cdk2 and cyclin B1-Cdc2 complexes have been shown to have crucial roles in G1-S and G2-M transition, respectively, it is likely that the IL-1-induced rapid down-regulation of cyclin E-Cdk2 and cyclin B1-Cdc2 activities is a mechanism of braking cell cycle progression at their respective points. On the other hand, the kinase activity of cyclin A-Cdk2, whose function is essential in S phase, showed no change during the first 8 h of IL-1 treatment. Consistent with this, no evidence for blockage in S phase was obtained during the early phase of IL-1 treatment. After 24 h, when the vast majority of cells had exited the cell cycle, the kinase activity of cyclin A-Cdk2, as well as that of cyclin E-Cdk2 and cyclin B1-Cdc2, disappeared completely. The complete loss of these cyclin-CDK activities after 24 h may be a secondary response reflecting the cessation of the cell cycle and maintenance of cells at G1 and G2 phases. The kinase activity of cyclin-CDK complexes can be regulated in a number of ways (see Ref. 42 and references therein).
In this study, examination of potential mechanisms for the IL-1-induced rapid down-regulation of cyclin E-Cdk2 and cyclin B1-Cdc2 activities revealed that the CDK inhibitor p21 is likely to be responsible for the inhibition of cyclin E-Cdk2 activity, whereas an increase in tyrosine phosphorylation of Cdc2 may be involved in the inhibition of cyclin B1-Cdc2 activity. p21 has been shown to inhibit a wide variety of cyclin-CDK complexes including G1 cyclin/CDK and has been implicated in G1 arrest following DNA damage (37,43), in response to negative growth factors, such as transforming growth factor-β (44,45) or interferon-α (46), and in maintenance of terminally differentiated cells in a nonproliferative state (47,48). Our results provide the following evidence to support a possible role for p21 in IL-1-mediated inhibition of cyclin E-Cdk2 activity causing cell cycle arrest at G1. First, p21 induction is the first change to be observed, occurring well before the start of cell cycle arrest. In contrast with p21, the p21-related CDK inhibitor p27, which was expressed at very low levels in proliferating A375S2 cells, was induced only after 24 h of IL-1 treatment. Other CDK inhibitors examined, including p57, p15, p16, p18, and p19, were not induced at all by IL-1. Second, the increase in p21 expression levels is paralleled by an increased binding of p21 to the cyclin E-Cdk2 complex, coinciding with the decrease of kinase activity of the complex. Although the binding of p21 to cyclin A-Cdk2 and cyclin B1-Cdc2 complexes also increased in parallel with the increase in p21 expression levels, immunodepletion experiments revealed that the vast majority of the cyclin A-Cdk2 and cyclin B1-Cdc2 complexes were still free from p21 after IL-1 treatment, indicating that the increased binding of p21 to these cyclin-CDK complexes after IL-1 treatment is not sufficient in quantity to inhibit the activity of these complexes.
In contrast, all of the cyclin E-Cdk2 complex was found to be complexed with p21 after, and even before, IL-1 treatment. At first glance, this observation appears to be in conflict with the fact that cyclin E-Cdk2 complex is active in growing cells and that the relative amount of p21 bound to the complex increased still further following IL-1 treatment. The apparent paradox, however, is resolved by proposing that p21 can associate with cyclin E-Cdk2 complex in a noninhibitory mode and that multiple molecules of p21 can associate with the complex. This is fully consistent with the idea that the first molecule of p21 that associates with a cyclin-CDK complex does not inhibit its activity and that the binding of a second p21 molecule is required to inhibit its kinase activity (35,36). Based on this idea, it has been proposed that cyclin-CDK complexes become maximally sensitive to increases in p21 levels by the binding of one p21 molecule to each complex (35). This is consistent with our observation that the decrease of cyclin E-Cdk2 activity occurred quickly, coinciding with the rapid increase in p21 expression levels following IL-1 treatment. A similar observation has been reported in normal human fibroblasts, where all of the active cyclin E-Cdk2 complex is associated with p21, and the active complex containing p21 can be inhibited by up-regulated p21 following UV-induced DNA damage (37). This phenomenon was also explained based on the stoichiometric inhibitory action of p21. Although p21 is known as a universal CDK inhibitor, recent reports provide evidence that the prime targets of p21 for inhibition are cyclin E/A-Cdk2 complexes in vivo (37,49,50). Consistent with this, our observations suggest that IL-1-induced p21 is a potent inhibitor of cyclin E-Cdk2 but not effective against cyclin D-Cdk6 or cyclin B1-Cdc2. However, our observations also suggest that the induced p21 has little inhibitory effect even on cyclin A-Cdk2. This appears to be associated with the finding that all of the cyclin E-Cdk2 complex, but only a minor portion of the cyclin A-Cdk2 complex, was already associated with p21 before treatment with IL-1. Such a physical association of p21 with active cyclin E-Cdk2 but not with cyclin A-Cdk2 in growing cells is also observed in normal human fibroblasts (37) and in certain cell lines (51,52). These observations suggest that, at least in these cells and cell lines, p21 binds to cyclin E-Cdk2 complex with much higher affinity than to cyclin A-Cdk2 complex, and thus p21 is a more effective inhibitor of cyclin E-Cdk2 than of cyclin A-Cdk2 in vivo. It remains to be clarified, however, whether the relative affinity of p21 for cyclin E- and cyclin A-Cdk2 complexes varies with the cell type. While this work was in progress, Nalca and Rangnekar (53), using another A375-derived IL-1-sensitive clone, A375-C6, showed that IL-1 caused rapid induction of p21 at the mRNA and protein level. However, they have proposed that p21 does not play an important role in the growth-arresting effect of IL-1, since they found that inhibition of p21 expression by the antisense construct resulted in only a marginal rescue from the growth-arresting action of IL-1. This view stands in contrast with the present results suggesting that the rapid induction of p21 is of major importance in the action of IL-1 in A375S2 cells.
FIG. 11. Western blot analysis of phosphorylation status of pRb family proteins. Exponentially growing A375S2 cells were treated at time 0 with 1.0 ng/ml IL-1. At intervals thereafter, whole cell lysates were prepared, subjected to SDS-PAGE, and immunoblotted with antibodies to either pRb (9032), p107 (C-18), or p130 (R27020). The positions of hypophosphorylated proteins (pRb, p107, and p130) and hyperphosphorylated proteins (ppRb, pp107, and pp130) are indicated. Representative data from two independent experiments are shown.
This discrepancy may be attributable to a difference in the steadiness of the IL-1-induced increase in p21 protein levels between the two clones of A375 cells used. In A375-C6 cells, Nalca and Rangnekar (53) observed a rapid but transient increase (peaking at 3 h) in p21 protein expression levels following IL-1 treatment, in contrast to our data showing that IL-1 caused a rapid and sustained increase (up to 48 h) in p21 levels in A375S2 cells. Thus, in A375-C6 cells, the transient induction of p21 may not be sufficient to cause cell cycle arrest, and other mechanisms that control cell proliferation may be responsible for the arrest. Alternatively, it is conceivable that compensatory regulation mechanisms become operative in the absence of p21. In addition, it is also possible that the regulatory functions of p21 are redundant. An unexpected finding of this study was the increase in cyclin D-Cdk6 activity during the early phase of IL-1 treatment. The increase was paralleled by the increase in the relative amount of p21-bound cyclin D-Cdk6 complex, coinciding with the increase in cellular p21 levels. A possible interpretation of these findings is that p21 facilitates the formation of the complex between cyclin D and Cdk6 without hindering the function of the complex. This is consistent with the idea that, besides simply inhibiting kinase activity, members of the p21 family can promote the association of CDKs with cyclins and thus stabilize cyclin-CDK complexes (36,40). There is increasing evidence that such a role of p21 is crucial during assembly of cyclin-CDK complexes. For instance, recent data suggest that p21 promotes the association of D-type cyclins with CDKs by counteracting the effects of the Ink4 family of CDK inhibitors (54,55).
Thus, it may be concluded that the increase in cyclin D-Cdk6 activity during the early phase of IL-1 treatment mirrored the increased assembly of the active cyclin D-Cdk6-p21 complexes driven by the increase in cellular p21 levels. In this context, the possibility that this novel function of p21 also contributes to the formation of active cyclin E-Cdk2-p21 ternary complexes in growing A375S2 cells may be admitted. In addition to the inhibition by CDK inhibitors, phosphorylation of conserved threonine and tyrosine residues near the ATP binding sites of CDKs (Thr-14 and Tyr-15 on both Cdk2 and Cdc2) is also an important mechanism employed to keep the CDKs inactive (34,41). Our data showed that there was an increase in phosphotyrosine content in Cdc2 but not in Cdk2 during the early phase of IL-1 treatment. Thus, it is conceivable that the increase in Tyr-15 phosphorylation is a mechanism that inhibits the cyclin B1-Cdc2 but is unrelated to the decrease in cyclin E-Cdk2 activity during the early phase of IL-1 treatment. Phosphorylation of Tyr-15 on Cdc2 is controlled by the opposing activities of the Wee1/Myt1 kinases (56 -58) and Cdc25 phosphatase (59). Thus, the IL-1-induced increase in Tyr-15 phosphorylation on Cdc2 could be due to an increase in Wee1/Myt1 kinase activity and/or due to a decrease in Cdc25 phosphatase activity. Both Myt1 and Cdc25 also control phosphorylation of Thr-14 on Cdc2. It seems possible, therefore, that IL-1 causes an increase in phosphorylation of Thr-14, as well as Tyr-15, on Cdc2, thereby inhibiting cyclin B-Cdc2 activity. Furthermore, we do not exclude the possibility that other post-translational modifications, such as a decrease in Thr-161 phosphorylation on Cdc2, are also involved in the down-regulation of cyclin B1-Cdc2 activity during the early phase of IL-1 treatment. 
Recently, it has been reported that growth inhibition of A375-C6 human melanoma cells by IL-1 is mediated at least in part by the suppression of pRb phosphorylation to retain pRb in an unphosphorylated, growth-inhibitory state (15). In the present study using A375S2 cells, however, a shift in electrophoretic mobility of pRb to a hypophosphorylated form was not observed until 24 h after IL-1 treatment, when cell cycle arrest had already been completed. This finding suggests that the observed suppression of pRb phosphorylation at the 24-h time point is the result of IL-1-induced growth arrest, and thus pRb is not a functional mediator of IL-1 action in A375S2 cells. However, since the kinase activity of cyclin E-Cdk2, which is one of the candidates for pRb kinases in vivo, was inhibited in the early phase (<8 h) of IL-1 treatment, we cannot fully rule out the possibility that the early inhibition of cyclin E-Cdk2 activity caused dephosphorylation of a few phosphorylation sites in pRb that could not appreciably alter the electrophoretic mobility of pRb. With respect to the substrate of cyclin E-Cdk2 kinase, several lines of evidence indicate that cyclin E-Cdk2 catalyzes events that are rate-limiting for the G1/S transition and independent of pRb phosphorylation. For example, cyclin E, but not cyclin D, is essential for entry into S phase in mammalian cells that lack a functional pRb (60-62). In vitro phosphorylation by Cdk2 kinases had little effect on pRb function in a microinjection-based in vivo cell cycle assay (63). Thus, it is suggested that cyclin E-Cdk2 phosphorylates key substrates other than pRb. Consistent with this hypothesis, here we found that phosphorylation of the pRb-related protein p107 was suppressed from 4 h following IL-1 treatment, coincident with the inhibition of cyclin E-Cdk2 kinase activity, suggesting that the suppression of p107 phosphorylation is mediated through inhibition of cyclin E-Cdk2 activity by IL-1.
It is possible, therefore, that p107 acts as a downstream target of the IL-1-induced arrest pathway. We also found that the total protein level of another pRb-related protein, p130, gradually increased in the early phase of IL-1 treatment, whereas the ratio between the hyper- and hypophosphorylated forms of p130 was constant. This finding suggests an additional function of IL-1, up-regulation of the p130 protein level, which also plays an important role in IL-1-induced G1 arrest. Both p107 and p130, like pRb, can regulate cell proliferation, and there are a number of cell systems where the retinoblastoma family members have been involved in eliciting cell cycle arrest. The growth arrest mediated by the three pocket proteins is, however, not identical (64,65), and their relative importance varies with the antiproliferative agent and the cell type employed in the arrest system. For example, it has been shown that pRb plays a central role in cell cycle arrest after DNA damage, whereas p107 and p130 are dispensable for this process (66). Although all of the pRb family members have been reported to be associated with transforming growth factor-β-induced G1 arrest (67-69), recent experiments suggest that p130 is the major downstream target in transforming growth factor-β-regulated growth arrest in gastric carcinoma cells (45). Muthukkumar et al. (15) have suggested that growth arrest of A375-C6 melanoma cells by tumor necrosis factor-α is mediated by a pRb-independent pathway, whereas that by IL-1 is dependent on pRb function. Koudssi et al. (70) have also shown that pRb is involved in IL-1-induced G1/S arrest of rat cardiac fibroblasts, although they have not addressed the involvement of p107 or p130. Our data, in contrast, suggest that IL-1 causes cell cycle arrest not through pRb but by regulating the phosphorylation state of p107 and the abundance of p130.
The discrepancy implies that the pRb family proteins involved in IL-1-induced cell cycle arrest could vary with the cell lines examined. Although these pRb family proteins share considerable sequence homology, each protein has been shown to have a different temporal profile of interaction with different E2F family members. The p107 and p130 proteins bind specifically to E2F4 and E2F5 (71-73), whereas pRb interacts with each of the E2F family members (74,75). pRb seems to bind preferentially to E2F in middle to late G1 and S phases, and p107 forms complexes with E2F predominantly in late G1 and S phases, while p130-E2F complexes accumulate when cells exit from the cell cycle. Of note, p130 has been suggested to function as the major pocket protein during various differentiation programs (76-78), and the protein is expressed at high levels in terminally differentiated cells, such as neurons and skeletal muscle (79). It is tempting, therefore, to speculate that the p130 protein induced by IL-1 plays a crucial role in initiation and maintenance of the growth-arrested state. Although the precise role of each individual pRb family protein is not addressed here, the data presented provide a new clue that should help in future investigations of the molecular mechanisms underlying the antiproliferative effect of IL-1. In conclusion, our findings demonstrate that IL-1 inhibits the growth of A375S2 cells by arresting them at G1 and G2 phases of the cell cycle, and the arrests are preceded by a rapid decrease in cyclin E-Cdk2 and cyclin B1-Cdc2 kinase activities. Thus, the rapid down-regulation of these two cyclin-CDK activities is likely to be the mechanism responsible for the cell cycle arrest at G1 and G2 phases, respectively. The CDK inhibitor p21 is likely to be responsible for the inhibition of cyclin E-Cdk2 activity, whereas an increase in tyrosine phosphorylation of Cdc2 may be involved in the inhibition of cyclin B1-Cdc2 activity.
Furthermore, we found that IL-1 causes rapid dephosphorylation of p107, but not of pRb or p130, while the total protein levels of p130 are increased. Thus, IL-1 may exert its growth-arresting effects not through pRb but by regulating the phosphorylation state of p107 and the abundance of p130. It will be important in our future studies to define the upstream signal transduction pathway mediating the IL-1 action.
Adversarial counterfactual augmentation: application in Alzheimer's disease classification

Due to the limited availability of medical data, deep learning approaches for medical image analysis tend to generalise poorly to unseen data. Augmenting data during training with random transformations has been shown to help and has become a ubiquitous technique for training neural networks. Here, we propose a novel adversarial counterfactual augmentation scheme that aims at finding the most effective synthesised images to improve downstream tasks, given a pre-trained generative model. Specifically, we construct an adversarial game where we update the input conditional factor of the generator and the downstream classifier with gradient backpropagation, alternatively and iteratively. This can be viewed as finding the 'weakness' of the classifier and purposely forcing it to overcome this weakness via the generative model. To demonstrate the effectiveness of the proposed approach, we validate the method with the classification of Alzheimer's Disease (AD) as a downstream task. The pre-trained generative model synthesises brain images using age as the conditional factor. Extensive experiments and ablation studies show that the proposed approach improves classification performance and has the potential to alleviate spurious correlations and catastrophic forgetting. Code: https://github.com/xiat0616/adversarial_counterfactual_augmentation

Introduction

Deep learning has been playing an increasingly important role in medical image analysis in the past decade, with great success in segmentation, diagnosis, detection, etc. (1). Although deep learning-based models can significantly outperform traditional machine learning methods, they heavily rely on the large size and quality of training data (2). In medical image analysis, the availability of large datasets is always an issue, due to the high expense of acquiring and labelling medical imaging data (3).
When only limited training data are available, deep neural networks tend to memorise the data and cannot generalise well to unseen data (4,5). This is known as over-fitting (4). To mitigate this issue, data augmentation has become a popular approach. The aim of data augmentation is to generate additional data that increase the variation of the training data. Conventional data augmentation approaches mainly apply random image transformations, such as cropping, flipping, and rotation, to the data. Even though such conventional data augmentation techniques are general, they may not transfer well from one task to another (6). For instance, colour augmentation could prove useful for natural images but may not be suitable for MRI images, which are presented in greyscale (3). Furthermore, traditional data augmentation methods may introduce distribution shift, i.e., a change in the joint distribution of inputs and outputs, and consequently adversely impact the performance on non-augmented data during inference (i.e., during the application phase of the learned model) (7). Some recently developed approaches learn parameters for data augmentation that better improve downstream task performance, e.g. in segmentation, detection, or diagnosis (6,8,9), or select the hardest augmentation for the target model from a small batch of random augmentations for each training sample (10). However, these approaches still use conventional image transformations and do not consider semantic augmentation (11), i.e., creating unseen samples by changing the semantic information of images, such as changing the background of an object or changing the age of a brain image. Semantic augmentation can complement traditional techniques and improve the diversity of augmented samples (11).
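The conventional transformations mentioned above (cropping, flipping, rotation) can be sketched with numpy alone. This is a generic illustration of random image augmentation, not code from the paper; the function name, crop size, and the stand-in image are our own choices.

```python
import numpy as np

def random_augment(img, rng, crop=24):
    """Apply a random horizontal flip, a random 90-degree-multiple rotation,
    and a random crop to a 2D grey-scale image."""
    if rng.random() < 0.5:
        img = np.fliplr(img)                      # horizontal flip
    img = np.rot90(img, k=rng.integers(0, 4))     # rotate by 0/90/180/270 degrees
    h, w = img.shape
    top = rng.integers(0, h - crop + 1)           # random crop position
    left = rng.integers(0, w - crop + 1)
    return img[top:top + crop, left:left + crop]

rng = np.random.default_rng(42)
image = rng.standard_normal((32, 32))             # stand-in for a grey-scale slice
augmented = random_augment(image, rng)
print(augmented.shape)                            # (24, 24)
```

Because every transformation is applied with fresh randomness, each epoch sees a slightly different version of the same image, which is exactly the variation-increasing effect the text describes.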
One way to achieve semantic augmentation is to train a deep generative model to create counterfactuals, i.e., synthetic modifications of a sample such that some aspects of the original data remain unchanged (12-16). However, these approaches mostly focus on the training stage of generative models and randomly generate samples for data augmentation, without considering which counterfactuals are more effective for downstream tasks, i.e., the data-efficiency of the generated samples. Xue et al. (18) propose a cGAN-based model to augment classification of histopathology images with a selective strategy based on assigned label confidence and feature similarity to real data. Ye et al. (17) use a policy-based reinforcement learning (RL) strategy to select synthetic data for augmentation, with the validation accuracy as reward, but the instability of RL training could hinder the utility of their approach. By contrast, our approach focuses on finding the weakness (i.e. the hard counterfactuals) of a downstream task model (e.g. a classifier) and forcing it to overcome its weakness. Wang et al. (11), Li et al. (19), and Chen and Su (20) proposed to augment the data in the latent space of the target deep neural network, by estimating the covariance matrix of latent features obtained from latent layers of the target network for each class (e.g., car, horse, tree, etc.) and sampling directions from the feature distributions. These directions should be semantically meaningful, such that moving along one direction manipulates one property of the image, e.g. the colour of a car. However, there is no guarantee that the found directions will be semantically meaningful, and it is hard to know which direction controls a particular property of interest.
In this work, we consider the scenario in which we have a classifier that we want to improve (e.g. an image-based classifier of Alzheimer's Disease (AD) given brain images). We are also given some data and a pre-trained generative model that is able to create new data given an image as input and conditioning factors that can alter corresponding attributes in the input. For example, the generative model can alter the brain age of the input. We propose an approach to guide a pre-trained generative model to generate the most effective counterfactuals via an adversarial game between the input conditioning factor of the generator and the downstream classifier, where we use gradient backpropagation to update the conditioning factor and the classifier alternatively and iteratively. A schematic of the proposed approach is shown in Figure 1. Specifically, we choose the classification of AD as the downstream task and utilise a pre-trained brain ageing synthesis model to improve the AD classifier. The brain ageing generative model used in this paper is adopted from a recent work (21), which takes a brain image and a target age as inputs and outputs an aged brain image. We show that the proposed approach can improve the test accuracy of the AD classifier. We also demonstrate that it can be used in a continual learning context to alleviate catastrophic forgetting, i.e., deep models forgetting what they have learnt from previous data when trained on newly given data, and that it can be used to alleviate spurious correlations, i.e., two variables appearing to be causally related to one another when in fact they are not. Our contributions can be summarised as follows: 1. We formulate an adversarial game between the generator's conditional input and the classifier to find the most effective counterfactuals. To the best of our knowledge, this is the first approach that formulates such an adversarial scheme to utilise pre-trained generators in medical imaging. 2.
We improve a recent brain ageing synthesis model by incorporating Fourier encoding to enable gradient backpropagation to the conditional factor, and demonstrate the effectiveness of our approach on the task of AD classification. 3. We consider the scenario of using generative models in a continual learning context and show that our approach can help alleviate catastrophic forgetting. 4. We apply the brain ageing synthesis model for brain rejuvenation synthesis and demonstrate that the proposed approach has the potential to alleviate spurious correlations.

Notations and problem overview

We denote an image as x ∈ X, and a conditional generative model G that takes an image x and a conditional vector v as input and generates a counterfactual x̂ that corresponds to v: x̂ = G(x, v). For each x, there is a label y ∈ Y. We define a classifier C that predicts the label ŷ for a given x: ŷ = C(x). In this paper, x is a brain image, y is the AD diagnosis of x, and v represents the target age a and AD diagnosis on which the generator G is conditioned. We select age and AD status as conditioning factors, as they are major contributors to brain ageing. We use a 2D-slice brain ageing generative model as G, and a VGG-based (22) AD classification model as C. In Xia et al. (21), the brain ageing generative model is evaluated in multiple ways, including several quantitative metrics: Structural Similarity (SSIM), Peak Signal-to-Noise Ratio (PSNR) and Mean Squared Error (MSE) between the synthetically aged brain images and the ground-truth follow-up images, and Predicted Age Difference (PAD), i.e. the difference between the age predicted by a pre-trained age predictor and the desired target age. For more details of the evaluation metrics, please refer to Xia et al. (21), Section 4. Note that we only change the target age a in this paper; thus we write the generative process as x̂ = G(x, a) for simplicity.
Suppose a pre-trained G and a C are given; the question we want to answer is: "How can we use G to improve C in a (data-)efficient manner?" To this end, we propose an approach that utilises G to improve C via an adversarial game, with gradient backpropagation to update a and C alternatively and iteratively.

Figure 1. A schematic of the adversarial classification training. The pre-trained generator G takes a brain image x and a target age a as input and outputs a synthetically aged image x̂ that corresponds to the target age a. The classifier C aims to predict the AD label for a given brain image. To utilise G to improve C, we formulate an adversarial game between a (in red box) and C (in cyan box), where a and C are updated alternatively and iteratively using L1 and L2, respectively (see Section 2.3). Note that G is frozen. (VGG is a popular deep learning neural network that has widely been used for classification.)

Fourier encoding for conditional factors

The proposed approach requires backpropagation of the gradient to the conditional factor to find the hard counterfactuals. However, the original brain ageing synthesis model (21) used ordinal encoding for the conditional age and AD diagnosis, where the encoded vectors are discrete in nature and need to maintain a certain shape: recovering a value from such a vector requires that we first quantize to 0/1 and then check for ordinal order preservation of the 1 digits, both of which are not easily differentiable. This hinders gradient backpropagation to update these vectors. To enable gradient backpropagation to update the conditional vectors, we propose to use Fourier encoding (23,24) to encode the conditional attributes, i.e., age and health state (diagnosis of AD). The effectiveness of Fourier encoding has been experimentally shown in Tancik et al. (23) and Mildenhall et al. (24). We also compared the generative model using Fourier vs. ordinal encoding using the quantitative metrics briefly introduced in Section 2.1, as presented in Table 1.
We observe that the generator using Fourier encoding achieves very similar quantitative results to the generator using ordinal encoding, demonstrating the effectiveness of Fourier encoding for age and health status. The key idea of Fourier encoding is to map low-dimensional vectors to a higher-dimensional domain using a set of sinusoids:

g(v) = [p_1^2 sin(2π b_1^T v), p_1^2 cos(2π b_1^T v), ..., p_m^2 sin(2π b_m^T v), p_m^2 cos(2π b_m^T v)], v ∈ R^d,

where the b_j can be viewed as the Fourier basis frequencies, and the p_j^2 as the Fourier series coefficients. In this work, the vector v represents the target age a and the health status (AD diagnosis), and d = 2. In our experiments, we set p_j^2 = 1 for j = 1, ..., m, and the b_j are independently and randomly sampled from a Gaussian distribution, b_j ~ N(0, m_scale · I), where m_scale is set to 10. We set m = 100, and the resulting g(v) is 200-dimensional. After encoding, the generator G takes the encoded vector g(v) as input. The use of Fourier encoding offers two advantages. First, Xia et al. (21) encoded age and health state into two vectors and had to use two MLPs to embed the encoded vectors into the model. This may not be a big issue when the number of factors is small; however, extending the generative model to be conditioned on tens or hundreds of factors would increase the memory and computation costs significantly. With Fourier encoding, we can encode all possible factors into a single vector, which offers more flexibility to scale the model to multiple conditional factors. Second, Fourier encoding allows us to compute the gradients with respect to the input vector v, or certain elements of v, since the encoding process is differentiable. As such, we replace the ordinal encoding with Fourier encoding for all experiments. The generative model G takes v as input: x̂ = G(x, v), where v represents the target age and health state. Since we only change the target age a in this paper, we write the generative process as x̂ = G(x, a) for simplicity.
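The Fourier mapping described above can be sketched in a few lines of numpy. The interleaved sin/cos layout, the function name, and the example conditioning vector are our own illustrative assumptions; only the dimensions (d = 2, m = 100, output 200-dimensional), the unit coefficients, and the Gaussian sampling of the frequencies follow the text.

```python
import numpy as np

def fourier_encode(v, B, p_sq=None):
    """Map a low-dimensional conditioning vector v (here d = 2: target age
    and health state) to a 2m-dimensional Fourier feature vector.

    B    : (m, d) matrix whose rows are the basis frequencies b_j
    p_sq : (m,) Fourier series coefficients p_j^2 (all ones in the paper)
    """
    m = B.shape[0]
    if p_sq is None:
        p_sq = np.ones(m)
    proj = B @ v                                   # b_j^T v for each j
    # interleave p_j^2 * sin and p_j^2 * cos for each frequency
    return np.concatenate([p_sq * np.sin(2 * np.pi * proj),
                           p_sq * np.cos(2 * np.pi * proj)])

rng = np.random.default_rng(0)
m, d, m_scale = 100, 2, 10.0
B = rng.normal(0.0, m_scale, size=(m, d))          # b_j ~ N(0, m_scale * I)
v = np.array([65.0, 1.0])                          # e.g. target age 65, AD-positive
g_v = fourier_encode(v, B)
print(g_v.shape)                                   # (200,)
```

Because `fourier_encode` is built from smooth sinusoids, the gradient of any downstream loss with respect to `v` is well defined, which is exactly the property the ordinal encoding lacked.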
Adversarial counterfactual augmentation
Suppose we have a conditional generative model G and a classification model C. The goal is to utilise G to improve the performance of C. To this end, we propose an approach consisting of three steps: pre-training, hard sample selection, and adversarial classification training. A schematic of the adversarial classification training is presented in Figure 1, and Algorithm 1 summarises the steps of the method. Below we describe each step in detail.

Pre-training
The generative model is pre-trained using the same losses as in Xia et al. (21), except that we use Fourier encoding to encode age and AD diagnosis. Consequently, we obtain a pre-trained G that can generate counterfactuals conditioned on given target ages a. The classification model C is a VGG-based network (22) trained to predict the AD diagnosis from brain images, optimised by minimising the supervised loss L_s(C(x), y) over the training set (Eq. 2), where L_s(·) is a supervised loss (binary cross-entropy in this paper), x is a brain image, and y is its ground-truth AD label. Note that if a pre-trained G and C are available in practice, the pre-training step can be skipped. For details of the evaluation metrics, please refer to Xia et al. (21), Section 4.

Algorithm 1
Input: training set D_train; hyperparameters k, N; a pre-trained G; C.
Pre-training:
1. Train the classifier C on D_train (Eq. 2).
Hard sample selection:
2. Select the N samples from D_train that result in the highest classification errors for C, denoted D_hard.
Adversarial classification training:
3. Randomly initialise target ages a and obtain initial synthetic data.
For k iterations:
4. Update a in the direction that maximises the classification error (Eq. 4).
5. Obtain synthetic images from D_hard and the updated a, denoted D_syn.
6. Update C to optimise Eq. 5 on D_train ∪ D_syn for one epoch.

Hard sample selection
Liu et al.
(25), Feldman and Zhang (26) suggested that training samples have different influences on the training of a supervised model, i.e., some training data are harder for the task and more effective for training the model than others. Liu et al. (25) propose to up-sample, i.e., duplicate, the hard samples as a way to improve model performance. Based on these observations, we adopt a similar strategy to Liu et al. (25) to select hard samples: we record the classification errors of all training samples under the pre-trained C and then select the N = 100 samples with the highest errors. The selected hard samples are denoted D_hard: {X_hard, Y_hard}.

Previous works (27, 28) augmented datasets by randomly generating a number of synthetic data with pre-trained generators. As with training samples, some synthetic data could be more effective for downstream tasks than others. Here we assume that if a synthetic data sample is hard, then it is more effective for training, and we propose an adversarial game to find the hard synthetic data to boost C.

Adversarial classification training
Specifically, let us first define the classification loss for synthetic data as L_C = L_s(C(G(x, a)), y) (Eq. 3), where x̃ = G(x, a) is a generated sample conditioned on the target age a, and y is the ground-truth AD label for x. Here we assume that changing the target age does not change the AD status; thus x and x̃ share the same AD label. Since the encoding of age a is differentiable (see Section 2.2), we can obtain the gradient of L_C with respect to a, ∇_a L_C = ∇_a [L_s(C(G(x, a)), y)], and update a in the direction that maximises L_C: ã = a + η_a ∇_a L_C, where η_a is the step size (learning rate) for updating a. Formally, the optimisation of a can be written as Eq. 4. We then obtain a set of synthetic data using the updated a. The classifier C is updated by optimising Eq. 5, where D_combined: {X_combined, Y_combined} is a combined dataset consisting of the training dataset and the synthetic dataset, D_combined = D_train ∪ D_syn. Following Liu et al.
(25), we update C on D_combined instead of D_syn, as we found that updating C only on D_syn can cause catastrophic forgetting (29). The adversarial game is formulated by alternately and iteratively updating a and the classifier C via Eqs. 4 and 5, respectively. In practice, to prevent a from reaching unsuitable ages, we clip it to [60, 90] after every update.

Updating a vs. updating G
Note that the adversarial game here is formulated between a and C, instead of between G and C. This is because training G against C allows G to change its latent space without considering image quality, and the output of G could be unrealistic. Please refer to Section 4.1.2 for more details and results.

Counterfactual augmentation vs. conventional augmentation
We choose to augment data counterfactually instead of applying conventional augmentation techniques. This is because the training and testing data are already pre-processed and registered to MNI 152, and in this case conventional augmentations do not introduce helpful variations. Please refer to Section 4.1.3 for more details and results.

Adversarial classification training in a continual learning context
Most previous works (14, 27, 28, 30-32) that used pre-trained deep generative models for augmentation focused on generating a large number of synthetic samples, merging the synthetic data with the original dataset, and training the downstream task model (e.g., a classifier) on this augmented dataset. However, this requires training the task model from scratch, which can be inconvenient. Imagine that we are given a pre-trained classifier, and we have a generator at hand which may or may not be pre-trained on the same dataset. We would like to use the generator to improve the classifier, or to transfer the knowledge learnt by the generator to the classifier.
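The adversarial update of the target age from Section 2.3 (ascend the classifier loss with respect to a, then clip to [60, 90]) can be sketched with scalar toy stand-ins. The bodies of G, C and the loss below are illustrative assumptions rather than the paper's networks, and a finite-difference gradient stands in for framework autodiff; the step size 0.01 matches the value reported in the Implementation section.

```python
import math

# Toy stand-ins (assumptions, not the paper's networks): G shifts a scalar
# "image" feature by the target age; C outputs P(AD) via a sigmoid.
def G(x, a):
    return x + 0.01 * a

def C(feat):
    return 1.0 / (1.0 + math.exp(-(feat - 1.0)))

def bce(p, y):  # binary cross-entropy L_s
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def grad_a(x, a, y, eps=1e-5):
    """Finite-difference stand-in for d L_C / d a (autodiff in practice)."""
    return (bce(C(G(x, a + eps)), y) - bce(C(G(x, a - eps)), y)) / (2 * eps)

x, y = 0.3, 0          # a CN "image" feature and its label
a, step = 70.0, 0.01   # initial target age and step size eta_a
for _ in range(100):
    a = a + step * grad_a(x, a, y)   # ascend: make the counterfactual harder
    a = min(max(a, 60.0), 90.0)      # clip to the plausible age range [60, 90]
```

For this CN example the loss grows with age, so the update pushes a upward, i.e., towards the counterfactuals the classifier finds hardest.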
The strategy of previous works is to use the generative model to produce a large amount of synthetic data that covers the knowledge learnt by the generator, and then train the classifier on both real and synthetic data from scratch, which would be expensive. In this work, by contrast, we consider the task of transferring knowledge from the generator to the classifier in a continual learning context, by treating synthetic data as new samples. We want the classifier to learn new knowledge from these synthetic data without forgetting what it has learnt from the original classification training set. We will show how our approach can help in the continual learning context. In Section 2.3, after we obtain the synthetic set D_syn, we choose to update the classifier C on the augmented dataset D_syn ∪ D_train, instead of on D_syn alone (step 6 in Algorithm 1). This is because re-training the classifier only on D_syn would result in catastrophic forgetting (29), i.e., a phenomenon where deep neural networks tend to forget what they have learnt from previous data when being trained on new data samples. To alleviate catastrophic forgetting, efforts have been devoted to developing approaches that allow artificial neural networks to learn in a sequential manner (33, 34). These approaches are known as continual learning (33, 35, 36), lifelong learning (37, 38), sequential learning (39, 40), or incremental learning (41, 42). Despite the different names and focuses, the main purpose of these approaches is to overcome catastrophic forgetting and to learn in a sequential manner. If we consider the generated data as new samples, then the update of the pre-trained classifier C can be viewed as a continual learning problem, i.e., how to learn new knowledge from the synthetic set D_syn without forgetting the old knowledge learnt from the original training data D_train.
To alleviate catastrophic forgetting, we re-train the classifier on both the synthetic dataset D_syn and the original training dataset D_train. This strategy is known as memory replay in continual learning (43, 44) and was also used in other augmentation works (25). The key idea is to store previous data in a memory buffer and replay the saved data to the model when training on new data. However, it can be expensive to store and revisit all the training data, especially when the dataset is large (44). In Section 4.2, we perform experiments where we only provide a portion (M%) of the training data to the classifier when re-training with synthetic data (to simulate the memory buffer). In this case, we only create synthetic data from the memory buffer. We want to see whether catastrophic forgetting happens when only a portion (M%) of the training data is provided, and if so, how much it affects the test accuracies. Algorithm 2 summarises the steps of the method in the continual learning context.

Data
We use the ADNI dataset (45) for experiments. We select 380 AD and 380 CN (control normal) T1 volumes between 60 and 90 years old. We split the AD and CN data into training/validation/testing sets with 260/40/260 volumes, respectively. All volumetric data are skull-stripped using DeepBrain and linearly registered to MNI 152 space using FSL-FLIRT (46). We normalise brain

Implementation
The generator is trained in the same way as in Xia et al. (21), except that we replace ordinal encoding with Fourier encoding. We pre-train the classifier for 100 epochs. The experiments are implemented using Keras and TensorFlow. We train the classifier C with Adam with a learning rate of 0.00001 and decay of 0.0001. During adversarial learning, the step size for a is tuned to 0.01, and the learning rate for C is 0.00001. The experiments are performed on an NVIDIA Titan X GPU.

Comparison methods
We compare with the following baselines: 1.
Naïve: We directly use the pre-trained C as the lower bound.
2. RSRS: Random Selection + Random Synthesis. We randomly select N = 100 samples from the training set D_train, denoted D_rand, and then use the generator G to randomly generate N_synthesis = 5 synthetic samples for each sample in D_rand, denoted D_syn. We then train the classifier on the combined dataset D_train ∪ D_syn for k = 5 steps. This is the typical strategy used by most previous works (14, 27, 28).
3. HSRS: Hard Selection + Random Synthesis. We select N = 100 hard samples from D_train based on their classification errors under C, denoted D_hard, and then use the generator G to randomly generate N_synthesis = 5 synthetic samples for each sample in D_hard, denoted D_syn. We then train the classifier on the combined dataset D_train ∪ D_syn for k = 5 steps.
4. RSAT: Random Selection + Adversarial Training. We randomly select N = 100 samples from the training set D_train, denoted D_rand, and then use the adversarial training strategy to update the classifier C, as described in Section 2.3. The difference between RSAT and our approach is that we select hard samples for generating counterfactuals, while RSAT uses random samples.

Algorithm 2
Input: training dataset D_train; hyperparameters M, N, k; a pre-trained generator G; a pre-trained classifier C.
Construct D_store:
1. Randomly select M% of the data from D_train, denoted D_store.
Hard sample selection:
2. Select the N samples from D_store that result in the highest classification errors for C, denoted D_hard.
Adversarial training:
3. Randomly initialise target ages a and obtain initial synthetic data.

Comparison with baselines
We first compare our method with the baseline approaches by evaluating the test accuracy of the classifiers. We set N = 100 and k = 5 in the experiments. We pre-train C for 100 epochs and G as described in Section 3.
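The hard-sample selection step shared by HSRS and our approach (step 2 of Algorithm 1: rank training samples by classification error under the pre-trained C and keep the top N) can be sketched as follows. The classifier and loss below are toy placeholders, not the paper's VGG/BCE setup.

```python
def select_hard(samples, labels, classifier, loss, n):
    """Return the n (sample, label) pairs with the highest loss under the
    given classifier, i.e., the hard samples D_hard."""
    errors = [loss(classifier(x), y) for x, y in zip(samples, labels)]
    ranked = sorted(range(len(samples)), key=lambda i: errors[i], reverse=True)
    keep = ranked[:n]
    return [samples[i] for i in keep], [labels[i] for i in keep]

# Toy usage: scalar "images", a classifier that outputs P(AD) directly,
# and absolute error as a simple loss proxy (all illustrative).
samples = [0.1, 0.4, 0.9, 0.2]
labels = [0, 1, 1, 0]
clf = lambda x: x
loss = lambda p, y: abs(y - p)
X_hard, Y_hard = select_hard(samples, labels, clf, loss, n=2)
print(X_hard)  # the two samples with the largest |y - p|
```

In the paper's setting, N = 100 such samples seed the counterfactual generation; RSRS/RSAT instead draw the seed set at random.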
The weights of the pre-trained C and the pre-trained G are the same for all methods. For a fair comparison, the total number of synthetically generated samples is fixed to 500 for RSRS, HSRS, RSAT and our approach. For JTT, there are 2,184 samples misclassified by C and oversampled. We initialise a randomly between the real ages of the original brain images x and the maximal age (90 yrs old). From Table 2 we can observe that our proposed procedure achieves the best overall test accuracy, followed by the baseline RSAT. This demonstrates the advantage of adversarial training between the conditional factor (target age) a and the classifier. On top of that, it shows that selecting hard examples for creating augmented synthetic samples helps, which is also demonstrated by the improvement of HSRS over Naïve. We also observe that JTT (25) improves the classifier performance over Naïve, showing the benefit of up-sampling hard samples. In contrast, the baseline RSRS achieves the lowest overall test accuracy, even lower than that of Naïve. This shows that randomly synthesising counterfactuals from randomly selected samples can result in synthetic images that are harmful to the classifier. Furthermore, we observe that for all methods, the worst-group performances are achieved on the 80-90 CN group. A potential reason could be that, as age increases, the brains shrink, and it becomes harder to tell whether the ageing pattern is due to AD or to normal ageing. Nevertheless, we observe that for this worst group, our proposed method still achieves the best performance, followed by RSAT. This shows that adversarial training can help improve the performance of the classifier, especially for hard groups. The next best results are achieved by HSRS and JTT, which shows that finding hard samples and up-sampling or augmenting them helps improve the worst-group performance.
We also observe an improvement in worst-group performance for RSRS over Naïve, but the improvement is small compared to the other baselines. Figure 2 presents histograms of the original ages of the training subjects and the target ages after adversarial training, where we can see how the adversarial training aims to balance the data. We also report the precision and recall for all methods, as presented in Table 3. We can observe that our approach achieves the highest overall precision and recall. In summary, the quantitative results show that it is helpful to find and utilise hard counterfactuals for improving the classifier.

Train G against C
We choose to formulate an adversarial game between the conditional generative factor a (the target age) and the classifier C, instead of between the generator G and the classifier C, because we are concerned that an adversarial game between G and C could result in unrealistic outputs of G. In this section, we perform an experiment to investigate this. Specifically, we define the optimisation objective (Eq. 6): max_G Σ_{x ∈ X_train, y ∈ Y_train} L_s(C(G(x, a)), y), i.e., we train G in the direction of maximising the loss of the classifier C on the synthetic data G(x, a). After every update of G, we construct a synthetic set D_syn by generating 100 synthetic images from D_train, and update C on D_train ∪ D_syn via Eq. 5. The adversarial game G vs. C is formulated by alternately optimising Eqs. 6 and 5 for 10 epochs. In Figure 3, we present the synthetic brain ageing progression of a CN subject before and after the adversarial training of G vs. C. We can observe that after the adversarial training, the generator G produces unrealistic results. This could be because there is no loss or constraint to prevent the generator G from producing low-quality results: the adversarial game only requires G to produce images that are hard for the classifier C, and, naturally, images of low quality are hard for C.
A potential solution could be to involve a GAN loss with a discriminator to improve the output quality, but this would make the training much more complex and require more memory and computation. We also measure the test accuracy of the classifier C after training G against C to be 81.6%, which is much lower than the Naïve method (88.4%) and our approach (91.1%) in Table 2. A likely reason is that C is misled by the unrealistic samples generated by G.

Effect of conventional augmentations for registered brain MRI data
In this section, we test the effect of applying several commonly used conventional augmentations, e.g., rotation, shift, scale and flip, to the training of the AD classifier. These are typical conventional augmentation techniques applied to computer vision classification tasks. Specifically, we train the classifier in the same way as Naïve, except that we augment the training data with conventional augmentations. Interestingly, we find that after applying rotation (range 10 degrees), shift (range 0.2), scale (range 0.2), and flip to augment the training data, the accuracy of the trained classifier drops from 88.4% to 71.6%. We then measure the accuracies when training with each augmentation separately: 74.1% (rotation), 87.1% (shift), 82.9% (scale), and 87.8% (flip). We also trained the classifier with random gamma correction (gamma ranging from 0.2 to 1.8), and the resulting test accuracy is 84.4%.

Figure 2. Histograms of ages of subjects before and after adversarial learning. We can observe that adversarial training aims to balance the data.
Table 3. We first present the precision for different age groups (columns 2-4) and all testing data (column 5), and then the recall for different age groups (columns 6-8) and all testing data (column 9). For each group, the best results are shown in bold.
Xia et al. 10.3389/fradi.2022.1039160 Frontiers in Radiology
This could be because both training and testing data are already pre-processed, including registration to MNI 152 and contrast normalisation, so these conventional augmentations do not introduce helpful variations to the training data but instead distract the classifier from focusing on subtle differences between AD and CN brains. We also tried training the classifier with MaxUp (10) on top of conventional augmentations. The idea of MaxUp is to generate a small batch of augmented samples for each training sample and train the classifier on the worst-performing augmented sample. The overall test accuracy is 57.7%. This could be because MaxUp tends to select the augmentations that most distract the classifier from focusing on subtle AD features. The results with conventional augmentations (+MaxUp) suggest that for the task of AD classification, when training and testing data are well pre-processed, conventional data augmentation techniques do not seem to improve classification performance; instead, they distract the classifier from identifying subtle changes between CN and AD brains. By contrast, the proposed procedure augments data in terms of semantic information, which can alleviate data imbalance and improve classification performance.

4.2. Adversarial counterfactual augmentation in a continual learning context
4.2.1. Results when re-training with a portion (M%) of training data
Suppose we have a pre-trained classifier C and a pre-trained generator G, and we want to improve C by using G for data augmentation. However, after pre-training, we only store M% (M ∈ (0, 100]) of the training dataset, denoted D_store. During the adversarial training, we synthesise N samples using the generator G, denoted D_syn. We then update the classifier C on D_store ∪ D_syn, using Eq. 5 with D_combined = D_store ∪ D_syn. The target ages are initialised and updated in the same way as in Section 4.1. Algorithm 2 illustrates the procedure in this section.
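The memory-replay setup just described (keep only M% of the training data as a buffer, synthesise from the buffer, and update C on the union of buffer and synthetic data) can be sketched as follows; the numbers mirror the paper's 260 training volumes, but the helper names are ours.

```python
import random

def build_store(train_set, m_percent, seed=0):
    """Simulate the memory buffer D_store: a random M% subset of D_train."""
    rng = random.Random(seed)
    k = max(1, int(len(train_set) * m_percent / 100))
    return rng.sample(train_set, k)

def combined_for_update(store, synthetic):
    """D_store U D_syn: the set the classifier C is updated on (Eq. 5),
    so old knowledge is replayed alongside the new synthetic samples."""
    return store + synthetic

train = list(range(260))               # stand-in for the 260 training volumes
store = build_store(train, m_percent=20)
syn = ["syn%d" % i for i in range(5)]  # counterfactuals generated from the buffer
batch = combined_for_update(store, syn)
print(len(store), len(batch))  # 52 57
```

Updating on `batch` rather than on `syn` alone is what keeps the old training distribution in view and limits catastrophic forgetting.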
Table 4 presents the test accuracies of our approach and the baselines as M changes. For Naïve-100, the results are the same as in Table 2. For JTT, the original paper (25) retrained the classifier using the whole training set; here we first randomly select M% of the training samples as D_store, find the misclassified data D_mis within D_store to up-sample, and then retrain the classifier on the augmented set. We can observe that when M decreases, catastrophic forgetting happens for all approaches. However, our method suffers the least from catastrophic forgetting, especially when M is small. With M = 20% of the training data for retraining, our approach achieves better results than Naïve. This might be because the adversarial training between a and C tries to detect what is missing in D_store and to recover the missing data by updating a in those directions. We observe that RSAT achieves the second-best results, only slightly worse than the proposed approach. HSRS and JTT are more affected by catastrophic forgetting and achieve worse results. This might be because the importance of selecting hard samples declines as M decreases, since D_store becomes smaller. These results demonstrate that our approach can alleviate catastrophic forgetting, which could be helpful when we want to utilise generative models to improve pre-trained classifiers (or other task models) without revisiting all the training data (a continual learning context).

Figure 3. The synthetic results for a healthy (CN) subject x at age 70: (A) the results of the pre-trained G, i.e., before we train G against C; (B) the results of G after we train G against C. We synthesise aged images x̃ at different target ages a. We also visualise the difference between x and x̃, |x̃ − x|. For more details see text.

Results when the number of samples used for synthesis (N) changes
We also performed experiments where we changed N, i.e.
the number of samples used for generating counterfactuals. Specifically, we set M = 1, i.e., only 1% of the original training data are used for re-training C, to see how many synthetic samples are needed to maintain good accuracy when only a few training data are stored in D_store, i.e., how efficient the synthetic samples are in terms of training C and alleviating catastrophic forgetting. The results are presented in Table 5 (which also reports N as a percentage of the total size of D_store). We can observe that the best results are achieved by our method, followed by RSAT. Even with only one sample for synthesis, our method still achieves a test accuracy of 80%. This is probably because the adversarial training of a vs. C guides G to generate hard counterfactuals, which are efficient for training the classifier. The results demonstrate that our approach can help alleviate catastrophic forgetting even with a small number of synthetic samples used for augmentation. This experiment can also be viewed as a measurement of sample efficiency, i.e., how effective a synthetic sample is at re-training a classifier.

Can the proposed procedure alleviate spurious correlations?
A spurious correlation occurs when two factors appear to be correlated with each other but in fact are not (47). Spurious correlations can affect the performance of deep neural networks and have been actively studied in the computer vision field (25, 48-51) and in the medical imaging analysis field (52, 53). For instance, suppose we have a dataset of bird and bat photos. For bird photos, most backgrounds are sky; for bat photos, most backgrounds are cave. If a classifier learns this spurious correlation, e.g., it classifies a photo as bird as long as the background is sky, then it will perform poorly on images where bats are flying in the sky. In this section, we investigate whether our approach can correct such spurious correlations by changing a to generate hard counterfactuals.
Here we create a dataset in which 7,860 images between 60 and 75 yrs old are AD and 7,680 images between 75 and 90 yrs old are healthy, denoted D_spurious. This constructs a spurious correlation: young → AD and old → CN (in reality, older people have a higher chance of getting AD (54)). We then pre-train C on D_spurious. The brain ageing model proposed in Xia et al. (21) only considered simulating the ageing process and did not consider brain rejuvenation, i.e., the reverse of ageing. To utilise old CN data, we pre-train another generator in the rejuvenation direction, i.e., generating younger brain images from old ones. As a result, we obtain two generators pre-trained on D_train, denoted G_ageing and G_rejuve, where G_rejuve is trained to simulate the rejuvenation process. Figure 4 shows visual results of G_rejuve. After obtaining G_ageing and G_rejuve, we select the 50 CN and 50 AD images from D_spurious that result in the highest training errors, denoted D_hard; note that the selected CN images are between 75 and 90 yrs old and the AD images are between 60 and 75 yrs old. We then generate synthetic images from D_hard, using G_rejuve for old CN samples and G_ageing for young AD samples, with the target ages a initialised as the real ages of x, and perform the adversarial classification training between a and the classifier C. Here we want to see whether the adversarial training can detect the spurious correlations we purposely created and, more importantly, whether the adversarial game between a and C can break them. Table 6 presents the test accuracies of our approach and the baselines.
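The construction of the spuriously-correlated split can be sketched as a simple filter: AD subjects are admitted only from the young band (60-75) and CN subjects only from the old band (75-90), so age alone predicts the label. The ages and labels below are illustrative, not ADNI records.

```python
def make_spurious(records):
    """records: (age, label) pairs with label 1 = AD, 0 = CN.
    Keep only young-AD and old-CN pairs, creating the correlation
    young -> AD, old -> CN that D_spurious is designed to carry."""
    return [(age, lab) for age, lab in records
            if (lab == 1 and 60 <= age < 75) or (lab == 0 and 75 <= age <= 90)]

data = [(62, 1), (80, 1), (70, 0), (78, 0), (74, 1), (90, 0)]
spurious = make_spurious(data)
print(spurious)  # only young-AD and old-CN pairs survive
```

A classifier trained on such a split can score well in-distribution while actually keying on age, which is exactly the failure mode the adversarial update of a is then asked to expose and repair.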
For Naïve, we directly use the classifier C pre-trained on D_spurious. For HSRS, we randomly generate synthetic samples from D_hard for augmentation. For JTT, we simply select misclassified samples from D_spurious and up-sample them. We can observe from Table 6 that the C pre-trained on D_spurious (Naïve) achieves much worse performance (67.0% accuracy) compared to that of Table 2 (88.4% accuracy). Specifically, it tends to misclassify young CN images as AD and old AD images as CN. This is likely due to the spurious correlations we purposely created in D_spurious: young → AD and old → CN. We notice that for Naïve, the test accuracies of the AD groups are higher than those of the CN groups. This is likely because we have more AD training data, so the classifier is biased towards classifying a subject as AD; this can be viewed as another spurious correlation. Overall, we observe that our method achieves the best results, followed by HSRS. This shows that the synthetic results generated by the generators are helpful in alleviating the effect of spurious correlations and improving downstream tasks. The improvement of our approach over HSRS is due to the adversarial training between a and C, which guides the generator to produce hard counterfactuals. We observe that JTT does not improve the test accuracies significantly. A potential reason is that JTT tries to find "hard" samples in the training dataset; however, in this experiment, the "hard" samples should be young CN and old AD samples, which do not exist in the training dataset D_spurious. By contrast, our procedure can guide G to generate these samples, and HSRS can create them by random chance. Figure 5 plots the histograms of the target ages a before and after the adversarial training, from which we can observe that the adversarial training pushes a in the hard direction, which can alleviate the spurious correlations.
For instance, in D_spurious and D_hard the AD subjects are all in the young group, i.e., 60-75 yrs old, and the classifier learns the spurious correlation young → AD; but in Figure 5A we can observe that the adversarial training learns to generate AD synthetic images in the range of 75-90 yrs old. These old AD synthetic images can help alleviate the spurious correlation and improve the performance of C. Similarly, in Figure 5B we can observe that a is pushed towards young ages for CN subjects.

Table 6. We first present the average test accuracies for different age groups with CN diagnosis (columns 2-3) or AD (columns 4-5), and then the average test accuracies for the whole testing set (column 6). For each method, the worst-group performance is shown in italics; for each age group, i.e., each column, the best performance is shown in bold. For more details see text.
Figure 4. Example results of brain rejuvenation for an image (x) of an 85-year-old CN subject. We synthesise rejuvenated images x̃ at different target ages a. We also show the differences between x̃ and x, x̃ − x. For more details see text.

Conclusion
We presented a novel adversarial counterfactual scheme to utilise conditional generative models for downstream tasks, e.g., classification. The proposed procedure formulates an adversarial game between the conditional factor of a pre-trained generative model and the downstream classifier. The synthesis model used in this work uses two generators for ageing and rejuvenation; others have shown that one model can handle both tasks, albeit on another dataset and with fewer conditioning factors (55). We highlight, though, that our approach is agnostic to the generator used and could thus benefit from advances in (conditional) generative modelling. In this paper, we demonstrate that several conventional augmentation techniques are not helpful for registered MRI.
However, there might be other heuristic-based augmentation techniques that improve performance, and it is worth trying to combine our semantic augmentation strategy with such conventional techniques to further boost performance. The proposed adversarial counterfactual scheme could be applied to generative models that produce other types of counterfactuals than the ageing brain, e.g., the ageing heart (55, 56), future disease outcomes (57), or the existence of pathology (58, 59). The way we updated the conditional factor (target age) could also be improved: instead of a continuous scalar (target age), the proposed adversarial counterfactual augmentation could be extended to update other types of conditional factors, e.g., discrete factors or images. The strategy we used to select hard samples may likewise not be the most effective and could be improved.

Data availability statement
Publicly available datasets were analyzed in this study. This data can be found here: https://adni.loni.usc.edu.

Ethics statement
Ethical review and approval was not required for this study in accordance with the local legislation and institutional requirements.

Author contributions
TX, PS, CQ and SAT contributed to the conceptualization of this work. TX, PS, CQ and SAT designed the methodology. TX developed the software tools necessary for preprocessing and analysing image files and for training the model. TX drafted this manuscript. All authors contributed to the article and approved the submitted version.

Figure 5. Histograms of target ages a before and after adversarial training: (A) the histogram of a for the 50 AD subjects in D_hard; (B) the histogram of a for the 50 CN subjects in D_hard. We show histograms of a before (in orange) and after (in blue) the adversarial training.
Effect of ambient lead on progesterone and pregnancy-associated glycoprotein 1 and their relationship with abortion in Zaraibi goats: a field study This study aimed to investigate the impact of ambient lead (Pb) exposure on progesterone (P4) and pregnancy-associated glycoprotein 1 (PAG1) and their relationship with abortion in Egyptian Zaraibi goats (C. hircus). To achieve this, 40 female goats (does) were mated with highly fertile male goats, resulting in a total of 28 pregnant goats. Eight of them aborted, and each of the 12 pregnant goats gave birth to one kid, whereas the remaining eight gave birth to twins. The levels of PAG1, P4, and Pb in serum were estimated by enzyme-linked immunosorbent assay (ELISA), radioimmunoassay (RIA), and inductively coupled plasma mass spectrometry (ICP-MS) respectively. Statistically, the repeated measure two-way ANOVA, regression analysis, correlation coefficient, and receiver operating characteristic (ROC) curves were applied. The current data demonstrated that the levels of blood Pb in aborted goats were significantly higher than those in non-aborted goats at the early, mid, and late gestations, and this was followed by significant decreases in serum PAG1 and P4. Furthermore, there were substantial inverse associations between blood Pb concentration and levels of PAG1 and P4, with markedly negative correlation coefficients of − 0.88 and − 0.77, respectively, in aborted goats. The threshold level of Pb required to cause abortion was ≥ 32.08 μg/dl, but for PAG1 and P4 were respectively ≤ 0.95 ng/ml and ≤ 0.48 ng/ml. Additionally, threshold levels of ≥ 12.34 ng/ml and ≥ 31.52 ng/ml for P4 and PAG1, respectively, were needed to deliver twins. In conclusion, pollution-induced increases in Pb bioavailability resulted in dramatic decreases in P4 and PAG1 levels, leading to abortions. PAG1 and P4 levels are also key factors in determining whether Zaraibi goats will give birth to twins. 
Introduction

Zaraibi goats remain one of the most important economic livestock resources for the Egyptian people (Nowier et al. 2020). Egypt has over 4.3 million goats, the majority of which are Baladi and Barki goats raised for meat production (Hassen and Tesfaye 2014) and Zaraibi goats reared for milk production (Galal 2005). The Egyptian Nubian goat (E. Nubian), known as the Zaraibi or Nubi goat in Upper Egypt, is one of the main ancestors of the common Anglo-Nubian goat (Aboul-Naga et al. 2012).

One of the key components of animal production in Egypt and a significant source of red meat is the goat industry. The amount of goat meat produced in Egypt during the period 2015-2019 was about 30 thousand tons, representing about 4.14% of the average total production of red meat in Egypt (Hosny et al. 2022).

To increase goat production and population, successful reproduction is essential; hence, it is important to have a thorough understanding of animal physiology throughout the various stages of reproduction (Salve et al. 2016). Reproductive health issues hinder goat breeding development plans (Haile 2014). Reproductive disorders negatively impact goat producers, reducing food production and affecting the persistence of threatened animal species. Abortion and pregnancy losses due to embryonic mortality are major constraints on gestation in all livestock animals (Yadav et al. 2021). Abortion is a multifactorial phenomenon controlled by many factors, including infectious agents (bacterial, viral, fungal, and protozoan agents, etc.) and non-infectious factors such as toxicities, malnutrition, stress, maternal endocrine imbalance, and ambient temperature (Hajiabadi et al. 2022).

Heavy metals in animal feed and water can harm animal health due to their bioaccumulation (Agbugui and Abe 2022; Ghazzal et al.
2022). Exposure to sub-lethal quantities of Pb can negatively affect various biochemical and physiological systems (Elarabany and El-Batrawy 2019). Ruminants are often exposed to toxic environmental pollutants, posing a threat to animal health (Gensa 2019; Mridula et al. 2022). These toxins affect various organ systems, including the reproductive, nervous, respiratory, hepatic, gastrointestinal, and endocrine systems (Volkov and Ezhkova 2020; Bíreš et al. 1995). They cause poor body condition, slowed reproduction rates, and cancer owing to their mutagenicity, teratogenicity, and carcinogenicity (Bíreš et al. 1995; Dasharathy et al. 2022).

Lead is a reproductive toxin that affects reproduction in female animals, causing endometritis in ewes (Stoev et al. 1997), decreased fertility in cows (McEvoy and McCoy 1993), and poor conception rates, decreased heat detection, and longer service intervals in buffalo cows (El-Tohamy et al. 1997).

Pregnancy detection is crucial for animal production systems to avoid abortion in herds due to unknown causes (Smith et al. 2015). Therefore, the reproductive process in any animal production system must include a crucial stage, pregnancy detection, to support decisions on rebreeding or culling non-pregnant females. Early, accurate, and practical methods are needed to improve reproductive performance (Arashiro et al. 2018). Various methods, including abdominal palpation, radiography, ultrasonography, and hormone detection, are being used in small ruminants with variable diagnostic accuracy.

Numerous methods for pregnancy detection in small ruminants have been developed to optimize reproductive performance in goats (Karadaev 2015). The ideal pregnancy test should have high sensitivity, high specificity, and simplicity of use under field conditions (Pohler et al. 2016).
Pregnancy-associated glycoproteins (PAG), a large family of inactive aspartic proteinases, are secreted only by mono- and bi-nucleate trophectoderm cells (Xie et al. 1991). Since PAG is exclusively of ruminant placental origin, it is thought to be a particularly appropriate biomarker for pregnancy in goats (Roberts et al. 2017). In bovine species, 22 PAG genes (boPAG-1 to boPAG-22) have been cloned and fully sequenced (Garbayo et al. 2008). Not all PAGs are detectable at the same period of gestation; some arrive earlier and others later (Green et al. 2000). While some are present from the middle to the end of pregnancy, others start to appear at about day 25 but are missing in the last stages of pregnancy (Green et al. 2000). Zamfirescu et al. (2011) reported that progesterone (P4) and pregnancy-associated glycoproteins (PAGs) are useful laboratory tools for pregnancy detection and observed that the quantitative measurement of PAGs can be used to confirm early gestation in goats.

The current study is part of an integrated research project to identify the environmental factors that may be the direct and/or indirect cause of the frequent and noticeable abortions of female goats on many animal farms in Egypt. These abortions have caused a sharp decline in the goat population and, consequently, in meat production from female Zaraibi goats on animal farms. To achieve this goal, the physiological parameters of progesterone (P4) and pregnancy-associated glycoprotein 1 (PAG1), together with the ambient ionic pollutant lead (Pb), were measured in the blood of pregnant goats. Regression and correlation analyses were performed to determine the significant relationships and correlation coefficients of ambient Pb ions with gestation stages and serum PAG1 and P4 levels, to clarify their effect on abortion.
Management of animals

The animals were kept as part of the flock of the Animal Production Research Institute (APRI), Agriculture Research Center (ARC), Sakha Experimental Station in Kafr El-Sheikh governorate (31.089°N, 30.951°E). The current study comprised 40 healthy, disease-free, multiparous, normally cycling native Egyptian Zaraibi goats (C. hircus). Throughout the trial, the animals were group-housed and maintained in a semi-intensive management system under uniform dietary conditions (65% undecorticated cottonseed cake, 11% rice straw, 18% wheat bran, 3% molasses, 2% limestone, and 1% salt; 660 g/head/day) with free access to water and salt blocks. This portion of the field study lasted five months, from May to September 2018.

A single vasectomized (infertile) male Zaraibi goat was introduced twice daily, at 8 a.m. and 4 p.m., to detect the estrus phase of the does. Five viable, mature, fertile male Zaraibi goats (bucks) were used for mating the estrus females. Mating was allowed to occur spontaneously for 45 days, and pregnant goats were identified using transrectal ultrasonography.

Transrectal ultrasonography was performed on all animals until the 60th day of pregnancy (ESAOTE Pie Medical Aquila Pro Vet, with a 6.0 MHz LA rectal veterinary transducer), as described by Padilla-Rivas et al. (2005). Ultrasonography of the pregnant uterus revealed an anechoic embryonic vesicle (black) encircling an echoic (white) elongated streak (foetus), which extended across more than half of the fetal fluid. Ultrasonography revealed that 28 of the 40 goats were pregnant; the remaining 12 did not become pregnant and were therefore excluded from the study. Eight goats aborted during pregnancy, while the remaining 20 gave birth to 12 singles and eight sets of twins.
Prior to the trials, all goats had received vaccinations against the most contagious diseases. The Institutional Animal Care and Use Committee (IACUC) of Cairo University authorized the care and handling of animals under permit CU/I/S/96/17. All measurements performed in compliance with veterinary standards were approved by the Animal Ethics Committee of the Institute.

Blood sampling

Blood samples were obtained by jugular venipuncture. The blood was collected in non-heparinized tubes at room temperature and then centrifuged at 3000 × g for 15 min at 4 °C to separate the serum, which was stored at − 20 °C until measurement of lead (Pb), pregnancy-associated glycoprotein 1 (PAG1), and progesterone (P4) levels in the serum of aborted and non-aborted goats.

Pregnancy-associated glycoprotein 1

Serum pregnancy-associated glycoprotein levels were quantified by enzyme-linked immunosorbent assay (ELISA) using a bovine pregnancy-associated glycoprotein 1 (PAG1) ELISA kit (Shanghai Coon Koon Biotec Co., Ltd., Room 1408, 1687 Chang Yang RD, Shanghai, China; Cat. No. CK-bio-18624; standard curve range: 1-48 ng/ml; sensitivity: 0.1 ng/ml). The assay was conducted according to the procedures described in the enclosed catalog, and an automatic photometric plate reader was used for absorbance readings. The intra-assay precision (CV%) was less than 10%, and the inter-assay precision (CV%) was below 15%. The computed serum PAG1 level was expressed as nanograms per milliliter (ng/ml).

Lead assay

The serum lead (Pb) ion content was estimated using inductively coupled plasma mass spectrometry (ICP-MS) according to Morsy et al. (2016). The sera were completely digested with concentrated nitric acid, which was evaporated after complete digestion until yellowish-white ash appeared on the wall of the test tube. The precipitate was dissolved in 3 ml HCl and diluted with deionized water. The estimated serum Pb level was expressed as μg/dl.
Statistical analysis

The Kolmogorov-Smirnov test confirmed that the current data were normally distributed; hence, parametric statistical analysis was used. Accordingly, a repeated-measures two-way analysis of variance (ANOVA) with the Greenhouse-Geisser correction (applied because of the non-homogeneity of the raw data) was used to clarify significant changes in the dependent variables (serum Pb, PAG1, and P4 contents) as direct responses to the independent variable of pregnancy interval (28, 46, 60, 88, 108, 128, and 148 days) and, in turn, the effect of these parameters on birth outcome (abortion or delivery of one or two fetuses). In addition, the post hoc Scheffé's test was used to compare dependent variable means.

Regression analysis and correlation coefficients were computed to fit the relationship between the pregnancy interval (independent variable) and the dependent variables, as well as the association between serum Pb levels and the serum concentrations of P4 and PAG1 in aborted goats. The levels of the studied parameters at, above, or below which abortion occurred were identified using receiver operating characteristic (ROC) curve analysis. IBM SPSS Statistics, version 28, was used to analyze the data.

Results

Repeated-measures two-way ANOVA demonstrated that the pregnancy stage had a significant effect on the levels of serum Pb, PAG1, and P4 in pregnant Zaraibi goats, which in turn had a substantial effect on abortion and the number of kids born (Table 1).
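The group-comparison step can be approximated outside SPSS. The sketch below uses purely hypothetical values (not the study's measurements) and a simplified one-way ANOVA as a stand-in for the full Greenhouse-Geisser-corrected repeated-measures design, which requires a dedicated statistics package; the Kolmogorov-Smirnov normality check mirrors the paper's preliminary test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical serum Pb values (ug/dl) for the three birth-outcome groups;
# magnitudes loosely echo the paper's pattern (aborted >> non-aborted).
aborted = rng.normal(35.0, 2.0, size=8)
single = rng.normal(15.0, 2.0, size=12)
twins = rng.normal(14.0, 2.0, size=8)

# Simplified one-way ANOVA across groups (stand-in for the repeated-measures design).
f_stat, p_value = stats.f_oneway(aborted, single, twins)
print(f"F = {f_stat:.2f}, p = {p_value:.3g}")

# Kolmogorov-Smirnov test of normality on one standardized group,
# analogous to the paper's check before using parametric statistics.
z = (aborted - aborted.mean()) / aborted.std(ddof=1)
ks_stat, ks_p = stats.kstest(z, "norm")
print(f"KS statistic = {ks_stat:.3f}, p = {ks_p:.3f}")
```

With well-separated group means, the ANOVA p-value is far below 0.05, matching the qualitative pattern reported in Table 1.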
According to the post hoc Scheffé's test, the serum Pb content of aborted goats was significantly higher than that of goats that delivered one or two kids throughout all pregnancy periods, whereas goats that delivered a single kid and those that delivered twins did not differ (Table 2). The gestation stages (28, 46, 60, 88, 108, 128, and 148 days) had a significant direct exponential relationship with the levels of Pb in aborted goats, accompanied by a marked positive correlation coefficient of + 0.98 (Table 2).

The levels of PAG1 and P4 in the serum of goats that gave birth to a single kid or twins were substantially greater than those in goats that aborted, at all corresponding stages of pregnancy (Table 2). Additionally, PAG1 and P4 concentrations in twin-bearing goats were significantly higher than those in single-bearing goats at all stages (Table 2). As shown in Table 2, according to the regression analysis and correlation coefficients, the serum PAG1 and P4 contents of aborted goats exhibited significant inverse power and exponential relationships with the gestational stages, associated with significant negative correlation coefficients of − 0.78 and − 0.94, respectively; that is, the levels of PAG1 and P4 decreased as gestation progressed.

As shown in Fig. 1, the serum Pb content of aborted goats exhibited a significant inverse power relationship with the PAG1 concentration, accompanied by a significant negative correlation coefficient of − 0.88, whereas its relationship with P4 levels was a significant inverse exponential association with a significant correlation coefficient of − 0.77.
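The exponential regression and correlation step can be sketched as follows. This is an illustrative example on fabricated values (not the study's data): an exponential model y = a·e^(bx) is fitted by ordinary least squares on the log-transformed response, and the Pearson coefficient quantifies the monotone association, analogous to the +0.98 coefficient reported for Pb versus gestation day:

```python
import numpy as np

# Gestation stages (days) used in the study design.
days = np.array([28, 46, 60, 88, 108, 128, 148], dtype=float)

# Hypothetical serum Pb values rising exponentially with gestation stage.
pb = 10.0 * np.exp(0.01 * days)

# Fit y = a * exp(b * x) via least squares on log(y) = log(a) + b * x.
b, log_a = np.polyfit(days, np.log(pb), 1)
a = np.exp(log_a)

# Pearson correlation between stage and Pb level.
r = np.corrcoef(days, pb)[0, 1]
print(f"fitted a = {a:.2f}, b = {b:.4f}, r = {r:.3f}")
```

Because the synthetic data are exactly exponential, the fit recovers a = 10 and b = 0.01, while the linear Pearson coefficient remains high but slightly below 1, illustrating why the authors report model-specific (power/exponential) regressions alongside correlation coefficients.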
As seen in Table 3, the receiver operating characteristic (ROC) analysis revealed that the threshold levels of Pb, P4, and PAG1 in the serum of aborted goats were ≥ 32.08 μg/dl, ≤ 0.48 ng/ml, and ≤ 0.95 ng/ml, respectively, with a significant area under the curve (AUC) of 1.00 (Table 3). This means that a serum Pb level of ≥ 32.08 μg/dl will cause abortion, whereas values below it will not; conversely, P4 and PAG1 levels of ≤ 0.48 ng/ml and ≤ 0.95 ng/ml, respectively, will induce abortion. In addition, as shown in Fig. 2, the threshold values of P4 and PAG1 required for the birth of twins were ≥ 12.34 ng/ml and ≥ 31.52 ng/ml, with significant excellent AUCs of 0.96 and 0.90, respectively.

Photo 1: Transrectal ultrasonographic photos of two pregnant does. One had a single embryo at day 32, while the other had twin embryos at day 26.

Table 1: The repeated-measures (Greenhouse-Geisser-corrected) two-way ANOVA analyzing changes in the levels of serum lead (Pb, μg/dl), pregnancy-associated glycoprotein 1 (PAG1, ng/ml), and progesterone (P4, ng/ml) in pregnant Zaraibi goats as a response to the pregnancy stages, and their effect on birth outcome (abortion or non-abortion). SS, sum of squares; df, degrees of freedom; MS, mean of squares; F calculated, the computed F-value of the data; PP*BC, interaction of pregnancy period intervals with the birth cases (abortion, single, or twins); P < 0.0001, significant effect at α = 0.0001.

Discussion

The current findings showed that the bioavailability of blood lead (Pb) in aborted goats was significantly higher than in non-aborted goats at most pregnancy stages. Even though Zaraibi goats were supported by veterinary care under livestock breeding management, laboratory measurements affirmed the presence of Pb in the blood of all non-aborted and aborted goats in varying proportions, indicating that their environment was lead-contaminated.

Fig. 1: The relationship between the level of lead (Pb, μg/dl) and each of the progesterone (P4, ng/ml) and pregnancy-associated glycoprotein (PAG1, ng/ml) contents in the sera of aborted Zaraibi goats. The symbol * indicates a significant correlation coefficient between the studied parameters; x denotes the level of Pb ions in sera, and y the level of PAG1 or P4 throughout the gestation stages.

Most toxicologists and environmental pollution experts consider the presence of Pb in most mammalian tissues, including blood, normal and not unusual, provided it remains within the permissible limit (WHO 1987). This clarifies and explains the Pb ion bioaccumulation in most tissues, including blood, of both aborted and non-aborted goats as a result of environmental Pb exposure (Azeh Engwa et al. 2019), as in our current findings. The route of exposure, the physicochemical characteristics and toxicokinetics of the Pb molecule, and an individual's age and nutritional status all influence how much lead is absorbed (Morrow et al. 1980). The level of free Pb ions reaching the systemic blood circulation after absorption via all routes of exposure is called the bioavailability of Pb. According to previous studies, 40% of inhaled Pb is deposited in the lungs, and Pb deposited in the lower respiratory tract is absorbed almost entirely (Morrow et al. 1980). The duodenum is where Pb is largely absorbed during digestion; however, age and nutritional state can have a significant impact on how quickly lead is absorbed (Graziano et al. 1996). Pb can also be absorbed via healthy skin (Wright et al.
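The ROC-based threshold selection described above (Table 3) can be reproduced in outline with a Youden-index search over candidate cutoffs. The sketch below is illustrative only, using fabricated values and labels rather than the study's data:

```python
import numpy as np

def youden_threshold(values, is_positive, higher_is_positive=True):
    """Pick the cutoff maximizing sensitivity + specificity - 1 (Youden's J)."""
    values = np.asarray(values, dtype=float)
    is_positive = np.asarray(is_positive, dtype=bool)
    best_j, best_cut = -1.0, None
    for cut in np.unique(values):
        pred = values >= cut if higher_is_positive else values <= cut
        tp = np.sum(pred & is_positive)
        fn = np.sum(~pred & is_positive)
        tn = np.sum(~pred & ~is_positive)
        fp = np.sum(pred & ~is_positive)
        j = tp / (tp + fn) + tn / (tn + fp) - 1.0
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

# Hypothetical serum Pb values (ug/dl): aborted goats high, non-aborted low.
pb = [34.0, 36.5, 33.2, 38.0, 15.1, 14.2, 16.8, 13.9, 17.5]
aborted = [1, 1, 1, 1, 0, 0, 0, 0, 0]

cut, j = youden_threshold(pb, aborted)
print(f"Pb cutoff >= {cut} ug/dl, Youden J = {j:.2f}")
```

With perfectly separated groups, as in this toy example, the selected cutoff is the lowest value in the aborted group and J = 1.0, which corresponds to the AUC of 1.00 the authors report for the Pb threshold.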
2003). Accordingly, about 95% of the available lead is transported and distributed to all tissues (Stauber et al. 1994). In the systemic circulation, bioavailable Pb attaches to hemoglobin in erythrocytes and is then easily transferred to soft tissues such as the kidney, liver, reproductive organs (ovaries, placenta, etc.), and central nervous system (Goutam Mukherjee et al. 2022). After redistribution, Pb accumulates and is stored largely in bones and teeth, accounting for up to 90% of the total Pb body burden (Barry 1975). Pb has a half-life of 30 days in blood and most soft tissues, but a half-life of up to 25 years in bone (Hu et al. 1998). Pb released from bone is a significant endogenous exposure source that can contribute up to 50% of blood Pb levels in the absence of exogenous exposure; physiologically, Pb is easily transferred to the fetus through the placenta and builds up in breast milk (Gulson et al. 1998). On the basis of toxicokinetics, urinary and/or biliary clearance are the main pathways by which ingested Pb is expelled from the body following Phase I and II biotransformation, in goats and most other models (Rădulescu and Lundgren 2019). Accumulating research also suggests that kids excrete ingested Pb at a slower pace than adults, which may contribute to extended retention durations in the young (Stauber et al. 1994).

Our current data show that the levels of Pb ions in the serum of non-aborted goats were much lower than those of aborted goats. This suggests that abortion depends strongly on the concentration of Pb ions in goat serum, as well as on Pb bioaccumulation in the tissues, particularly the ovary, placenta, and liver (Canaz et al.
2017). Statistically, the threshold serum Pb concentration to trigger abortion was ≥ 32.08 μg/dl, implying that Pb concentrations below this level will not cause abortion, as demonstrated in our non-aborted goats. This interpretation is reinforced by the fact that blood Pb levels in these mothers remained substantially below the threshold at all stages of pregnancy and failed to disturb the levels of progesterone (P4) and pregnancy-associated glycoprotein 1 (PAG1), consequently allowing the pregnancy to continue to term, as will be discussed below.

In aborted goats, according to the current results, the serum Pb contents were significantly higher than those of non-aborted goats at all gestation stages. In addition, there was a significant direct exponential relationship between the gestational stages (28, 46, 60, 88, 108, 128, and 148 days) and serum Pb levels, accompanied by a significant positive correlation coefficient. Accordingly, this relationship confirmed a continued vital accumulation of Pb in the serum of aborted goats and, consequently, in the soft tissues of the reproductive system, especially the ovaries and placenta (Massányi et al. 2020).

Pregnancy-associated glycoproteins (PAGs) in mammals, including goats, are a group of glycoproteins mainly produced by the trophoblast cells of the placenta. PAGs have been shown to be useful for identifying the presence of viable embryos and for pregnancy follow-up monitoring, particularly in bovines, goats, and other dairy animals (Barbato et al. 2022; Filho et al. 2020). In ruminants, PAGs are synthesized in the mono- and binucleate cells of the trophectoderm and released into the maternal blood circulation, where they can be quantified (Zoli et al. 1992). PAG1 has been identified and immunolocalized as part of the discoidal-type placenta in some mammalian species (Panasiewicz et al. 2019).
Several studies on goats have linked high PAG concentrations to a decrease in the activity of polymorphonuclear neutrophils (Dosogne et al. 1999), implying that trophoblast PAG production, by influencing maternal immunological status, could be a mechanism by which the conceptus protects itself from rejection. PAGs, as stated by Austin et al. (1999), play a hormonal role in the release of granulocyte chemotactic protein-2 (GCP-2), an α-chemokine whose production is stimulated by interferon-τ (IFN-τ) in early pregnancy (Barbato et al. 2022). Thus, IFN-τ and PAGs would play a similar role in the activation of this chemokine, which appears to be implicated in the start of pregnancy. Consequently, PAGs have been proposed as a luteotropic component of the placenta (Xie et al. 1994).

Our present results demonstrated that the PAG1 content of goats that gave birth to twins was significantly higher than that of goats that gave birth to a single kid. PAG1 levels in the maternal circulation are higher in twin-bearing goats than in single-fetus goats (González et al. 2000; Sousa et al. 1999), and they are also higher (about ten times) in inter-specific pregnancies than in normal intra-specific gestation (Morecroft et al. 2015), which is consistent with our findings. González et al. (2000) found that goats that delivered twin fetuses had higher PAG concentrations than those that delivered a single fetus. Moreover, in native North Moroccan goats, Chentouf et al. (2008) observed statistical differences between goats carrying one or two fetuses. Vasques et al. (1995) linked the rise in PAGs during pregnancy in cows to the fetal growth rate, with an important decline in PAG1 reflecting arrested trophectoderm development. Additionally, successive monitoring of PAG1 in goats also enables the identification of trophoblastic activity disorders that result in fetal death (Zarrouk et al. 1999; Batalha et al. 2001; Faye et al. 2004).
As observed in our results, PAG1 is typically detectable in the maternal blood starting from around day 28 of pregnancy, as in cattle (Barbato et al. 2022). The levels of PAG1 increase as the pregnancy progresses and can reach peak levels at different time points depending on the species and the individual animal. Different goat breeds may have variations in their PAG profiles, and some breeds may have higher or lower levels of PAGs compared to others (Morecroft et al. 2015).

Progesterone (P4) is a steroid hormone that plays a crucial role in the regulation of female reproductive physiology, including ovulation, implantation, pregnancy maintenance, and lactation (Kolatorova et al. 2022). It exerts its effects by binding to progesterone receptors (PRs), which are expressed in various tissues such as the uterus, mammary gland, brain, and bone (Dinny Graham and Clarke 1997). During pregnancy, progesterone production is essential for the maintenance of gestation (Arck et al. 2007). In goats, the corpora lutea formed on the ovary after ovulation produce progesterone (Gaafar et al.
2005). Progesterone levels are highest during mid-pregnancy and gradually decline towards parturition (Convey 1974). Progesterone is involved in regulating the estrous cycle and preparing the uterus for pregnancy; it helps to maintain the uterine environment required for successful implantation and embryonic development (Lonergan 2011). Progesterone levels in goats can be used for pregnancy diagnosis. Low progesterone levels can indicate that a doe is not pregnant, while high levels alone do not confirm pregnancy but rather indicate the presence of an active corpus luteum (Rawlings and Ward 1977). In addition to its role in reproduction, progesterone also plays a crucial role in synchronizing estrus in goats. Progestogens, which are synthetic derivatives of progesterone, have been used for estrus synchronization in goats (Rawlings and Ward 1977); they help to regulate and control the timing of estrus, allowing for more controlled breeding practices. In the uterus and ovary, P4 plays an important role in promoting uterine growth, suppressing myometrial contractility, maturing oocytes, facilitating implantation, and maintaining pregnancy (Huang et al. 2005). It also supports lobulo-alveolar development in the mammary gland to prepare for milk production and reduces milk protein synthesis before parturition (Woo and Shadel 2011).

Our results revealed that the levels of progesterone (P4) in the peripheral plasma of pregnant goats increased after mating and remained high from day 28 to day 148 of pregnancy. These findings are consistent with those of Thorburn and Schneider (1972), who found that plasma progesterone concentrations remain constant from day 8 to day 60 of pregnancy, increase between days 60 and 70, and remain stable until just before parturition. In the pregnant goat, the ovary is the primary site of progesterone production, while production by the placenta is minimal and unlikely to alter the level of this hormone in the maternal circulation.
In the present results, the levels of PAG1 and P4 produced and released by the placenta and corpus luteum, respectively, were sufficient for pregnancy maintenance in Zaraibi goats. According to Sawada et al. (1994), serum P4 and PAG1 levels begin to rise 10 days after mating and persist up until 140 days of pregnancy before rapidly declining one day before parturition. This is consistent with our current results, with the difference that both PAG1 and P4 reached their highest averages on day 88 after mating and then decreased slowly and gradually until the date of birth. On the other hand, the PAG and P4 levels in our data differed from those reported by González et al. (2004) and Chentouf et al. (2008). We attribute this to homologous or heterologous breed variants that affect PAG and P4 synthesis, reflecting a genetic strategy for maintaining pregnancy under the severe conditions of our Egyptian farm (Sousa et al. 1999).

Physiologically, the reactive oxygen species (ROS) produced during pregnancy as a result of metabolic changes in the mother and fetus are required for the proliferation, differentiation, and maturation of developing cells, because the development of the fetal organs during pregnancy requires an appropriate supply of nutrients and oxygen (Bak and Roszkowski 2013).
The maternal placenta is richly supplied with mitochondria, which are the primary source of energy; because they create and release pro-oxidants, they have earned the moniker "ROS factories" and/or "powerhouses." The superoxide anion radical, which is formed in vast amounts, is a generator of more reactive oxygen species, such as hydrogen peroxide and hydroxyl free radicals. Their production increases as the pregnancy progresses, mostly owing to the increase in placental mass (Toboła-Wróbel et al. 2020).

During normal pregnancy, the mother's immunological tolerance to the fetus's antigens, which permits the kid to develop in the uterus despite the pregnant female's capacity to reject foreign antigens, is a crucial factor in a normally progressing pregnancy (Toboła-Wróbel et al. 2020). The production of ROS is reduced in a normally functioning pregnant organism owing to the reduced activity of the immune system (Moore et al. 2019). Low levels of ROS work physiologically as a defensive mechanism against pathogens (Puertollano et al. 2011), as in the non-aborted goats mentioned above.

High serum Pb content reflects high Pb concentrations in various tissues, which exacerbate oxidative stress through the overproduction of ROS by direct action on the mitochondrial electron transport chain, resulting in cellular peroxidation of lipids, proteins, and DNA (Belyaeva et al. 2008), a cycle of cellular and molecular damage (Bouayed and Bohn 2010), and inflammation in the placenta, which can affect the synthesis and secretion of PAGs and other hormones (Mason et al. 2014). On the other hand, Pb accumulates in the tissues of the fetus throughout certain developmental phases of pregnancy, when it then displays its harmful effects (Mason et al.
2014). Additionally, abnormally high Pb accumulation can impair the expression and function of placental transporters, such as amino acid transporters and glucose transporters, which are essential for fetal nutrition and development (Collin et al. 2022).

As a response to the oxidative stress, it may be suggested that abortion in goats could be attributed to the reduction of essential elements such as P, Fe, and Zn (Casas and Sordo 2006); the reduction of the total proteins (Collin et al. 2022) required for the synthesis of progesterone (P4); and the alteration of gene expression related to enzymatic and hormonal codes because of the overproduction of ROS (Hernández-Coro et al. 2021).

According to the current findings, P4 levels in the blood of aborted goats were significantly reduced and had an inverse relationship with serum Pb levels. This may be attributed to a partial or complete block of protein synthesis, as shown in the current data, which consequently reduces the protein synthesis needed for progesterone (P4) production. This assumption is supported by the severe drop in blood P4 to a level insufficient to secure the attachment of embryos to the placenta, causing abortion, and may reflect Pb accumulation enhancing the overproduction of ROS, leading to oxidative damage of the mitochondria and endoplasmic reticulum, which are responsible for energy production and protein synthesis, respectively. In our current data on aborted goats, the threshold level of P4 below which abortion occurred was ≤ 0.48 ng/ml. The inhibition of P4 may also be attributed to Pb toxicity, which alters and/or disturbs the gene expression of endogenous antioxidants (Mao et al. 2018). Hamed et al.
(2012) reported that Pb caused severe damage to DNA in the brain, liver, kidney, and reproductive tissues, leading to the production of abnormal mRNA strands that control the synthesis of P4, consequently reducing its production. Pb bioaccumulation caused a significant disturbance in DNA molecular structure, altering mRNA expression and causing a significant depletion of protein synthesis in the cells, leading to a shortage of the protein precursors required for P4 synthesis.

In conclusion, the current results affirmed the following:

1. The lead ions accumulated in the serum of aborted goats were significantly higher than in non-aborted goats.

2. The levels of PAG1 and P4 in the blood of goats that gave birth to twins were significantly higher than in goats that gave birth to single kids, and both were markedly higher than in aborted goats.

3. In aborted goats, gestation time exhibited a significant direct exponential relationship with serum Pb content, with a significant positive correlation coefficient of +0.98. In contrast, serum PAG1 and P4 levels showed significant inverse power and exponential relationships, respectively, with gestation time, with significant negative correlation coefficients of −0.78 and −0.94.

4. Serum Pb content in aborted goats exhibited a significant inverse relationship with each of the PAG1 and P4 levels, with significant correlation coefficients of −0.88 and −0.77, respectively. This indicates that Pb accumulation is the main factor that severely reduces serum PAG and P4 levels, which in turn causes abortion.

5. The threshold level of serum Pb content required to cause abortion was ≥ 32.08 μg/dl, whereas the corresponding thresholds for serum PAG1 and P4 were ≤ 0.95 ng/ml and ≤ 0.48 ng/ml, respectively. Threshold levels of ≥ 12.34 ng/ml for P4 and ≥ 31.52 ng/ml for PAG1 were needed to deliver twins.

6. PAG1 and P4 levels are also key factors in determining whether Zaraibi goats will give birth to twins.

7. The results of the current study shed light on pollutants and the extent of their impact on livestock in the Arab Republic of Egypt. Further research is needed to combat pollution in all its forms, biological and chemical, in order to advance livestock production, through which the food gap can be narrowed and the economic deterioration of the livestock sector, a staple food source for the Egyptian people, can be limited.

Based on our current field studies, we hope that those responsible for managing animal farms will monitor the different kinds of environmental pollution that severely affect the productivity of those farms and work to treat and avoid these destructive factors, so as to prevent heavy losses to Egyptian income.
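The reported cut-off values can be expressed as a simple decision rule. The following Python sketch is illustrative only: the paper reports the threshold values but not a classification procedure, so the function name and the rule structure are our own assumptions.

```python
# Threshold values reported in the study (classification rule itself is
# a hypothetical illustration, not part of the original paper).
PB_ABORTION = 32.08   # serum Pb, ug/dl: abortion observed when >= this
PAG1_ABORTION = 0.95  # serum PAG1, ng/ml: abortion observed when <= this
P4_ABORTION = 0.48    # serum P4, ng/ml: abortion observed when <= this
PAG1_TWINS = 31.52    # serum PAG1, ng/ml: twins observed when >= this
P4_TWINS = 12.34      # serum P4, ng/ml: twins observed when >= this

def classify_pregnancy(pb: float, pag1: float, p4: float) -> str:
    """Classify a likely pregnancy outcome from serum markers
    using the cut-offs reported in the study."""
    if pb >= PB_ABORTION and pag1 <= PAG1_ABORTION and p4 <= P4_ABORTION:
        return "abortion risk"
    if pag1 >= PAG1_TWINS and p4 >= P4_TWINS:
        return "likely twins"
    return "single kid / inconclusive"
```

Such a rule could serve as a rough field screening aid, but it would require validation on independent data before any practical use.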
Table 2 Changes in serum lead (Pb, μg/dl), pregnancy-associated glycoprotein (PAG1, ng/ml), and progesterone (P4, ng/ml) contents of Zaraibi goats that aborted and those that gave birth to a single kid or twins during pregnancy stages (28, 46, 60, 88, 108, 128, and 148 days)

Data are presented as mean ± SEM. The symbols * and ■ indicate a significant difference (P < 0.05) in comparison with the corresponding aborted goats and those that gave birth to a single kid, respectively. In the same column, the letters a, b, c, d, e, and f indicate a significant difference (P < 0.05) in comparison with values at pregnancy stages of 28, 46, 60, 88, 108, 128, and 148 days of gestation, respectively.
R, regression equation; r*, correlation coefficient
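Table 2 reports regression equations (R) and correlation coefficients (r) for these relationships. As an illustration, an exponential model of the form Pb = a·exp(b·day) can be fitted by ordinary least squares on log-transformed values. The sketch below is an assumption about the method; the paper does not specify how its regressions were computed.

```python
import math

def fit_exponential(days, values):
    """Fit values = a * exp(b * day) by linear least squares on
    log-transformed values. Returns (a, b, r), where r is Pearson's
    correlation between day and ln(value)."""
    n = len(days)
    y = [math.log(v) for v in values]
    mx = sum(days) / n
    my = sum(y) / n
    sxy = sum((x - mx) * (v - my) for x, v in zip(days, y))
    sxx = sum((x - mx) ** 2 for x in days)
    syy = sum((v - my) ** 2 for v in y)
    b = sxy / sxx                 # growth rate per day
    a = math.exp(my - b * mx)     # value at day 0
    r = sxy / math.sqrt(sxx * syy)
    return a, b, r
```

For an inverse relationship (e.g., P4 versus gestation time in aborted goats), the fitted b would be negative and r would approach −1; for the power relationship reported for PAG1, the day variable would also be log-transformed before fitting.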